EU Court to Decide on BitTorrent Questions in Copyright Trolling Case

Post Syndicated from Andy original https://torrentfreak.com/bittorrent-related-eu-court-191116/

During the summer we reported on the renewed efforts of Golden Eye (International) and Mircom, companies with a track record of targeting alleged BitTorrent pirates with demands for cash settlements to make supposed lawsuits disappear.

After filing no complaints in the UK for years, the pair teamed up in an effort to squeeze the personal details of thousands of Internet users from the hands of ISP Virgin Media. Somewhat unusually, given its previous compliance in matters involving alleged piracy, Virgin put up a pretty big fight.

In the end, the cases brought by Golden Eye and Mircom were proven to be so lacking in evidence that a judge in the High Court threw out the companies’ claims. Nevertheless, there are more countries than just the UK to target.

Cyprus-based Mircom (full name Mircom International Content Management & Consulting) has another case on the boil, this time against Telenet, the largest provider of cable broadband in Belgium. In common with previous cases, this one is also about the unlicensed sharing of pornographic movies using BitTorrent.

Mircom says it has thousands of IP addresses on file which can identify Telenet subscribers from which it wants to extract cash payments. However, it needs the ISP’s cooperation to match the IP addresses to those customers and the case isn’t progressing in a straightforward manner.

As a result, the Antwerp Business Court (Ondernemingsrechtbank Antwerpen) has referred several questions in the matter to the European Court of Justice. As usual, there are several controversial as well as technical points under consideration.

The first complication concerns how BitTorrent itself works. When a regular user participates in a BitTorrent swarm, small downloaded parts of a movie are then made available for upload. In this manner, everyone in a swarm can gain access to all of the necessary parts of the movie.

Anyone who obtains all of the parts (and therefore the whole movie) becomes a ‘seeder’ if he or she continues to upload to the swarm.

However, a question with three parts sent to the EU Court appears to seek clarity on whether uploading small pieces of a file, which are unusable in their own right, constitutes an infringement and if so, where the limit lies. It also deals with potential ignorance on the user’s part when it comes to seeding.

1. (a) Can the downloading of a file via a peer-to-peer network and the simultaneous provision for uploading of parts (‘pieces’) thereof (which may be very fragmentary as compared to the whole) (‘seeding’) be regarded as a communication to the public within the meaning of Article 3(1) of Directive 2001/29, (1) even if the individual pieces as such are unusable? If so,

1. (b) is there a de minimis threshold above which the seeding of those pieces would constitute a communication to the public?

1. (c) is the fact that seeding can take place automatically (as a result of the torrent client’s settings), and thus without the user’s knowledge, relevant?

While the above matters are interesting in their own right, it’s Mircom’s position that perhaps provokes the most interest and has resulted in the next pair of questions to the European Court of Justice.

To be clear – Mircom is not a content creator. It is not a content distributor. Its entire purpose is to track down alleged infringers in order to claim cash settlements from them on the basis that its rights have been infringed. So what rights does it have?

Mircom claims to have obtained the rights to distribute, via peer-to-peer networks including BitTorrent, a large number of pornographic films produced by eight American and Canadian companies. However, despite having the right to do so, Mircom says it does not distribute any movies in this fashion.

Instead, it aims to collect money from alleged infringers, returning a proportion of this to the actual copyright holders, to whom it paid absolutely nothing for the rights to ‘distribute’ their movies via BitTorrent.

Interesting, to say the least, and a situation that has resulted in a second, two-part question being referred to the European Court of Justice:

2. (a) Can a person who is the contractual holder of the copyright (or related rights), but does not himself exploit those rights and merely claims damages from alleged infringers — and whose economic business model thus depends on the existence of piracy, not on combating it — enjoy the same rights as those conferred by Chapter II of Directive 2004/48 (2) on authors or licence holders who do exploit copyright in the normal way?

2. (b) How can the license holder in that case have suffered ‘prejudice’ (within the meaning of Article 13 of Directive 2004/48) as a result of the infringement?

A third question asks whether the specific circumstances laid out in questions 1 and 2 are relevant when assessing the correct balance between the enforcement of intellectual property rights and the right to a private life and protection of personal data.

Finally, question four deals with a particularly interesting aspect of BitTorrent swarm data monitoring and subsequent data processing in respect of the GDPR.

4. Is, in all those circumstances, the systematic registration and general further processing of the IP-addresses of a ‘swarm’ of ‘seeders’ (by the licence holder himself, and by a third party on his behalf) legitimate under the General Data Protection Regulation and specifically under Article 6(1)(f) thereof?

There are already considerable concerns that the tracking data collected and processed as part of the case in hand may not have been handled as required under the GDPR. That, on top of the conclusion that Mircom fits the ‘copyright troll’ label almost perfectly, makes this a very interesting case to follow.

Now available: Batch Recommendations in Amazon Personalize

Post Syndicated from Julien Simon original https://aws.amazon.com/blogs/aws/now-available-batch-recommendations-in-amazon-personalize/

Today, we’re very happy to announce that Amazon Personalize now supports batch recommendations.

Launched at AWS re:Invent 2018, Personalize is a fully managed service that allows you to create private, customized recommendations for your applications, with little to no machine learning experience required.

With Personalize, you provide the unique signals in your activity data (page views, sign-ups, purchases, and so forth) along with optional customer demographic information (age, location, etc.). You then provide the inventory of the items you want to recommend, such as articles, products, videos, or music: as explained in previous blog posts, you can use both historical data stored in Amazon Simple Storage Service (S3) and streaming data sent in real-time from a JavaScript tracker or server-side.

Then, entirely under the covers, Personalize processes and examines the data, identifies what is meaningful, selects the right algorithms, and trains and optimizes a personalization model that is customized for your data and accessible via an API that can be easily invoked by your business application.

However, some customers have told us that batch recommendations would be a better fit for their use cases. For example, some of them need the ability to compute recommendations for very large numbers of users or items in one go, store them, and feed them over time to batch-oriented workflows such as sending email or notifications: although you could certainly do this with a real-time recommendation endpoint, batch processing is simply more convenient and more cost-effective.

Let’s do a quick demo.

Introducing Batch Recommendations
For the sake of brevity, I’ll reuse the movie recommendation solution trained in this post on the MovieLens data set. Here, instead of deploying a real-time campaign based on this solution, we’re going to create a batch recommendation job.

First, let’s define users for whom we’d like to recommend movies. I simply list their user ids in a JSON file that I store in an S3 bucket.

{"userId": "123"}
{"userId": "456"}
{"userId": "789"}
{"userId": "321"}
{"userId": "654"}
{"userId": "987"}

Then, I apply a bucket policy to that bucket, so that Personalize may read and write objects in it. I’m using the AWS console here, and you can do the same thing programmatically with the PutBucketPolicy API.
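
For reference, here is a minimal boto3 sketch of applying such a policy programmatically. This is an illustration rather than the exact policy from the post: the precise set of actions Personalize needs is described in its documentation, and the bucket name is the one used later in this demo.

import json
import boto3

s3 = boto3.client("s3")
bucket = "jsimon-personalize-euwest-1"  # the bucket used later in this demo

# Illustrative policy letting the Personalize service principal read the input
# file and write the batch output; check the Personalize documentation for the
# exact actions required.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PersonalizeS3BucketAccess",
        "Effect": "Allow",
        "Principal": {"Service": "personalize.amazonaws.com"},
        "Action": ["s3:GetObject", "s3:ListBucket", "s3:PutObject"],
        "Resource": ["arn:aws:s3:::" + bucket, "arn:aws:s3:::" + bucket + "/*"],
    }],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))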

Now let’s head out to the Personalize console, and create a batch inference job.

As you would expect, I need to give the job a name, and select an AWS Identity and Access Management (IAM) role for Personalize in order to allow access to my S3 bucket. The bucket policy was taken care of already.

Then, I select the solution that I want to use to recommend movies.

Finally, I define the location of input and output data, with optional AWS Key Management Service (KMS) keys for decryption and encryption.

After a little while, the job is complete, and I can fetch recommendations from my bucket.

$ aws s3 cp s3://jsimon-personalize-euwest-1/batch/output/batch/users.json.out -
{"input":{"userId":"123"}, "output": {"recommendedItems": ["137", "285", "14", "283", "124", "13", "508", "276", "275", "475", "515", "237", "246", "117", "19", "9", "25", "93", "181", "100", "10", "7", "273", "1", "150"]}}
{"input":{"userId":"456"}, "output": {"recommendedItems": ["272", "333", "286", "271", "268", "313", "340", "751", "332", "750", "347", "316", "300", "294", "690", "331", "307", "288", "304", "302", "245", "326", "315", "346", "305"]}}
{"input":{"userId":"789"}, "output": {"recommendedItems": ["275", "14", "13", "93", "1", "117", "7", "246", "508", "9", "248", "276", "137", "151", "150", "111", "124", "237", "744", "475", "24", "283", "20", "273", "25"]}}
{"input":{"userId":"321"}, "output": {"recommendedItems": ["86", "197", "180", "603", "170", "427", "191", "462", "494", "175", "61", "198", "238", "45", "507", "203", "357", "661", "30", "428", "132", "135", "479", "657", "530"]}}
{"input":{"userId":"654"}, "output": {"recommendedItems": ["272", "270", "268", "340", "210", "313", "216", "302", "182", "318", "168", "174", "751", "234", "750", "183", "271", "79", "603", "204", "12", "98", "333", "202", "902"]}}
{"input":{"userId":"987"}, "output": {"recommendedItems": ["286", "302", "313", "294", "300", "268", "269", "288", "315", "333", "272", "242", "258", "347", "690", "310", "100", "340", "50", "292", "327", "332", "751", "319", "181"]}}

In a real-life scenario, I would then feed these recommendations to downstream applications for further processing. Of course, instead of using the console, I would create and manage jobs programmatically with the CreateBatchInferenceJob, DescribeBatchInferenceJob, and ListBatchInferenceJobs APIs.
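
As a rough sketch of that programmatic flow with boto3 (the solution version and role ARNs below are placeholders, and the S3 paths reuse the bucket from this demo):

import boto3

personalize = boto3.client("personalize")

# Placeholder ARNs: substitute your own solution version and an IAM role that
# can read the input file and write to the output prefix.
response = personalize.create_batch_inference_job(
    jobName="movielens-batch-recommendations",
    solutionVersionArn="arn:aws:personalize:eu-west-1:123456789012:solution/movielens/version-id",
    roleArn="arn:aws:iam::123456789012:role/PersonalizeBatchRole",
    jobInput={"s3DataSource": {"path": "s3://jsimon-personalize-euwest-1/batch/input/users.json"}},
    jobOutput={"s3DataDestination": {"path": "s3://jsimon-personalize-euwest-1/batch/output/"}},
)

# Poll the job until it becomes ACTIVE, then read the .out file from the
# output prefix in S3.
status = personalize.describe_batch_inference_job(
    batchInferenceJobArn=response["batchInferenceJobArn"]
)["batchInferenceJob"]["status"]
print(status)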

Now Available!
Using batch recommendations with Amazon Personalize is an easy and cost-effective way to add personalization to your applications. You can start using this feature today in all regions where Personalize is available.

Please send us feedback, either on the AWS forum for Amazon Personalize, or through your usual AWS support contacts.

Julien

New – Insert, Update, Delete Data on S3 with Amazon EMR and Apache Hudi

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-insert-update-delete-data-on-s3-with-amazon-emr-and-apache-hudi/

Storing your data in Amazon S3 provides lots of benefits in terms of scale, reliability, and cost effectiveness. On top of that, you can leverage Amazon EMR to process and analyze your data using open source tools like Apache Spark, Hive, and Presto. As powerful as these tools are, it can still be challenging to deal with use cases where you need to do incremental data processing, and record-level insert, update, and delete.

Talking with customers, we found that there are use cases that need to handle incremental changes to individual records, for example:

  • Complying with data privacy regulations, where users choose to exercise their right to be forgotten, or change their consent as to how their data can be used.
  • Working with streaming data, when you have to handle specific data insertion and update events.
  • Using change data capture (CDC) architectures to track and ingest database change logs from enterprise data warehouses or operational data stores.
  • Reinstating late arriving data, or analyzing data as of a specific point in time.

Starting today, EMR release 5.28.0 includes Apache Hudi (incubating), so that you no longer need to build custom solutions to perform record-level insert, update, and delete operations. Hudi development started at Uber in 2016 to address inefficiencies across ingest and ETL pipelines. In recent months the EMR team has worked closely with the Apache Hudi community, contributing patches that include updating Hudi to Spark 2.4.4 (HUDI-12), supporting Spark Avro (HUDI-91), adding support for the AWS Glue Data Catalog (HUDI-306), as well as multiple bug fixes.

Using Hudi, you can perform record-level inserts, updates, and deletes on S3, allowing you to comply with data privacy laws, consume real-time streams and change data captures, reinstate late-arriving data, and track history and rollbacks in an open, vendor-neutral format. You create datasets and tables and Hudi manages the underlying data format. Hudi uses Apache Parquet and Apache Avro for data storage, and includes built-in integrations with Spark, Hive, and Presto, enabling you to query Hudi datasets using the same tools that you use today with near real-time access to fresh data.

When launching an EMR cluster, the libraries and tools for Hudi are installed and configured automatically any time at least one of the following components is selected: Hive, Spark, or Presto. You can use Spark to create new Hudi datasets, and insert, update, and delete data. Each Hudi dataset is registered in your cluster’s configured metastore (including the AWS Glue Data Catalog), and appears as a table that can be queried using Spark, Hive, and Presto.

Hudi supports two storage types that define how data is written, indexed, and read from S3:

  • Copy on Write – data is stored in columnar format (Parquet) and updates create a new version of the files during writes. This storage type is best used for read-heavy workloads, because the latest version of the dataset is always available in efficient columnar files.
  • Merge on Read – data is stored with a combination of columnar (Parquet) and row-based (Avro) formats; updates are logged to row-based “delta files” and compacted later, creating a new version of the columnar files. This storage type is best used for write-heavy workloads, because new commits are written quickly as delta files, but reading the data set requires merging the compacted columnar files with the delta files.

Let’s do a quick overview of how you can set up and use Hudi datasets in an EMR cluster.

Using Apache Hudi with Amazon EMR
I start creating a cluster from the EMR console. In the advanced options I select EMR release 5.28.0 (the first including Hudi) and the following applications: Spark, Hive, and Tez. In the hardware options, I add 3 task nodes to ensure I have enough capacity to run both Spark and Hive.

When the cluster is ready, I use the key pair I selected in the security options to SSH into the master node and access the Spark Shell. I use the following command to start the Spark Shell to use it with Hudi:

$ spark-shell --conf "spark.serializer=org.apache.spark.serializer.KryoSerializer" \
              --conf "spark.sql.hive.convertMetastoreParquet=false" \
              --jars /usr/lib/hudi/hudi-spark-bundle.jar,/usr/lib/spark/external/lib/spark-avro.jar

There, I use the following Scala code to import some sample ELB logs in a Hudi dataset using the Copy on Write storage type:

import org.apache.spark.sql.SaveMode
import org.apache.spark.sql.functions._
import org.apache.hudi.DataSourceWriteOptions
import org.apache.hudi.config.HoodieWriteConfig
import org.apache.hudi.hive.MultiPartKeysValueExtractor

//Set up various input values as variables
val inputDataPath = "s3://athena-examples-us-west-2/elb/parquet/year=2015/month=1/day=1/"
val hudiTableName = "elb_logs_hudi_cow"
val hudiTablePath = "s3://MY-BUCKET/PATH/" + hudiTableName

// Set up our Hudi Data Source Options
val hudiOptions = Map[String,String](
    DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY -> "request_ip",
    DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY -> "request_verb", 
    HoodieWriteConfig.TABLE_NAME -> hudiTableName, 
    DataSourceWriteOptions.OPERATION_OPT_KEY ->
        DataSourceWriteOptions.INSERT_OPERATION_OPT_VAL, 
    DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY -> "request_timestamp", 
    DataSourceWriteOptions.HIVE_SYNC_ENABLED_OPT_KEY -> "true", 
    DataSourceWriteOptions.HIVE_TABLE_OPT_KEY -> hudiTableName, 
    DataSourceWriteOptions.HIVE_PARTITION_FIELDS_OPT_KEY -> "request_verb", 
    DataSourceWriteOptions.HIVE_ASSUME_DATE_PARTITION_OPT_KEY -> "false", 
    DataSourceWriteOptions.HIVE_PARTITION_EXTRACTOR_CLASS_OPT_KEY ->
        classOf[MultiPartKeysValueExtractor].getName)

// Read data from S3 and create a DataFrame with Partition and Record Key
val inputDF = spark.read.format("parquet").load(inputDataPath)

// Write data into the Hudi dataset
inputDF.write
       .format("org.apache.hudi")
       .options(hudiOptions)
       .mode(SaveMode.Overwrite)
       .save(hudiTablePath)

In the Spark Shell, I can now count the records in the Hudi dataset:

scala> inputDF.count()
res1: Long = 10491958

In the options, I used the integration with the Hive metastore configured for the cluster, so that the table is created in the default database. In this way, I can use Hive to query the data in the Hudi dataset:

hive> use default;
hive> select count(*) from elb_logs_hudi_cow;
...
OK
10491958
...

I can now update or delete a single record in the dataset. In the Spark Shell, I prepare some variables to find the record I want to update, and a SQL statement to select the value of the column I want to change:

val requestIpToUpdate = "243.80.62.181"
val sqlStatement = s"SELECT elb_name FROM elb_logs_hudi_cow WHERE request_ip = '$requestIpToUpdate'"

I execute the SQL statement to see the current value of the column:

scala> spark.sql(sqlStatement).show()
+------------+                                                                  
|    elb_name|
+------------+
|elb_demo_003|
+------------+

Then, I select and update the record:

// Create a DataFrame with a single record and update column value
val updateDF = inputDF.filter(col("request_ip") === requestIpToUpdate)
                      .withColumn("elb_name", lit("elb_demo_001"))

Now I update the Hudi dataset with a syntax similar to the one I used to create it. But this time, the DataFrame I am writing contains only one record:

// Write the DataFrame as an update to existing Hudi dataset
updateDF.write
        .format("org.apache.hudi")
        .options(hudiOptions)
        .option(DataSourceWriteOptions.OPERATION_OPT_KEY,
                DataSourceWriteOptions.UPSERT_OPERATION_OPT_VAL)
        .mode(SaveMode.Append)
        .save(hudiTablePath)

In the Spark Shell, I check the result of the update:

scala> spark.sql(sqlStatement).show()
+------------+                                                                  
|    elb_name|
+------------+
|elb_demo_001|
+------------+

Now I want to delete the same record. To delete it, I pass the EmptyHoodieRecordPayload payload in the write options:

// Write the DataFrame with an EmptyHoodieRecordPayload for deleting a record
updateDF.write
        .format("org.apache.hudi")
        .options(hudiOptions)
        .option(DataSourceWriteOptions.OPERATION_OPT_KEY,
                DataSourceWriteOptions.UPSERT_OPERATION_OPT_VAL)
        .option(DataSourceWriteOptions.PAYLOAD_CLASS_OPT_KEY,
                "org.apache.hudi.EmptyHoodieRecordPayload")
        .mode(SaveMode.Append)
        .save(hudiTablePath)

In the Spark Shell, I see that the record is no longer available:

scala> spark.sql(sqlStatement).show()
+--------+                                                                      
|elb_name|
+--------+
+--------+

How are all those updates and deletes managed by Hudi? Let’s use the Hudi Command Line Interface (CLI) to connect to the dataset and see how those changes are interpreted as commits.

This dataset is a Copy on Write dataset, which means that each time there is an update to a record, the file that contains that record is rewritten to contain the updated values. You can see how many records have been written for each commit. The bottom row of the table describes the initial creation of the dataset; above it is the single-record update, and at the top is the single-record delete.

With Hudi, you can roll back to each commit. For example, I can roll back the delete operation with:

hudi:elb_logs_hudi_cow->commit rollback --commit 20191104121031

In the Spark Shell, the record is now back to where it was, just after the update:

scala> spark.sql(sqlStatement).show()
+------------+                                                                  
|    elb_name|
+------------+
|elb_demo_001|
+------------+

Copy on Write is the default storage type. I can repeat the steps above to create and update a Merge on Read dataset by adding this to my hudiOptions:

DataSourceWriteOptions.STORAGE_TYPE_OPT_KEY -> "MERGE_ON_READ"

If you update a Merge on Read dataset and look at the commits with the Hudi CLI, you can see how different Merge on Read is compared to Copy on Write. With Merge on Read, you are only writing the updated rows and not whole files as with Copy on Write. This is why Merge on Read is helpful for use cases that require more writes, or update/delete-heavy workloads, with fewer reads. Delta commits are written to disk as Avro records (row-based storage), and compacted data is written as Parquet files (columnar storage). To avoid creating too many delta files, Hudi will automatically compact your dataset so that your reads are as performant as possible.

When a Merge On Read dataset is created, two Hive tables are created:

  • The first table matches the name of the dataset.
  • The second table has the characters _rt appended to its name; the _rt postfix stands for real-time.

When queried, the first table returns the data that has been compacted, and will not show the latest delta commits. Using this table provides the best performance, but omits the freshest data. Querying the real-time table will merge the compacted data with the delta commits on read, hence this dataset being called “Merge on Read”. This results in the freshest data being available, but incurs a performance overhead, and is not as performant as querying the compacted data. In this way, data engineers and analysts have the flexibility to choose between performance and data freshness.

Available Now
This new feature is available now in all regions with EMR 5.28.0. There is no additional cost for using Hudi with EMR. You can learn more about Hudi in the EMR documentation. This new tool can simplify the way you process, update, and delete data in S3. Let me know which use cases you are going to use it for!

Danilo

Company That Acquired ‘Copyright Troll’ Warns ISPs & VPN Providers

Post Syndicated from Andy original https://torrentfreak.com/company-that-acquired-copyright-troll-warns-isps-vpn-providers-191115/

While movie and music companies have regularly filed copyright lawsuits against alleged BitTorrent pirates over the past decade and beyond, the companies operating the machinery behind the scenes are less well known.

One exception was to be found in GuardaLey, an entity that provided tracking data and business structure for numerous lawsuits, notably the massive action targeting alleged pirates of the movies The Hurt Locker and The Expendables.

While these lawsuits and others like them attracted plenty of headlines, GuardaLey itself rarely experienced much scrutiny, at least not to the extent where its complex business dealings were made available to the public.

Earlier this year the waters appeared to be muddied again when 100% of its alleged US operations were ‘acquired’ by American Films Inc., which promised to target peer-to-peer networks in pursuit of “repeat infringers.”

Since then, nothing has been heard of American Films Inc, which at the time of the GuardaLey acquisition was described as a “shell company.” Now, however, the company appears to have even grander plans after another acquisition, this time of “strategic data company” Maker Data Services LLC.

“This acquisition is important because it adds to the evidence of BitTorrent related copyright infringement that American Films can provide to its clients,” says John Carty, American Films’ CEO.

“This type of forensic evidence is only available from a few sources, most of which only supply the largest industry associations.”

However, it’s the next set of claims that are likely to raise the most eyebrows, including a veiled threat to take not only powerful Internet service providers to court, but also VPN companies.

“American Films has positioned itself as the go-to data provider for independent filmmakers that want to take action against the direct infringers, Internet Service Providers, VPN Providers, and others that allow, encourage, and profit from BitTorrent copyright infringement,” a company statement reads.

According to various sources, at the time of writing American Films’ stock is changing hands at around $0.04; the company has one employee and chooses not to supply any financial information by way of accounts.

More information is available on Maker Data Services LLC if one visits its website, but it’s not a particularly confidence-inspiring experience, even for a one-year-old company.

“Our company has created a tool that will search the internet. Our tool is able to find any relevant data that could affect the operations of our clients, that is, the businesses we serve,” the Maker Data site reads.

“We deal mostly with real estate data and people data to ensure that Real Estate businesses have all the vital information to make sound decisions and drive their businesses forward.

“Our real estate data and analytics services will always give you the actual value of a home before buying for better decision making.”

While there might potentially be some synergies between the above and “forensic” anti-piracy activity, the claim elsewhere on the site that the company has “state-the-art software” does not extend to the bug-ridden WordPress installation powering the site.

Every page displays database errors and much of the site consists of ‘articles’ carrying little more than placeholder posts, graphics and text, presumably put there by the creators of the website.

Google “site:makerdataservices.com” for many more.

Along with the acquisition of Maker Data Services comes the appointment of a new CTO for American Films, Craig Campbell, formerly of Fidelity Investments.

His “main focus” will be “managing the build-out of BitTorrent products for copyright enforcement utilizing the combined data resources now available at American Films.”

How the business model of American Films will develop is for the future to reveal, but the acquisitions announced by the company thus far raise more questions than they answer. To be brutal, it’s only the inclusion of GuardaLey’s reputation as a ‘copyright troll’ within the equation that provokes curiosity.

Successfully litigating against powerful ISPs or even VPN providers seems not only an incredibly lofty goal, but also an extremely costly and risky proposition. Part of the solution to the latter pair of roadblocks, perhaps, lies in the company’s stated aim.

“American Films seeks to create alternative investment participation vehicles that provide necessary funding to appropriate projects while offering reasonable return on investment and mitigation of business risks traditionally encountered in the film industry,” the company states.

A for-hire firewall for ‘copyright trolling’ or the next Rightscorp? Only time will tell but ISPs and VPN providers probably aren’t worried too much just yet.

Court Punishes Copyright ‘Troll’ Lawyer for Repeatedly Lying to The Court

Post Syndicated from Ernesto original https://torrentfreak.com/court-punishes-copyright-troll-lawyer-for-repeatedly-lying-to-the-court-191115/

Over the past several years, independent photographers have filed more than a thousand lawsuits against companies that allegedly use their work without permission.

As many targets are mainstream media outlets, these can be seen as David vs. Goliath battles. However, many describe the nature of these cases as classic copyright trolling.

The driving force behind this copyright crusade is New York lawyer Richard Liebowitz, a former photographer, who explained his motives to TorrentFreak when he had just started his firm more than three years ago.

“Companies are using other people’s hard work and profiting off of it. It is important for photographers and the creative community to unite and stand up for their rights and protect their work,” Liebowitz said.

In the years that followed, Liebowitz filed hundreds of new cases a year, trying to obtain settlements. While many of the photographers have a legitimate claim, the lawyer’s antics were increasingly criticized both in and outside of court.

In recent weeks, things only got worse.

In a case that was filed on behalf of photographer Jason Berger, targeting Imagina Consulting, Liebowitz failed to show up at a discovery hearing last April, without informing the court.

The lawyer later explained that this was due to a death in his family. However, since there were other issues that put the lawyer’s credibility in doubt, Judge Cathy Seibel decided to request evidence or documentation regarding who died, when, and how he was notified.

In the following months, Liebowitz explained that his grandfather had passed away on April 12, but he didn’t provide any documentation to back this up. Even after the court imposed sanctions of $100 for each business day he didn’t comply, nothing came in.

Instead of providing proof, the lawyer appeared to keep stalling, while stating that a death certificate is a personal matter.

This led some people to wonder whether Liebowitz’s grandfather had indeed passed away. Frustrated with the refusal to comply with her demands, Judge Seibel raised the sanctions to $500 per day earlier this month, criticizing the lawyer for his behavior.

The order (pdf), picked up by Law360, instructed the New York lawyer to show up in court this week, to explain “why he should not be incarcerated” until he provides documented proof.

“Failure to appear as directed will subject Mr. Liebowitz to arrest by the United States Marshals Service without further notice,” Judge Seibel wrote.

It turns out that an arrest wasn’t needed as Liebowitz did show up at the hearing this week. Realizing that there may be trouble ahead, he entered the courtroom with two criminal defense lawyers at his side, for what would become a turbulent hearing.

After six months, the lawyer finally presented the death certificate the court had requested. This proved that he didn’t lie about the death of his grandfather, but he hadn’t been truthful either as this occurred three days earlier than Liebowitz said, on April 9.

Judge Seibel wasn’t happy about this, to say the least. According to The Smoking Gun, which covered the case in detail, she said that Liebowitz “chose to repeat that lie six, eight, ten times” as part of a “long-term campaign of deception.”

“I question Mr. Liebowitz’s fitness to practice,” Seibel added at one point during the hearing.

Liebowitz’s lawyer, Richard Greenberg, who has known the lawyer and his family for years, explained that his client’s misrepresentations were not “intentional” and that he “was in a daze” following the death of his grandfather.

However, Judge Seibel didn’t fall for this and countered that it would be “completely implausible” that this “haze” would have continued for months. According to her, Liebowitz intentionally lied to the court, noting that it was clearly not an honest mistake.

Greenberg also tried to get the sanctions lowered, which he said had risen to $3,700 over the past weeks. According to a letter sent to the court earlier this week, the attorney noted that Liebowitz had already paid a high price for his wrongdoing, including bad publicity.

“Richard has suffered horrible publicity as a result of being held in contempt and threatened with incarceration by this Court. And of course Richard, a young and inexperienced lawyer, is scared of the damage to his professional career as a result of his conduct and these proceedings,” Greenberg wrote.

“At the risk of appearing to minimize the seriousness of this matter, which counsel would not dare to do, counsel urges this Court to find that Richard has suffered or been penalized enough for his lapse or misconduct,” the letter (pdf) adds.

Judge Seibel didn’t seem convinced by these arguments though, and Liebowitz had to cough up the sanctions. According to Leonard French’s coverage, he paid $3,700 in court. That was $100 short according to the Judge, but she accepted it nonetheless.

The earlier contempt rulings also bring more bad news for the lawyer. He now has to disclose these to other courts as well as prospective clients, which likely doesn’t help his business.

In addition, Judge Seibel has referred the matter to the Grievance Committee, which will decide if further sanctions are appropriate, which could lead to trouble at the New York bar.

Needless to say, this is yet more bad news for the attorney. He can continue to practice law, at least for now, but everyone seems to agree that the attorney needs help and not just on the legal front.

Liebowitz’s own lawyer and family friend, Greenberg, recommended that he enroll in a CLE course to learn how to manage a small law firm. In addition, he was advised to seek psychotherapy to deal with several other issues.

Canadian Court Rejects Reverse Class Action Against BitTorrent Pirates

Post Syndicated from Ernesto original https://torrentfreak.com/canadian-court-rejects-reverse-class-action-against-bittorrent-pirates-191114/

Movie studio Voltage Pictures is no stranger to suing BitTorrent users.

The company and its subsidiaries have filed numerous lawsuits against alleged pirates in the United States, Europe, Canada and Australia, and likely made a lot of money doing so.

Voltage and other copyright holders who initiate these cases generally rely on IP addresses as evidence. With this information in hand, they ask the courts to order Internet providers to hand over the personal details of the associated account holders, so the alleged pirates can be pursued for settlements.

In Canada, Voltage tried to get these personal details from a large group of copyright infringers by filing a reverse class-action lawsuit, which is relatively rare. The movie company argued that this is a cheaper way to target large numbers of infringers at once.

The lawsuit in question was initially filed in 2016 and dragged on for years. The case revolves around a representative defendant, Robert Salna, who provides WiFi services to tenants. Through Salna, Voltage hoped to catch a group of infringers.

As the case went on, the Canadian Internet Policy and Public Interest Clinic (CIPPIC) took an interest. The group, which is connected to the University of Ottawa, eventually intervened to represent anonymous defendants.

Among other things, CIPPIC argued that the movie company failed to identify an actual infringer. It targets multiple ‘infringing’ IP-addresses, which are not unique and can be used by multiple persons at once. In addition, unprotected WiFi networks may be open to the public at large.

Since an IP address does not necessarily identify the infringer, Voltage has no reasonable cause to file the reverse class action, CIPPIC’s submission argued.

This week the Federal Court of Canada ruled on the matter and Justice Boswell agreed with CIPPIC.

“I agree with CIPPIC’s submissions that Voltage’s pleadings do not disclose a reasonable cause of action with respect to primary infringement.  While Voltage alleges that its forensic software identified a direct infringement in [sic] Voltage’s films, Voltage has failed to identify a Direct Infringer in its amended notice of application,” he writes.

Judge Boswell also agreed with CIPPIC’s critique of the class action procedure. These piracy cases deal with multiple infringers which will all have different circumstances. Reverse class action lawsuits are less suited to this scenario.

“A class proceeding is not a preferable procedure for the just and efficient resolution of any common issues which may exist.  The proposed proceeding would require multiple individual fact-findings for each class member on almost every issue.” 

The Judge further notes that there are other preferable means for Voltage to pursue its claims. These include joinder and consolidation of individual claims.

Based on these and other conclusions, Judge Boswell dismissed Voltage’s motion to certify the case as a reverse class action. In addition, the movie company was ordered to pay the costs of the proceeding, which could run to tens of thousands of dollars.

This is an important ruling as it takes a clear stand against the reverse class action strategy for this type of piracy case. And it may even go further than that. According to law professor Michael Geist, it can impact future file-sharing cases as well. 

“I think the decision does have implications that extend beyond this specific class action strategy as it calls into doubt the direct link between IP address and infringement and raises questions about whether merely using BitTorrent rises to the level of secondary infringement,” Geist tells TorrentFreak.

CIPPIC’s director David Fewer is also happy with the outcome. He tells the Globe and Mail that if the motion was accepted, it could have “seriously expanded the threat of copyright liability to anyone allowing others to use an internet connection.”

While the ruling is a clear dismissal of the reverse class action approach, there are similar file-sharing cases in Canada that have proven to be more effective. As long as this practice remains profitable, it will probably not go away.

A copy of Judge Boswell’s order is available here (pdf).

Kodi Addon & Build Repositories Shut Down Citing Legal Pressure

Post Syndicated from Andy original https://torrentfreak.com/kodi-addon-build-repositories-shut-down-citing-legal-pressure-191114/

Being involved in the development of third-party Kodi addons and ‘builds’ (Kodi installations pre-customized with addons and tweaks) is a somewhat risky activity.

Providing simple access to otherwise restricted movies and TV shows attracts copyright holders, and that always has the potential to end badly. And it does, pretty regularly.

On November 1, 2019, UK-focused Kodi platform KodiUK.tv made an announcement on Twitter, stating briefly that “Something has happened this morning. Sorry!” While that could mean anything, an ominous follow-up message indicated that a statement would be released in due course “detailing the future”.

Several hours later, KodiUK.tv confirmed what fans already knew: that it had taken down its site. Why that happened remained open to question, but a few hours ago the group confirmed that legal action was to blame.

“We took our website offline 10 days ago closed our repo and the builds due to legal demands against us,” KodiUK.tv announced on Twitter.

“We will say more when we can bring the site back up safely. But the builds & repo will not be back nor will we host any add-ons anymore for anyone.”

The closure is particularly bad news for anyone who used the popular DadLife Kodi build that was previously installable via the group’s repository. Whether it will find a new official home somewhere else is open to question.

But there is more bad news too. In an announcement posted a few hours ago to its Facebook page, Kodi builds and addon repository OneNation revealed that it too had shut down, again as a result of legal pressure.

“Unfortunately due to outside Legal pressures this group will close with immediate effect along with our Repository etc. We would just like to thank each and every one of you for all your support over the years,” OneNation wrote.

Noting they’d had an “absolute blast”, OneNation added they were going out with their “heads held high” having done things their way, without “robbing links from others” or accepting payment in any “shape or form”.

OneNation went down with strict instructions for no-one to contact the team for any further information and to treat any additional information published online as “hearsay.” That means that confirming who applied the legal pressure will be reliant on word from the anti-piracy groups most likely to have been involved.

TorrentFreak has contacted the Alliance for Creativity and Entertainment and the Federation Against Copyright Theft for comment. We’ll post an update here if any confirmation or denials are received from either group.

Accelerate SQL Server Always On Deployments with AWS Launch Wizard

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/accelerate-sql-server-always-on-deployments-with-aws-launch-wizard/

Customers sometimes tell us that while they are experts in their domain, their unfamiliarity with the cloud can make getting started more challenging and take more time. They want to be able to quickly and easily deploy enterprise applications on AWS without needing prior tribal knowledge of the AWS platform and best practices, so as to accelerate their journey to the cloud.

Announcing AWS Launch Wizard for SQL Server
AWS Launch Wizard for SQL Server is a simple, intuitive, and free-to-use wizard-based experience that enables quick and easy deployment of high availability SQL solutions on AWS. The wizard walks you through an end-to-end deployment of Always On Availability Groups using prescriptive guidance. After you answer a few high-level questions about the application, such as the required performance characteristics, the wizard takes care of identifying, provisioning, and configuring matching AWS resources such as Amazon Elastic Compute Cloud (EC2) instances, Amazon Elastic Block Store (EBS) volumes, and an Amazon Virtual Private Cloud. Based on your selections, the wizard presents you with a dynamically generated estimated cost of deployment – as you modify your resource selections, you can see an updated cost assessment to help you match your budget.

Once you approve, AWS Launch Wizard for SQL Server provisions these resources and configures them to create a fully functioning, production-ready SQL Server Always On deployment in just a few hours. The created resources are tagged, making it easy to identify and work with them, and the wizard also creates AWS CloudFormation templates, providing you with a baseline for repeatable and consistent application deployments.

Subsequent SQL Server Always On deployments become faster and easier as AWS Launch Wizard for SQL Server takes care of the required infrastructure on your behalf, determining the resources that match your application’s requirements, such as performance, memory, and bandwidth (you can modify the recommended defaults if you wish). If you want to bring your own SQL Server licenses, or have other custom requirements for the instances, you can also choose to use your own custom AMIs, provided they meet certain requirements (noted in the service documentation).

Using AWS Launch Wizard for SQL Server
To get started with my deployment, in the Launch Wizard Console I click the Create deployment button to start the wizard and select SQL Server Always On.


The wizard requires an AWS Identity and Access Management (IAM) role granting it permissions to deploy and access resources in my account. The wizard will check to see if a role named AmazonEC2RoleForLaunchWizard exists in my account. If so, it will be used; otherwise a new role will be created. The new role will have two AWS managed policies, AmazonSSMManagedInstanceCore and AmazonEC2RolePolicyforLaunchWizard, attached to it. Note that this one-time setup process will typically be performed by an IAM Administrator for your organization. However, the IAM user does not have to be an Administrator; the CreateRole, AttachRolePolicy, and GetRole permissions are sufficient to perform these operations. After the role is created, the IAM Administrator can delegate the application deployment process to another IAM user who, in turn, must have the AWS Launch Wizard for SQL Server IAM managed policy called AmazonLaunchWizardFullaccess attached to it.

With the application type selected I can proceed by clicking Next to start configuring my application settings, beginning with setting a deployment name and optionally an Amazon Simple Notification Service (SNS) topic that AWS Launch Wizard for SQL Server can use for notifications and alerts. In the connectivity options I can choose to use an existing Amazon Virtual Private Cloud or have a new one created. I can also specify the name of an existing key pair (or create one). The key pair will be used if I want to RDP into my instances or obtain the administrator password. For a new Virtual Private Cloud I can also configure the IP address or range to which remote desktop access will be permitted:

Instances launched by AWS Launch Wizard for SQL Server will be domain joined to an Active Directory. I can select an existing AWS Managed AD or an on-premises AD, or have the wizard create a new AWS Managed Directory for my deployment:

The final application settings relate to SQL Server. This is also where I can specify a custom AMI to be used if I want to bring my own SQL Server licenses or have other customization requirements. Here I’m just going to create a new SQL Server Service account and use an Amazon-provided image with license included. Note that if I choose to use an existing service account, it should be part of the Managed AD in which I am deploying:

Clicking Next takes me to a page to define the infrastructure requirements of my application, in terms of CPU and network performance and memory. I can also select the type of storage (solid state vs magnetic) and required SQL Server throughput. The wizard will recommend the resource types to be launched but I can also override and select specific instance and volume types, and I can also set custom tags to apply to the resources that will be created:

The final section of this page shows me the cost estimate based on my selections. The data in this panel is dynamically generated based on my prior selections, and I can go back and forth in the wizard, tuning my selections to match my budget:

When I am happy with my selections, clicking Next takes me to wizard’s final Review page where I can view a summary of my selections and acknowledge that AWS resources and AWS Identity and Access Management (IAM) permissions will be created on my behalf, along with the estimated cost as was shown in the estimator on the previous page. My final step is to click Deploy to start the deployment process. Status updates during deployment can be viewed on the Deployments page with a final notification to inform me on completion.

Post-deployment Management
Once my application has been deployed, I can manage its resources easily. Firstly, I can navigate to Deployments on the AWS Launch Wizard for SQL Server dashboard and, using the Actions dropdown, jump to the Amazon Elastic Compute Cloud (EC2) console where I can manage the EC2 instances, EBS volumes, Active Directory, and so on. Or, using the same Actions dropdown, I can access SQL Server via the remote desktop gateway instance. If I want to manage future updates and patches to my application using AWS Systems Manager, another Actions option takes me to the Systems Manager dashboard. I can also use AWS Launch Wizard for SQL Server to delete deployments performed using the wizard, and it will roll back all of the AWS CloudFormation stacks that the service created.

Now Available
AWS Launch Wizard for SQL Server is generally available and you can use it in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Canada (Central), South America (Sao Paulo), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Seoul), Asia Pacific (Tokyo), EU (Frankfurt), EU (Ireland), EU (London), and EU (Stockholm). Support for the AWS regions in China, and for the GovCloud Region, is in the works. There is no additional charge for using AWS Launch Wizard for SQL Server, only for the resources it creates.

— Steve

AWS Data Exchange – Find, Subscribe To, and Use Data Products

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-data-exchange-find-subscribe-to-and-use-data-products/

We live in a data-intensive, data-driven world! Organizations of all types collect, store, process, and analyze data, using it to inform and improve their decision-making processes. The AWS Cloud is well-suited to all of these activities; it offers vast amounts of storage, access to any conceivable amount of compute power, and many different types of analytical tools.

In addition to generating and working with data internally, many organizations generate and then share data sets with the general public or within their industry. We made some initial steps to encourage this back in 2008 with the launch of AWS Public Data Sets (Paging Researchers, Analysts, and Developers). That effort has evolved into the Registry of Open Data on AWS (New – Registry of Open Data on AWS (RODA)), which currently contains 118 interesting datasets, with more added all the time.

New AWS Data Exchange
Today, we are taking the next step forward, and are launching AWS Data Exchange. This addition to AWS Marketplace contains over one thousand licensable data products from over 80 data providers. There’s a diverse catalog of free and paid offerings, in categories such as financial services, health care / life sciences, geospatial, weather, and mapping.

If you are a data subscriber, you can quickly find, procure, and start using these products. If you are a data provider, you can easily package, license, and deliver products of your own. Let’s take a look at Data Exchange from both vantage points, and then review some important details.

Let’s define a few important terms before diving in:

Data Provider – An organization that has one or more data products to share.

Data Subscriber – An AWS customer that wants to make use of data products from Data Providers.

Data Product – A collection of data sets.

Data Set – A container for data assets that belong together, grouped by revision.

Revision – A container for one or more data assets as of a point in time.

Data Asset – The actual data, in any desired format.

AWS Data Exchange for Data Subscribers
As a data subscriber, I click View product catalog and start out in the Discover data section of the AWS Data Exchange Console:

Products are available from a long list of vendors:

I can enter a search term, click Search, and then narrow down my results to show only products that have a Free pricing plan:

I can also search for products from a specific vendor, that match a search term, and that have a Free pricing plan:

The second one looks interesting and relevant, so I click on 5 Digit Zip Code Boundaries US (TRIAL) to learn more:

I think I can use this in my app, and want to give it a try, so I click Continue to subscribe. I review the details, read the Data Subscription Agreement, and click Subscribe:

The subscription is activated within a few minutes, and I can see it in my list of Subscriptions:

Then I can download the set to my S3 bucket, and take a look. I click into the data set, and find the Revisions:

I click into the revision, and I can see the assets (containing the actual data) that I am looking for:

I select the asset(s) that I want, and click Export to Amazon S3. Then I choose a bucket, and click Export to proceed:

This creates a job that will copy the data to my bucket (extra IAM permissions are required here; read the Access Control documentation for more info):

The jobs run asynchronously and copy data from Data Exchange to the bucket. Jobs can be created interactively, as I just showed you, or programmatically. Once the data is in the bucket, I can access and process it in any desired way. I could, for example, use an AWS Lambda function to parse the ZIP file and use the results to update an Amazon DynamoDB table. Or, I could run an AWS Glue crawler to get the data into my Glue catalog, run an Amazon Athena query, and visualize the results in an Amazon QuickSight dashboard.
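
As an illustration of the programmatic path, here is a minimal boto3 sketch of creating and starting an export job; the data set, revision, and asset IDs and the destination bucket and key are placeholders.

import boto3

dx = boto3.client("dataexchange")

# Placeholder IDs, obtained from the console or from the list APIs
data_set_id = "DATA_SET_ID"
revision_id = "REVISION_ID"
asset_id = "ASSET_ID"

# Create a job that exports the chosen asset to my own S3 bucket
job = dx.create_job(
    Type="EXPORT_ASSETS_TO_S3",
    Details={
        "ExportAssetsToS3": {
            "DataSetId": data_set_id,
            "RevisionId": revision_id,
            "AssetDestinations": [{
                "AssetId": asset_id,
                "Bucket": "my-data-exchange-bucket",
                "Key": "zip-code-boundaries/data.zip",
            }],
        }
    },
)

# Jobs are created in a waiting state and must be started explicitly
dx.start_job(JobId=job["Id"])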

Subscriptions can last from 1 to 36 months with an auto-renew option; subscription fees are billed to my AWS account each month.

AWS Data Exchange for Data Providers
Now I am going to put on my “data provider” hat and show you the basics of the publication process (the User Guide contains a more detailed walk-through). In order to be able to license data, I must agree to the terms and conditions, and my application must be approved by AWS.

After I apply and have been approved, I start by creating my first data set. I click Data sets in the navigation, and then Create data set:

I describe my data set, and have the option to tag it, then click Create:

Next, I click Create revision to create the first revision to the data set:

I add a comment, and have the option to tag the revision before clicking Create:

I can copy my data from an existing S3 location, or I can upload it from my desktop:

I choose the second option, select my file, and it appears as an Imported asset after the import job completes. I review everything, and click Finalize for the revision:

My data set is ready right away, and now I can use it to create one or more products:

The console outlines the principal steps:

I can set up public pricing information for my product:

AWS Data Exchange lets me create private pricing plans for individual customers, and it also allows my existing customers to bring their existing (pre-AWS Data Exchange) licenses for my products along with them by creating a Bring Your Own Subscription offer.

I can use the Data Subscription Agreement (DSA) provided by AWS Data Exchange, use it as the basis for my own, or upload an existing one:

I can use the AWS Data Exchange API to create, update, list, and manage data sets and revisions to them. Functions include CreateDataSet, UpdateDataSet, ListDataSets, CreateRevision, UpdateAsset, and CreateJob.
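
As a rough boto3 sketch of that provider-side flow (the data set name, bucket, and key below are placeholders):

import boto3

dx = boto3.client("dataexchange")

# Create a data set to hold S3-based assets
data_set = dx.create_data_set(
    AssetType="S3_SNAPSHOT",
    Name="my-sample-product-data",
    Description="Sample data set created through the API",
)

# Add a revision to the data set
revision = dx.create_revision(DataSetId=data_set["Id"], Comment="Initial revision")

# Import an asset from my own S3 bucket into the revision
import_job = dx.create_job(
    Type="IMPORT_ASSETS_FROM_S3",
    Details={
        "ImportAssetsFromS3": {
            "DataSetId": data_set["Id"],
            "RevisionId": revision["Id"],
            "AssetSources": [{"Bucket": "my-provider-bucket", "Key": "exports/data.csv"}],
        }
    },
)
dx.start_job(JobId=import_job["Id"])

# Once the import job completes, finalize the revision so it can be published
dx.update_revision(DataSetId=data_set["Id"], RevisionId=revision["Id"], Finalized=True)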

Things to Know
Here are a couple of things that you should know about Data Exchange:

Subscription Verification – The data provider can also require additional information in order to verify my subscription. If that is the case, the console will ask me to supply the info, and the provider will review and approve or decline within 45 days:

Here is what the provider sees:

Revisions & Notifications – The Data Provider can revise their data sets at any time. The Data Consumer receives a CloudWatch Event each time a product that they are subscribed to is updated; this can be used to launch a job to retrieve the latest revision of the assets. If you are implementing a system of this type and need some test events, find and subscribe to the Heartbeat product:

Data Categories & Types – Certain categories of data are not permitted on AWS Data Exchange. For example, your data products may not include information that can be used to identify any person, unless that information is already legally available to the public. See the Publishing Guidelines for details on which categories of data are permitted.

Data Provider Location – Data providers must be a valid legal entity domiciled either in the United States or in a member state of the EU.

Available Now
AWS Data Exchange is available now and you can start using it today. If you own some interesting data and would like to publish it, start here. If you are a developer, browse the product catalog and look for data that will add value to your product.

Jeff;

 

 

New – Import Existing Resources into a CloudFormation Stack

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-import-existing-resources-into-a-cloudformation-stack/

With AWS CloudFormation, you can model your entire infrastructure with text files. In this way, you can treat your infrastructure as code and apply software development best practices, such as putting it under version control, or reviewing architectural changes with your team before deployment.

Sometimes AWS resources initially created using the console or the AWS Command Line Interface (CLI) need to be managed using CloudFormation. For example, you (or a different team) may create an IAM role, a Virtual Private Cloud, or an RDS database in the early stages of a migration, and then you have to spend time to include them in the same stack as the final application. In such cases, you often end up recreating the resources from scratch using CloudFormation, and then migrating configuration and data from the original resource.

To make these steps easier for our customers, you can now import existing resources into a CloudFormation stack!

It was already possible to remove resources from a stack without deleting them by setting the DeletionPolicy to Retain. This, together with the new import operation, enables a new range of possibilities. For example, you are now able to:

  • Create a new stack importing existing resources.
  • Import existing resources into an already created stack.
  • Migrate resources across stacks.
  • Remediate a detected drift.
  • Refactor nested stacks by deleting child stacks from one parent and then importing them into another parent stack.

To import existing resources into a CloudFormation stack, you need to provide:

  • A template that describes the entire stack, including both the resources to import and (for existing stacks) the resources that are already part of the stack.
  • A DeletionPolicy attribute in the template for each resource to import. This makes it easy to revert the operation in a completely safe manner.
  • A unique identifier for each target resource, for example the name of the Amazon DynamoDB table or of the Amazon Simple Storage Service (S3) bucket you want to import.

During the resource import operation, CloudFormation checks that:

  • The imported resources do not already belong to another stack in the same region (be careful with global resources such as IAM roles).
  • The target resources exist and you have sufficient permissions to perform the operation.
  • The properties and configuration values are valid against the resource type schema, which defines the required and acceptable properties and the supported values.

The resource import operation does not check that the template configuration and the actual configuration are the same. Since the import operation supports the same resource types as drift detection, I recommend running drift detection after importing resources in a stack.

Importing Existing Resources into a New Stack
In my AWS account, I have an S3 bucket and a DynamoDB table, both with some data inside, and I’d like to manage them using CloudFormation. In the CloudFormation console, I have two new options:

  • I can create a new stack importing existing resources.

  • I can import resources into an existing stack.

In this case, I want to start from scratch, so I create a new stack. The next step is to provide a template with the resources to import.

I upload the following template with two resources to import: a DynamoDB table and an S3 bucket.

AWSTemplateFormatVersion: "2010-09-09"
Description: Import test
Resources:

  ImportedTable:
    Type: AWS::DynamoDB::Table
    DeletionPolicy: Retain
    Properties: 
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions: 
        - AttributeName: id
          AttributeType: S
      KeySchema: 
        - AttributeName: id
          KeyType: HASH

  ImportedBucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain

In this template I am setting DeletionPolicy to Retain for both resources. In this way, if I remove them from the stack, they will not be deleted. This is a good option for resources which contain data you don’t want to delete by mistake, or that you may want to move to a different stack in the future. It is mandatory for imported resources to have a deletion policy set, so you can safely and easily revert the operation, and be protected from mistakenly deleting resources that were imported by someone else.

I now have to provide an identifier to map the logical IDs in the template with the existing resources. In this case, I use the DynamoDB table name and the S3 bucket name. For other resource types, there may be multiple ways to identify them and you can select which property to use in the drop-down menus.

In the final recap, I review changes before applying them. Here I check that I’m targeting the right resources to import with the right identifiers. This is actually a CloudFormation Change Set that will be executed when I import the resources.
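The same import can also be driven from the CLI or SDKs by creating a change set of type IMPORT. Here is a minimal boto3 sketch for the template above; the table and bucket names are placeholders for your own resources, and in practice you would wait for the change set to finish creating (and review it) before executing it:

import boto3

cfn = boto3.client("cloudformation")

with open("import-template.yaml") as f:
    template_body = f.read()

# Map each logical ID in the template to an existing resource identifier
resources_to_import = [
    {
        "ResourceType": "AWS::DynamoDB::Table",
        "LogicalResourceId": "ImportedTable",
        "ResourceIdentifier": {"TableName": "my-existing-table"},    # placeholder
    },
    {
        "ResourceType": "AWS::S3::Bucket",
        "LogicalResourceId": "ImportedBucket",
        "ResourceIdentifier": {"BucketName": "my-existing-bucket"},  # placeholder
    },
]

cfn.create_change_set(
    StackName="imported-stack",
    ChangeSetName="import-existing-resources",
    ChangeSetType="IMPORT",
    TemplateBody=template_body,
    ResourcesToImport=resources_to_import,
)

# Review the change set (console or describe_change_set), then execute it
cfn.execute_change_set(
    ChangeSetName="import-existing-resources",
    StackName="imported-stack",
)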

When importing resources into an existing stack, no changes are allowed to the existing resources of the stack. The import operation will only allow the Change Set action of Import. Changes to parameters are allowed as long as they don’t cause changes to resolved values of properties in existing resources. You can change the template for existing resources to replace hard coded values with a Ref to a resource being imported. For example, you may have a stack with an EC2 instance using an existing IAM role that was created using the console. You can now import the IAM role into the stack and replace in the template the hard coded value used by the EC2 instance with a Ref to the role.

Moving on, each resource has its corresponding import events in the CloudFormation console.

When the import is complete, in the Resources tab, I see that the S3 bucket and the DynamoDB table are now part of the stack.

To be sure the imported resources are in sync with the stack template, I use drift detection.
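Drift detection can be started from the console or programmatically. Here is a small sketch, using the stack name from this walkthrough:

import time

import boto3

cfn = boto3.client("cloudformation")

# Start drift detection and wait for it to complete
detection_id = cfn.detect_stack_drift(StackName="imported-stack")["StackDriftDetectionId"]
while True:
    status = cfn.describe_stack_drift_detection_status(StackDriftDetectionId=detection_id)
    if status["DetectionStatus"] != "DETECTION_IN_PROGRESS":
        break
    time.sleep(5)

# Print the drift status of each resource in the stack
for drift in cfn.describe_stack_resource_drifts(StackName="imported-stack")["StackResourceDrifts"]:
    print(drift["LogicalResourceId"], drift["StackResourceDriftStatus"])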

All stack-level tags, including automatically created tags, are propagated to resources that CloudFormation supports. For example, I can use the AWS CLI to get the tag set associated with the S3 bucket I just imported into my stack. Those tags give me the CloudFormation stack name and ID, and the logical ID of the resource in the stack template:

$ aws s3api get-bucket-tagging --bucket danilop-toimport

{
  "TagSet": [
    {
      "Key": "aws:cloudformation:stack-name",
      "Value": "imported-stack"
    },
    {
      "Key": "aws:cloudformation:stack-id",
      "Value": "arn:aws:cloudformation:eu-west-1:123412341234:stack/imported-stack/..."
    },
    {
      "Key": "aws:cloudformation:logical-id",
      "Value": "ImportedBucket"
    }
  ]
}

Available Now
You can use the new CloudFormation import operation via the console, AWS Command Line Interface (CLI), or AWS SDKs, in the following regions: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Canada (Central), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), EU (Frankfurt), EU (Ireland), EU (London), EU (Paris), and South America (São Paulo).

It is now simpler to manage your infrastructure as code. You can learn more about bringing existing resources into CloudFormation management in the documentation.

Danilo

EU Academics Publish Recommendations to Limit Negative Impact of Article 17 on Users

Post Syndicated from Andy original https://torrentfreak.com/eu-academics-publish-recommendations-to-limit-negative-impact-of-article-17-191113/

Despite some of the most intense opposition seen in recent years, on March 26, 2019, the EU Parliament adopted the Copyright Directive.

The main controversy surrounded Article 17 (previously known as Article 13), which places greater restrictions on user-generated content platforms like YouTube.

Rightsholders, from the music industry in particular, welcomed the new reality. Without official licensing arrangements in place, or strong efforts to obtain licenses combined with best efforts to take down infringing content and keep it down, sites like YouTube (Online Content Sharing Service Providers, or OCSSPs) can potentially be held liable for infringing content.

This uncertainty led many to fear for the future of fair use, with the specter of content upload platforms deploying strict automated filters that err on the side of caution in order to avoid negative legal consequences under the new law.

While the legislation has been passed at the EU level, it still has to be written into Member States’ local law. With that in mind, more than 50 EU Academics have published a set of recommendations that they believe have the potential to limit restrictions on user freedoms as a result of the new legislation.

A key recommendation is that national implementations should “fully explore” legal mechanisms for broad licensing of copyrighted content. The academics are calling for this to ensure that the preventative obligations of OCSSPs are limited in application wherever possible.

The academics hope that broad licensing can avoid situations where, in order to escape liability, OCSSPs would have to prove they have made “best efforts” to ensure works specified by rightsholders are rendered inaccessible, or show that they have “acted expeditiously” to remove content and prevent its reupload following a request from a rightsholder.

“Otherwise, the freedom of EU citizens to participate in democratic online content creation and distribution will be encroached upon and freedom of expression and information in the online environment would be curtailed,” the academics warn.

The academics’ recommendations are focused on ensuring that non-infringing works don’t become collateral damage as OCSSPs scramble to cover their own backs and avoid liability.

For example, the preventative obligations listed above should generally not come into play when content is used for quotation, criticism, or review, or for the purpose of caricature, parody or pastiche. If content is removed or filtered incorrectly, however, Member States must ensure that online content-sharing service providers put in place an “effective and expeditious” complaint and redress system.

The prospect of automatic filtering at the point of upload was a hugely controversial matter before Article 17 passed but the academics believe they have identified ways to ensure that freedom of expression and access to information can be better protected.

“[W]e recommend that where preventive measures [as detailed above] are applied, especially where they lead to the filtering and blocking of uploaded content before it is made available to the public, Member States should, to the extent possible, limit their application to cases of prima facie [upon first impression] copyright infringement,” the academics write.

“In this context, a prima facie copyright infringement means the upload of protected material that is identical or equivalent to the ‘relevant and necessary information’ previously provided by the rightholders to OCSSPs, including information previously considered infringing. The concept of equivalent information should be interpreted strictly.”

The academics say that if content is removed on the basis of prima facie infringement, users are entitled to activate the complaint and redress procedure. If there is no prima facie infringement, content should not be removed until its legal status is determined.

In cases where user-uploaded content does not meet the prima facie standard but matches “relevant and necessary information” (fingerprints etc) supplied by rightsholders, OCSSPs must grant users the ability to declare that content is not infringing due to fair use-type exceptions.

“The means to provide such declaration should be concise, transparent, intelligible, and be presented to the user in an easily accessible form, using clear and plain language (e.g. a standard statement clarifying the status of the uploaded content, such as ‘This is a permissible quotation’ or ‘This is a permissible parody’),” the recommendations read.

If users don’t provide a declaration within a “reasonable” time following upload, the OCSSP (YouTube etc) should be “allowed” to remove the content, with users granted permission to activate the complaint and redress procedure.

Rightsholders who still maintain that content was removed correctly must then justify the deletion, detailing why it is a prima facie case of infringement and not covered by a fair use-type exemption, particularly the one cited by the user.

A human review should then be conducted at the OCSSP, which should not be held liable for infringement under Article 17 until the process is complete and legality determined.

Given that Article 17 has passed, there appears to be limited room to maneuver and there is a long way to go before all Member States write its terms into local law.

However, even if the above safeguarding recommendations are implemented, it’s clear that substantial resources will have to be expended to ensure that everyone’s rights are protected. As a result, platforms lacking YouTube-sized budgets will undoubtedly feel the pinch.

Safeguarding User Freedoms in Implementing Article 17 of the Copyright in the Digital Single Market Directive: Recommendations from European Academics is available here.

 

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

‘Copyright’ Sting Targeting 15-Year-Old Backfires With Arrest Warrants & Record Sales

Post Syndicated from Andy original https://torrentfreak.com/copyright-sting-targeting-15-year-old-backfires-with-arrest-warrant-big-sales-191112/

Krathong float (credit)

Loi Krathong is an annual festival celebrated in Thailand and some neighboring countries during which ‘krathong’ (decorated baskets) are floated on a river.

These beautiful items are often made by locals looking to generate relatively small sums to help support their families and in some cases fund their education. Sadly, there are others who see the creations as an opportunity to generate cash for themselves in an entirely more sinister fashion.

According to local media reports, earlier this month a 15-year-old girl known as ‘Orm’ or ‘Orn’ (we’ll settle on the former) was contacted on Facebook by a stranger who placed an order for 136 krathong floats. The order carried specific instructions for them to be adorned with faces of cartoon characters owned by Japanese company San-X.

When Orm took 30 completed floats to a local mall at the request of a supposed “copyright agent”, she was reportedly arrested by police for ‘copyright infringement’. She was told to pay a fine of 50,000 baht, around US$1,650, a figure that was later negotiated down to 5,000 baht, around US$165, by her grandfather, a former policeman.

“After receiving the order, I made krathong baskets from 8am to 1.30am the next day so that I could fill the order, only to be arrested,” Orm said.

“Normally I do not make any basket with a copyrighted character. This customer stressed they wanted copyrighted characters. After being arrested I cried all night because I have never faced such legal action before.”

The action against the teenager provoked outcry in the community after the chief of a local police station said it had worked with the ‘copyright agent’ on the sting operation, Bangkok Post reported.

However, all was not what it seemed. TAC Consumer PLC, which represents San-X, issued a statement stating that it had not participated in the operation against the teenager and had assigned one of its lawyers to the case. But worse was to come.

After news of the scandal spread, other victims of the scam came forward, saying they too had been arrested and settled for even larger amounts having borrowed the money from family members. They identified the ‘copyright agent’ as the same man who targeted the teenager.

When news reached local TV, a reporter helped to track down the ‘copyright agent’, who was discovered to be a local motorcycle taxi driver called ‘Nan’ whose wife sells meatballs in the area.

Yesterday, as pressure mounted against local police, a commander announced that after 40 similar complaints were filed against the ‘copyright agent’, they would be seeking arrest warrants by the end of the week. While that news will be celebrated in its own right, the knock-on effect of all the publicity is doing wonders for Orm’s work.

After Orm made 360 floats to sell during the Loy Krathong festival, people queued up to buy them. They sold out in an hour, earning her around 8,110 baht in profit, roughly US$267.00. She told local media she was “delighted” by the response, having sold just 30 in previous years.

Half of the money will go towards her school fees and the rest will go to her family to help with household expenses.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Hollywood Praises Australia’s Anti-Piracy Laws, But More Can Be Done

Post Syndicated from Ernesto original https://torrentfreak.com/hollywood-praises-australias-anti-piracy-laws-but-more-can-be-done-191111/

For years on end, entertainment industry insiders have regularly portrayed Australia as a piracy-ridden country.

However, after several legislative updates, the tide appears to have turned. This is the conclusion reached by the Motion Picture Association (MPA) in a recent report.

The industry group, which is largely made up of Hollywood studios, along with the recently added Netflix, continuously monitors Australia’s anti-piracy efforts. In recent years, things have been going in the right direction.

A short summary of its findings was recently reported to the US Government as part of the annual trade barriers consultation.

The MPA’s overview is generally a summary of copyright challenges and shortcomings around the world. However, Australia is one of the few exceptions when it comes to anti-piracy enforcement. In fact, the industry group is rather positive about the progress the country has made.

“Australia has developed excellent tools to fight online piracy, including effective laws allowing for no-fault injunctive relief against ISPs and ‘search engine service providers’,” the MPA writes in its report.

The report points out that in recent years piracy rates have declined significantly Down Under. Pirate site blocking and other measures have helped to boost interest in legal subscription services, including Netflix, it suggests.

The MPA is also positive about recent developments regarding takedown notices. The Australian Competition and Consumer Commission is currently considering the introduction of a mandatory takedown notice scheme, one that would be stricter than the DMCA-style standard which is common today.

“This would include procedures for urgent take downs (extending to pre-release or new-release films and TV shows as well as live entertainment content), as well as ‘stay down’ obligations to ensure that content already identified as infringing does not quickly re-appear,” the MPA notes.

The Hollywood-backed group supports this initiative and adds that companies who breach the new takedown standard should face “meaningful” penalties.

Aside from the positive remarks about Australia, the MPA informs the US Government that there is room for improvement as well. For example, the police could offer more help with piracy-related investigations, something that’s lacking today.

In addition, the MPA is worried about an ongoing Copyright Modernization consultation where further exceptions to copyright are being considered. This includes new definitions of fair dealing or fair use, which are seen as a threat by the industry group.

“This consultation risks undermining the current balance of IP protection in Australia that has fueled the country’s creative industries, and could create significant market uncertainty and effectively weaken Australia’s infrastructure for intellectual property protection,” the MPA states.

Closing out the list is a recommendation for Australia to introduce tougher anti-camcording laws. While fewer illegal recordings are sourced from Australia today, the current penalties for this activity are simply not enough to act as a proper deterrent, the group says.

The last request is far from new. The same demands have appeared in previous reports, as is the case with many of the recommendations throughout the MPA’s report, which are often copied verbatim year after year.

The full overview of the MPA’s trade barrier comments to the US Trade Representative is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

15 Years of AWS Blogging!

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/15-years-of-aws-blogging/

I wrote the first post (Welcome) to this blog exactly 15 years ago today. It is safe to say that I never thought that writing those introductory sentences would lead my career in such a new and ever-challenging dimension. This seems like as good of a time as any to document and share the story of how the blog came to be, share some of my favorite posts, and to talk about the actual mechanics of writing and blogging.

Before the Beginning
Back in 1999 or so, I was part of the Visual Basic team at Microsoft. XML was brand new, and Dave Winer was just starting to talk about RSS. The intersection of VB6, XML, and RSS intrigued me, and I built a little app called Headline Viewer as a side project. I put it up for download, people liked it, and content owners started to send me their RSS feeds for inclusion. The list of feeds took on a life of its own, and people wanted it just as much as they wanted the app. I also started my third personal blog around this time after losing the earlier incarnations in server meltdowns.

With encouragement from Aaron Swartz and others, I put Headline Viewer aside and started Syndic8 in late 2001 to collect, organize, and share RSS feeds. I wrote nearly 90,000 lines of PHP in my personal time, all centered around a very complex MySQL database that included over 50 tables. I learned a lot about hosting, scaling, security, and database management. The site also had an XML-RPC web service interface that supported a very wide range of query and update operations. The feed collection grew to nearly 250,000 over the first couple of years.

I did not know it at the time, but my early experience with XML, RSS, blogging, and web services would turn out to be the skills that set me apart when I applied to work at Amazon. Sometimes, as it turns out, your hobbies and personal interests can end up as career-changing assets & differentiators.

E-Commerce Web Services
In parallel to all of this, I left Microsoft in 2000 and was consulting in the then-new field of web services. At that time, most of the web services in use were nothing more than cute demos: stock quotes, weather forecasts, and currency conversions. Technologists could marvel at a function call that crossed the Internet and back, but investors simply shrugged and moved on.

In mid-2002 I became aware of Amazon’s very first web service (now known as the Product Advertising API). This was, in my eyes, the first useful web service. It did something non-trivial that could not have been done locally, and provided value to both the provider and the consumer. I downloaded the SDK (copies were later made available on the mini-CD shown at right), sent the developers some feedback, and before I knew it I was at Amazon HQ, along with 4 or 5 other early fans of the service, for a day-long special event. Several teams shared their plans with us, and asked for our unvarnished feedback.

At some point during the day, one of the presenters said “We launched our first service, developers found it, and were building & sharing apps within 24 hours or so. We are going to look around the company and see if we can put web service interfaces on other parts of our business.”

This was my light-bulb moment — Amazon.com was going to become accessible to developers! I turned to Sarah Bryar (she had extended the invite to the event) and told her that I wanted to be a part of this. She said that they could make that happen, and a few weeks later (summer of 2002), I was a development manager on the Amazon Associates team, reporting to Larry Hughes. In addition to running a team that produced daily reports for each member of the Associates program, Larry gave me the freedom to “help out” with the nascent web services effort. I wrote sample programs, helped out on the forums, and even contributed to the code base. I went through the usual Amazon interview loop, and had to write some string-handling code on the white board.

Web Services Evangelist
A couple of months into the job, Sarah and Rob Frederick approached me and asked me to speak at a conference because no one else wanted to. I was more than happy to do this, and a few months later Sarah offered me the position of Web Services Evangelist. This was a great match for my skills and I took to it right away, booking time with any developer, company, school, or event that wanted to hear from me!

Later in 2003 I was part of a brainstorming session at Jeff Bezos’ house. Jeff, Andy Jassy, Al Vermeulen, me, and a few others (I should have kept better notes) spent a day coming up with a long list of ideas that evolved into EC2, S3, RDS, and so forth. I am fairly sure that this is the session discussed in How AWS Came to Be, but I am not 100% certain.

Using this list as a starting point, Andy started to write a narrative to define the AWS business. I was fortunate enough to have an office just 2 doors up the hall from him, and spent a lot of time reviewing and commenting on his narrative (read How Jeff Bezos Turned Narrative into Amazon’s Competitive Advantage to learn how we use narratives to define businesses and drive decisions). I also wrote some docs of my own that defined our plans for a developer relations team.

We Need a Blog
As I read through early drafts of Andy’s first narrative, I began to get a sense that we were going to build something complex & substantial.

My developer relations plan included a blog, and I spent a ton of time discussing the specifics in meetings with Andy and Drew Herdener. I remember that it was very hard for me to define precisely what this blog would look like, and how it would work from a content-generation and approval perspective. As is the Amazon way, every answer that I supplied basically begat even more questions from Andy and Drew! We ultimately settled on a few ground rules regarding tone and review, and I was raring to go.

I was lucky enough to be asked to accompany Jeff Bezos to the second Foo Camp as his technical advisor. Among many others, I met Ben and Mena Trott of Six Apart, and they gave me a coupon for 1000 free days of access to TypePad, their blogging tool.

We Have a Blog
Armed with that coupon, I returned to Seattle, created the AWS Blog (later renamed the AWS News Blog), and wrote the first two posts (Welcome and Browse Node API) later that year. Little did I know that those first couple of posts would change the course of my career!

I struggled a bit with “voice” in the early days, and could not decide if I was writing as the company, the group, the service, or simply as me. After some experimentation, I found that a personal, first-person style worked best and that’s what I settled on.

In the early days, we did not have much of a process or a blog team. Interesting topics found their way into my inbox, and I simply wrote about them as I saw fit. I had an incredible amount of freedom to pick and choose topics, and words, and I did my best to be a strong, accurate communicator while steering clear of controversies that would simply cause more work for my colleagues in Amazon PR.

Launching AWS
Andy started building teams and I began to get ready for the first launches. We could have started with a dramatic flourish, proclaiming that we were about to change the world with the introduction of a broad lineup of cloud services. But we don’t work that way, and are happy to communicate in a factual, step-by-step fashion. It was definitely somewhat disconcerting to see that Business Week characterized our early efforts as Jeff Bezos’ Risky Bet, but we accept that our early efforts can sometimes be underappreciated or even misunderstood.

Here are some of the posts that I wrote for the earliest AWS services and features:

SQS – I somehow neglected to write about the first beta of Amazon Simple Queue Service (SQS), and the first mention is in a post called Queue Scratchpad. This post references AWS Zone, a site built by long-time Amazonian Elena Dykhno before she even joined the company! I did manage to write a post for Simple Queue Service Beta 2. At this point I am sure that many people wondered why their bookstore was trying to sell message queues, but we didn’t see the need to over-explain ourselves or to telegraph our plans.

S3 – I wrote my first Amazon S3 post while running to catch a plane, but I did manage to cover all of the basics: a service overview, definitions of major terms, pricing, and an invitation for developers to create cool applications!

EC2 – EC2 had been “just about to launch” for quite some time, and I knew that the launch would be a big deal. I had already teased the topic of scalable on-demand web services in Sometimes You Need Just a Little…, and I was ever so ready to actually write about EC2. Of course, our long-scheduled family vacation was set to coincide with the launch, and I wrote part of the Amazon EC2 Beta post while sitting poolside in Cabo San Lucas, Mexico! That post was just about perfect, but I probably should have been clear that “AMI” should be pronounced, and not spelled out, as some pundits claim.

EBS – Initially, all of the storage on EC2 instances was ephemeral, and would be lost when the instance was shut down. I think it is safe to say that the launch of EBS (Amazon EBS (Elastic Block Store) – Bring Us Your Data) greatly simplified the use of EC2.

These are just a few of my early posts, but they definitely laid the foundation for what has followed. I still take great delight in reading those posts, thinking back to the early days of the cloud.

AWS Blogging Today
Over the years, the fraction of my time that is allocated to blogging has grown, and now stands at about 80%. This leaves me with time to do a little bit of public speaking, meet with customers, and to do what I can to keep up with this amazing and ever-growing field. I thoroughly enjoy the opportunities that I have to work with the AWS service teams that work so hard to listen to our customers and do their best to respond with services that meet their needs.

We now have a strong team and an equally strong production process for new blog posts. Teams request a post by creating a ticket, attaching their PRFAQ (Press Release + FAQ, another type of Amazon document) and giving the bloggers early internal access to their service. We review the materials, ask hard questions, use the service, and draft our post. We share the drafts internally, read and respond to feedback, and eagerly await the go-ahead to publish.

Planning and Writing a Post
With 3100 posts under my belt (and more on the way), here is what I focus on when planning and writing a post:

Learn & Be Curious – This is an Amazon Leadership Principle. Writing is easy once I understand what I want to say. I study each PRFAQ, ask hard questions, and am never afraid to admit that I don’t grok some seemingly obvious point. Time after time I am seemingly at the absolute limit of what I can understand and absorb, but that never stops me from trying.

Accuracy – I never shade the truth, and I never use weasel words that could be interpreted in more than one way to give myself an out. The Internet is the ultimate fact-checking vehicle, and I don’t want to be wrong. If I am, I am more than happy to admit it, and to fix the issue.

Readability – I have plenty of words in my vocabulary, but I don’t feel the need to use all of them. I would rather use the most appropriate word than the longest and most obscure one. I am also cautious with acronyms and enterprise jargon, and try hard to keep my terabytes and tebibytes (ugh) straight.

Frugality – This is also an Amazon Leadership Principle, and I use it in an interesting way. I know that you are busy, and that you don’t need extra words or flowery language. So I try hard (this post notwithstanding) to keep most of my posts at 700 to 800 words. I’d rather you spend the time using the service and doing something useful.

Some Personal Thoughts
Before I wrap up, I have a couple of reflections on this incredible journey…

Writing – Although I love to write, I was definitely not a natural-born writer. In fact, my high school English teacher gave me the lowest possible passing grade and told me that my future would be better if I could only write better. I stopped trying to grasp formal English, and instead started to observe how genuine writers used words & punctuation. That (and decades of practice) made all the difference.

Career Paths – Blogging and evangelism have turned out to be a great match for my skills and interests, but I did not figure this out until I was on the far side of 40. It is perfectly OK to be 20-something, 30-something, or even 40-something before you finally figure out who you are and what you like to do. Keep that in mind, and stay open and flexible to new avenues and new opportunities throughout your career.

Special Thanks – Over the years I have received tons of good advice and 100% support from many great managers while I slowly grew into a full-time blogger: Andy Jassy, Prashant Sridharan, Steve Rabuchin, and Ariel Kelman. I truly appreciate the freedom that they have given me to develop my authorial voice and my blogging skills over the years! Ana Visneski and Robin Park have done incredible work to build a blogging team that supports me and the other bloggers.

Thanks for Reading
And with that, I would like to thank you, dear reader, for your time, attention, and very kind words over the past 15 years. It has been the privilege of a lifetime to be able to share so much interesting technology with you!

Jeff;

 

Spammers Abuse Medium.com to Spread ‘Pirate’ Scams

Post Syndicated from Ernesto original https://torrentfreak.com/spammers-use-medium-to-spread-pirate-scams-191110/

Founded in 2012 by former Twitter CEO Evan Williams, online publishing platform Medium.com swiftly became the go-to place for many authors.

The site has featured works of renowned writers, politicians, high profile activists, major companies, as well as average Joes.

Today, Medium has millions of daily visitors, making it one of the 100 most visited websites in the world. The majority of these are drawn to the compelling and informative writings, but the site has proven a draw to scammy ‘pirates’ as well.

Every week, hundreds, if not thousands of articles appear that promise people the latest pirated movies and TV-shows. Whether it’s a high-definition copy of Joker, Terminator: Dark Fate, or Maleficent: Mistress of Evil, it’s available. Supposedly.

Here’s an example of a Joker movie that was promoted this week, but there are many more.

People who click on the links are often disappointed though. They typically point to a page where people can start a stream instantly, but after a generic intro, they are required to sign up for a “free account” that requires a credit card for ‘validation’ purposes.

Needless to say, this isn’t a good idea. Aside from the obvious copyright issues, these services don’t deliver what they promise. After all, many of the pirated films they advertise are not available in high-quality formats yet.

The goal of this strategy is to have these links show up high in search results. A site like Medium has a good reputation in search engines, and as a result, the articles promoting these scams are more visible in search results than the average pirate site.

This appears to be an effective strategy, especially since Google has started to push down results from known pirate platforms.

This practice is not new either. Many other reputable sites, including Facebook, Google Maps, Change.org, Steam, and others, have been abused in a similar fashion in the past.

TorrentFreak reached out to Medium and the company informed us that it’s a free and open platform that allows anyone to share stories and ideas. However, it takes swift action after any alleged infringements are reported.

“We fully comply with the DMCA and all other relevant copyright laws,” a Medium spokesperson said, pointing to its DMCA policy.

“When we discover bad actors, both through manual and automatic detection, they are assessed in terms of our policies and rules against those behaviors, and removed from Medium.”

These types of scams aren’t a major problem for copyright holders, as it will mostly result in disappointed and frustrated pirates. However, prospective pirates who fall for them may eventually be charged for something they didn’t sign up for.

For Medium this scam practice could lead to unexpected problems as well. Google received hundreds of takedown notices for Medium.com links over the past several weeks which, in theory, makes it a candidate for a downranking penalty. Unless Google reviews sites manually before applying a penalty, of course.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

ACE Hits Two More Pirate Streaming Sites, Seizes More Openload Domains

Post Syndicated from Andy original https://torrentfreak.com/ace-hits-two-more-pirate-streaming-sites-seizes-more-openload-domains-191110/

After a standing start just over two years ago, the Alliance for Creativity and Entertainment quickly became the most feared anti-piracy group on the planet.

Made up of around three dozen entertainment companies, including the major Hollywood studios, Netflix and Amazon, the group now targets piracy on a global scale, sharing resources and costs to tackle infringement wherever it might be.

Last week the group took down Openload and Streamango, a dramatic and significant action by any standard. However, as documented here on several occasions (1,2,3), the anti-piracy group also shuts down smaller players with little to no fanfare. Today we can report that another two sites have joined the club.

The first, IPTVBox.plus, appears to have been a seller/reseller of IPTV services targeted at the Brazilian market. Its packages started off pretty cheaply, less than US$4.50 for around 1000 standard definition channels.

The ‘master’ package, however, offered an impressive 13,000 mixed SD, HD and ‘FullHD’ channels for around US$9.70 per month, almost double the price but still cheap by most standards.

IPTVBox.plus… gone

Thanks to the intervention of ACE, however, the site’s domain is now in the hands of the MPA. A notice on the site informs visitors that the platform bit the dust for infringing copyright. The familiar timer then runs down to zero and diverts disappointed users to the ACE homepage for a lesson in copyright.

Finally, a dedicated streaming portal has also handed over its domain to ACE. PlanetaTVonlineHD.com first appeared online in 2015, streaming popular TV shows such as Game of Thrones, The Walking Dead, and Prison Break to a fairly sizeable audience.

But now, without any official announcement from ACE, the show is clearly over for the TV show streaming platform.

Like so many other similar sites and services, its domain now redirects to the ACE anti-piracy portal. What happened between the parties may never be known but it seems fairly obvious that the group’s influence convinced the site’s operator that continuing just wasn’t worth the trouble.

Finally, over the past week ACE has been taking control of more Openload, Streamango, and StreamCherry domains. We previously reported that Openload.co, oload.cc, oload.club, oload.download, openload.pw and oloadcdn.net had been seized, but more can be added to the list. They are:

StreamCherry.com, Oload.stream, fruithosted.net, oload.win, oload.life, oload.services, oload.xyz, oload.space, oload.biz, oload.vip, oload.tv, oload.monster, oload.best, oload.press, oload.live, oload.site, oload.network, oload.website, oload.online, olpair.com, and openload.status.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Cox and Music Companies Battle Over Piracy Evidence Ahead of Trial

Post Syndicated from Ernesto original https://torrentfreak.com/cox-and-music-companies-battle-over-piracy-evidence-ahead-of-trial-191109/

Regular Internet providers are being put under increasing pressure for not doing enough to curb copyright infringement.

Music rights company BMG got the ball rolling a few years ago when it won its piracy liability lawsuit against Cox Communications.

The ISP was ordered to pay $25 million in damages and another $8 million in legal fees. Hoping to escape this judgment, the company filed an appeal, but the case was eventually settled with Cox agreeing to pay an undisclosed but substantial settlement amount.

The landmark case signaled the start of many similar lawsuits against a variety of ISPs, several of which are still ongoing. In fact, just days after the settlement was announced, Cox was sued again, this time by a group of RIAA-affiliated music companies.

In simple terms, the crux of the case is whether Cox did enough to stop pirating subscribers. While the ISP did have a policy to disconnect repeat infringers, the music companies argue that this wasn’t sufficient.

Over the past several months, both parties have conducted discovery and they are currently gearing up for a jury trial which is scheduled for December.

Most recently, both parties have presented their motions in limine, requesting the court to exclude certain testimony from being presented to the jury. This is typically material they see as irrelevant, misleading, or confusing.

One of the music companies’ motions focuses on a document (DX 74) Cox wants to present which indicates that the ISP’s own graduated response system worked pretty well.

Apparently, internal Cox research showed that 96% of subscribers stop receiving notices after the 5th warning. This was concluded in 2010 and resulted in the ISP’s belief that its “graduated response” system was effective.

The number was also brought up to the plaintiffs, as it was mentioned during the Copyright Alert System negotiations. Cox says that it chose not to join this voluntary piracy notice agreement because it already had a functional anti-piracy system in place.

The music companies don’t want this evidence to be shown to the jury. In a reply to Cox’s objections, they argue that the facts and figures in the document are a confusing mess of misleading calculations that lack data to support them.

The reply, which also rebuts other issues, is aggressively worded and redacts the 96% figure at the center of the dispute.

“The mere utterance of the so-called ‘study’ and its misleading and unsupported conclusion will lend it an air of credibility in the jury’s mind. The proverbial bell cannot be un-rung. The only adequate solution is exclusion,” the music companies write.

Cox has also submitted a variety of motions in limine. Among other things, the ISP doesn’t want the plaintiffs to present the millions of infringement notices tracking company MarkMonitor sent to Cox on behalf of other rightsholders.

The music companies disagree, however, arguing that the jury is allowed to know that potential copyright infringements are not limited to their own complaints. The other notices are also relevant to determine crucial issues such as liability, willfulness, and statutory damages, they add.

According to Cox, however, these third-party infringement notices are irrelevant to the present case and don’t prove anything.

“Plaintiffs’ attempt to litigate this case with evidence from an unrelated case concerning acts of infringement that are not at issue is inappropriate, improper, and prejudicial. Plaintiffs’ evidence of third-party infringement allegations should be excluded from trial.”

The docket is littered with back and forths on issues one party wants to exclude while being considered vital evidence by the other. This process is generally the last major clash before the trial starts.

The court has yet to rule on the various motions. When that is done the case will move forward. If all goes according to the current schedule, the verdict will be announced in a few weeks.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

RIAA Delists YouTube Rippers From Google Using Rare Anti-Circumvention Notices

Post Syndicated from Andy original https://torrentfreak.com/riaa-delists-youtube-rippers-from-google-using-rare-anti-circumvention-notices-191108/

While music piracy has reduced in recent years due to the popularity of platforms such as Spotify, the major labels remain highly concerned over so-called stream-ripping services.

These sites allow users to enter a YouTube URL, for example, and then download audio from the corresponding video, mostly in MP3 format. This means that users can download music and store it on their own machines, negating the need to revisit YouTube for the same content. This, the major labels say, deprives content creators of streaming revenue.

Tackling this issue has become one of the industry’s highest anti-piracy priorities. Previously, YouTube-MP3 – the largest ripping site at the time – was shut down following legal action by the major labels. Since then, lawsuits have been filed against other platforms but the battle is far from over and recently a new strategy appears to have been deployed.

A pair of DMCA notices appeared on the Lumen Database late October, having been filed there by Google. The sender of both notices is listed as the RIAA, acting on behalf of its members including Universal Music Group, Sony Music Entertainment, and Warner Music Group.

They are worded slightly differently but each targets the homepages of five major YouTube-ripping sites – 2conv.com, flvto.biz, y2mate.com, yout.com, and youtubeconverter.io. Both contain the following key claim:

“To our knowledge, the URLs provide access to a service (and/or software) that circumvents YouTube’s rolling cipher, a technical protection measure, that protects our members’ works on YouTube from unauthorized copying/downloading,” the notices read.

Unlike regular DMCA takedown notices filed with Google, these notices do not appear in Google’s Transparency report. However, Google has acted on them by delisting the homepages of all five platforms from its search results. Other URLs for the platforms still appear, but their homepages are all gone.

The notices are listed on the Lumen Database in the anti-circumvention section, meaning that the RIAA-labeled complaints demand action from Google under the anti-circumvention provisions of the DMCA, rather than demanding the takedown of URLs based on the claim they carry infringing music titles.

The ‘technical measures’ allegedly being circumvented (such as the “rolling cypher” referenced in the complaints) are those put in place by YouTube, which in turn protect the copyrighted content of the labels.

TorrentFreak contacted the RIAA yesterday, requesting comment and seeking additional information on the basis for the notices. Unfortunately, the industry group declined to make any further comment on any aspect of the complaints.

Nevertheless, the RIAA and its members are no strangers to the claim that by circumventing YouTube’s ‘technological measures’, so-called ‘ripping’ sites infringe their rights too. Two of the sites targeted in the recent notices – 2conv.com and flvto.biz – were sued by the labels in 2018. The original complaint contains the following text:

From the complaint

That circumvention (at least in respect of the labels’ works when users select them for download) may also amount to an infringement of the labels’ rights seems to be supported by comments made in the Disney vs VidAngel case.

An opinion from the Court of Appeals for the Ninth Circuit stated that “[n]o person shall circumvent a technological measure that effectively controls access to a [copyrighted] work. Circumvention means ‘to decrypt an encrypted work … without the authority of the copyright owner’.”

Nevertheless, it was previously argued by the EFF that stream-ripping sites are not by definition illegal since on top of the usual fair use exemptions, some creators who upload their content to online platforms grant permission for people to freely download and modify their work.

“There exists a vast and growing volume of online video that is licensed for free downloading and modification, or contains audio tracks that are not subject to copyright,” the EFF stresses.

“Moreover, many audio extractions qualify as non-infringing fair uses under copyright. Providing a service that is capable of extracting audio tracks for these lawful purposes is itself lawful, even if some users infringe.”

The anti-circumvention notices detailed above are not only relatively rare but also have an additional interesting property – they are harder to dispute than regular DMCA takedown notices.

As detailed here last year, Google told the target of a similar complaint requesting URL delisting that “There is no formal counter notification process available under US law for circumvention, so we have not reinstated these URLs.”

The pair of DMCA anti-circumvention notices can be found here 1,2 (pdf)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Cross-Account Cross-Region Dashboards with Amazon CloudWatch

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/cross-account-cross-region-dashboards-with-amazon-cloudwatch/

Best practices for AWS cloud deployments include the use of multiple accounts and/or multiple regions. Multiple accounts provide a security and billing boundary that isolates resources and reduces the impact of issues. Multiple regions ensure a high degree of isolation, low latency for end users, and data resiliency of applications. These best practices can come with monitoring and troubleshooting complications.

Centralized operations teams, DevOps engineers, and service owners need to monitor, troubleshoot, and analyze applications running in multiple regions and in many accounts. If an alarm is received, an on-call engineer likely needs to log in to a dashboard to diagnose the issue and might also need to log in to other accounts to view additional dashboards for multiple application components or dependencies. Service owners need visibility of application resources, shared resources, or cross-application dependencies that can impact service availability. Using multiple accounts and/or multiple regions can make it challenging to correlate between components for root cause analysis and increase the time to resolution.

Announced today, Amazon CloudWatch cross-account cross-region dashboards enable customers to create high-level operational dashboards and utilize one-click drill-downs into more specific dashboards in different accounts, without having to log in and out of different accounts or switch regions. The ability to visualize, aggregate, and summarize performance and operational data across accounts and regions helps reduce friction and thus reduces time to resolution. Cross-account cross-region functionality can also be used purely for navigation, without building dashboards, if, for example, I’m only interested in viewing alarms, resources, or metrics in other accounts and regions.

Amazon CloudWatch Cross-Account Cross-Region Dashboards Account Setup
Getting started with cross-account cross-region dashboards is easy, and I also have the choice of integrating with AWS Organizations if I wish. By using Organizations to manage and govern multiple AWS accounts, I can use the CloudWatch console to navigate between Amazon CloudWatch dashboards, metrics, and alarms in any account in my organization without logging in to those accounts, as I’ll show in this post. I can also of course just set up cross-region dashboards for a single account. In this post I’ll be making use of the integration with Organizations.

To support this blog post, I’ve already created an organization and invited, using the Organizations console, several of my other accounts to join. As noted, using Organizations makes it easy for me to select accounts later when I’m configuring my dashboards. I could also choose to not use Organizations and pre-populate a custom account selector, so that I don’t need to remember accounts, or enter the account IDs manually when I need them, as I build my dashboard. You can read more on how to set up an organization in the AWS Organizations User Guide. With my organization set up I’m ready to start configuring the accounts.

My first task is to identify and configure the account in which I will create a dashboard – this is my monitoring account (and I can have more than one). Secondly, I need to identify the accounts (known as member accounts in Organizations) that I want to monitor – these accounts will be configured to share data with my monitoring account. My monitoring account requires a Service Linked Role (SLR) to permit CloudWatch to assume a role in each member account. The console will automatically create this role when I enable the cross-account cross-region option. To set up each member account I need to enable data sharing, from within the account, with the monitoring account(s).

Starting with my monitoring account, from the CloudWatch console home, I select Settings in the navigation panel to the left. Cross-Account Cross-Region is shown at the top of the page and I click Configure to get started.


This takes me to a settings screen that I’ll also use in my member accounts to enable data sharing. For now, in my monitoring account, I want to click the Edit option to view my cross-account cross-region options:


The final step for my monitoring account is to enable the AWS Organization account selector option. This requires an additional role to be deployed to the organization’s master account, permitting my monitoring account to access the list of accounts in the organization. The console will guide me through this process for the master account.


This concludes set up for my monitoring account and I can now switch focus to my member accounts and enable data sharing. To do this, I log out of my monitoring account and for each member account, log in and navigate to the CloudWatch console and again click Settings before clicking Configure under Cross-Account Cross-Region, as shown earlier. This time I click Share data, enter the IDs of the monitoring account(s) I want to share data with and set the scope of the sharing (read-only access to my CloudWatch data or full read-only access to my account), and then launch a CloudFormation stack with a predefined template to complete the process. Note that I can also elect to share my data with all accounts in the organization. How to do this is detailed in the documentation.


That completes configuration of both my monitoring account and the member accounts that my monitoring account will be able to access to obtain CloudWatch data for my resources. I can now proceed to create one or more dashboards in my monitoring account.

Configuring Cross-Account Cross-Region Dashboards
With account configuration complete it’s time to create a dashboard! In my member accounts I am running several EC2 instances, in different regions. One member account has one Windows and one Linux instance running in US West (Oregon). My second member account is running three Windows instances in an AWS Auto Scaling group in US East (Ohio). I’d like to create a dashboard giving me insight into CPU and network utilization for all these instances across both accounts and both regions.

To get started I log into the AWS console with my monitoring account and navigate to the CloudWatch console home, click Dashboards, then Create dashboard. Note the new account ID and region fields at the top of the page – now that cross-account cross-region access has been configured I can also perform ad-hoc inspection across accounts and/or regions without constructing a dashboard.
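
The console handles the account and region switching for me; if I want to do the same kind of ad-hoc check from a script, one approach (sketched below, and assuming the sharing role from the earlier sketch exists in the member account) is to assume that role from my monitoring account and query CloudWatch in the member account's region directly. The account ID, role name, and instance ID are placeholders.

import boto3
from datetime import datetime, timedelta

MEMBER_ACCOUNT_ID = "222222222222"                # placeholder member account
ROLE_NAME = "CloudWatch-CrossAccountSharingRole"  # assumed sharing role name

# Assume the sharing role in the member account from the monitoring account.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn=f"arn:aws:iam::{MEMBER_ACCOUNT_ID}:role/{ROLE_NAME}",
    RoleSessionName="adhoc-cloudwatch-check",
)["Credentials"]

# Query CPU utilization for one instance in the member account's region.
cw = boto3.client(
    "cloudwatch",
    region_name="us-west-2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

stats = cw.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(hours=3),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 2))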


I first give the dashboard a name – I choose Compute – and then select Add widget to add my first set of metrics, for CPU utilization. I choose a Line widget and click Configure. This takes me to the Add metric graph dialog, where I can select the account and region to pull metrics from into my dashboard.


With the account and region selected, I can proceed to select the relevant metrics for my instances, adding the instances from my first member account. Switching account and region, I repeat the process for the instances in my second member account. I then add another widget, this time a Stacked area, for inbound network traffic, again selecting the instances of interest in each of my accounts and regions. Finally, I click Save dashboard. The end result is a dashboard showing CPU utilization and network traffic for my instances and the Auto Scaling group across both accounts and regions (note the xa indicator in the top right of each widget, denoting that it represents data from multiple accounts and regions).


Hovering over a particular instance triggers a fly-out with additional data, including a deep link that opens the CloudWatch home page in the account and region of the metric.
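
Dashboards like this aren't limited to the console; the PutDashboard API accepts the same dashboard body JSON. The sketch below (boto3, run with monitoring-account credentials) builds a cut-down version of my Compute dashboard. The account IDs, instance IDs, and Auto Scaling group name are placeholders, and the per-metric accountId and region options reflect my understanding of the dashboard body syntax for cross-account cross-region widgets, so check the dashboard body reference before relying on them.

import json
import boto3

cw = boto3.client("cloudwatch")  # monitoring account credentials

ACCOUNT_A = "222222222222"  # placeholder: member account in US West (Oregon)
ACCOUNT_B = "333333333333"  # placeholder: member account in US East (Ohio)

dashboard_body = {
    "widgets": [
        {
            "type": "metric",
            "x": 0, "y": 0, "width": 12, "height": 6,
            "properties": {
                "title": "CPU utilization",
                "view": "timeSeries",
                "region": "us-west-2",
                "metrics": [
                    ["AWS/EC2", "CPUUtilization", "InstanceId",
                     "i-0aaaaaaaaaaaaaaa1", {"accountId": ACCOUNT_A}],
                    ["AWS/EC2", "CPUUtilization", "InstanceId",
                     "i-0bbbbbbbbbbbbbbb2", {"accountId": ACCOUNT_A}],
                    ["AWS/EC2", "CPUUtilization", "AutoScalingGroupName",
                     "my-asg", {"accountId": ACCOUNT_B, "region": "us-east-2"}],
                ],
            },
        },
        {
            "type": "metric",
            "x": 0, "y": 6, "width": 12, "height": 6,
            "properties": {
                "title": "Network in",
                "view": "timeSeries",
                "stacked": True,
                "region": "us-west-2",
                "metrics": [
                    ["AWS/EC2", "NetworkIn", "InstanceId",
                     "i-0aaaaaaaaaaaaaaa1", {"accountId": ACCOUNT_A}],
                    ["AWS/EC2", "NetworkIn", "AutoScalingGroupName",
                     "my-asg", {"accountId": ACCOUNT_B, "region": "us-east-2"}],
                ],
            },
        },
    ],
}

cw.put_dashboard(
    DashboardName="Compute",
    DashboardBody=json.dumps(dashboard_body),
)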

Availability
Amazon CloudWatch cross-account cross-region dashboards are available for use today in all commercial AWS regions and you can take advantage of the integration with AWS Organizations in those regions where Organizations is available.

— Steve

Tech Companies Warn U.S. Against Harmful Copyright Laws Worldwide

Post Syndicated from Ernesto original https://torrentfreak.com/tech-companies-warn-u-s-against-harmful-copyright-laws-worldwide-191109/

In recent years many countries around the world have tightened their copyright laws to curb the threat of online piracy.

These new regulations aim to help copyright holders, often by creating new obligations and restrictions for Internet service providers that host, link to, or just pass on infringing material.

Rightsholders are happy with these developments, but many Silicon Valley giants and other tech companies see the new laws as threats. This was made clear again this week by the Computer & Communications Industry Association (CCIA) and the Internet Association.

Both groups submitted stark warnings to the US Trade Representative (USTR). The submissions were sent in response to a request for comments in preparation for the Government's yearly report on foreign trade barriers.

The CCIA, which includes prominent members such as Amazon, Cloudflare, Facebook, and Google, lists a wide variety of threats, several of which are copyright-related.

One of the main problems is the increased copyright liability for online intermediaries. In the US, online services have strong safe harbor protections that prevent them from being held liable for users’ infringements, but in other countries, this is no longer the case, CCIA warns.

“Countries are increasingly using outdated Internet service liability laws that impose substantial penalties on intermediaries that have had no role in the development of objectionable content. These practices deter investment and market entry, impeding legitimate online services,” CCIA writes.

These countries include France, Germany, India, Italy, and Vietnam. In Australia, for example, several US platforms are excluded from liability protections, which goes against the U.S.-Australia Free Trade Agreement, CCIA notes.

Another major point of concern is the new EU Copyright Directive, which passed earlier this year. While individual member states have yet to implement it, it’s seen as a looming threat for US companies and users alike.

“[T]he recent EU Copyright Directive poses an immediate threat to Internet services and the obligations set out in the final text depart significantly from global norms. Laws made pursuant to the Directive will deter Internet service exports into the EU market due to significant costs of compliance,” CCIA writes.

“Despite claims from EU officials, lawful user activities will be severely restricted. EU officials are claiming that the new requirements would not affect lawful user activity such as sharing memes, alluding to the exceptions and limitations on quotation, criticism, review, and parody outlined in the text.”

The Internet Association also warns against the EU Copyright Directive in its submission. According to the group, which represents tech companies including Google, Reddit, Twitter, as well as Microsoft and Spotify, Europe’s plans are out of sync with US copyright law.

“The EU’s Copyright Directive directly conflicts with U.S. law and requires a broad range of U.S. consumer and enterprise firms to install filtering technologies, pay European organizations for activities that are entirely lawful under the U.S. copyright framework, and face direct liability for third-party content,” the Internet Association writes.

Aside from the EU plans, other countries such as Australia, Brazil, Colombia, India, and Ukraine are also proposing new “onerous” copyright liability proposals for Internet services. In many cases, these plans conflict with promises that were made under U.S. free trade agreements, the Internet Association writes.

“If the U.S. does not stand up for the U.S. copyright framework abroad, then U.S. innovators and exporters will suffer, and other countries will increasingly misuse copyright to limit market entry,” the group warns.

Both the CCIA and the Internet Association urge the US Government to push back against these developments. They advise promoting strong and balanced copyright legislation that doesn't put US companies at risk for following US law.

While it makes sense that the US would back its own laws and policies abroad, the comments from both groups come at a time when changes to intermediary liability are on the agenda of US lawmakers as well.

Copyright holders see these foreign developments as inspiration, as they want increased liability for intermediaries. As such, MPAA recently asked lawmakers not to include current safe harbor language in future trade agreements.

This is also the advice of the House Judiciary Committee. While the committee isn't taking a position on a future direction just yet, it prefers to await the outcome of current developments before porting existing US liability exemptions into international deals.

The CCIA’s submission to the USTR is available here (pdf) and the Internet Association’s submission can be found here (pdf).
