Tag Archives: ip address

Court Orders Spanish ISPs to Block Pirate Sites For Hollywood

Post Syndicated from Andy original https://torrentfreak.com/court-orders-spanish-isps-to-block-pirate-sites-for-hollywood-180216/

Determined to reduce levels of piracy globally, Hollywood has become one of the main proponents of site-blocking on the planet. To date there have been multiple lawsuits in far-flung jurisdictions, with Europe one of the primary targets.

Following complaints from Disney, 20th Century Fox, Paramount, Sony, Universal and Warner, Spain has become one of the latest targets. According to the studios a pair of sites – HDFull.tv and Repelis.tv – infringe their copyrights on a grand scale and need to be slowed down by preventing users from accessing them.

HDFull is a platform that provides movies and TV shows in both Spanish and English. Almost 60% of its traffic comes from Spain and, after a huge surge in visitors last July, it’s now the 337th most popular site in the country according to Alexa. Visitors from Mexico, Argentina, the United States and Chile make up the rest of its audience.

Repelis.tv is a similar streaming portal specializing in movies, mainly in Spanish. A third of the site’s visitors hail from Mexico with the remainder coming from Argentina, Colombia, Spain and Chile. In common with HDFull, Repelis has been building its visitor numbers quickly since 2017.

The studios demanding more blocks

With a ruling in hand from the European Court of Justice which determined that sites can be blocked on copyright infringement grounds, the studios asked the courts to issue an injunction against several local ISPs including Telefónica, Vodafone, Orange and Xfera. In an order handed down this week, Barcelona Commercial Court No. 6 sided with the studios and ordered the ISPs to begin blocking the sites.

“They damage the legitimate rights of those who own the films and series, which these pages illegally display and with which they profit illegally through the advertising revenues they generate,” a statement from the Spanish Federation of Cinematographic Distributors (FEDECINE) reads.

FEDECINE General director Estela Artacho said that changes in local law have helped to provide the studios with a new way to protect audiovisual content released in Spain.

“Thanks to the latest reform of the Civil Procedure Law, we have in this jurisdiction a new way to exercise different possibilities to protect our commercial film offering,” Artacho said.

“Those of us who are part of this industry work to make culture accessible and offer the best cinematographic experience in the best possible conditions, guaranteeing the continuity of the sector.”

The development was also welcomed by Stan McCoy, president of the Motion Picture Association’s EMEA division, which represents the plaintiffs in the case.

“We have just taken a welcome step which we consider crucial to face the problem of piracy in Spain,” McCoy said.

“These actions are necessary to maintain the sustainability of the creative community both in Spain and throughout Europe. We want to ensure that consumers enjoy the entertainment offer in a safe and secure environment.”

After gaining experience from blockades and subsequent circumvention in other regions, the studios seem better prepared to tackle fallout in Spain. In addition to blocking primary domains, the ruling handed down by the court this week also obliges ISPs to block any other domain, subdomain or IP address whose purpose is to facilitate access to the blocked platforms.

News of Spain’s ‘pirate’ blocks comes on the heels of fresh developments in Germany, where this week a court ordered ISP Vodafone to block KinoX, one of the country’s most popular streaming portals.


Backblaze and GDPR

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/gdpr-compliance/


Over the next few months the noise over GDPR will finally reach a crescendo. For the uninitiated, “GDPR” stands for “General Data Protection Regulation” and it goes into effect on May 25th of this year. GDPR is designed to protect how personal information of EU (European Union) citizens is collected, stored, and shared. The regulation should also improve transparency as to how personal information is managed by a business or organization.

Backblaze fully expects to be GDPR compliant when May 25th rolls around and we thought we’d share our experience along the way. We’ll start with this post as an introduction to GDPR. In future posts, we’ll dive into some of the details of the process we went through in meeting the GDPR objectives.

GDPR: A Two Way Street

To ensure we are GDPR compliant, Backblaze has assembled a dedicated internal team, engaged outside counsel in the United Kingdom, and consulted with other tech companies on best practices. While it is a sizable effort on our part, we view this as a waypoint in our ongoing effort to secure and protect our customers’ data and to be transparent in how we work as a company.

In addition to the effort we are putting into complying with the regulation, we think it is important to underscore and promote the idea that data privacy and security is a two-way street. We can spend millions of dollars on protecting the security of our systems, but we can’t stop a bad actor from finding and using your account credentials left on a note stuck to your monitor. We can give our customers tools like two factor authentication and private encryption keys, but it is the partnership with our customers that is the most powerful protection. The same thing goes for your digital privacy — we’ll do our best to protect your information, but we will need your help to do so.

Why GDPR is Important

At the center of GDPR is the protection of Personally Identifiable Information, or “PII.” PII is information that can be used, either on its own or in combination with other information, to identify a specific person. This includes obvious data such as name, address, and phone number; less obvious data such as email address and IP address; and other data such as credit card numbers and unique identifiers that can be traced back to a person.

How Will GDPR Affect You as an Individual

If you are a citizen in the EU, GDPR is designed to protect your private information from being used or shared without your permission. Technically, this only applies when your data is collected, processed, stored or shared outside of the EU, but it’s a good practice to hold all of your service providers to the same standard. For example, when you are deciding to sign up with a service, you should be able to quickly access and understand what personal information is being collected, why it is being collected, and what the business can do with that information. These terms are typically found in “Terms and Conditions” and “Privacy Policy” documents, or perhaps in a written contract you signed before starting to use a given service or product.

Even if you are not a citizen of the EU, GDPR will still affect you. Why? Because nearly every company you deal with, especially online, will have customers that live in the EU. It makes little sense for Backblaze, or any other service provider or vendor, to create a separate set of rules for just EU citizens. In practice, protection of private information should be more accountable and transparent with GDPR.

How Will GDPR Affect You as a Backblaze Customer

Over the coming months Backblaze customers will see changes to our current “Terms and Conditions,” “Privacy Policy,” and to our Backblaze services. While the changes to the Backblaze services are expected to be minimal, the “terms and privacy” documents will change significantly. The changes will include, among other things, the addition of a group of model clauses and related materials. These clauses will be generally consistent across all GDPR-compliant vendors and are meant to be easily understood, so that a customer can easily determine how their PII is being collected and used.

Common GDPR Questions:

Here are a few of the more common questions we have heard regarding GDPR.

  1. GDPR will only affect citizens in the EU.
    Answer: The changes that are being made by companies such as Backblaze to comply with GDPR will almost certainly apply to customers from all countries. And that’s a good thing. The protections afforded to EU citizens by GDPR are something all users of our service should benefit from.
  2. After May 25, 2018, a citizen of the EU will not be allowed to use any applications or services that store data outside of the EU.
    Answer: False, no one will stop you as an EU citizen from using the internet-based service you choose. But, you should make sure you know where your data is being collected, processed, and stored. If any of those activities occur outside the EU, make sure the company is following the GDPR guidelines.
  3. My business only has a few EU citizens as customers, so I don’t need to care about GDPR?
    Answer: False, even if you have just one EU citizen as a customer, and you capture, process, or store their PII outside of the EU, you need to comply with GDPR.
  4. Companies can be fined millions of dollars for not complying with GDPR.
    Answer:
    True, but: the regulation allows companies to be fined up to €20 million or 4% of global annual revenue (whichever is greater) if they don’t comply with GDPR. In practice, the feeling is that such fines will be reserved (at least initially) for egregious violators that ignore or merely give “lip-service” to GDPR.
  5. You’ll be able to tell a company is GDPR compliant because they have a “GDPR Certified” badge on their website.
    Answer: There is no official GDPR certification or an official GDPR certification program. Companies that comply with GDPR are expected to follow the articles in the regulation and it should be clear from the outside looking in that they have followed the regulations. For example, their “Terms and Conditions,” and “Privacy Policy” should clearly spell out how and why they collect, use, and share your information. At some point a real GDPR certification program may be adopted, but not yet.

For all the hoopla about GDPR, the regulation is reasonably well thought out and addresses a very important issue — people’s privacy online. Creating a best practices document, or in this case a regulation, that companies such as Backblaze can follow is a good idea. The document isn’t perfect, and over the coming years we expect there to be changes. One thing we hope for is that the countries within the EU continue to stand behind one regulation and not fragment the document into multiple versions, each applying to themselves. We believe that having multiple different GDPR versions for different EU countries would lead to less protection overall of EU citizens.

In summary, GDPR changes are coming over the next few months. Backblaze has our internal staff and our EU-based legal counsel working diligently to ensure that we will be GDPR compliant by May 25th. We believe that GDPR will have a positive effect in enhancing the protection of personally identifiable information for not only EU citizens, but all of our Backblaze customers.


Migrating Your Amazon ECS Containers to AWS Fargate

Post Syndicated from Tiffany Jernigan original https://aws.amazon.com/blogs/compute/migrating-your-amazon-ecs-containers-to-aws-fargate/

AWS Fargate is a new technology that works with Amazon Elastic Container Service (ECS) to run containers without having to manage servers or clusters. What does this mean? With Fargate, you no longer need to provision or manage a single virtual machine; you can just create tasks and run them directly!

Fargate uses the same API actions as ECS, so you can use the ECS console, the AWS CLI, or the ECS CLI. I recommend running through the first-run experience for Fargate even if you’re familiar with ECS. It creates all of the one-time setup requirements, such as the necessary IAM roles. If you’re using a CLI, make sure to upgrade to the latest version.

In this blog, you will see how to migrate ECS containers from running on Amazon EC2 to Fargate.

Getting started

Note: Anything with code blocks is a change in the task definition file. Screen captures are from the console. Additionally, Fargate is currently available in the us-east-1 (N. Virginia) region.

Launch type

When you create tasks (grouping of containers) and clusters (grouping of tasks), you now have two launch type options: EC2 and Fargate. The default launch type, EC2, is ECS as you knew it before the announcement of Fargate. You need to specify Fargate as the launch type when running a Fargate task.

Even though Fargate abstracts away virtual machines, tasks still must be launched into a cluster. With Fargate, clusters are a logical infrastructure and permissions boundary that allow you to isolate and manage groups of tasks. ECS also supports heterogeneous clusters that are made up of tasks running on both EC2 and Fargate launch types.

The new, optional requiresCompatibilities parameter, with FARGATE in the field, ensures that your task definition only passes validation if you include Fargate-compatible parameters. Tasks can be flagged as compatible with EC2, Fargate, or both.

"requiresCompatibilities": [
    "FARGATE"
]

Networking

"networkMode": "awsvpc"

In November, we announced the addition of task networking with the network mode awsvpc. By default, ECS uses the bridge network mode. Fargate requires using the awsvpc network mode.

In bridge mode, all of your tasks running on the same instance share the instance’s elastic network interface, which is a virtual network interface, IP address, and security groups.

The awsvpc mode provides this networking support to your tasks natively. You now get the same VPC networking and security controls at the task level that were previously only available with EC2 instances. Each task gets its own elastic network interface and IP address so that multiple applications or copies of a single application can run on the same port number without any conflicts.

The awsvpc mode also provides a separation of responsibility for tasks. You can get complete control of task placement within your own VPCs, subnets, and the security policies associated with them, even though the underlying infrastructure is managed by Fargate. Also, you can assign different security groups to each task, which gives you more fine-grained security. You can give an application only the permissions it needs.

"portMappings": [
    {
        "containerPort": "3000"
    }
 ]

What else has to change? First, you only specify a containerPort value, not a hostPort value, as there is no host to manage. Your container port is the port that you access on your elastic network interface IP address. Therefore, your container ports in a single task definition file need to be unique.

"environment": [
    {
        "name": "WORDPRESS_DB_HOST",
        "value": "127.0.0.1:3306"
    }
 ]

Additionally, links are not allowed as they are a property of the “bridge” network mode (and are now a legacy feature of Docker). Instead, containers share a network namespace and communicate with each other over the localhost interface. They can be referenced using the following:

localhost/127.0.0.1:<some_port_number>

CPU and memory

"memory": "1024",
 "cpu": "256"

"memory": "1gb",
 "cpu": ".25vcpu"

When launching a task with the EC2 launch type, task performance is influenced by the instance types that you select for your cluster combined with your task definition. If you pick larger instances, your applications make use of the extra resources if there is no contention.

In Fargate, there are no instances to size, so you still need a way to specify how many resources a task can use; for that, we created task-level resources. Task-level resources define the maximum amount of memory and cpu that your task can consume.

  • memory can be defined in MB with just the number, or in GB, for example, “1024” or “1gb”.
  • cpu can be defined as the number or in vCPUs, for example, “256” or “.25vcpu”.
    • vCPUs are virtual CPUs. You can look at the memory and vCPUs for instance types to get an idea of what you may have used before.

The memory and CPU options available with Fargate are:

CPU (units)       Memory options
256 (.25 vCPU)    0.5GB, 1GB, 2GB
512 (.5 vCPU)     1GB, 2GB, 3GB, 4GB
1024 (1 vCPU)     2GB, 3GB, 4GB, 5GB, 6GB, 7GB, 8GB
2048 (2 vCPU)     Between 4GB and 16GB in 1GB increments
4096 (4 vCPU)     Between 8GB and 30GB in 1GB increments

IAM roles

Because Fargate uses awsvpc mode, you need an Amazon ECS service-linked IAM role named AWSServiceRoleForECS. It provides Fargate with the needed permissions, such as the permission to attach an elastic network interface to your task. After you create your service-linked IAM role, you can delete the remaining roles in your services.

"executionRoleArn": "arn:aws:iam::<your_account_id>:role/ecsTaskExecutionRole"

With the EC2 launch type, an instance role gives the agent the ability to pull, publish, talk to ECS, and so on. With Fargate, the task execution IAM role is only needed if you’re pulling from Amazon ECR or publishing data to Amazon CloudWatch Logs.

The Fargate first-run experience tutorial in the console automatically creates these roles for you.

Volumes

Fargate currently supports non-persistent, empty data volumes for containers. When you define your container, you no longer use the host field and only specify a name.

Load balancers

For awsvpc mode, and therefore for Fargate, use the IP target type instead of the instance target type. You define this in the Amazon EC2 service when creating a load balancer.

If you’re using a Classic Load Balancer, change it to an Application Load Balancer or a Network Load Balancer.

Tip: If you are using an Application Load Balancer, make sure that your tasks are launched in the same VPC and Availability Zones as your load balancer.
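As a rough sketch, creating a target group with the IP target type can also be done from the AWS CLI with a command along these lines; the name, port, and VPC ID below are placeholder values.

aws elbv2 create-target-group \
    --name fargate-targets \
    --protocol HTTP \
    --port 80 \
    --vpc-id vpc-0123456789abcdef0 \
    --target-type ip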

Let’s migrate a task definition!

Here is an example NGINX task definition. This type of task definition is what you’re used to if you created one before Fargate was announced. It’s what you would run now with the EC2 launch type.

{
    "containerDefinitions": [
        {
            "name": "nginx",
            "image": "nginx",
            "memory": "512",
            "cpu": "100",
            "essential": true,
            "portMappings": [
                {
                    "hostPort": "80",
                    "containerPort": "80",
                    "protocol": "tcp"
                }
            ],
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "/ecs/",
                    "awslogs-region": "us-east-1",
                    "awslogs-stream-prefix": "ecs"
                }
            }
        }
    ],
    "family": "nginx-ec2"
}

OK, so now what do you need to do to change it to run with the Fargate launch type?

  • Add FARGATE for requiresCompatibilities (not required, but a good safety check for your task definition).
  • Use awsvpc as the network mode.
  • Just specify the containerPort (the hostPort value is the same).
  • Add a task executionRoleArn value to allow logging to CloudWatch.
  • Provide cpu and memory limits for the task.
{
    "requiresCompatibilities": [
        "FARGATE"
    ],
    "containerDefinitions": [
        {
            "name": "nginx",
            "image": "nginx",
            "memory": "512",
            "cpu": "100",
            "essential": true,
            "portMappings": [
                {
                    "containerPort": "80",
                    "protocol": "tcp"
                }
            ],
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "/ecs/",
                    "awslogs-region": "us-east-1",
                    "awslogs-stream-prefix": "ecs"
                }
            }
        }
    ],
    "networkMode": "awsvpc",
    "executionRoleArn": "arn:aws:iam::<your_account_id>:role/ecsTaskExecutionRole",
    "family": "nginx-fargate",
    "memory": "512",
    "cpu": "256"
}
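With the definition saved to a file, you can register it and start a task on Fargate. The following is a minimal sketch using the AWS CLI; the file name, cluster, subnet, and security group are placeholder values.

aws ecs register-task-definition \
    --cli-input-json file://nginx-fargate.json

aws ecs run-task \
    --cluster default \
    --task-definition nginx-fargate \
    --launch-type FARGATE \
    --network-configuration "awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}"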

Are there more examples?

Yep! Head to the AWS Samples GitHub repo. We have several sample task definitions you can try for both the EC2 and Fargate launch types. Contributions are very welcome too :).

 

tiffany jernigan
@tiffanyfayj

Despite Protests, ISP Ordered To Hand Over Pirates’ Details to Police

Post Syndicated from Andy original https://torrentfreak.com/despite-protests-isp-ordered-to-hand-over-pirates-details-to-police-180201/

As large ISPs become more closely aligned with the entertainment industries, the days of providers strongly standing up to blocking and disclosure requests appear to be on the decline. For Swedish ISP Bahnhof, however, customer privacy has become a business model.

In recent years the company has been a major opponent of data retention requirements, launched a free VPN to protect its users’ privacy, and put on a determined front against the threat of copyright trolls.

Back in May 2016, Bahnhof reiterated its stance that it doesn’t hand over the personal details of alleged pirates to anyone, not even the police. This, despite the fact that the greatest number of disclosure requests from the authorities relate to copyright infringement.

Bahnhof insisted that European privacy regulations mean that it only has to hand over information to the police if the complaint relates to a serious crime. But that went against a recommendation from the Swedish Post and Telecom Authority (PTS).

Now, however, the battle to protect customer privacy has received a significant setback after the Administrative Court in Stockholm found that Swedish provisions on disclosure of subscription data to law enforcement agencies do not contravene EU law.

“PTS asked Bahnhof to provide information on subscribers to law enforcement agencies. Bahnhof appealed against the order, claiming that the Swedish rules on disclosure of subscription information are incompatible with EU law,” the Court said in a statement.

“In support of its view, Bahnhof referred to two rulings of the European Court of Justice. The Administrative Court has held that it is not possible to state that the Swedish rules on law enforcement agencies’ access to subscription data are incompatible with EU law.”

The Court also looked at whether Swedish rules on disclosure of subscriber data meet the requirement of proportionality under EU law. In common with many other copyright-related cases, the Court found that law enforcement’s need to access subscriber data was more important than the individual’s right to privacy.

“In light of this, the Administrative Court has made the assessment that PTS’s decision to impose on Bahnhof a requirement to provide information about subscribers to law enforcement authorities is correct,” the Court adds.

PTS will now be able to instruct Bahnhof to disclose subscriber information in accordance with the provisions of the Electronic Communications Act and the ISP will be required to comply.

But as far as Bahnhof is concerned, the show isn’t over yet.

“We believe the sentence is incorrect, but it is also difficult to take PTS seriously when they can not even interpret the laws behind the decision in a consistent manner. We are of course going to appeal,” the company said in a statement.

To illustrate its point, Bahnhof says that PTS has changed its opinion on the importance of IP addresses in a matter of months. In October 2017, PTS lawyer Staffan Lindmark said he believed that IP addresses are to be regarded as privacy-sensitive data. In January 2018, however, PTS is said to have spoken of the same data in more trivial terms.

“That a supervisory authority pivots so much in its opinions is remarkable,” says Jon Karlung, President of Bahnhof.

“Bahnhof is not in any way against law enforcement agencies, but we believe that sensitive data should only be released after judicial review and suspected crime.”

Bahnhof says it will save as little data on its customers as it can and IP addresses will be deleted within 24 hours, a practice that has been in place for some time.

In 2016, 27.5% of all disclosure requests sent to Bahnhof were related to online file-sharing, more than any other crime including grooming minors, harassment, sex crimes, forgery, and fraud.


Invoking AWS Lambda from Amazon MQ

Post Syndicated from Tara Van Unen original https://aws.amazon.com/blogs/compute/invoking-aws-lambda-from-amazon-mq/

Contributed by Josh Kahn, AWS Solutions Architect

Message brokers can be used to solve a number of needs in enterprise architectures, including managing workload queues and broadcasting messages to a number of subscribers. Amazon MQ is a managed message broker service for Apache ActiveMQ that makes it easy to set up and operate message brokers in the cloud.

In this post, I discuss one approach to invoking AWS Lambda from queues and topics managed by Amazon MQ brokers. This and other similar patterns can be useful in integrating legacy systems with serverless architectures. You could also integrate systems already migrated to the cloud that use common APIs such as JMS.

For example, imagine that you work for a company that produces training videos and which recently migrated its video management system to AWS. The on-premises system used to publish a message to an ActiveMQ broker when a video was ready for processing by an on-premises transcoder. However, on AWS, your company uses Amazon Elastic Transcoder. Instead of modifying the management system, Lambda polls the broker for new messages and starts a new Elastic Transcoder job. This approach avoids changes to the existing application while refactoring the workload to leverage cloud-native components.

This solution uses Amazon CloudWatch Events to trigger a Lambda function that polls the Amazon MQ broker for messages. Instead of starting an Elastic Transcoder job, the sample writes the received message to an Amazon DynamoDB table with a time stamp indicating the time received.

Getting started

To start, navigate to the Amazon MQ console. Next, launch a new Amazon MQ instance, selecting Single-instance Broker and supplying a broker name, user name, and password. Be sure to document the user name and password for later.

For the purposes of this sample, choose the default options in the Advanced settings section. Your new broker is deployed to the default VPC in the selected AWS Region with the default security group. For this post, you update the security group to allow access for your sample Lambda function. In a production scenario, I recommend deploying both the Lambda function and your Amazon MQ broker in your own VPC.

After several minutes, your instance changes status from “Creation Pending” to “Available.” You can then visit the Details page of your broker to retrieve connection information, including a link to the ActiveMQ web console where you can monitor the status of your broker, publish test messages, and so on. In this example, use the Stomp protocol to connect to your broker. Be sure to capture the broker host name, for example:

<BROKER_ID>.mq.us-east-1.amazonaws.com

You should also modify the Security Group for the broker by clicking on its Security Group ID. Click the Edit button and then click Add Rule to allow inbound traffic on port 8162 for your IP address.
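If you prefer the AWS CLI over the console, a rule along these lines opens the web console port to a single address; the security group ID and source address below are placeholders for your own values.

aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 8162 \
    --cidr 203.0.113.10/32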

Deploying and scheduling the Lambda function

To simplify the deployment of this example, I’ve provided an AWS Serverless Application Model (SAM) template that deploys the sample function and DynamoDB table, and schedules the function to be invoked every five minutes. Detailed instructions can be found, along with the sample code, on GitHub in the amazonmq-invoke-aws-lambda repository. I discuss a few key aspects in this post.

First, SAM makes it easy to deploy and schedule invocation of our function:

SubscriberFunction:
	Type: AWS::Serverless::Function
	Properties:
		CodeUri: subscriber/
		Handler: index.handler
		Runtime: nodejs6.10
		Role: !GetAtt SubscriberFunctionRole.Arn
		Timeout: 15
		Environment:
			Variables:
				HOST: !Ref AmazonMQHost
				LOGIN: !Ref AmazonMQLogin
				PASSWORD: !Ref AmazonMQPassword
				QUEUE_NAME: !Ref AmazonMQQueueName
				WORKER_FUNCTION: !Ref WorkerFunction
		Events:
			Timer:
				Type: Schedule
				Properties:
					Schedule: rate(5 minutes)

WorkerFunction:
	Type: AWS::Serverless::Function
	Properties:
		CodeUri: worker/
		Handler: index.handler
		Runtime: nodejs6.10
		Role: !GetAtt WorkerFunctionRole.Arn
		Environment:
			Variables:
				TABLE_NAME: !Ref MessagesTable

In the code, you include the URI, user name, and password for your newly created Amazon MQ broker. These allow the function to poll the broker for new messages on the sample queue.

The sample Lambda function is written in Node.js, but clients exist for a number of programming languages.

// Assumed dependencies: the AWS SDK and a STOMP client library (the sample
// repository uses the stompit package, assumed here). The 'options' object
// holds the broker host and credentials described above.
const AWS = require('aws-sdk')
const stomp = require('stompit')

stomp.connect(options, (error, client) => {
	if (error) { /* do something */ }

	// Subscribe to the sample queue and acknowledge messages automatically
	let headers = {
		destination: '/queue/SAMPLE_QUEUE',
		ack: 'auto'
	}

	client.subscribe(headers, (error, message) => {
		if (error) { /* do something */ }

		// Read the message body, then hand it to the worker function
		message.readString('utf-8', (error, body) => {
			if (error) { /* do something */ }

			let params = {
				// Worker function name comes from the WORKER_FUNCTION
				// environment variable set in the SAM template
				FunctionName: process.env.WORKER_FUNCTION,
				Payload: JSON.stringify({
					message: body,
					timestamp: Date.now()
				})
			}

			// Invoke the worker Lambda function with the received message
			let lambda = new AWS.Lambda()
			lambda.invoke(params, (error, data) => {
				if (error) { /* do something */ }
			})
		})
	})
})

Sending a sample message

For the purpose of this example, use the Amazon MQ console to send a test message. Navigate to the details page for your broker.

About midway down the page, choose ActiveMQ Web Console. Next, choose Manage ActiveMQ Broker to launch the admin console. When you are prompted for a user name and password, use the credentials created earlier.

At the top of the page, choose Send. From here, you can send a sample message from the broker to subscribers. For this example, this is how you generate traffic to test the end-to-end system. Be sure to set the Destination value to “SAMPLE_QUEUE.” The message body can contain any text. Choose Send.

You now have a Lambda function polling for messages on the broker. To verify that your function is working, you can confirm in the DynamoDB console that the message was successfully received and processed by the sample Lambda function.

First, choose Tables on the left and select the table name “amazonmq-messages” in the middle section. With the table detail in view, choose Items. If the function was successful, you’ll find a new entry containing the message body and the timestamp recorded by the worker function.

If there is no message in DynamoDB, check again in a few minutes or review the CloudWatch Logs group for Lambda functions that contain debug messages.

Alternative approaches

Beyond the approach described here, you may consider other approaches as well. For example, you could use an intermediary system such as Apache Flume to pass messages from the broker to Lambda or deploy Apache Camel to trigger Lambda via a POST to API Gateway. There are trade-offs to each of these approaches. My goal in using CloudWatch Events was to introduce an easily repeatable pattern familiar to many Lambda developers.

Summary

I hope that you have found this example of how to integrate AWS Lambda with Amazon MQ useful. If you have expertise or legacy systems that leverage APIs such as JMS, you may find this useful as you incorporate serverless concepts in your enterprise architectures.

To learn more, see the Amazon MQ website and Developer Guide. You can try Amazon MQ for free with the AWS Free Tier, which includes up to 750 hours of a single-instance mq.t2.micro broker and up to 1 GB of storage per month for one year.

Task Networking in AWS Fargate

Post Syndicated from Nathan Peck original https://aws.amazon.com/blogs/compute/task-networking-in-aws-fargate/

AWS Fargate is a technology that allows you to focus on running your application without needing to provision, monitor, or manage the underlying compute infrastructure. You package your application into a Docker container that you can then launch using your container orchestration tool of choice.

Fargate allows you to use containers without being responsible for Amazon EC2 instances, similar to how EC2 allows you to run VMs without managing physical infrastructure. Currently, Fargate provides support for Amazon Elastic Container Service (Amazon ECS). Support for Amazon Elastic Container Service for Kubernetes (Amazon EKS) will be made available in the near future.

Despite offloading the responsibility for the underlying instances, Fargate still gives you deep control over configuration of network placement and policies. This includes the ability to use many networking fundamentals such as Amazon VPC and security groups.

This post covers how to take advantage of the different ways of networking your containers in Fargate when using ECS as your orchestration platform, with a focus on how to do networking securely.

The first step to running any application in Fargate is defining an ECS task for Fargate to launch. A task is a logical group of one or more Docker containers that are deployed with specified settings. When running a task in Fargate, there are two different forms of networking to consider:

  • Container (local) networking
  • External networking

Container Networking

Container networking is often used for tightly coupled application components. Perhaps your application has a web tier that is responsible for serving static content as well as generating some dynamic HTML pages. To generate these dynamic pages, it has to fetch information from another application component that has an HTTP API.

One potential architecture for such an application is to deploy the web tier and the API tier together as a pair and use local networking so the web tier can fetch information from the API tier.

If you are running these two components as two processes on a single EC2 instance, the web tier application process could communicate with the API process on the same machine by using the local loopback interface. The local loopback interface has a special IP address of 127.0.0.1 and hostname of localhost.

By making a networking request to this local interface, it bypasses the network interface hardware and instead the operating system just routes network calls from one process to the other directly. This gives the web tier a fast and efficient way to fetch information from the API tier with almost no networking latency.

In Fargate, when you launch multiple containers as part of a single task, they can also communicate with each other over the local loopback interface. Fargate uses a special container networking mode called awsvpc, which gives all the containers in a task a shared elastic network interface to use for communication.

If you specify a port mapping for each container in the task, then the containers can communicate with each other on that port. For example, the following task definition could be used to deploy the web tier and the API tier:

{
  "family": "myapp",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "my web image url",
      "portMappings": [
        {
          "containerPort": 80
        }
      ],
      "memory": 500,
      "cpu": 10,
      "essential": true
    },
    {
      "name": "api",
      "image": "my api image url",
      "portMappings": [
        {
          "containerPort": 8080
        }
      ],
      "cpu": 10,
      "memory": 500,
      "essential": true
    }
  ]
}

ECS, with Fargate, is able to take this definition and launch two containers, each of which is bound to a specific static port on the elastic network interface for the task.

Because each Fargate task has its own isolated networking stack, there is no need for dynamic ports to avoid port conflicts between different tasks as in other networking modes. The static ports make it easy for containers to communicate with each other. For example, the web container makes a request to the API container using its well-known static port:

curl 127.0.0.1:8080/my-endpoint

This sends a local network request, which goes directly from one container to the other over the local loopback interface without traversing the network. This deployment strategy allows for fast and efficient communication between two tightly coupled containers. But most application architectures require more than just internal local networking.

External Networking

External networking is used for network communications that go outside the task to other servers that are not part of the task, or network communications that originate from other hosts on the internet and are directed to the task.

Configuring external networking for a task is done by modifying the settings of the VPC in which you launch your tasks. A VPC is a fundamental tool in AWS for controlling the networking capabilities of resources that you launch on your account.

When setting up a VPC, you create one or more subnets, which are logical groups that your resources can be placed into. Each subnet resides in an Availability Zone and has its own route table, which defines rules about how network traffic operates for that subnet. There are two main types of subnets: public and private.

Public subnets

A public subnet is a subnet that has an associated internet gateway. Fargate tasks in that subnet are assigned both private and public IP addresses:


A browser or other client on the internet can send network traffic to the task via the internet gateway using its public IP address. The tasks can also send network traffic to other servers on the internet because the route table can route traffic out via the internet gateway.

If tasks want to communicate directly with each other, they can use each other’s private IP address to send traffic directly from one to the other so that it stays inside the subnet without going out to the internet gateway and back in.

Private subnets

A private subnet does not have direct internet access. The Fargate tasks inside the subnet don’t have public IP addresses, only private IP addresses. Instead of an internet gateway, a network address translation (NAT) gateway is attached to the subnet:

 

There is no way for another server or client on the internet to reach your tasks directly, because they don’t even have an address or a direct route to reach them. This is a great way to add another layer of protection for internal tasks that handle sensitive data. Those tasks are protected and can’t receive any inbound traffic at all.

In this configuration, the tasks can still communicate to other servers on the internet via the NAT gateway. They would appear to have the IP address of the NAT gateway to the recipient of the communication. If you run a Fargate task in a private subnet, you must add this NAT gateway. Otherwise, Fargate can’t make a network request to Amazon ECR to download the container image, or communicate with Amazon CloudWatch to store container metrics.

Load balancers

If you are running a container that is hosting internet content in a private subnet, you need a way for traffic from the public to reach the container. This is generally accomplished by using a load balancer such as an Application Load Balancer or a Network Load Balancer.

ECS integrates tightly with AWS load balancers by automatically configuring a service-linked load balancer to send network traffic to containers that are part of the service. When each task starts, the IP address of its elastic network interface is added to the load balancer’s configuration. When the task is being shut down, network traffic is safely drained from the task before removal from the load balancer.

To get internet traffic to containers using a load balancer, the load balancer is placed into a public subnet. ECS configures the load balancer to forward traffic to the container tasks in the private subnet:

This configuration allows your tasks in Fargate to be safely isolated from the rest of the internet. They can still initiate network communication with external resources via the NAT gateway, and still receive traffic from the public via the Application Load Balancer that is in the public subnet.

Another potential use case for a load balancer is for internal communication from one service to another service within the private subnet. This is typically used for a microservice deployment, in which one service such as an internet-facing user account service needs to communicate with an internal service such as a password service. Obviously, it is undesirable for the password service to be directly accessible on the internet, so using an internet-facing load balancer would be a major security vulnerability. Instead, this can be accomplished by hosting an internal load balancer within the private subnet:

With this approach, one container can distribute requests across an Auto Scaling group of other private containers via the internal load balancer, ensuring that the network traffic stays safely protected within the private subnet.
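As an illustration, an internal Application Load Balancer can be created with the AWS CLI along these lines; the name, subnet IDs, and security group are placeholders, and the internal scheme is what keeps the load balancer unreachable from the public internet.

aws elbv2 create-load-balancer \
    --name internal-api-lb \
    --type application \
    --scheme internal \
    --subnets subnet-0123456789abcdef0 subnet-0fedcba9876543210 \
    --security-groups sg-0123456789abcdef0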

Best Practices for Fargate Networking

Determine whether you should use local task networking

Local task networking is ideal for communicating between containers that are tightly coupled and require maximum networking performance between them. However, when you deploy one or more containers as part of the same task, they are always deployed together, so you lose the ability to independently scale different types of workload up and down.

In the example of the application with a web tier and an API tier, it may be the case that powering the application requires only two web tier containers but 10 API tier containers. If local container networking is used between these two container types, then an extra eight unnecessary web tier containers would end up being run instead of allowing the two different services to scale independently.

A better approach would be to deploy the two containers as two different services, each with its own load balancer. This allows clients to communicate with the two web containers via the web service’s load balancer. The web service could distribute requests across the eight backend API containers via the API service’s load balancer.

Run internet tasks that require internet access in a public subnet

If you have tasks that require internet access and a lot of bandwidth for communication with other services, it is best to run them in a public subnet. Give them public IP addresses so that each task can communicate with other services directly.

If you run these tasks in a private subnet, then all their outbound traffic has to go through a NAT gateway. AWS NAT gateways support up to 10 Gbps of burst bandwidth. If your bandwidth requirements go over this, then all task networking starts to get throttled. To avoid this, you could distribute the tasks across multiple private subnets, each with their own NAT gateway. It can be easier to just place the tasks into a public subnet, if possible.

Avoid using a public subnet or public IP addresses for private, internal tasks

If you are running a service that handles private, internal information, you should not put it into a public subnet or use a public IP address. For example, imagine that you have one task, which is an API gateway for authentication and access control. You have another background worker task that handles sensitive information.

The intended access pattern is that requests from the public go to the API gateway, which then proxies the request to the background task only if the request is from an authenticated user. If the background task is in a public subnet and has a public IP address, then it could be possible for an attacker to bypass the API gateway entirely. They could communicate directly with the background task using its public IP address, without being authenticated.

Conclusion

Fargate gives you a way to run containerized tasks directly without managing any EC2 instances, but you still have full control over how you want networking to work. You can set up containers to talk to each other over the local network interface for maximum speed and efficiency. For running workloads that require privacy and security, use a private subnet with public internet access locked down. Or, for simplicity with an internet workload, you can just use a public subnet and give your containers a public IP address.

To deploy one of these Fargate task networking approaches, check out some sample CloudFormation templates showing how to configure the VPC, subnets, and load balancers.

If you have questions or suggestions, please comment below.

Movie Industry Hides Anti-Piracy Messages in ‘Pirate’ Subtitles

Post Syndicated from Andy original https://torrentfreak.com/movie-industry-hides-anti-piracy-messages-in-pirate-subtitles-180125/

Anti-piracy campaigns come in all shapes and sizes, from oppressive and scary to the optimistically educational. It is rare for any to be labeled ‘brilliant’ but a campaign just revealed in Belgium hits really close to the mark.

According to an announcement by the Belgian Entertainment Association (BEA) and the Belgian Federation of Cinemas, together with film producers and distributors, cinemas and directors, a brand new campaign has been targeting those who download content from illegal sources. It is particularly innovative and manages to hit pirates in a way they can’t easily avoid.

Working on the premise that many locals download English language movies and then augment them with local language subtitles, a fiendish plot was hatched. Instead of a generic preaching video on YouTube or elsewhere, the movie companies decided to ‘infect’ pirate subtitles with messages of their own.

“Suddenly the story gets a surprising turn. With a playful wink it suddenly seems as if Samuel L. Jackson in The Hitman’s Bodyguard directly appeals to the illegal viewer and says that you should not download,” the group explains.

Samuel is watching…..


“I do not need any research to see that these are bad subtitles,” Jackson informs the viewer.

In another scene with Ryan Reynolds, Jackson notes that illegal downloading can have a negative effect on a person.

Don’t download…..

Don’t download…..

“And you wanted to become a policeman, until you started downloading,” he says.

The movie groups say that they also planted edited subtitles in The Bridge, with police officers in the show noting they’re on the trail of illegal downloaders. The movies Logan Lucky and The Foreigner got similar treatment.

It’s not clear on which sites these modified subtitles were distributed but according to the companies involved, they’ve been downloaded 10,000 times already.

“The viewer not only feels caught but immediately realizes that you do not necessarily get a real quality product through illegal sources,” the companies say.

The campaign is the work of advertising agency TBWA, which appropriately bills itself as the Disruption Company.

“We are not a traditional ad agency network — we are a radically open creative collective. We look at what everyone else is doing and strive to do something completely new,” the company says.

Coincidentally, the company refers to its staff as pirates who rewrite rules and have ideas to take on “conventionally-steered ships.”

“As creative director of communication agency TBWA, protecting creative work is very important to us,” says TBWA Creative Director Gert Pauwels. “That is precisely why we came up with the subtle prank to work together with the sector to tackle illegal downloading.”

Although framed as a joke, one which may even raise a wry smile and a nod of respect from some pirates, there’s an underlying serious message from the companies involved.

“Maybe many think that everything is possible on the internet and that downloading will remain without consequences,” says Pieter Swaelens, Managing Director of BEA. “That is not the case. Here too, many jobs are being challenged in Belgium and we have to tackle this behavior.”

It’s also worth noting that while this campaign is both innovative and light-hearted, at least one of the companies involved is also a supporter of much tougher action.

Dutch Filmworks recently obtained permission from the Dutch Data Authority to begin monitoring pirates. Once it has their IP addresses it will attempt to make contact, offering a cash settlement agreement to make a potential lawsuit disappear.

“We are pleased with the extra attention to the problem of downloading from illegal sources,” says René van Turnhout, COO Dutch FilmWorks. “Too many jobs in our sector have been lost. Moreover, piracy endangers the creativity and quality of the legal offer.”

“I’d better watch legally … that’s true”


Spiegelbilder Studio’s giant CRT video walls

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/crt-video-walls/

After Spiegelbilder Studio got in contact to share their latest build with us, we invited Matvey Fridman of the Germany-based production company to write a guest blog post about the CRT video walls they created for the band STRANDKØNZERT.

STRANDKØNZERT – TAGTRAUMER – OFFICIAL VIDEO



About a year ago, we had the idea of building a huge video wall out of old TVs to use in a music video. It took some time, but half a year later we found ourselves in a studio actually building this thing using 30 connected computers, 24 of which were Raspberry Pis.


How we did it

After weeks and months of preproduction and testing, we decided on two consecutive days to build the wall, create the underlying IP network, run a few tests, and then film the artists’ performance in front of it. We actually had 32 Pis (a mixed bag of first, second, and third generation models) and even more TVs ready to go, since we didn’t know what the final build would actually look like. We ended up using 29 separate screens of various sizes hooked up to 24 separate Pis — the remaining five TVs got a daisy-chained video signal out of other monitors for a cool effect. Each Pi had to run free software called PiWall.


Since the TVs only had analogue video inputs, we had to get special composite breakout cables and then adapt the RCA connectors to either SCART, S-Video, or BNC.


As soon as we had all of that running, we connected every Pi to a 48-port network switch that we’d hooked up to a Windows PC acting as a DHCP server to automatically assign IP addresses and handle the multicast addressing. To make remote control of the Raspberry Pis easier, a separate master Linux PC and two MacBook laptops, each with SSH enabled and a Samba server running, joined the network as well.


The MacBook laptops were used to drop two files containing the settings on each Pi. The .pitile file was unique to every Pi and contained their respective IDs. The .piwall file contained the same info for all Pis: the measurements and positions of every single screen to help the software split up the video signal coming in through the network. After every Pi got the command to start the PiWall software, which specifies the UDP multicast address and settings to be used to receive the video stream, the master Linux PC was tasked with streaming the video file to these UDP addresses. Now every TV was showing its section of the video, and we could begin filming.
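For illustration, here is a minimal sketch of what those two files can contain for a simple two-screen wall, following PiWall's INI-style configuration; every section name, ID, and measurement below is a made-up example and would need to match the real wall layout. PiWall then splits the incoming stream according to these sections, so each Pi shows only its own region of the video.

# .pitile (unique to each Pi): names the tile this Pi drives
[tile]
id=screen1

# .piwall (identical on every Pi): the wall size, each tile's size and
# position within it, and a config section mapping tile IDs to definitions
[wall]
width=320
height=180
x=0
y=0

[screen1]
wall=wall
width=160
height=180
x=0
y=0

[screen2]
wall=wall
width=160
height=180
x=160
y=0

[twoscreens]
screen1=screen1
screen2=screen2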


The whole process and the contents of the files and commands are summarised in the infographic below. A lot of trial and error was involved in the making of this project, but it all worked out well in the end. We hope you enjoy the craft behind the music video even though the music is not for everybody 😉

PiWall_Infographic

You can follow Spiegelbilder Studio on Facebook, Twitter, and Instagram. And if you enjoyed the music video, be sure to follow STRANDKØNZERT too.


Copyright Trolls Obtained Details of 200,000 Finnish Internet Users

Post Syndicated from Andy original https://torrentfreak.com/copyright-trolls-obtained-details-of-200000-finnish-internet-users-180118/

Fifteen years ago, the RIAA was contacting alleged file-sharers in the United States, demanding cash payments to make supposed lawsuits go away. In the years that followed, dozens of companies followed in their footsteps – not as a deterrent – but as a way to turn piracy into profit.

The practice is now widespread, not just in the United States, but also in Europe where few major countries have avoided the clutches of trolls. Germany has been hit particularly hard, with millions of cases. The UK has also seen tens of thousands of individuals targeted since 2006 although more recently the trolls there have been in retreat. The same cannot be said about Finland, however.

From a relatively late start in 2013, trolls have been stepping up their game in leaps and bounds but the true scale of developments in this Scandinavian country will probably come as a surprise to even the most seasoned of troll-watchers.

According to data compiled by NGO activist Ritva Puolakka, the business in Finland has grown to epidemic proportions. In fact, between 2013 and 2017 the Market Court (which deals with Intellectual Property matters, among other things) has ordered local Internet service providers to hand over the details of almost 200,000 Finnish Internet subscribers.

Published on the Ministry of Education and Culture website (via mikrobitti.fi) the data (pdf) reveals hundreds of processes against major Finnish ISPs.

Notably, every single case has been directed at a core group of three providers – Elisa, TeliaSonera and DNA – while customers of other ISPs seem to have been completely overlooked. Exactly why isn’t clear, but in other jurisdictions it has proven more cost-effective to hone a process with a small number of ISPs rather than spread out to those with fewer customers.

Only one legal process is listed for 2013 but that demanded the identities of people behind 50 IP addresses. In 2014 there was a 14-fold increase in processes and the number of IP addresses targeted grew to 1,387.

For 2015, a total of 117 processes are listed, demanding the identities of people behind 37,468 IP addresses. In 2016 the trolls really upped their game. A total of 131 processes demanded the details of individuals behind 98,966 IP addresses. For last year, 79 processes are on the books, which in total amounted to 60,681 potential defendants in settlement cases.

In total, between 2013 and 2017 the Market Court ordered the ISPs to hand over the personal details of people behind a staggering 198,552 IP addresses. While it should be noted that each might not lead to a unique individual, the number is huge when one considers the potential returns if everyone pays up hundreds of euros to make supposed court cases go away.

But despite the significant scale, it will probably come as no surprise that very few companies are involved. Troll operations tend to be fairly centralized, often using the same base services to track and collect evidence against alleged pirates.

In the order they entered the settlement business in Finland the companies involved are: LFP Video Group LLC, International Content Holding B.V., Dallas Buyers Club LLC, Crystalis Entertainment UG, Scanbox Entertainment A/S, Fairway Film Alliance LLC, Copyright Collections Ltd, Mircom International Content Management, Interallip LLP, and Oy Atlantic Film Finland Ab.


Pirate Streaming on Facebook is a Seriously Risky Business

Post Syndicated from Andy original https://torrentfreak.com/pirate-streaming-on-facebook-is-a-seriously-risky-business-180114/

For more than a year the British public has been warned about the supposed dangers of Kodi piracy.

Dozens of headlines have claimed consequences ranging from system-destroying malware to prison sentences. Fortunately, most of them can be filed under “tabloid nonsense.”

That being said, there is an extremely important issue that deserves much closer attention, particularly given a shift in the UK legal climate during 2017. We’re talking about live streaming copyrighted content on Facebook, which is both incredibly easy and frighteningly risky.

This week it was revealed that 34-year-old Craig Foster from the UK had been given an ultimatum from Sky to pay a £5,000 settlement fee. The media giant discovered that he’d live-streamed the Anthony Joshua v Wladimir Klitschko fight on Facebook and wanted compensation to make a potential court case disappear.

While it may seem initially odd to use the word, Foster was lucky.

Under last year’s Digital Economy Act, he could’ve been jailed for up to ten years for distributing copyright-infringing content to the public, if he had “reason to believe that communicating the work to the public [would] cause loss to the owner of the copyright, or [would] expose the owner of the copyright to a risk of loss.”

Clearly, as a purchaser of the £19.95 pay-per-view himself, he would’ve appreciated that the event cost money. With that in mind, a court would likely find that he would have been aware that Sky would have been exposed to a “risk of loss”. Sky claim that 4,250 people watched the stream but, as the law is written, no specific level of loss is required for a breach.

But it’s not just the threat of a jail sentence that’s the problem. People streaming live sports on Facebook are sitting ducks.

In Foster’s case, the fight he streamed was watermarked, which means that Sky put a tracking code into it which identified him personally as the buyer of the event. When he (or his friend, as Foster claims) streamed it on Facebook, it was trivial for Sky to capture the watermark and track it back to his Sky account.

Equally, it would be simplicity itself to see that the name and details on the Sky account exactly matched those on Foster’s Facebook account. So, to most observers, it would appear that not only had Foster purchased the event, but he was also streaming it to Facebook illegally.

It’s important to keep something else in mind. No cooperation between Sky and Facebook would’ve been necessary to obtain Foster’s details. Take the amount of information most people share on Facebook, combine that with the information Sky already had, and the company’s anti-piracy team would have had a very easy job.

Now compare this situation with an upload of the same stream to a torrent site.

While the video capture would still contain Foster’s watermark, which would indicate the source, to prove he also distributed the video Sky would’ve needed to get inside a torrent swarm. From there they would need to capture the IP address of the initial seeder and take the case to court, to force an ISP to hand over that person’s details.

Presuming the seeder and the Sky account holder turned out to be the same person, Sky would have a case, with a broadly similar level of evidence to that presented in the current matter. However, it would’ve taken them months to get their man and cost large sums of money to get there. It’s very unlikely that £5,000 would cover the costs, meaning a much, much bigger bill for the culprit.
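To make the contrast concrete, here is a minimal, purely illustrative Python sketch of the kind of peer harvesting a swarm monitor has to perform. A BitTorrent tracker’s “compact” announce response packs each peer into six bytes (a four-byte IPv4 address followed by a two-byte big-endian port); decoding that only yields raw IP addresses, which then still have to be matched to subscribers via ISPs and the courts. The function name and sample bytes below are hypothetical, not taken from any rightsholder’s actual tooling.

    import socket
    import struct

    def decode_compact_peers(peers_blob: bytes) -> list[tuple[str, int]]:
        """Turn a compact 'peers' value from a tracker announce response into (ip, port) pairs."""
        peers = []
        usable = len(peers_blob) - (len(peers_blob) % 6)   # each peer entry is exactly 6 bytes
        for offset in range(0, usable, 6):
            ip = socket.inet_ntoa(peers_blob[offset:offset + 4])              # 4 bytes -> dotted quad
            (port,) = struct.unpack("!H", peers_blob[offset + 4:offset + 6])  # 2 bytes, network byte order
            peers.append((ip, port))
        return peers

    # Hypothetical blob representing two peers: 203.0.113.5:51413 and 198.51.100.9:6881
    sample = bytes([203, 0, 113, 5]) + struct.pack("!H", 51413) \
           + bytes([198, 51, 100, 9]) + struct.pack("!H", 6881)

    for ip, port in decode_compact_peers(sample):
        print(ip, port)   # the raw addresses a monitoring firm would log, timestamp and pursue

Even then, an IP address only points at a connection, not a person – which is exactly why this route is slower and costlier than reading a watermark off a stream tied to a named account.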

Or, confident that Foster was behind the leak based on the watermark alone, Sky could’ve gone straight to the police. That never ends well.

The bottom line is that while live-streaming on Facebook is simplicity itself, people who do it casually from their own account (especially with watermarked content) are asking for trouble.

Nailing Foster was the piracy equivalent of shooting fish in a barrel but the worrying part is that he probably never gave his (or his friend’s…) alleged infringement a second thought. With a click or two, the fight was live and he was staring down the barrel of a potential jail sentence, had Sky not gone the civil route.

It’s scary stuff and not enough is being done to warn people of the consequences. Forget the scare stories attempting to deter people from watching fights or movies on Kodi; thoughtlessly streaming them to the public on social media is the real danger.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

ISP: We’re Cooperating With Police Following Pirate IPTV Raid

Post Syndicated from Andy original https://torrentfreak.com/isp-were-cooperating-with-police-following-pirate-iptv-raid-180113/

This week, police forces around Europe took action against what is believed to be one of the world’s largest pirate IPTV networks.

The investigation, launched a year ago and coordinated by Europol, came to a head on Tuesday when police carried out raids in Cyprus, Bulgaria, Greece, and the Netherlands. A fresh announcement from the crime-fighting group reveals the scale of the operation.

It was led by the Cypriot Police – Intellectual Property Crime Unit, with the support of the Cybercrime Division of the Greek Police, the Dutch Fiscal Investigative and Intelligence Service (FIOD), the Cybercrime Unit of the Bulgarian Police, Europol’s Intellectual Property Crime Coordinated Coalition (IPC³), and supported by members of the Audiovisual Anti-Piracy Alliance (AAPA).

In Cyprus, Bulgaria and Greece, 17 house searches were carried out. Three individuals aged 43, 44, and 53 were arrested in Cyprus and one was arrested in Bulgaria.

All stand accused of being involved in an international operation to illegally broadcast around 1,200 channels of pirated content to an estimated 500,000 subscribers. Some of the channels offered were illegally sourced from Sky UK, Bein Sports, Sky Italia, and Sky DE. On Thursday, the three individuals in Cyprus were remanded in custody for seven days.

“The servers used to distribute the channels were shut down, and IP addresses hosted by a Dutch company were also deactivated thanks to the cooperation of the authorities of The Netherlands,” Europol reports.

“In Bulgaria, 84 servers and 70 satellite receivers were seized, with decoders, computers and accounting documents.”

TorrentFreak was previously able to establish that Megabyte-Internet Ltd, an ISP located in the small Bulgarian town of Petrich, was targeted by police. The provider went down on Tuesday but returned towards the end of the week. Responding to our earlier inquiries, the company told us more about the situation.

“We are an ISP provider located in Petrich, Bulgaria. We are selling services to around 1,500 end-clients in the Petrich area and surrounding villages,” a spokesperson explained.

“Another part of our business is internet services like dedicated unmanaged servers, hosting, email servers, storage services, and VPNs etc.”

The spokesperson added that some of Megabyte’s equipment is located at Telepoint, Bulgaria’s biggest datacenter, with connectivity to Petrich. During the raid the police seized the company’s hardware to check for evidence of illegal activity.

“We were informed by the police that some of our clients in Petrich and Sofia were using our service for illegal streaming and actions,” the company said.

“Of course, we were not able to know this because our services are unmanaged and root access [to servers] is given to our clients. For this reason any client and anyone that uses our services are responsible for their own actions.”

TorrentFreak asked many more questions, including how many police attended, what type and volume of hardware was seized, and whether anyone was arrested or taken for questioning. But, apart from noting that the police were friendly, the company declined to give us any additional information, revealing that it was not permitted to do so at this stage.

What is clear, however, is that Megabyte-Internet is offering its full cooperation to the authorities. The company says that it cannot be held responsible for the actions of its clients so their details will be handed over as part of the investigation.

“So now we will give to the police any details about these clients because we hold their full details by law. [The police] will find [out about] all the illegal actions from them,” the company concludes, adding that it’s fully operational once more and working with clients.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Court Expands Dutch Pirate Bay Blockade to More ISPs, For Now

Post Syndicated from Ernesto original https://torrentfreak.com/court-expands-dutch-pirate-bay-blockade-to-more-isps-180113/

The Pirate Bay is arguably the most widely blocked website on the Internet.

ISPs from all over the world have been ordered by courts to prevent users from accessing the torrent site, and this week the list has grown a bit longer.

A Dutch court has ruled that local Internet providers KPN, Tele2, T-Mobile, Zeelandnet and CAIW must block the site within ten days. The verdict follows a similar decision from September last year, where Ziggo and XS4All were ordered to do the same.

The blockade applies to several IP addresses and more than 150 domain names that are used by the notorious torrent site. Several of the ISPs had warned the court about the dangers of overblocking, but these concerns were rejected.

While most Dutch customers will be unable to access The Pirate Bay directly, the decision is not yet final; that will only come when the Supreme Court issues its pending decision, the climax of a legal battle that started eight years ago.

A Dutch court first issued an order to block The Pirate Bay in 2012, but this decision was overturned two years later. Anti-piracy group BREIN then took the matter to the Supreme Court, which subsequently referred the case to the EU Court of Justice, seeking further clarification.

After a careful review of the case, the EU Court of Justice decided last year that The Pirate Bay can indeed be blocked.

The top EU court ruled that although The Pirate Bay’s operators don’t share anything themselves, they knowingly provide users with a platform to share copyright-infringing links. This can be seen as “an act of communication” under the EU Copyright Directive.

This put the case back before the Dutch Supreme Court, which has yet to decide on the matter.

BREIN, however, wanted a blocking decision more quickly and requested preliminary injunctions, like the one issued this week. These injunctions will only be valid until the final verdict is handed down.

A copy of the most recent court order is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Judge Issues Devastating Order Against BitTorrent Copyright Troll

Post Syndicated from Ernesto original https://torrentfreak.com/judge-issues-devastating-order-bittorrent-copyright-troll-180110/

In recent years, file-sharers around the world have been pressured to pay significant settlement fees, or face legal repercussions.

These so-called “copyright trolling” efforts have been a common occurrence in the United States since the turn of the last decade.

Increasingly, however, courts are growing weary of these cases. Many districts have turned into no-go zones for copyright trolls, and the people behind Prenda Law were arrested and are being prosecuted in a criminal case.

In the Western District of Washington, the tide also appears to have turned. After Venice PI, a copyright holder of the film “Once Upon a Time in Venice”, sued a man who later passed away, concerns were raised over the validity of the evidence.

Venice PI responded to the concerns with a declaration explaining its data gathering technique and assuring the Court that false positives are out of the question.

That testimony didn’t help much, though, as a minute order filed this week shows. The order applies to a dozen cases and prohibits the company from reaching out to any defendants until further notice, as there are several alarming issues that have to be resolved first.

One of the problems is that Venice PI declared that it’s owned by a company named Lost Dog Productions, which in turn is owned by Voltage Productions. Interestingly, these companies don’t appear in the usual records.

“A search of the California Secretary of State’s online database, however, reveals no registered entity with the name ‘Lost Dog’ or ‘Lost Dog Productions’,” the Court notes.

“Moreover, although ‘Voltage Pictures, LLC’ is registered with the California Secretary of State, and has the same address as Venice PI, LLC, the parent company named in plaintiff’s corporate disclosure form, ‘Voltage Productions, LLC,’ cannot be found in the California Secretary of State’s online database and does not appear to exist.”

In other words, the company that filed the lawsuit, as well as its parent company, are extremely questionable.

While the above is a reason for concern, it’s just the tip of the iceberg. The Court not only points out administrative errors, but it also has serious doubts about the evidence collection process. This was carried out by the German company MaverickEye, which used the tracking technology of another German company, GuardaLey.

GuardaLey CEO Benjamin Perino, who claims that he coded the tracking software, wrote a declaration explaining that the infringement detection system at issue “cannot yield a false positive.” However, the Court doubts this statement and Perino’s qualifications in general.

“Perino has been proffered as an expert, but his qualifications consist of a technical high school education and work experience unrelated to the peer-to-peer file-sharing technology known as BitTorrent,” the Court writes.

“Perino does not have the qualifications necessary to be considered an expert in the field in question, and his opinion that the surveillance program is incapable of error is both contrary to common sense and inconsistent with plaintiff’s counsel’s conduct in other matters in this district. Plaintiff has not submitted an adequate offer of proof.”

It seems like the Court would prefer to see an assessment from a qualified independent expert instead of the person who wrote the software. For now, this means that the IP-address evidence, in these cases, is not good enough. That’s quite a blow for the copyright holder.

If that wasn’t enough, the Court also highlights another issue that’s possibly even more problematic. When Venice PI requested the subpoenas to identify alleged pirates, they relied on declarations from Daniel Arheidt, a consultant for MaverickEye.

These declarations fail to mention, however, whether MaverickEye has the proper paperwork to collect IP addresses.

“Nowhere in Arheidt’s declarations does he indicate that either he or MaverickEye is licensed in Washington to conduct private investigation work,” the order reads.

This is important, as doing private investigator work without a license is a gross misdemeanor in Washington. The copyright holder was aware of this requirement because it was brought up in related cases in the past.

“Plaintiff’s counsel has apparently been aware since October 2016, when he received a letter concerning LHF Productions, Inc. v. Collins, C16-1017 RSM, that Arheidt might be committing a crime by engaging in unlicensed surveillance of Washington citizens, but he did not disclose this fact to the Court.”

The order is very bad news for Venice PI. The company had hoped to score a few dozen easy settlements but the tables have now been turned. The Court instead asks the company to explain the deficiencies and provide additional details. In the meantime, the copyright holder is urged not to spend or transfer any of the settlement money that has been collected thus far.

The latter indicates that Venice PI might have to hand defendants their money back, which would be pretty unique.

The order suggests that the Judge is very suspicious of these trolling activities. In a footnote there’s a link to a Fight Copyright Trolls article which revealed that the same counsel dismissed several cases, allegedly to avoid having IP-address evidence scrutinized.

Even more bizarrely, in another footnote the Court also doubts whether MaverickEye’s aforementioned consultant, Daniel Arheidt, actually exists.

“The Court has recently become aware that Arheidt is the latest in a series of German declarants (Darren M. Griffin, Daniel Macek, Daniel Susac, Tobias Fieser, Michael Patzer) who might be aliases or even fictitious.

“Plaintiff will not be permitted to rely on Arheidt’s declarations or underlying data without explaining to the Court’s satisfaction Arheidt’s relationship to the above-listed declarants and producing proof beyond a reasonable doubt of Arheidt’s existence,” the court adds.

These are serious allegations, to say the least.

If a copyright holder uses non-existent companies and questionable testimony from unqualified experts, after obtaining evidence illegally, to get a subpoena backed by a fictitious person… something’s not quite right.

A copy of the minute order, which affects a series of cases, is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

RuTracker Reveals Innovative Plan For Users to Subvert ISP Blocking

Post Syndicated from Andy original https://torrentfreak.com/rutracker-reveals-innovative-plan-for-users-to-subvert-isp-blocking-180110/

As Russia’s largest torrent site and one that earned itself a mention in TF’s list of most popular torrent sites 2018, RuTracker is continuously under fire.

The site has an extremely dedicated following but Russia’s telecoms watchdog, spurred on by copyright holders brandishing court rulings, does everything in its power to ensure that people can’t access the site easily.

As a result, RuTracker’s main domains are blocked by all ISPs, meaning that people have to resort to VPNs or the many dozens of proxy and mirror sites that have been set up to facilitate access to the popular tracker.

While all of these methods used to work just fine, new legislation that came into force during October means that mirror and proxy sites can be added to block lists without copyright holders having to return to court. And, following legislation introduced in November, local VPN services are forbidden from providing access to blocked sites.

While RuTracker has always insisted that web blockades have little effect on the numbers of people sharing content, direct traffic to their main domains has definitely suffered. To solve this problem and go some way towards mitigating VPN and proxy bans, the site has just come up with a new plan to keep the torrents flowing.

The scheme was quietly announced, not on RuTracker’s main forum, but to a smaller set of users on local site Leprosorium. The idea was that a quieter launch there would allow for controlled testing before a release to the masses. The project is called My.RuTracker and here’s how it works.

Instead of blocked users fruitlessly trying to find public circumvention methods that, once seen, are immediately blocked, they are invited to register their own domains. These can be single-use, for the person who registers them, but it’s envisioned that they’ll be shared out between friends, family, and online groups, to make better use of the resource.

Once domains are registered, users are invited to contact a special user account on the RuTracker site (operated by the site’s operators) which will provide them with precise technical details on how to set up their domain (.ru domains are not allowed) to gain access to RuTracker.

“In response, after a while (usually every other day), a list of NS addresses will be sent, to be entered into the domain’s settings at the registrar. Under this scheme, the user’s domain will be redirected to the RuTracker site via a dynamic IP address: this avoids the torrent tracker being blocked at any particular IP address,” the scheme envisages.
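As a rough illustration of why this sidesteps per-IP blocking, the hedged Python sketch below resolves a personal mirror domain twice, using a made-up name (mymirror.example) that is not part of the real scheme. The point is simply that the nameservers can answer with a different address from one lookup to the next, so a blocklist entry tied to any single IP address quickly goes stale.

    import socket
    import time

    def current_addresses(domain: str) -> set[str]:
        """Return the IPv4 addresses the domain's nameservers answer with right now."""
        infos = socket.getaddrinfo(domain, 443, family=socket.AF_INET, type=socket.SOCK_STREAM)
        return {info[4][0] for info in infos}   # sockaddr is (address, port) for AF_INET

    domain = "mymirror.example"        # hypothetical user-registered mirror domain
    print(current_addresses(domain))   # e.g. {'203.0.113.17'} at the time of the first lookup
    time.sleep(3600)                   # illustrative pause; the dynamic A record may rotate in the meantime
    print(current_addresses(domain))   # e.g. {'198.51.100.42'} once the operators have moved the site

Censors would have to chase every one of these small, privately shared domains individually, rather than a handful of well-known mirrors.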

According to local news resource Tjournal, 62 personal mirrors were launched following the initial appeal, with the operators of RuTracker now planning to publicly announce the project to their community. As more are added, the site will keep track of traffic from each of the personal “mirrors” for balancing the load on the site.

At least in theory, this seems like a pretty innovative scheme. Currently, the authorities rely on the scale and public awareness of a particular proxy or mirror in order to earmark it for blocking. This far more decentralized plan, in which only small numbers of people should know each domain, looks considerably more robust – at least until the authorities, and indeed the law, catch up.

And so the cat-and-mouse game continues.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons