Tag Archives: 2016

Friday Squid Blogging: Squid Prices Rise as Catch Decreases

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/04/friday_squid_bl_621.html

In Japan:

Last year’s haul sank 15% to 53,000 tons, according to the JF Zengyoren national federation of fishing cooperatives. The squid catch has fallen by half in just two years. The previous low was plumbed in 2016.

Lighter catches have been blamed on changing sea temperatures, which impedes the spawning and growth of the squid. Critics have also pointed to overfishing by North Korean and Chinese fishing boats.

Wholesale prices of flying squid have climbed as a result. Last year’s average price per kilogram came to 564 yen, a roughly 80% increase from two years earlier, according to JF Zengyoren.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

RDS for Oracle: Extending Outbound Network Access to use SSL/TLS

Post Syndicated from Surya Nallu original https://aws.amazon.com/blogs/architecture/rds-for-oracle-extending-outbound-network-access-to-use-ssltls/

In December 2016, we launched the Outbound Network Access functionality for Amazon RDS for Oracle, enabling customers to use their RDS for Oracle database instances to communicate with external web endpoints using the utl_http and utl_tcp packages, and to send emails through utl_smtp. We later extended the functionality with support for custom DNS servers, allowing such outbound network access to use any DNS server a customer chooses. These releases enabled HTTP, TCP, and SMTP communication originating from RDS for Oracle instances, but only over non-secure (non-SSL) channels.

To overcome this limitation, we recently published a whitepaper that guides you through the process of creating customized Oracle wallet bundles on your RDS for Oracle instances. By using such wallets, you can extend the Outbound Network Access capability so that external communications happen over secure (SSL/TLS) connections. This opens up new use cases for your RDS for Oracle instances.

With the right set of certificates imported into your RDS for Oracle instances (through Oracle wallets), your database instances can now:

  • Communicate with an HTTPS endpoint: Using utl_http, access a resource such as https://status.aws.amazon.com/robots.txt
  • Download files from Amazon S3 securely: Using a presigned URL from Amazon S3, you can now download any file over SSL
  • Extend Oracle Database links to use SSL: Database links between RDS for Oracle instances can now use SSL, as long as the instances have the SSL option installed
  • Send email over SMTPS: You can now integrate with Amazon SES, or any other SMTPS provider you can reach, to send emails from your database instances

These are just a few high-level examples of the new use cases that the whitepaper opens up. As a reminder, always ensure you have security best practices in place when using Outbound Network Access (as detailed in the whitepaper).
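As a concrete illustration of the presigned URL use case above, here is a minimal Node.js sketch (using the AWS SDK for JavaScript, with a hypothetical bucket and object key) that generates a time-limited HTTPS URL. Once the appropriate certificates are in the instance's Oracle wallet, the database can fetch that URL over SSL with utl_http.

// Minimal sketch: generate a presigned HTTPS URL for an S3 object.
// The bucket name and object key below are placeholders for illustration.
var AWS = require('aws-sdk');
var s3 = new AWS.S3({ signatureVersion: 'v4' });

var params = {
	Bucket: 'my-example-bucket',	// hypothetical bucket
	Key: 'exports/data.csv',	// hypothetical object key
	Expires: 900	// URL validity in seconds (15 minutes)
};

// The returned URL can be downloaded over SSL, for example with utl_http
// from an RDS for Oracle instance that has the required certificates in its wallet.
s3.getSignedUrl('getObject', params, function (err, url) {
	if (err) {
		console.log('Failed to generate presigned URL', err);
	} else {
		console.log('Presigned URL: ' + url);
	}
});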

About the Author

Surya Nallu is a Software Development Engineer on the Amazon RDS for Oracle team.

Securing Elections

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/04/securing_electi_1.html

Elections serve two purposes. The first, and obvious, purpose is to accurately choose the winner. But the second is equally important: to convince the loser. To the extent that an election system is not transparently and auditably accurate, it fails in that second purpose. Our election systems are failing, and we need to fix them.

Today, we conduct our elections on computers. Our registration lists are in computer databases. We vote on computerized voting machines. And our tabulation and reporting is done on computers. We do this for a lot of good reasons, but a side effect is that elections now have all the insecurities inherent in computers. The only way to reliably protect elections from both malice and accident is to use something that is not hackable or unreliable at scale; the best way to do that is to back up as much of the system as possible with paper.

Recently, there have been two graphic demonstrations of how bad our computerized voting system is. In 2007, the states of California and Ohio conducted audits of their electronic voting machines. Expert review teams found exploitable vulnerabilities in almost every component they examined. The researchers were able to undetectably alter vote tallies, erase audit logs, and load malware on to the systems. Some of their attacks could be implemented by a single individual with no greater access than a normal poll worker; others could be done remotely.

Last year, the Defcon hackers’ conference sponsored a Voting Village. Organizers collected 25 pieces of voting equipment, including voting machines and electronic poll books. By the end of the weekend, conference attendees had found ways to compromise every piece of test equipment: to load malicious software, compromise vote tallies and audit logs, or cause equipment to fail.

It’s important to understand that these were not well-funded nation-state attackers. These were not even academics who had been studying the problem for weeks. These were bored hackers, with no experience with voting machines, playing around between parties one weekend.

It shouldn’t be any surprise that voting equipment, including voting machines, voter registration databases, and vote tabulation systems, is that hackable. They’re computers — often ancient computers running operating systems no longer supported by the manufacturers — and they don’t have any magical security technology that the rest of the industry isn’t privy to. If anything, they’re less secure than the computers we generally use, because their manufacturers hide any flaws behind the proprietary nature of their equipment.

We’re not just worried about altering the vote. Sometimes causing widespread failures, or even just sowing mistrust in the system, is enough. And an election whose results are not trusted or believed is a failed election.

Voting systems have another requirement that makes security even harder to achieve: the requirement for a secret ballot. Because we have to securely separate the election-roll system that determines who can vote from the system that collects and tabulates the votes, we can’t use the security systems available to banking and other high-value applications.

We can securely bank online, but can’t securely vote online. If we could do away with anonymity — if everyone could check that their vote was counted correctly — then it would be easy to secure the vote. But that would lead to other problems. Before the US had the secret ballot, voter coercion and vote-buying were widespread.

We can’t, so we need to accept that our voting systems are insecure. We need an election system that is resilient to the threats. And for many parts of the system, that means paper.

Let’s start with the voter rolls. We know they’ve already been targeted. In 2016, someone changed the party affiliation of hundreds of voters before the Republican primary. That’s just one possibility. A well-executed attack that deletes, for example, one in five voters at random — or changes their addresses — would cause chaos on election day.

Yes, we need to shore up the security of these systems. We need better computer, network, and database security for the various state voter organizations. We also need to better secure the voter registration websites, with better design and better internet security. We need better security for the companies that build and sell all this equipment.

Multiple, unchangeable backups are essential. A record of every addition, deletion, and change needs to be stored on a separate system, on write-only media like a DVD. Copies of that DVD, or — even better — a paper printout of the voter rolls, should be available at every polling place on election day. We need to be ready for anything.

Next, the voting machines themselves. Security researchers agree that the gold standard is a voter-verified paper ballot. The easiest (and cheapest) way to achieve this is through optical-scan voting. Voters mark paper ballots by hand; they are fed into a machine and counted automatically. That paper ballot is saved, and serves as a final true record in a recount in case of problems. Touch-screen machines that print a paper ballot to drop in a ballot box can also work for voters with disabilities, as long as the ballot can be easily read and verified by the voter.

Finally, the tabulation and reporting systems. Here again we need more security in the process, but we must always use those paper ballots as checks on the computers. A manual, post-election, risk-limiting audit varies the number of ballots examined according to the margin of victory. Conducting this audit after every election, before the results are certified, gives us confidence that the election outcome is correct, even if the voting machines and tabulation computers have been tampered with. Additionally, we need better coordination and communications when incidents occur.

It’s vital to agree on these procedures and policies before an election. Before the fact, when anyone can win and no one knows whose votes might be changed, it’s easy to agree on strong security. But after the vote, someone is the presumptive winner — and then everything changes. Half of the country wants the result to stand, and half wants it reversed. At that point, it’s too late to agree on anything.

The politicians running in the election shouldn’t have to argue their challenges in court. Getting elections right is in the interest of all citizens. Many countries have independent election commissions that are charged with conducting elections and ensuring their security. We don’t do that in the US.

Instead, we have representatives from each of our two parties in the room, keeping an eye on each other. That provided acceptable security against 20th-century threats, but is totally inadequate to secure our elections in the 21st century. And the belief that the diversity of voting systems in the US provides a measure of security is a dangerous myth, because only a few districts can be decisive and there are so few voting-machine vendors.

We can do better. In 2017, the Department of Homeland Security declared elections to be critical infrastructure, allowing the department to focus on securing them. On 23 March, Congress allocated $380m to states to upgrade election security.

These are good starts, but don’t go nearly far enough. The constitution delegates elections to the states but allows Congress to “make or alter such Regulations”. In 1845, Congress set a nationwide election day. Today, we need it to set uniform and strict election standards.

This essay originally appeared in the Guardian.

Implementing safe AWS Lambda deployments with AWS CodeDeploy

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/implementing-safe-aws-lambda-deployments-with-aws-codedeploy/

This post courtesy of George Mao, AWS Senior Serverless Specialist – Solutions Architect

AWS Lambda and AWS CodeDeploy recently made it possible to automatically shift incoming traffic between two function versions based on a preconfigured rollout strategy. This new feature allows you to gradually shift traffic to the new function. If there are any issues with the new code, you can quickly roll back and control the impact to your application.

Previously, you had to manually move 100% of traffic from the old version to the new version. Now, you can have CodeDeploy automatically execute pre- or post-deployment tests and automate a gradual rollout strategy. Traffic shifting is built right into the AWS Serverless Application Model (SAM), making it easy to define and deploy your traffic shifting capabilities. SAM is an extension of AWS CloudFormation that provides a simplified way of defining serverless applications.

In this post, I show you how to use SAM, CloudFormation, and CodeDeploy to accomplish an automated rollout strategy for safe Lambda deployments.

Scenario

For this walkthrough, you write a Lambda application that returns a count of the S3 buckets that you own. You deploy it and use it in production. Later on, you receive requirements that tell you that you need to change your Lambda application to count only buckets that begin with the letter “a”.

Before you make the change, you need to be sure that your new Lambda application works as expected. If it does have issues, you want to minimize the number of impacted users and roll back easily. To accomplish this, you create a deployment process that publishes the new Lambda function, but does not send any traffic to it. You use CodeDeploy to execute a PreTraffic test to ensure that your new function works as expected. After the test succeeds, CodeDeploy automatically shifts traffic gradually to the new version of the Lambda function.

Your Lambda function is exposed as a REST service via an Amazon API Gateway deployment. This makes it easy to test and integrate.

Prerequisites

To execute the SAM and CloudFormation deployment, you must have the following IAM permissions:

  • cloudformation:*
  • lambda:*
  • codedeploy:*
  • iam:create*

You may use the AWS SAM Local CLI or the AWS CLI to package and deploy your Lambda application. If you choose to use SAM Local, be sure to install it onto your system. For more information, see AWS SAM Local Installation.

All of the code used in this post can be found in this GitHub repository: https://github.com/aws-samples/aws-safe-lambda-deployments.

Walkthrough

For this post, use SAM to define your resources because it comes with built-in CodeDeploy support for safe Lambda deployments.  The deployment is handled and automated by CloudFormation.

SAM allows you to define your Serverless applications in a simple and concise fashion, because it automatically creates all necessary resources behind the scenes. For example, if you do not define an execution role for a Lambda function, SAM automatically creates one. SAM also creates the CodeDeploy application necessary to drive the traffic shifting, as well as the IAM service role that CodeDeploy uses to execute all actions.

Create a SAM template

To get started, write your SAM template and call it template.yaml.

AWSTemplateFormatVersion : '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: An example SAM template for Lambda Safe Deployments.

Resources:

  returnS3Buckets:
    Type: AWS::Serverless::Function
    Properties:
      Handler: returnS3Buckets.handler
      Runtime: nodejs6.10
      AutoPublishAlias: live
      Policies:
        - Version: "2012-10-17"
          Statement: 
          - Effect: "Allow"
            Action: 
              - "s3:ListAllMyBuckets"
            Resource: '*'
      DeploymentPreference:
          Type: Linear10PercentEvery1Minute
          Hooks:
            PreTraffic: !Ref preTrafficHook
      Events:
        Api:
          Type: Api
          Properties:
            Path: /test
            Method: get

  preTrafficHook:
    Type: AWS::Serverless::Function
    Properties:
      Handler: preTrafficHook.handler
      Policies:
        - Version: "2012-10-17"
          Statement: 
          - Effect: "Allow"
            Action: 
              - "codedeploy:PutLifecycleEventHookExecutionStatus"
            Resource:
              !Sub 'arn:aws:codedeploy:${AWS::Region}:${AWS::AccountId}:deploymentgroup:${ServerlessDeploymentApplication}/*'
        - Version: "2012-10-17"
          Statement: 
          - Effect: "Allow"
            Action: 
              - "lambda:InvokeFunction"
            Resource: !Ref returnS3Buckets.Version
      Runtime: nodejs6.10
      FunctionName: 'CodeDeployHook_preTrafficHook'
      DeploymentPreference:
        Enabled: false
      Timeout: 5
      Environment:
        Variables:
          NewVersion: !Ref returnS3Buckets.Version

This template creates two functions:

  • returnS3Buckets
  • preTrafficHook

The returnS3Buckets function is where your application logic lives. It’s a simple piece of code that uses the AWS SDK for JavaScript in Node.js to call the Amazon S3 listBuckets API action and return the number of buckets.

'use strict';

var AWS = require('aws-sdk');
var s3 = new AWS.S3();

exports.handler = (event, context, callback) => {
	console.log("I am here! " + context.functionName  +  ":"  +  context.functionVersion);

	s3.listBuckets(function (err, data){
		if(err){
			console.log(err, err.stack);
			callback(null, {
				statusCode: 500,
				body: "Failed!"
			});
		}
		else{
			var allBuckets = data.Buckets;

			console.log("Total buckets: " + allBuckets.length);
			callback(null, {
				statusCode: 200,
				body: allBuckets.length
			});
		}
	});	
}

Review the key parts of the SAM template that defines returnS3Buckets:

  • The AutoPublishAlias attribute instructs SAM to automatically publish a new version of the Lambda function for each new deployment and link it to the live alias.
  • The Policies attribute specifies additional policy statements that SAM adds onto the automatically generated IAM role for this function. The first statement provides the function with permission to call listBuckets.
  • The DeploymentPreference attribute configures the type of rollout pattern to use. In this case, you are shifting traffic in a linear fashion, moving 10% of traffic every minute to the new version. For more information about supported patterns, see Serverless Application Model: Traffic Shifting Configurations.
  • The Hooks attribute specifies that you want to execute the preTrafficHook Lambda function before CodeDeploy automatically begins shifting traffic. This function should perform validation testing on the newly deployed Lambda version. This function invokes the new Lambda function and checks the results. If you’re satisfied with the tests, instruct CodeDeploy to proceed with the rollout via an API call to: codedeploy.putLifecycleEventHookExecutionStatus.
  • The Events attribute defines an API-based event source that can trigger this function. It accepts requests on the /test path using an HTTP GET method.

The preTrafficHook function performs the validation test. It invokes the new version of returnS3Buckets, checks the result, and reports the outcome back to CodeDeploy:
'use strict';

const AWS = require('aws-sdk');
const codedeploy = new AWS.CodeDeploy({apiVersion: '2014-10-06'});
var lambda = new AWS.Lambda();

exports.handler = (event, context, callback) => {

	console.log("Entering PreTraffic Hook!");
	
	// Read the DeploymentId & LifecycleEventHookExecutionId from the event payload
	var deploymentId = event.DeploymentId;
	var lifecycleEventHookExecutionId = event.LifecycleEventHookExecutionId;

	var functionToTest = process.env.NewVersion;
	console.log("Testing new function version: " + functionToTest);

	// Perform validation of the newly deployed Lambda version
	var lambdaParams = {
		FunctionName: functionToTest,
		InvocationType: "RequestResponse"
	};

	var lambdaResult = "Failed";
	lambda.invoke(lambdaParams, function(err, data) {
		if (err){	// an error occurred invoking the new version; report the failure so the hook doesn't just time out
			console.log(err, err.stack);
			lambdaResult = "Failed";
			callback("Validation failed: could not invoke the new function version");
		}
		else{	// successful response
			var result = JSON.parse(data.Payload);
			console.log("Result: " +  JSON.stringify(result));

			// Check the response for valid results
			// The response will be a JSON payload with statusCode and body properties. ie:
			// {
			//		"statusCode": 200,
			//		"body": 51
			// }
			if(result.body == 9){	
				lambdaResult = "Succeeded";
				console.log ("Validation testing succeeded!");
			}
			else{
				lambdaResult = "Failed";
				console.log ("Validation testing failed!");
			}

			// Complete the PreTraffic Hook by sending CodeDeploy the validation status
			var params = {
				deploymentId: deploymentId,
				lifecycleEventHookExecutionId: lifecycleEventHookExecutionId,
				status: lambdaResult // status can be 'Succeeded' or 'Failed'
			};
			
			// Pass AWS CodeDeploy the prepared validation test results.
			codedeploy.putLifecycleEventHookExecutionStatus(params, function(err, data) {
				if (err) {
					// Validation failed.
					console.log('CodeDeploy Status update failed');
					console.log(err, err.stack);
					callback("CodeDeploy Status update failed");
				} else {
					// Validation succeeded.
					console.log('Codedeploy status updated successfully');
					callback(null, 'Codedeploy status updated successfully');
				}
			});
		}  
	});
}

The hook is hardcoded to check that the number of S3 buckets returned is 9.

Review the key parts of the SAM template that defines preTrafficHook:

  • The Policies attribute specifies additional policy statements that SAM adds onto the automatically generated IAM role for this function. The first statement provides permissions to call the CodeDeploy PutLifecycleEventHookExecutionStatus API action. The second statement provides permissions to invoke the specific version of the returnS3Buckets function to test.
  • This function has traffic shifting disabled by setting Enabled: false under its DeploymentPreference attribute.
  • The FunctionName attribute explicitly tells CloudFormation what to name the function. Otherwise, CloudFormation creates the function with the default naming convention: [stackName]-[FunctionName]-[uniqueID].  Name the function with the “CodeDeployHook_” prefix because the CodeDeployServiceRole role only allows InvokeFunction on functions named with that prefix.
  • Set the Timeout attribute to allow enough time to complete your validation tests.
  • Use an environment variable to inject the ARN of the newest deployed version of the returnS3Buckets function. The ARN allows the function to know the specific version to invoke and perform validation testing on.

Deploy the function

Your SAM template is all set and the code is written—you’re ready to deploy the function for the first time. Here’s how to do it via the SAM CLI. To use the AWS CLI instead, replace “sam” with “aws cloudformation” in the commands below.

First, package the function. This command returns a CloudFormation importable file, packaged.yaml.

sam package --template-file template.yaml --s3-bucket mybucket --output-template-file packaged.yaml

Now deploy everything:

sam deploy --template-file packaged.yaml --stack-name mySafeDeployStack --capabilities CAPABILITY_IAM

At this point, both Lambda functions have been deployed within the CloudFormation stack mySafeDeployStack. The returnS3Buckets function has been deployed as version 1:

SAM automatically created a few things, including the CodeDeploy application, with the deployment pattern that you specified (Linear10PercentEvery1Minute). There is currently one deployment group, with no action, because no deployments have occurred. SAM also created the IAM service role that this CodeDeploy application uses:

There is a single managed policy attached to this role, which allows CodeDeploy to invoke any Lambda function that begins with “CodeDeployHook_”.

An API has been set up called safeDeployStack. It targets your Lambda function with the /test resource using the GET method. When you test the endpoint, API Gateway executes the returnS3Buckets function and it returns the number of S3 buckets that you own. In this case, it’s 51.

Publish a new Lambda function version

Now implement the requirements change, which is to make returnS3Buckets count only buckets that begin with the letter “a”. The code now looks like the following (see returnS3BucketsNew.js in GitHub):

'use strict';

var AWS = require('aws-sdk');
var s3 = new AWS.S3();

exports.handler = (event, context, callback) => {
	console.log("I am here! " + context.functionName  +  ":"  +  context.functionVersion);

	s3.listBuckets(function (err, data){
		if(err){
			console.log(err, err.stack);
			callback(null, {
				statusCode: 500,
				body: "Failed!"
			});
		}
		else{
			var allBuckets = data.Buckets;

			console.log("Total buckets: " + allBuckets.length);
			//callback(null, allBuckets.length);

			//  New Code begins here
			var counter=0;
			for(var i  in allBuckets){
				if(allBuckets[i].Name[0] === "a")
					counter++;
			}
			console.log("Total buckets starting with a: " + counter);

			callback(null, {
				statusCode: 200,
				body: counter
			});
			
		}
	});	
}

Repackage and redeploy with the same two commands as earlier:

sam package --template-file template.yaml --s3-bucket mybucket --output-template-file packaged.yaml
	
sam deploy --template-file packaged.yaml --stack-name mySafeDeployStack --capabilities CAPABILITY_IAM

CloudFormation understands that this is a stack update instead of an entirely new stack. You can see that reflected in the CloudFormation console:

During the update, CloudFormation deploys the new Lambda function as version 2 and adds it to the “live” alias. There is no traffic routing there yet. CodeDeploy now takes over to begin the safe deployment process.

The first thing CodeDeploy does is invoke the preTrafficHook function. Verify that this happened by reviewing the Lambda logs and metrics:

The function should progress successfully, invoke Version 2 of returnS3Buckets, and finally invoke the CodeDeploy API with a success code. After this occurs, CodeDeploy begins the predefined rollout strategy. Open the CodeDeploy console to review the deployment progress (Linear10PercentEvery1Minute):

Verify the traffic shift

During the deployment, verify that the traffic shift has started to occur by running the test periodically. As the deployment shifts towards the new version, a larger percentage of the responses return 9 instead of 51. These numbers are the count of buckets beginning with “a” (returned by the new version) and the total bucket count (returned by the old version), respectively.
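One simple way to watch the shift is a small polling script. The sketch below (Node.js, with a placeholder invoke URL) calls the /test endpoint every few seconds and tallies the responses, so you can see the ratio of 51s to 9s change as traffic moves to version 2.

// Minimal sketch: poll the API Gateway endpoint and tally the responses.
// Replace the placeholder URL with your own invoke URL.
var https = require('https');

var url = 'https://abc123.execute-api.us-east-1.amazonaws.com/Prod/test';	// placeholder
var counts = {};

function poll() {
	https.get(url, function (res) {
		var body = '';
		res.on('data', function (chunk) { body += chunk; });
		res.on('end', function () {
			counts[body] = (counts[body] || 0) + 1;	// e.g. { "51": 12, "9": 3 }
			console.log('Responses so far: ' + JSON.stringify(counts));
		});
	}).on('error', function (err) {
		console.log('Request failed', err);
	});
}

setInterval(poll, 5000);	// one request every 5 seconds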

A minute later, you see 10% more traffic shifting to the new version. The whole process takes 10 minutes to complete. After completion, open the Lambda console and verify that the “live” alias now points to version 2:

After 10 minutes, the deployment is complete and CodeDeploy signals success to CloudFormation and completes the stack update.

Check the results

If you invoke the function alias manually, you see the results of the new implementation.

aws lambda invoke --function-name [lambda arn to live alias] out.txt
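If you prefer to call the alias from code rather than the CLI, here is a minimal Node.js sketch using the AWS SDK; the FunctionName below is a placeholder for your function’s name or ARN, with the live alias appended.

// Minimal sketch: invoke the "live" alias of the function via the AWS SDK.
// The FunctionName below is a hypothetical example.
var AWS = require('aws-sdk');
var lambda = new AWS.Lambda();

var params = {
	FunctionName: 'mySafeDeployStack-returnS3Buckets-ABC123:live',	// placeholder name:alias
	InvocationType: 'RequestResponse'
};

lambda.invoke(params, function (err, data) {
	if (err) {
		console.log(err, err.stack);
	} else {
		// data.Payload contains the JSON response produced by the handler.
		console.log('Response payload: ' + data.Payload.toString());
	}
});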

You can also execute the prod stage of your API and verify the results by issuing an HTTP GET to the invoke URL:

Summary

This post has shown you how you can safely automate your Lambda deployments using the Lambda traffic shifting feature. You used the Serverless Application Model (SAM) to define your Lambda functions and configured CodeDeploy to manage your deployment patterns. Finally, you used CloudFormation to automate the deployment and updates to your function and PreTraffic hook.

Now that you know all about this new feature, you’re ready to begin automating Lambda deployments with confidence that things will work as designed. I look forward to hearing about what you’ve built with the AWS Serverless Platform.

КЗК: A Fine for Музикаутор in the Dispute with Bulgarian National Radio

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/04/18/bnr-6/

Today’s press release from КЗК (the Commission for Protection of Competition):

The Commission for Protection of Competition (КЗК) imposed a sanction of BGN 56,678 on МУЗИКАУТОР (the society of composers, authors of literary works related to music, and music publishers for collective management of copyright) for a violation of Article 21(5) of the Protection of Competition Act (ЗЗК). The violation consists of an abuse of a dominant position on the market for granting rights to broadcast musical and literary works from Музикаутор’s repertoire over the air, and to transmit or retransmit those works over an electronic communications network, to providers of radio services on the territory of the country, conduct which may prevent, restrict or distort competition and harm the interests of consumers through the unjustified termination of the existing contractual relationship with Bulgarian National Radio, thereby obstructing the radio’s activities.

The subject of the inquiry in the proceedings was the conduct of the Музикаутор society in connection with the termination of the relationship between Музикаутор and БНР that existed under a contract licensing the use of musical and literary works on radio, and the resulting inability of the public radio broadcaster to use the society’s repertoire in its programmes.

The inquiry established that the parties’ relationship concerning the grant of rights to use Музикаутор’s repertoire was governed by a contract for the use of musical and literary works on radio, concluded between the parties on 19 December 2011. The contract was terminated, effective 1 January 2017, by a notice served by Музикаутор on 21 November 2016.

From the negotiations between the parties and Музикаутор’s overall conduct in this regard, it can be concluded that terminating the existing contract was a means of obtaining the terms the society sought for setting the remuneration due to it for licensing the relevant rights. By terminating the contract with БНР, Музикаутор, as an undertaking with a dominant position, deprived the national radio of the ability to compete effectively on the market in which it operates.

The decision may be appealed by the parties and by any third party with a legal interest within 14 days, the period running from notification under the Administrative Procedure Code (АПК) and, for third parties, from publication in the Commission’s electronic register.

The decision

Attached to the decision is a dissenting opinion by Димитър Кюмюрджиев, deputy chairman of КЗК and the supervising member for case file No. 141/2017.

Google Search Receives Fewer Takedown Notices Than Before

Post Syndicated from Ernesto original https://torrentfreak.com/google-search-receives-fewer-takedown-notices-than-before-180414/

In recent years Google has had to cope with a continuous increase in takedown requests from copyright holders, which target pirate sites in search results.

Just a few years ago the search engine removed ‘only’ a few thousand URLs per day. This has since grown to millions and has kept growing, until recently.

Around a year ago, Google was receiving a billion takedown requests per year, and for a while the volume stabilized at roughly 20 million requests per week. By October last year, Google search had processed over three billion DMCA requests since it started counting.

After that, it appears that things calmed down a little. Where Google’s weekly takedown chart went up year after year, it’s now trending in a downward direction.

During the past half year, Google received ‘only’ 375 million takedown requests. That translates to roughly 15 million per week, or 750 million per year. This is a 25% decrease compared to the average in 2016.

Does this mean that copyright holders can no longer find enough pirated content via the search engine? We doubt it. But it’s clear that some of the big reporting agencies are sending in fewer complaints.

Degban, for example, which was at one point good for more than 10% of the weekly number of DMCA requests, has disappeared completely. Other big players, such as the Mexican anti-piracy outfit APDIF and Remove Your Media, have clearly lowered their volumes.

APDIF’s weekly DMCA volume

Of all the big players, UK Music Group BPI has been most consistent. Their average hasn’t dropped much in recent years, but is certainly not rising either.

It’s too early to tell whether this trend will hold, but according to the numbers we see now, Google will for the first time have a significant decrease in the number of takedown requests this year.

Despite the decrease, Google is under quite a bit of pressure from copyright holders to improve its takedown efforts. Most would like Google to delist pirate site domains entirely.

While the search engine isn’t willing to go that far, it does give a lower ranking to sites for which it receives a large volume of takedown requests. In addition, the company recently started accepting ‘prophylactic’ DMCA requests, for content that is not indexed yet.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

IP Address Fail: ISP Doesn’t Have to Hand ‘Pirates’ Details to Copyright Trolls

Post Syndicated from Andy original https://torrentfreak.com/ip-address-fail-isp-doesnt-have-to-hand-pirates-details-to-copyright-trolls-180414/

On October 27, 2016, UK-based Copyright Management Services (CMS) filed a case against Sweden-based ISP, Tele2.

CMS, run by Patrick Achache of German-based anti-piracy outfit MaverickEye (which in turn is deeply involved with infamous copyright troll outfit Guardaley), claimed that Tele2 customers had infringed its clients’ copyrights on the movies Cell and IT by sharing them via BitTorrent.

Since Tele2 had the personal details of the customers behind those IP addresses, CMS asked the Patent and Market Court to prevent the ISP from deleting the data before it could be handed over. Once in its possession, CMS would carry out the usual process of writing to customers and demanding cash settlements to make supposed lawsuits go away.

Tele2 complained that it could not hand over the details of customers using NAT addresses since it simply doesn’t hold that information. The ISP also said it could not hand over details of customers if IP address information had previously been deleted.

Taking these objections into consideration, in November 2017 the Court approved an interim order in respect of the remaining IP addresses. But there were significant problems which led the ISP to appeal.

According to tests carried out by Tele2, many of the IP addresses in the case did not relate to Sweden or indeed Tele2. In fact, some IP addresses belonged to foreign companies or mere affiliates of the ISP.

“Tele2 thus lacks the actual ability to provide information regarding a large part of the IP addresses covered by the submission,” the Court of Appeal noted in a decision published this week.

The problem appears to lie with the way the MaverickEye monitoring system attributed monitored IP addresses to Tele2.

The Court notes that the company relied on the RIPE Database which stated that the IP addresses in question were allocated to the “geographic area of Sweden”. According to Tele2, however, that wasn’t the case and as such, it had no information to hand over.

CMS, on the other hand, maintained that according to RIPE’s records, Tele2 was indeed the controller of the IP addresses in question so must hand over the information as requested.

While the Patent and Market Court said that Tele2 didn’t object to the MaverickEye monitoring software in terms of the data it collects on file-sharers, it noted that CMS had failed to initiate an investigation in respect of the IP addresses allegedly not belonging to Tele2.

“CMS has not invoked any investigation showing how the identification of the IP addresses in question is made in this case or who at Maverickeye UG was responsible for this,” the Court writes.

“Nor did CMS use the opportunity to hear representatives of Tele2 or others with Tele2 in mind to discover if the company has access to any of the current IP addresses and, if so, which.”

Considering the above, the Court notes that Tele2’s statement, that it doesn’t have access to the data, must stand.

“In these circumstances, CMS, against Tele2’s appeal, has not shown that Tele2 holds the information requested by the disclosure order. CMS’ application for a disclosure order should therefore be rejected,” the Court concludes.

The decision cannot be appealed so Copyright Management Services won’t get its hands on the personal details of the people behind the IP addresses, at least through this process.

The decision (Swedish, pdf)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

I Won a Case Against the Foreign Ministry over Data on the Upkeep of Our Consulates

Post Syndicated from Боян Юруков original https://yurukov.net/blog/2018/osadih-vanshno/

Exactly two years ago I filed a freedom-of-information request with the Ministry of Foreign Affairs. I asked for data on the work of our diplomatic missions. The request covered both how many people work with citizens at the consulates and what fee revenue they collect, as well as how much it costs to maintain the missions. I wanted to see how much the ministry invests in people and resources and how much money it gets back from the services it provides.

Of the 8 questions I asked, only half were answered – those about the number of staff at the consulates. Based on them, I showed how woefully insufficient the staff working with Bulgarians abroad are. In western Europe, the USA and Canada there were, in 2016, 26 consular offices with 70 employees. The Bulgarians in those regions number over a million. That is like asking the town hall of a small town to run a city the size of Sofia. Two thirds of all consulates had one or two employees. That was the case with the consulate in Frankfurt, which is responsible for 100 thousand Bulgarians.


The result is that for some people it is faster and cheaper to fly to Bulgaria to obtain the documents they need. The electronic services, on which hundreds of thousands were spent, are not accessible to everyone and do not quite work, as I saw for myself. For many, however, travelling is not an option, as the regular queues in front of our consulates attest. In the US and Canada this leads to fewer children receiving a Bulgarian birth certificate, even though they are entitled to one. This is a procedure the consulates should by law carry out even on their own initiative, but it is missing from their internal instructions and is practically impossible.

The number of employees, however, is only one side of the coin. The workload of the consulates is another essential aspect, and it is very hard to assess. One way is to compare how much money they collect from consular services and how much the ministry invests back in the form of equipment, salaries and so on. Consuls are far from mere clerks stamping powers of attorney or passport applications. They have a range of other duties, including work with the Bulgarian communities, and if they lack sufficient resources they cannot perform them adequately.

It was precisely this data that the Foreign Ministry refused to give me, so I decided to sue. I filed a complaint with the Sofia City Administrative Court, and in October 2016 it ruled in my favour. The defence of the permanent secretary of the Ministry of Foreign Affairs was that the data was already on the ministry’s website and that they had directed me to get it from there. In reality, their budget reports published only total revenue and expenditure for all activities, with no breakdown by consulate. Besides, by law, even if the information is available somewhere, a request under the Access to Public Information Act (ЗДОИ) must be answered with the information itself, not by pointing me to “see our website”.

Soon afterwards the ministry appealed to the Supreme Administrative Court (ВАС) and the case was scheduled for March 2018. Yesterday the ruling came out, upholding the lower court. This means the ministry must provide the data in the form I requested it. Judging by my experience with them, however, I have some doubts this will quite happen. I expect I will again have to file complaints over non-compliance with a court ruling, for which in this case the permanent secretary bears personal financial liability.

The interesting thing about this case is that I did everything over the internet, without a lawyer or other representative. I filed the complaints and the subsequent statements by email with an electronic signature. I paid the fee by online banking. I could not attend in person and decided to try without a lawyer. I spent quite a lot of time digging through the legislation governing the ministry’s work, as well as its internal rules. In my statements I argued which directorate should hold the information I wanted, and why, precisely in the form I expected it. I tried to rebut the arguments in the ministry’s order and appeal and their loose interpretations of ЗДОИ. My only worry was that the ministry would ask me to cover its in-house counsel’s costs at private-attorney rates (about 500 leva). While I was waiting for the hearing, however, the law was changed so that they can only claim an amount equal to a court-appointed defender’s fee (about 150 leva). In the end it did not come to that.

This example actually gives me the confidence to pursue more ministries and agencies through the courts when they have refused to answer me. While in some cases they simply decide not to reply, even after issuing a reference number, in others they refuse to comment on or share the sources of claims they have made. As I have said before, the system works, but it needs a push. It takes persistence, and it takes time, which is somewhat normal given the type of complaint and the court’s workload.

Today I will remind the ministry to send me the data and will file a new request for the same data from 2016 onwards. Once they provide it, I will publish an updated analysis of how our state communicates with Bulgarians abroad. We hear plenty of declarations, but in practice it may turn out that our emigrants are not only the biggest investor in the economy, but also effectively fund our diplomatic missions.

Here is the data over which I sued the Foreign Ministry:

  1. A breakdown of revenue from fees for issuing documents, performing civil marriages, legalising and certifying documents, and other services performed by our consular offices, by consulate and by year, for the last 10 years
  2. A breakdown of expenditure on upkeep, salaries, travel allowances and entertainment allowances paid to our consular offices, by consulate and by year, for the last 10 years
  3. A breakdown of expenditure on upkeep, salaries, travel allowances and entertainment allowances paid to all other diplomatic missions (including trade representatives at the consulates), by mission and by year, for the last 10 years
  4. A breakdown of expenditure on new technical equipment and on maintaining or repairing existing equipment – computers, telephones, copiers and other hardware needed to deliver consular services – by consulate and by year, for the last 10 years

More power to your Pi

Post Syndicated from James Adams original https://www.raspberrypi.org/blog/pi-power-supply-chip/

It’s been just over three weeks since we launched the new Raspberry Pi 3 Model B+. Although the product is branded Raspberry Pi 3B+ and not Raspberry Pi 4, a serious amount of engineering was involved in creating it. The wireless networking, USB/Ethernet hub, on-board power supplies, and BCM2837 chip were all upgraded: together these represent almost all the circuitry on the board! Today, I’d like to tell you about the work that has gone into creating a custom power supply chip for our newest computer.

Raspberry Pi 3 Model B+, with custom power supply chip

The new Raspberry Pi 3B+, sporting a new, custom power supply chip (bottom left-hand corner)

Successful launch

The Raspberry Pi 3B+ has been well received, and we’ve enjoyed hearing feedback from the community as well as reading the various reviews and articles highlighting the solid improvements in wireless networking, Ethernet, CPU, and thermal performance of the new board. Gareth Halfacree’s post here has some particularly nice graphs showing the increased performance as well as how the Pi 3B+ keeps cool under load due to the new CPU package that incorporates a metal heat spreader. The Raspberry Pi production lines at the Sony UK Technology Centre are running at full speed, and it seems most people who want to get hold of the new board are able to find one in stock.

Powering your Pi

One of the most critical but often under-appreciated elements of any electronic product, particularly one such as Raspberry Pi with lots of complex on-board silicon (processor, networking, high-speed memory), is the power supply. In fact, the Raspberry Pi 3B+ has no fewer than six different voltage rails: two at 3.3V — one special ‘quiet’ one for audio, and one for everything else; 1.8V; 1.2V for the LPDDR2 memory; and 1.2V nominal for the CPU core. Note that the CPU voltage is actually raised and lowered on the fly as the speed of the CPU is increased and decreased depending on how hard it is working. The sixth rail is 5V, which is the master supply that all the others are created from, and the output voltage for the four downstream USB ports; this is what the mains power adaptor is supplying through the micro USB power connector.

Power supply primer

There are two common classes of power supply circuits: linear regulators and switching regulators. Linear regulators work by creating a lower, regulated voltage from a higher one. In simple terms, they monitor the output voltage against an internally generated reference and continually change their own resistance to keep the output voltage constant. Switching regulators work in a different way: they ‘pump’ energy by first storing the energy coming from the source supply in a reactive component (usually an inductor, sometimes a capacitor) and then releasing it to the regulated output supply. The switches in switching regulators effect this energy transfer by first connecting the inductor (or capacitor) to store the source energy, and then switching the circuit so the energy is released to its destination.

Linear regulators produce smoother, less noisy output voltages, but they can only convert to a lower voltage, and have to dissipate energy to do so. The higher the output current and the voltage difference across them, the more energy is lost as heat. On the other hand, switching supplies can, depending on their design, convert any voltage to any other voltage and can be much more efficient (efficiencies of 90% and above are not uncommon). However, they are more complex and generate noisier output voltages.

Designers use both types of regulators depending on the needs of the downstream circuit: for low-voltage drops, low current, or low noise, linear regulators are usually the right choice, while switching regulators are used for higher power or when efficiency of conversion is required. One of the simplest switching-mode power supply circuits is the buck converter, used to create a lower voltage from a higher one, and this is what we use on the Pi.
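To make the trade-off concrete, here is a small worked example with idealised numbers (ignoring quiescent current and switching losses) for a supply that has to turn the 5V input into 3.3V at 1A:

Linear regulator: P_loss = (V_in − V_out) × I_out = (5 V − 3.3 V) × 1 A = 1.7 W dissipated as heat, for an efficiency of 3.3 W / 5 W ≈ 66%.

Ideal buck converter: V_out ≈ D × V_in, so the switch runs at a duty cycle D ≈ 3.3 / 5 = 0.66, and the input only has to supply about 0.66 A to deliver 1 A at the output, with very little energy lost as heat.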

A history lesson

The BCM2835 processor chip (found on the original Raspberry Pi Model B and B+, as well as on the Zero products) has on-chip power supplies: one switch-mode regulator for the core voltage, as well as a linear one for the LPDDR2 memory supply. This meant that in addition to 5V, we only had to provide 3.3V and 1.8V on the board, which was relatively simple to do using cheap, off-the-shelf parts.

Pi Zero sporting a BCM2835 processor which only needs 2 external switchers (the components clustered behind the camera port)

When we moved to the BCM2836 for Raspberry Pi Model 2 (and subsequently to the BCM2837A1 and B0 for Raspberry Pi 3B and 3B+), the core supply and the on-chip LPDDR2 memory supply were not up to the job of supplying the extra processor cores and larger memory, so we removed them. (We also used the recovered chip area to help fit in the new quad-core ARM processors.) The upshot of this was that we had to supply these power rails externally for the Raspberry Pi 2 and models thereafter. Moreover, we also had to provide circuitry to sequence them correctly in order to control exactly when they power up compared to the other supplies on the board.

Power supply design is tricky (but critical)

Raspberry Pi boards take in 5V from the micro USB socket and have to generate the other required supplies from this. When 5V is first connected, each of these other supplies must ‘start up’, meaning go from ‘off’, or 0V, to their correct voltage in some short period of time. The order of the supplies starting up is often important: commonly, there are structures inside a chip that form diodes between supply rails, and bringing supplies up in the wrong order can sometimes ‘turn on’ these diodes, causing them to conduct, with undesirable consequences. Silicon chips come with a data sheet specifying what supplies (voltages and currents) are needed and whether they need to be low-noise, in what order they must power up (and in some cases down), and sometimes even the rate at which the voltages must power up and down.

A Pi3. Power supply components are clustered bottom left next to the micro USB, middle (above LPDDR2 chip which is on the bottom of the PCB) and above the A/V jack.

In designing the power chain for the Pi 2 and 3, the sequencing was fairly straightforward: power rails power up in order of voltage (5V, 3.3V, 1.8V, 1.2V). However, the supplies were all generated with individual, discrete devices. Therefore, I spent quite a lot of time designing circuitry to control the sequencing — even with some design tricks to reduce component count, quite a few sequencing components are required. More complex systems generally use a Power Management Integrated Circuit (PMIC) with multiple supplies on a single chip, and many different PMIC variants are made by various manufacturers. Since Raspberry Pi 2 days, I was looking for a suitable PMIC to simplify the Pi design, but invariably (and somewhat counter-intuitively) these were always too expensive compared to my discrete solution, usually because they came with more features than needed.

One device to rule them all

It was way back in May 2015 when I first chatted to Peter Coyle of Exar (Exar were bought by MaxLinear in 2017) about power supply products for Raspberry Pi. We didn’t find a product match then, but in June 2016 Peter, along with Tuomas Hollman and Trevor Latham, visited to pitch the possibility of building a custom power management solution for us.

I was initially sceptical that it could be made cheap enough. However, our discussion indicated that if we could tailor the solution to just what we needed, it could be cost-effective. Over the coming weeks and months, we honed a specification we agreed on from the initial sketches we’d made, and Exar thought they could build it for us at the target price.

The chip we designed would contain all the key supplies required for the Pi on one small device in a cheap QFN package, and it would also perform the required sequencing and voltage monitoring. Moreover, the chip would be flexible to allow adjustment of supply voltages from their default values via I2C; the largest supply would be capable of being adjusted quickly to perform the dynamic core voltage changes needed in order to reduce voltage to the processor when it is idling (to save power), and to boost voltage to the processor when running at maximum speed (1.4 GHz). The supplies on the chip would all be generously specified and could deliver significantly more power than those used on the Raspberry Pi 3. All in all, the chip would contain four switching-mode converters and one low-current linear regulator, this last one being low-noise for the audio circuitry.

The MXL7704 chip

The project was a great success: MaxLinear delivered working samples of first silicon at the end of May 2017 (almost exactly a year after we had kicked off the project), and followed through with production quantities in December 2017 in time for the Raspberry Pi 3B+ production ramp.

The team behind the power supply chip on the Raspberry Pi 3 Model B+ (group of six men, two of whom are holding Raspberry Pi boards)

Front row: Roger with the very first Pi 3B+ prototypes and James with a MXL7704 development board hacked to power a Pi 3. Back row left to right: Will Torgerson, Trevor Latham, Peter Coyle, Tuomas Hollman.

The MXL7704 device has been key to reducing Pi board complexity and therefore overall bill of materials cost. Furthermore, by being able to deliver more power when needed, it has also been essential to increasing the speed of the (newly packaged) BCM2837B0 processor on the 3B+ to 1.4GHz. The result is improvements to both the continuous output current to the CPU (from 3A to 4A) and to the transient performance (i.e. the chip has helped to reduce the ‘transient response’, which is the change in supply voltage due to a sudden current spike that occurs when the processor suddenly demands a large current in a few nanoseconds, as modern CPUs tend to do).

With the MXL7704, the power supply circuitry on the 3B+ is now a lot simpler than the Pi 3B design. This new supply also provides the LPDDR2 memory voltage directly from a switching regulator rather than using linear regulators like the Pi 3, thereby improving energy efficiency. This helps to somewhat offset the extra power that the faster Ethernet, wireless networking, and processor consume. A pleasing side effect of using the new chip is the symmetric board layout of the regulators — it’s easy to see the four switching-mode supplies, given away by four similar-looking blobs (three grey and one brownish), which are the inductors.

Close-up of the power supply chip on the Raspberry Pi 3 Model B+

The Pi 3B+ PMIC MXL7704 — pleasingly symmetric

Kudos

It takes a lot of effort to design a new chip from scratch and get it all the way through to production — we are very grateful to the team at MaxLinear for their hard work, dedication, and enthusiasm. We’re also proud to have created something that will not only power Raspberry Pis, but will also be useful for other product designs: it turns out when you have a low-cost and flexible device, it can be used for many things — something we’re fairly familiar with here at Raspberry Pi! For the curious, the product page (including the data sheet) for the MXL7704 chip is here. Particular thanks go to Peter Coyle, Tuomas Hollman, and Trevor Latham, and also to Jon Cronk, who has been our contact in the US and has had to get up early to attend all our conference calls!

The MXL7704 design team celebrating on Pi Day — it takes a lot of people to design a chip!

I hope you liked reading about some of the effort that has gone into creating the new Pi. It’s nice to finally have a chance to tell people about some of the (increasingly complex) technical work that makes building a $35 computer possible — we’re very pleased with the Raspberry Pi 3B+, and we hope you enjoy using it as much as we’ve enjoyed creating it!

The post More power to your Pi appeared first on Raspberry Pi.

Reddit Copyright Complaints Jump 138% But Almost Half Get Rejected

Post Syndicated from Andy original https://torrentfreak.com/reddit-copyright-complaints-jump-138-but-almost-half-get-rejected-180411/

So-called ‘transparency reports’ are becoming increasingly popular with Internet-based platforms and their users. Among other things, they provide much-needed insight into how outsiders attempt to censor content published online and what actions are taken in response.

Google first started publishing its report in 2010, Twitter followed in 2012, and they’ve now been joined by a multitude of major companies including Microsoft, Facebook and Cloudflare.

As one of the world’s most recognized sites, Reddit joined the transparency party fairly late, publishing its first report in early 2015. While light on detail, it revealed that in the previous year the site received just 218 requests to remove content, 81% of which were DMCA-style copyright notices. A significant 62% of those copyright-related requests were rejected.

Over time, Reddit’s reporting has become a little more detailed. Last April it revealed that in 2016, the platform received ‘just’ 3,294 copyright removal requests for the entire year. However, what really caught the eye is how many notices were rejected. In just 610 instances, Reddit was required to remove content from the site, a rejection rate of 81%.

Having been a year since Reddit’s last report, the company has just published its latest edition, covering the period January 1, 2017 to December 31, 2017.

“Reddit publishes this transparency report every year as part of our ongoing commitment to keep you aware of the trends on the various requests regarding private Reddit user account information or removal of content posted to Reddit,” the company said in a statement.

“Reddit believes that maintaining this transparency is extremely important. We want you to be aware of this information, consider it carefully, and ask questions to keep us accountable.”

The detailed report covers a wide range of topics, including government requests for the preservation or production of user information (there were 310) and even an instruction to monitor one Reddit user’s activities in real time via a so-called ‘Trap and Trace’ order.

In copyright terms, there has been significant movement. In 2017, Reddit received 7,825 notifications of alleged copyright infringement under the Digital Millennium Copyright Act, that’s up roughly 138% over the 3,294 notifications received in 2016.

For a platform of Reddit’s unquestionable size, these volumes are not big. While the massive percentage increase is notable, 7,825 notices over a year still works out to only around 21 complaints per day. For comparison, Google receives millions every week.

But perhaps most telling is that despite receiving more than 7,800 DMCA-style takedown notices, these resulted in Reddit carrying out just 4,352 removals. This means that for whatever reasons (Reddit doesn’t specify), 3,473 requests were denied, a rejection rate of 44.38%. Google, on the other hand, removes around 90% of content reported.
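
For anyone who wants to check the arithmetic, the rejection rates quoted here follow directly from the notice and removal counts in Reddit’s two reports. A quick sanity check from the command line, using only the figures cited above:

$ echo "scale=2; (7825 - 4352) * 100 / 7825" | bc
44.38
$ echo "scale=2; (3294 - 610) * 100 / 3294" | bc
81.48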

DMCA notices can be declared invalid for a number of reasons, from incorrect formatting through to flat-out abuse. In many cases, copyright law is incorrectly applied and it’s not unknown for complainants to attempt a DMCA takedown to stifle speech or perceived competition.

Reddit says it tries to take all things into consideration before removing content.

“Reddit reviews each DMCA takedown notice carefully, and removes content where a valid report is received, as required by the law,” the company says.

“Reddit considers whether the reported content may fall under an exception listed in the DMCA, such as ‘fair use,’ and may ask for clarification that will assist in the review of the removal request.”

Considering the number of community-focused “subreddits” dedicated to piracy (not just general discussion, but actual links to content), the low volume of copyright notices received by Reddit continues to baffle.

There are sections in existence right now offering many links to movies and TV shows hosted on various file-hosting sites. They’re the type of links that are targeted all the time when they appear in Google search results, yet copyright owners don’t appear to notice or care about them on Reddit.

Finally, it would be nice if Reddit could provide more information in next year’s report, including detail on why so many requests are rejected. Perhaps regular submission of notices to the Lumen Database would be something Reddit would consider for the future.

Reddit’s Transparency Report for 2017 can be found here.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Roku Bans Popular Social IPTV Linking Service cCloud TV

Post Syndicated from Andy original https://torrentfreak.com/roku-bans-popular-social-iptv-linking-service-ccloud-tv-180409/

Despite being one of the more popular set-top box platforms, until last year Roku managed to stay completely out of the piracy conversation.

However, due to abuse of its system by third-parties, last June the Superior Court of Justice of the City of Mexico banned the importation and distribution of Roku devices in the country.

The decision followed a complaint filed by cable TV provider Cablevision, which said that some Roku channels and their users were infringing its distribution rights.

Since then, Roku has been fighting to have the ban lifted, previously informing TF that it expressly prohibits copyright infringement of any kind. That led to several more legal processes, yet last month, after considerable effort, the ban was upheld, much to Roku’s disappointment.

“It is necessary for Roku to make adjustments to its software, as other online content distribution platforms do, so that violations of copyrighted content do not take place,” Cablevision said.

Then, at the end of March, Roku suddenly banned the USTVnow channel from its platform, citing a third-party copyright complaint.

In a series of emails with TF, the company declined to offer further details but there is plenty of online speculation that the decision was a move towards the “adjustments” demanded by Cablevision. Today yet more fuel is being poured onto that same fire with Roku’s decision to ban the popular cCloud TV service from its platform.

For those unfamiliar with cCloud TV, it’s a video streaming platform that relies on users to contribute media links found on the web, whether they’re movie and TV shows or live sporting events.

“Project cCloud TV is known as the ‘Popcorn Time for Live TV’. The project started with 50 channels and has grown over time and now has over 4000 channels from all around the world,” its founder ‘Bane’ told TF back in 2016.

“The project was inspired by Popcorn Time and its simplicity for streaming torrents. The service works based on media links that can be found anywhere on the web and the cCloud project makes it easier for users to stream.”

Aside from the vast array of content cCloud offers, its versatility is almost unrivaled. In addition to working via most modern web browsers, it’s also accessible using smartphones, tablets, Plex media server, Kodi, VLC, and (until recently at least) Roku.

But cCloud and USTVnow aren’t the only services suffering bans at Roku.

As highlighted by CordCuttersNews, other channels are suffering similar fates, such as XTV, which was previously replaced with an FBI warning.

cCloud has had problems on Kodi too. Back in September 2017, TVAddons announced that it had been forced to remove the cCloud addon from its site.

“cCloud TV has been removed from our web site due to a complaint made by Bell, Rogers, Videotron and TVA on June 12th, 2017 as part of their lawsuit against our web site,” the site announced.

“Prior to hearing of the lawsuit, we had never received a single complaint relating to the cCloud TV addon for Kodi. cCloud TV for Kodi was developed by podgod, and was basically an interface for the community-based web service that goes by the same name.”

Last week, TVAddons went on to publish a “blacklist” of addons that have the potential to deliver content not authorized by rightsholders. Among many others, the list contains cCloud, meaning that potential users will now have to obtain it directly from the Kodi Bae Repository on GitHub instead.

At the time of publication, Roku had not responded to TorrentFreak’s request for comment.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Piracy Falls 6% in Spain, But It’s Still a Multi-Billion Euro Problem

Post Syndicated from Andy original https://torrentfreak.com/piracy-falls-6-in-spain-but-its-still-a-multi-billion-euro-problem-180409/

The Coalition of Creators and Content Industries, which represents Spain’s leading entertainment industry companies, is keeping a close eye on the local piracy landscape.

The outfit has just published its latest Piracy Observatory and Digital Content Consumption Habits report, carried out by the independent consultant GFK, and there is good news to report on headline piracy figures.

During 2017, the report estimates that people accessed unlicensed digital content just over four billion times, which equates to almost 21.9 billion euros in lost revenues. While this is a significant number, it’s a decrease of 6% compared to 2016 and an accumulated decrease of 9% compared to 2015, the coalition reports.

Overall, movies are most popular with pirates, with 34% helping themselves to content without paying.

“The volume of films accessed illegally during 2017 was 726 million, with a market value of 5.7 billion euros, compared to 6.9 billion in 2016. 35% of accesses happened while the film was still on screens in cinema theaters, while this percentage was 33% in 2016,” the report notes.

TV shows are in a close second position with 30% of users gobbling up 945 million episodes illegally during 2017. A surprisingly high 24% of users went for eBooks, with music relegated to fourth place with ‘just’ 22%, followed by videogames (11%) and football (10%).

The reasons given by pirates for their habits are both varied and familiar. 51% said that original content is too expensive while 43% said that taking the illegal route “is fast and easy”. Half of the pirates said that simply paying for an internet connection was justification for getting content for free.

A quarter of all pirates believe that they aren’t doing anyone any harm, with the same number saying they get content without paying because there are no consequences for doing so. But it isn’t just pirates themselves in the firing line.

Perhaps unsurprisingly given the current climate, the report heavily criticizes search engines for facilitating access to infringing content.

“With 75%, search engines are the main method of accessing illegal content and Google is used for nine out of ten accesses to pirate content,” the report reads.

“Regarding social networks, Facebook is the most used method of access (83%), followed by Twitter (42%) and Instagram (34%). Therefore it is most valuable that Facebook has reached agreements with different industries to become a legal source and to regulate access to content.”

Once on pirate sites, some consumers reported difficulties in determining whether they’re legal or not. Around 15% said that they had “big difficulties” telling whether a site is authorized with 44% saying they had problems “sometimes”.

That being said, given the amount of advertising on pirate sites, it’s no surprise that most knew a pirate site when they visited one and, according to the report, advertising placement is only on the up.

Just over a quarter of advertising appearing on pirate sites features well-known brands, although this is a reduction from more than 37% in 2016. This needs to be further improved, the coalition says, via collaboration between all parties involved in the industry.

A curious claim from the report is that 81% of pirate site users said they were required to register in order to use a platform. This resulted in “transferring personal data” to pirate site operators who gather it in databases that are used for profitable “e-marketing campaigns”.

“Pirate sites also get much more valuable data than one could imagine which allow them to get important economic benefits, as for example, Internet surfing habits, other websites visited by consumers, preferences, likes, and purchase habits,” the report states.

So what can be done to reduce consumer reliance on pirate sites? The report finds that consumers are largely in line with how the entertainment industries believe piracy should or could be tackled.

“The most efficient measures against piracy would be, according to the internet users’ own view, blocking access to the website offering content (78%) and penalizing internet providers (73%),” the report reads.

“Following these two, the best measure to reduce infringements would be, according to consumers, to promote social awareness campaigns against piracy (61%). This suggests that increased collaboration between the content sector and the ISPs (Internet Service Providers) could count on consumers’ support and positive assessment.”

Finally, consumers in Spain are familiar with the legal options, should they wish to take that route in future. Netflix awareness in the country is at 91%, Spotify at 81%, with Movistar+ and HBO at 80% and 68% respectively.

“This invalidates the reasons given by pirate users who said they did so because of the lack of an accessible legal offer at affordable prices,” the report adds.

However, those who take the plunge into the legal world don’t always kick the pirate habit, with the paper stating that users of pirate sites tend to carry on pirating, although they do pirate less in some sectors, notably music. The study also departs from findings in other regions that pirates can also be avid consumers of legitimate content.

Several reports, from the UK, Sweden, Australia, and even from Hollywood, have clearly indicated that pirates are the entertainment industries’ best customers.

In Spain, however, the situation appears to be much more pessimistic, with only 8% of people who access illegal digital content paying for legal content too. That seems low given that Netflix alone had more than a million Spanish subscribers at the end of 2017 and six million Spanish households currently subscribe to other pay TV services.

The report is available here (Spanish, pdf)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Community profile: Dave Akerman

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/community-profile-dave-akerman/

This column is from The MagPi issue 61. You can download a PDF of the full issue for free, or subscribe to receive the print edition through your letterbox or the digital edition on your tablet. All proceeds from the print and digital editions help the Raspberry Pi Foundation achieve our charitable goals.

The pinned tweet on Dave Akerman’s Twitter account shows a table displaying the various components needed for a high-altitude balloon (HAB) flight: batteries, leads, a camera and a Raspberry Pi, plus an unusually themed payload. The caption reads ‘The Queen, The Duke of York, and my TARDIS’, and sums up Dave’s maker career in a heartbeat.

Though writing software for industrial automation pays the bills, the majority of Dave’s time is spent in the world of high-altitude ballooning and the ever-growing community that encompasses it. And, while he makes some money sending business-themed balloons to near space for the likes of Aardman Animations, Confused.com, and the BBC, Dave is best known in the Raspberry Pi community for his use of the small computer in every payload, and his work as a tutor alongside the Foundation’s staff at Skycademy events.

Dave continues to help others while breaking records and having a good time exploring the atmosphere.

Dave has dedicated many hours and many, many more miles to assist with the Foundation’s Skycademy programme, helping to explore high-altitude ballooning with educators from across the UK. Using a Raspberry Pi and various other pieces of lightweight tech, Dave and Foundation staff member James Robinson explored the incorporation of high-altitude ballooning into education. Through Skycademy, educators were able to learn new skills and take them to the classroom, setting off their own balloons with their students, and recording the results on Raspberry Pis.

Dave’s most recent flight broke a new record. On 13 August 2017, his HAB payload was able to send back the highest images taken by any amateur flight.

But education isn’t the only reason for Dave’s involvement in the HAB community. As with anyone passionate about a specific hobby, Dave strives to break records. The most recent record-breaking flight took place on 13 August 2017, when Dave’s Raspberry Pi Zero HAB sent home the highest images taken by any amateur high-altitude balloon launch, from an altitude of 43,014 metres. No other HAB balloon has provided images from such an altitude, and the lightweight nature of the Pi Zero definitely helped, as Dave went on to mention on Twitter a few days later.

Dave is recognised as being the first person to incorporate a Raspberry Pi into a HAB payload, and continues to break records with the help of the little green board. More recently, he’s been able to lighten the load by using the Raspberry Pi Zero.

When the first Pi made its way to near space, Dave tore the computer apart in order to meet the weight restriction. The Pi in the Sky board was created to add the extra features needed for the flight. Since then, the HAT has experienced a few changes.

The Pi in the Sky board, created specifically for HAB flights.

Dave first fell in love with high-altitude ballooning after coming across the hobby in a video shared on a photographic forum. With a lifelong interest in space thanks to watching the Moon landings as a boy, plus a talent for electronics and photography, it seems a natural progression for him. Throw in his coding skills from learning to program on a Teletype and it’s no wonder he was ready and eager to take to the skies, so to speak, and capture the curvature of the Earth. What was so great about using the Raspberry Pi was the instant gratification he got from receiving images in real time as they were taken during the flight. While other devices could control a camera and store captured images for later retrieval, thanks to the Pi Dave was able to transmit the files back down to Earth and check the progress of his balloon while attempting to break records with a flight.

One of the many commercial flights Dave has organised featured the classic children’s TV character Morph, a creation of the Aardman Animations studio known for Wallace and Gromit. Morph took to the sky twice in his mission to reach near space, and finally succeeded in 2016.

High-altitude ballooning isn’t the only part of Dave’s life that incorporates a Raspberry Pi. Having “lost count” of how many Pis he has running tasks, Dave has also created radio receivers for APRS (ham radio data), ADS-B (aircraft tracking), and OGN (gliders), along with a time-lapse camera in his garden, and he has a few more Pis for tinkering purposes.

The post Community profile: Dave Akerman appeared first on Raspberry Pi.

On Conspiracies and the Conspiracy-Obsessed

Post Syndicated from Григор original http://www.gatchev.info/blog/?p=2130

Imagine that you are a computer security specialist. You have to secure the computers on a network against infection with viruses, trojan horses and other such software delights. What is the first thing you would do?

Personally, I would first look into the habits of the people using those computers. Always and without exception, the weakest element of a computer is the device behind the keyboard, so it is only logical to pay attention to it first. If a user never looks at anything beyond the company CRM, a combination of a decent antivirus and a well-configured firewall will, by and large, keep them free of infection.

If, however, a user spends half their time on porn sites and the other half on torrent sites, they deserve rather more special attention. Such a user is the “Typhoid Mary” of any computer network. It is worth decking their computer out with monitoring software and keeping it under constant watch: if someone or something tries to break into the network, it will probably be through them.

This sounds like a rather plain and basic lesson in computer security, doesn’t it? It is. I am telling it because it is useful in one non-computer respect.

In 2016, one of the people on Donald Trump’s team, Joseph Schmitz, became obsessed with finding Hillary Clinton’s deleted emails. He sincerely believed they contained evidence of who knows what: starting with her secretly controlling the state and destroying any political competition, ending with her being a devil worshipper, and passing through things that probably even Schmitz himself couldn’t name. To find them, he dug around everywhere, including the Dark Web. Eventually he received a pile of emails from someone using the nickname “PATRIOT” and proudly handed them over to the FBI for analysis and evaluation.

The FBI analyzed the emails carefully. Naturally, they turned out to be fake. The FBI, however, never had any doubts about that; what they were looking for were clues to who PATRIOT might be, and they considered that very important.

Whether they found any clues, I have no information. Judging by indirect signs, they probably did. And judging also by some unofficial information, my guess as to who is behind the nickname PATRIOT is also correct.

Now you expect me to reveal it, don’t you? I won’t, because it is of minimal importance; by dwelling on it I would risk distracting you from something far more important.

That this person happens to be Joseph Schmitz, and not, say, Jim Hoskins, is of absolutely no importance. Just as it doesn’t matter that he worked for Trump’s team rather than for Clinton’s, Sanders’, Bush’s or, say, Nicolás Maduro’s. Or even that he works as a politician rather than, say, a waiter. It also doesn’t matter that he found the emails on the Dark Web, from which exact nickname he received them, and so on. It doesn’t even matter that he handed them over to the FBI.

That last part doesn’t matter because the FBI would have known that he had obtained those emails and would have demanded them from him. Without fail. And that is the truly important thing in this story: how would the FBI have known that he had found something, and why was that something so important to them? The “how” is actually clear; they were obviously keeping an eye on him, but why exactly? And what makes his findings valuable?

So, what are the answers to these two important questions? For a small, alas very small, portion of Bulgarians the answers are absolutely obvious. For the vast majority there are no obvious answers, or entirely different answers seem obvious.

So, before you read any further, try to answer these two questions for yourself. It matters, for your own sake: to understand where you stand, what you should expect and what you should watch out for.

… Have you got your answers? Great. Here are the real ones.

The FBI kept an eye on Schmitz for a very simple reason. He believes in conspiracies of the “Hillary Clinton runs the state” kind. In other words, he is “alternatively adequate”, and that means he is extremely easy to manipulate. Conspiracy mania and its kindred madnesses are like a puppet’s strings: once someone gets hold of them, the madman dances at their command without ever realizing it. (And most often he even aggressively resists having his strings cut and being returned to the real world. Because he usually arrived at his madness out of a need for it, and that need does not disappear when the strings are cut; it only comes out into the open and starts to hurt.)

Combine that with the position of responsibility he held at the time, a member of the team of a major presidential candidate, and you get the analogue of the computer user who spends all day in the parts of the Net where it is not wise to go. The perfect open door for anyone who wants to attack and take over the “computer network” he belongs to.

That is also why the FBI kept watch on Schmitz and considered the “emails” he received important. A sufficiently good analysis can extract from them both who their real creator is and how they are meant to manipulate the puppet on its strings, and the others in its network. In other words, who is trying to attack America… Just as a good IT security specialist watches first of all the weak spots in the domain entrusted to them: if someone attacks, it will be through them.

And the conclusion I want to leave you with is simple. A madman is dangerous not only to himself. He is dangerous to everyone around him. Beware of madmen who believe in flat Earths, world conspiracies and the like. They may be legally answerable for their actions, but that does not mean they are actually capable of controlling them, just as a pedestrian crossing gives you right of way but not invulnerability. Watch out for them and keep clear.

(Or, to quote grandpa Stoyan, my grandfather’s eldest brother, who lived past a hundred and never set foot in a school, in his 19th-century Shopi dialect: “Guard yourself from a bull from the front, for it gores. Guard yourself from a horse from behind, for it kicks. And guard yourself from a fool from every side, for even he doesn’t know what he’ll do!”)

And most of all, take care not to go mad yourself. Not to become obsessed with world conspiracies, the fight against GMOs, “healthy” eating, homeopathy, aura healing and the rest. That is why I asked you to settle on your own answers before reading the real ones: so you can judge where you are and where you are heading. For yourself, without revealing it to anyone, but so that you know and, if necessary, take measures. Before it is too late. Otherwise you may not feel a thing, but for the people around you the sight will be a sad one. And you will be dangerous, both to yourself and to those you love.

If YouTube-Ripping Sites Are Illegal, What About Tools That Do a Similar Job?

Post Syndicated from Andy original https://torrentfreak.com/if-youtube-ripping-sites-are-illegal-what-about-tools-that-do-a-similar-job-180407/

In 2016, the International Federation of the Phonographic Industry published research which claimed that half of 16 to 24-year-olds use stream-ripping tools to copy music from sites like YouTube.

While this might not have surprised those who regularly participate in the activity, IFPI said that volumes had become so vast that stream-ripping had overtaken pirate site music downloads. That was a big statement.

Probably not coincidentally, just two weeks later IFPI, RIAA, and BPI announced legal action against the world’s largest YouTube ripping site, YouTube-MP3.

“YTMP3 rapidly and seamlessly removes the audio tracks contained in videos streamed from YouTube that YTMP3’s users access, converts those audio tracks to an MP3 format, copies and stores them on YTMP3’s servers, and then distributes copies of the MP3 audio files from its servers to its users in the United States, enabling its users to download those MP3 files to their computers, tablets, or smartphones,” the complaint read.

The labels sued YouTube-MP3 for direct infringement, contributory infringement, vicarious infringement, inducing others to infringe, plus circumvention of technological measures on top. The case was big and one that would’ve been intriguing to watch play out in court, but that never happened.

A year later, in September 2017, YouTube-MP3 settled out of court. No details were made public, but YouTube-MP3 apparently took all the blame and the court was asked to rule in favor of the labels on all counts.

This certainly gave the impression that what YouTube-MP3 did was illegal and a strong message was sent out to other companies thinking of offering a similar service. However, other onlookers clearly saw the labels’ lawsuit as something to be studied and learned from.

One of those was the operator of NotMP3downloader.com, a site that offers Free MP3 Recorder for YouTube, a tool offering similar functionality to YouTube-MP3 while supposedly avoiding the same legal pitfalls.

Part of that involves audio being processed on the user’s machine – not by stream-ripping as such – but by stream-recording. A subtle difference perhaps, but the site’s operator thinks it’s important.

“After examining the claims made by the copyright holders against youtube-mp3.org, we identified that the charges were based on the three main points. [None] of them are applicable to our product,” he told TF this week.

The first point involves YouTube-MP3’s acts of conversion, storage and distribution of content it had previously culled from YouTube. Copies of unlicensed tracks were clearly held on its own servers, a potent direct infringement risk.

“We don’t have any servers to download, convert or store a copyrighted or any other content from YouTube. Therefore, we do not violate any law or prohibition implied in this part,” NotMP3downloader’s operator explains.

Then there’s the act of “stream-ripping” itself. While YouTube-MP3 downloaded digital content from YouTube using its own software, NotMP3downloader claims to do things differently.

“Our software doesn’t download any streaming content directly, but only launches a web browser with the video specified by a user. The capturing happens from a local machine’s sound card and doesn’t deal with any content streamed through a network,” its operator notes.

This part also seems quite important. YouTube-MP3 was accused of unlawfully circumventing technological measures implemented by YouTube to prevent people downloading or copying content. By opening up YouTube’s own website and viewing content in the way the site demands, NotMP3downloader says it does not “violate the website’s integrity nor performs direct download of audio or video files.”

Like the Betamax video recorder before it that enabled recording from analog TV, NotMP3downloader enables a user to record a YouTube stream on their local machine. This, its makers claim, means the software is completely legal and defeats all the claims made by the labels in the YouTube-MP3 lawsuit.

“What YouTube does is broadcasting content through the Internet. Thus, there is nothing wrong if users are allowed to watch such content later as they may want,” the NotMP3downloader team explain.

“It is worth noting that in Sony Corp. of America v. Universal City Studios, Inc. (464 U.S. 417), the United States Supreme Court held that such practice, also known as time-shifting, was lawful, representing fair use under the US Copyright Act and causing no substantial harm to the copyright holder.”

While software that can record video and sounds locally are nothing new, the developments in the YouTube-MP3 case and this response from NotMP3downloader raises interesting questions.

We put some of them to none other than former RIAA Executive Vice President, Neil Turkewitz, who now works as President of Turkewitz Consulting Group.

Turkewitz stressed that he doesn’t speak for the industry as a whole or indeed the RIAA but it’s clear that his passion for protecting creators persists. He told us that in this instance, reliance on the Betamax decision is “misplaced”.

“The content is different, the activity is different, and the function is different,” Turkewitz told TF.

“The Sony decision must be understood in its context — the time shifting of audiovisual programming being broadcast from point to multipoint. The making available of content by a point-to-point interactive service like YouTube isn’t broadcasting — or at a minimum, is not a form of broadcasting akin to that considered by the Supreme Court in Sony.

“More fundamentally, broadcasting (right of communication to the public) is one of only several rights implicated by the service. And of course, issues of liability will be informed by considerations of purpose, effect and perceived harm. A court’s judgment will also be affected by whether it views the ‘innovation’ as an attempt to circumvent the requirements of law. The decision of the Supreme Court in ABC v. Aereo is certainly instructive in that regard.”

And there are other issues too. While YouTube itself is yet to take any legal action to deter users from downloading rather than merely streaming content, its terms of service are quite specific and seem to cover all eventualities.

“[Y]ou agree not to access Content for any reason other than your personal, non-commercial use solely as intended through and permitted by the normal functionality of the Service, and solely for Streaming,” YouTube’s ToS reads.

“‘Streaming’ means a contemporaneous digital transmission of the material by YouTube via the Internet to a user operated Internet enabled device in such a manner that the data is intended for real-time viewing and not intended to be downloaded (either permanently or temporarily), copied, stored, or redistributed by the user.

“You shall not copy, reproduce, distribute, transmit, broadcast, display, sell, license, or otherwise exploit any Content for any other purposes without the prior written consent of YouTube or the respective licensors of the Content.”

In this respect, it seems that a user doing anything but real-time streaming of YouTube content is breaching YouTube’s terms of service. The big question then, of course, is whether providing a tool specifically for that purpose represents an infringement of copyright.

The people behind Free MP3 Recorder believe that the “scope of application depends entirely on the end users’ intentions” which seems like a fair argument at first view. But, as usual, copyright law is incredibly complex and there are plenty of opposing views.

We asked the BPI, which took action against YouTubeMP3, for its take on this type of tool. The official response was “No comment” which doesn’t really clarify the position, at least for now.

Needless to say, the Betamax decision – relevant or not – doesn’t apply in the UK. But that only adds more parameters into the mix – and perhaps more opportunities for lawyers to make money arguing for and against tools like this in the future.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

New – Encryption of Data in Transit for Amazon EFS

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-encryption-of-data-in-transit-for-amazon-efs/

Amazon Elastic File System was designed to be the file system of choice for cloud-native applications that require shared access to file-based storage. We launched EFS in mid-2016 and have added several important features since then including on-premises access via Direct Connect and encryption of data at rest. We have also made EFS available in additional AWS Regions, most recently US West (Northern California). As was the case with EFS itself, these enhancements were made in response to customer feedback, and reflect our desire to serve an ever-widening customer base.

Encryption in Transit
Today we are making EFS even more useful with the addition of support for encryption of data in transit. When used in conjunction with the existing support for encryption of data at rest, you now have the ability to protect your stored files using a defense-in-depth security strategy.

In order to make it easy for you to implement encryption in transit, we are also releasing an EFS mount helper. The helper (available in source code and RPM form) takes care of setting up a TLS tunnel to EFS, and also allows you to mount file systems by ID. The two features are independent; you can use the helper to mount file systems by ID even if you don’t make use of encryption in transit. The helper also supplies a recommended set of default options to the actual mount command.

Setting up Encryption
I start by installing the EFS mount helper on my Amazon Linux instance:

$ sudo yum install -y amazon-efs-utils

Next, I visit the EFS Console and capture the file system ID.

Then I specify the ID (and the TLS option) to mount the file system:

$ sudo mount -t efs fs-92758f7b -o tls /mnt/efs

And that’s it! The encryption is transparent and has an almost negligible impact on data transfer speed.
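
If you want the file system to be mounted automatically at boot, the mount helper can also be used from /etc/fstab. Here’s a minimal sketch, assuming the same file system ID and mount point as above and that the helper registers the efs file system type and tls option as described in its README (worth checking there before relying on this):

fs-92758f7b /mnt/efs efs _netdev,tls 0 0

The _netdev option simply tells the system to defer the mount until networking is up.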

Available Now
You can start using encryption in transit today in all AWS Regions where EFS is available.

The mount helper is available for Amazon Linux. If you are running another distribution of Linux you will need to clone the GitHub repo and build your own RPM, as described in the README.

Jeff;

Men Who Sold Pirate IPTV Service to Pubs Jailed for 4.5 Years

Post Syndicated from Andy original https://torrentfreak.com/men-who-sold-pirate-iptv-service-to-pubs-jailed-for-4-5-years-180404/

For owners and landlords of pubs and clubs in the UK, providing top-tier sports on TV can be the key to bringing in plenty of thirsty customers.

That being said, the cost of doing so is viewed by many as extortionate, with companies including Sky and BT Sport demanding huge fees for the privilege.

As a result, there is a growing opportunity for people to step in to provide cheaper alternatives. With satellite-type piracy on the wane, IPTV is now a rising force and there’s no shortage of companies prepared to sell a device and associated subscription service to a landlord for a fraction of Sky’s fees.

That’s where John Dodds, 65, and Jason Richards, 45, stepped in. From 2009 until 2016, the pair were involved in an operation selling such services to a staggering 270 pubs and clubs in the North-East of England.

While Sky could charge thousands per month, the duo allegedly charged customers less than £200 per month. For this fee, customers received a set-top box plus a service which included Premier League soccer and boxing matches that would otherwise be pay-per-view.

According to local sources, the scheme was incredibly lucrative for the pair. Via a fraudulent company, the duo generated revenues of £1.5m, which provided luxury cars and foreign homes.

Unfortunately, however, the business – which at some point was branded ‘Full Effects HD Sports’ – attracted the attention of the Premier League. In common with the movie industry before it, the Premier League carried out a private prosecution on the basis that the pair were defrauding the organization.

“What the defendants created was their own, highly professional broadcasting service which was being sold to subscribers at a rate designed to undercut any legitimate broadcaster, which they were able to do as they weren’t paying to make any of the programmes or buy from the owners, such as the Premier League,” Prosecutor David Groome told the court.

The court was convinced by the Premier League’s arguments and this morning, before Newcastle Crown Court, the pair were sentenced to four-and-a-half years each in prison.

“This was a sophisticated fraud committed against numerous broadcasters throughout the world and those who have interests in the contents of broadcasts, particularly the Football Association, Premier League,” the judge said, as quoted by Sunderland Echo.

“You both knew perfectly well you were engaged in fraud because you knew the broadcasters were not being paid any or any appropriate fee for the use of their broadcasts. You were able to mislead customers, tell them that the services were lawful for them to use when you knew they were not.”

Unfortunately for the duo’s customers, a number of publicans who bought the service were also sued or prosecuted, which the judge noted could have negative consequences in relation to their future suitability to hold a liquor license.

“This is a hugely significant judgment as it provides further evidence that selling these devices is illegal and can result in a prison sentence,” said Premier League director of legal services Kevin Plumb.

“We hope this verdict gets the message out that selling or using these devices is simply not worth the risk.”

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Hosting Provider Steadfast is Not Liable for ‘Pirate’ Site

Post Syndicated from Ernesto original https://torrentfreak.com/hosting-provider-steadfast-is-not-liable-for-pirate-site-180403/

In 2016, adult entertainment publisher ALS Scan dragged several third-party Internet services to court.

The company targeted companies including CDN provider CloudFlare and the Chicago-based hosting company Steadfast, accusing them of copyright infringement because they offered services to pirate sites.

ALS argued that Steadfast refused to shut down the servers of the image sharing platform Imagebam.com, which was operated by its client Flixya. The hosting provider had been targeted with dozens of DMCA notices, and ALS accused Steadfast of turning a blind eye to the situation.

Steadfast denied these allegations. The hosting provider did indeed lease servers to Flixya for ten years but said that it forwarded all notices to its client. The hosting company could not address individual infringements, other than shutting down the entire site, which would have been disproportionate in their view.

With a trial getting closer, the hosting company submitted a motion for summary judgment, arguing that it can’t be held liable for copyright infringement. A few days ago, California District Court Judge George Wu ruled on the matter, bringing good news for Steadfast.

Judge Wu dismissed all claims against Steadfast, including contributory copyright infringement, vicarious copyright infringement, and contributory trademark infringement, which is a clear win.

Dismissed

The order clarifies that hosting providers such as Steadfast can, in principle, be held liable for pirate sites, even when those sites run on servers leased to a customer that operates its own takedown policy, a point Steadfast had contested.

In this case, it is clear that Steadfast knew of the infringements. It could have shut down imagebam.com but failed to do so, and continued to provide server space to known copyright infringers on the site. All these arguments could, in theory, weigh against the hosting provider.

However, in order to be liable for contributory copyright infringement, ALS Scan needed to show that Steadfast failed to take simple steps to prevent the copyright infringements at issue. This is where the adult entertainment publisher’s arguments failed.

Steadfast forwarded all notices to its customer Flixya which resulted in the removal of the infringing images. In other words, the hosting provider took simple steps that prevented further copyright infringements.

“Given these undisputed facts, the Court would find that Steadfast did not ‘[fail] to take simple measures’ to prevent the specific acts of infringement of which it was aware. Steadfast took simple steps that resulted in all of the at-issue images being removed,” Judge Wu writes.

ALS argued that Steadfast should have shut down its customer’s entire server to prevent future infringements, but the court didn’t see it that way. Service providers only have to take measures if they know that infringements have occurred or will occur in the future, and the latter was not obvious here.

“As such, the Court is not convinced that Steadfast had any reason, legal or practical, to terminate Flixya’s account and power down its servers,” the order reads.

Steadfast founder and CEO Karl Zimmerman is happy with the outcome of the case. He agrees that hosting providers have a responsibility to respond to copyright infringement complaints, but stresses that his company already has the right procedures in play.

“We already check and assure the content is removed, and yes, if the content simply stays up, that is concerning and shows that more could be done,” Zimmerman informs TF.

“We took action in forwarding the complaints, tracking those complaints, and validating the content had been removed. We did what was required of us, which is why I thought it was odd we were in this case in the first place.”

Hosting providers should take measures to help curb copyright infringement, according to Steadfast. However, shutting down the entire services of customers who take down infringing links when they’re asked to goes too far. Zimmerman is glad that Judge Wu agreed with this.

“To me, it simply does not seem reasonable to have to shut down a customer just because future infringement of their users is possible, when every indication is that the customer is completely law-abiding and I’m glad the judge agreed with that,” he says.

A copy of United States District Court Judge George Wu’s order is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Cloudflare Fails to Eliminate ‘Moot’ Pirate Site Blocking Threat

Post Syndicated from Ernesto original https://torrentfreak.com/cloudflare-fails-eliminate-moot-pirate-site-blocking-threat/

Representing various major record labels, the RIAA filed a lawsuit against pirate site MP3Skull three years ago.

With millions of visitors per month the MP3 download site had been one of the prime sources of pirated music for a long time.

In 2016, the record labels won their case against the MP3 download portal but the site initially ignored the court order and continued to operate. This prompted the RIAA to go after third-party services including Cloudflare, demanding that they block associated domain names.

Cloudflare objected and argued that the DMCA shielded the company from the broad blocking requirements. However, the court ruled that the DMCA doesn’t apply in this case, opening the door to widespread anti-piracy filtering.

The court stressed that, before issuing an injunction against Cloudflare, it still had to be determined whether the CDN provider is “in active concert or participation” with the pirate site. However, this has yet to happen. Since MP3Skull has ceased its operations the RIAA has shown little interest in pursuing the matter any further.

While there is no longer an immediate site-blocking threat, the earlier ruling makes it easier for rightsholders to make similar blocking requests in the future. Cloudflare, therefore, asked the court to throw the order out, arguing that since MP3Skull is no longer available, the issue is moot.

This week, US District Court Judge Marcia Cooke denied that request.

Denied

This is, of course, music to the ears of the RIAA and its members.

The RIAA wants to keep the door open for similar blocking requests in the future. This potential liability for pirate sites is the main reason why the CDN provider asked the court to vacate the order, the RIAA said previously.

While the order remains in place, Judge Cooke noted that both parties are working on some kind of compromise or clarification, and gave them two weeks to draft this into a new proposal.

“The parties may draft and submit a joint proposed order addressing the issues raised at the hearing on or before April 10, 2018,” Judge Cooke writes.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Major Pirate Site Operators’ Sentences Increased on Appeal

Post Syndicated from Andy original https://torrentfreak.com/major-pirate-site-operators-sentences-increased-on-appeal-180330/

With The Pirate Bay, the most famous pirate site in Swedish history, still in full swing, a lesser-known streaming platform started to gain traction more than half a decade ago.

From humble beginnings, Swefilmer eventually grew to become Sweden’s most popular movie and TV show streaming site. At one stage it was credited alongside another streaming portal for serving up to 25% of all online video streaming in Sweden.

But in 2015, everything came crashing down. An operator of the site in his early twenties was raided by local police and arrested. An older Turkish man, who was accused of receiving donations from users and setting up Swefilmer’s deals with advertisers, was later arrested in Germany.

Their activities between November 2013 and June 2015 landed them an appearance before the Varberg District Court last January, where they were accused of making more than $1.5m in advertising revenue from copyright infringement.

The prosecutor described the site as being like “organized crime”. The then 26-year-old was described as the main player behind the site, with the then 23-year-old playing a much smaller role. The latter received an estimated $4,000 of the proceeds, the former was said to have pocketed more than $1.5m.

As expected, things didn’t go well. The older man, who was described as leading a luxury lifestyle, was convicted of 1,044 breaches of copyright law and serious money laundering offenses. He was sentenced to three years in prison and ordered to forfeit 14,000,000 SEK (US$1.68m).

Due to his minimal role, the younger man was given probation and ordered to complete 120 hours of community service. Speaking with TorrentFreak at the time, the 23-year-old said he was relieved at the relatively light sentence but noted it may not be over yet.

Indeed, as is often the case with these complex copyright prosecutions, the matter found itself at the Court of Appeal of Western Sweden. On Wednesday its decision was handed down and it’s bad news for both men.

“The Court of Appeal, like the District Court, judges the men for breach of copyright law,” the Court said in a statement.

“They are judged to have made more than 1,400 copyrighted films available through the Swefilmer streaming service, without obtaining permission from copyright holders. One of the men is also convicted of gross money laundering because he received revenues from the criminal activity.”

In respect of the now 27-year-old, the Court decided to hand down a much more severe sentence, extending the term of imprisonment from three to four years.

There was some better news in respect of the amount he has to forfeit to the state, however. The District Court set this amount at 14,000,000 SEK (US$1.68m) but the Court of Appeal reduced it to ‘just’ 4,000,000 SEK (US$482,280).

The younger man’s conditional sentence was upheld but community service was replaced with a fine of 10,000 SEK (US$1,200). Also, along with his accomplice, he must now pay significant damages to a Norwegian plaintiff in the case.

“Both men will jointly pay damages of NOK 2.2 million (US$283,000) together with interest to Nordisk Film A/S for copyright infringement in one of the films posted on the website,” the Court writes in its decision.

But even now, the matter may not be closed. Ansgar Firsching, the older man’s lawyer, told SVT that the case could go all the way to the Supreme Court.

“I have informed my client about the content of the judgment and it is highly likely that he will turn to the Supreme Court,” Firsching said.

It appears that the 27-year-old will argue that, at the time of the alleged offenses, merely linking to copyrighted content was not a criminal offense, but whether this approach will succeed is seriously up for debate.

While linking was previously considered by some to sit in a legal gray area, the District Court drew heavily on the GS Media ruling handed down by the European Court of Justice in September 2016.

In that case, the EU Court found that those who post links to content they do not know is infringing in a non-commercial environment usually don’t commit infringement. The Swefilmer case doesn’t immediately appear to fit either of those parameters.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.