
Steam Censors MEGA.nz Links in Chats and Forum Posts

Post Syndicated from Ernesto original https://torrentfreak.com/steam-censors-mega-nz-links-in-chats-and-forum-posts-180421/

With more than 150 million registered accounts, Steam is much more than just a game distribution platform.

For many people, it’s also a social hangout and a communication channel.

Steam’s instant messaging tool, for example, is widely used for chats with friends. About games of course, but also to discuss lots of other stuff.

While Valve doesn’t mind people socializing on its platform, there are certain things the company doesn’t want Steam users to share. This includes links to the cloud hosting service Mega.

Users who’d like to show off some gaming footage, or even a collection of cat pictures they stored on Mega, are unable to do so. As it turns out, Steam actively censors these types of links in forum posts and chats.

In forum posts, these offending links are replaced by the text {LINK REMOVED} and private chats get the same treatment. Instead of the Mega link, people on the other end only get a mention that a link was removed.

Mega link removed from chat

While Mega operates as a regular company offering cloud hosting services, Steam labels the site as “potentially malicious.”

“The site could contain malicious content or be known for stealing user credentials,” Steam’s link checker warns.

Potentially malicious…

It’s unclear what malicious means in this context. Mega has never been flagged by Google’s Safe Browsing program, which is regarded as one of the industry standards for malware and other unwanted software.

What’s more likely is that Mega’s piracy stigma has something to do with the censoring. As it turns out, Steam also censors 4shared.com, as well as Pirate Bay’s former .se domain name.

Other “malicious sites” which get the same treatment are more game-oriented, such as cheathappens.com and the CSGO skin screenshot site metjm.net. While it’s understandable that some game developers don’t like these, malicious is a rather broad term in this regard.

Mega strongly denies doing anything wrong. Mega Chairman Stephen Hall tells TorrentFreak that the company swiftly removes any malicious content once it receives an abuse notice.

“It is crazy for sites to block Mega links as we respond very quickly to disable any links that are reported as malware, generally much quicker than our competitors,” Hall says.

Valve did not immediately reply to our request for clarification so the precise reason for the link censoring remains unknown.

That said, when something’s censored the public tends to work around any restrictions. Mega links are still being shared on Steam, with a slightly altered URL. In addition, Mega’s backup domain Mega.co.nz still works fine too.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Securing Elections

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/04/securing_electi_1.html

Elections serve two purposes. The first, and obvious, purpose is to accurately choose the winner. But the second is equally important: to convince the loser. To the extent that an election system is not transparently and auditably accurate, it fails in that second purpose. Our election systems are failing, and we need to fix them.

Today, we conduct our elections on computers. Our registration lists are in computer databases. We vote on computerized voting machines. And our tabulation and reporting is done on computers. We do this for a lot of good reasons, but a side effect is that elections now have all the insecurities inherent in computers. The only way to reliably protect elections from both malice and accident is to use something that is not hackable or unreliable at scale; the best way to do that is to back up as much of the system as possible with paper.

Recently, there have been two graphic demonstrations of how bad our computerized voting system is. In 2007, the states of California and Ohio conducted audits of their electronic voting machines. Expert review teams found exploitable vulnerabilities in almost every component they examined. The researchers were able to undetectably alter vote tallies, erase audit logs, and load malware onto the systems. Some of their attacks could be implemented by a single individual with no greater access than a normal poll worker; others could be done remotely.

Last year, the Defcon hackers’ conference sponsored a Voting Village. Organizers collected 25 pieces of voting equipment, including voting machines and electronic poll books. By the end of the weekend, conference attendees had found ways to compromise every piece of test equipment: to load malicious software, compromise vote tallies and audit logs, or cause equipment to fail.

It’s important to understand that these were not well-funded nation-state attackers. These were not even academics who had been studying the problem for weeks. These were bored hackers, with no experience with voting machines, playing around between parties one weekend.

It shouldn’t be any surprise that voting equipment, including voting machines, voter registration databases, and vote tabulation systems, are that hackable. They’re computers — often ancient computers running operating systems no longer supported by the manufacturers — and they don’t have any magical security technology that the rest of the industry isn’t privy to. If anything, they’re less secure than the computers we generally use, because their manufacturers hide any flaws behind the proprietary nature of their equipment.

We’re not just worried about altering the vote. Sometimes causing widespread failures, or even just sowing mistrust in the system, is enough. And an election whose results are not trusted or believed is a failed election.

Voting systems have another requirement that makes security even harder to achieve: the requirement for a secret ballot. Because we have to securely separate the election-roll system that determines who can vote from the system that collects and tabulates the votes, we can’t use the security systems available to banking and other high-value applications.

We can securely bank online, but can’t securely vote online. If we could do away with anonymity — if everyone could check that their vote was counted correctly — then it would be easy to secure the vote. But that would lead to other problems. Before the US had the secret ballot, voter coercion and vote-buying were widespread.

We can’t, so we need to accept that our voting systems are insecure. We need an election system that is resilient to the threats. And for many parts of the system, that means paper.

Let’s start with the voter rolls. We know they’ve already been targeted. In 2016, someone changed the party affiliation of hundreds of voters before the Republican primary. That’s just one possibility. A well-executed attack that deletes, for example, one in five voters at random — or changes their addresses — would cause chaos on election day.

Yes, we need to shore up the security of these systems. We need better computer, network, and database security for the various state voter organizations. We also need to better secure the voter registration websites, with better design and better internet security. We need better security for the companies that build and sell all this equipment.

Multiple, unchangeable backups are essential. A record of every addition, deletion, and change needs to be stored on a separate system, on write-once media like a DVD. Copies of that DVD, or — even better — a paper printout of the voter rolls, should be available at every polling place on election day. We need to be ready for anything.

Next, the voting machines themselves. Security researchers agree that the gold standard is a voter-verified paper ballot. The easiest (and cheapest) way to achieve this is through optical-scan voting. Voters mark paper ballots by hand; they are fed into a machine and counted automatically. That paper ballot is saved, and serves as a final true record in a recount in case of problems. Touch-screen machines that print a paper ballot to drop in a ballot box can also work for voters with disabilities, as long as the ballot can be easily read and verified by the voter.

Finally, the tabulation and reporting systems. Here again we need more security in the process, but we must always use those paper ballots as checks on the computers. A manual, post-election, risk-limiting audit varies the number of ballots examined according to the margin of victory. Conducting this audit after every election, before the results are certified, gives us confidence that the election outcome is correct, even if the voting machines and tabulation computers have been tampered with. Additionally, we need better coordination and communications when incidents occur.

It’s vital to agree on these procedures and policies before an election. Before the fact, when anyone can win and no one knows whose votes might be changed, it’s easy to agree on strong security. But after the vote, someone is the presumptive winner — and then everything changes. Half of the country wants the result to stand, and half wants it reversed. At that point, it’s too late to agree on anything.

The politicians running in the election shouldn’t have to argue their challenges in court. Getting elections right is in the interest of all citizens. Many countries have independent election commissions that are charged with conducting elections and ensuring their security. We don’t do that in the US.

Instead, we have representatives from each of our two parties in the room, keeping an eye on each other. That provided acceptable security against 20th-century threats, but is totally inadequate to secure our elections in the 21st century. And the belief that the diversity of voting systems in the US provides a measure of security is a dangerous myth, because few districts can be decisive and there are so few voting-machine vendors.

We can do better. In 2017, the Department of Homeland Security declared elections to be critical infrastructure, allowing the department to focus on securing them. On 23 March, Congress allocated $380m to states to upgrade election security.

These are good starts, but don’t go nearly far enough. The constitution delegates elections to the states but allows Congress to “make or alter such Regulations”. In 1845, Congress set a nationwide election day. Today, we need it to set uniform and strict election standards.

This essay originally appeared in the Guardian.

Implementing safe AWS Lambda deployments with AWS CodeDeploy

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/implementing-safe-aws-lambda-deployments-with-aws-codedeploy/

This post courtesy of George Mao, AWS Senior Serverless Specialist – Solutions Architect

AWS Lambda and AWS CodeDeploy recently made it possible to automatically shift incoming traffic between two function versions based on a preconfigured rollout strategy. This new feature allows you to gradually shift traffic to the new function. If there are any issues with the new code, you can quickly rollback and control the impact to your application.

Previously, you had to manually move 100% of traffic from the old version to the new version. Now, you can have CodeDeploy automatically execute pre- or post-deployment tests and automate a gradual rollout strategy. Traffic shifting is built right into the AWS Serverless Application Model (SAM), making it easy to define and deploy your traffic shifting capabilities. SAM is an extension of AWS CloudFormation that provides a simplified way of defining serverless applications.

In this post, I show you how to use SAM, CloudFormation, and CodeDeploy to accomplish an automated rollout strategy for safe Lambda deployments.

Scenario

For this walkthrough, you write a Lambda application that returns a count of the S3 buckets that you own. You deploy it and use it in production. Later on, you receive a new requirement: the application must count only buckets whose names begin with the letter “a”.

Before you make the change, you need to be sure that your new Lambda application works as expected. If it does have issues, you want to minimize the number of impacted users and roll back easily. To accomplish this, you create a deployment process that publishes the new Lambda function, but does not send any traffic to it. You use CodeDeploy to execute a PreTraffic test to ensure that your new function works as expected. After the test succeeds, CodeDeploy automatically shifts traffic gradually to the new version of the Lambda function.

Your Lambda function is exposed as a REST service via an Amazon API Gateway deployment. This makes it easy to test and integrate.

Prerequisites

To execute the SAM and CloudFormation deployment, you must have the following IAM permissions:

  • cloudformation:*
  • lambda:*
  • codedeploy:*
  • iam:create*

You may use the AWS SAM Local CLI or the AWS CLI to package and deploy your Lambda application. If you choose to use SAM Local, be sure to install it onto your system. For more information, see AWS SAM Local Installation.

All of the code used in this post can be found in this GitHub repository: https://github.com/aws-samples/aws-safe-lambda-deployments.

Walkthrough

For this post, use SAM to define your resources because it comes with built-in CodeDeploy support for safe Lambda deployments.  The deployment is handled and automated by CloudFormation.

SAM allows you to define your Serverless applications in a simple and concise fashion, because it automatically creates all necessary resources behind the scenes. For example, if you do not define an execution role for a Lambda function, SAM automatically creates one. SAM also creates the CodeDeploy application necessary to drive the traffic shifting, as well as the IAM service role that CodeDeploy uses to execute all actions.

Create a SAM template

To get started, write your SAM template and call it template.yaml.

AWSTemplateFormatVersion : '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: An example SAM template for Lambda Safe Deployments.

Resources:

  returnS3Buckets:
    Type: AWS::Serverless::Function
    Properties:
      Handler: returnS3Buckets.handler
      Runtime: nodejs6.10
      AutoPublishAlias: live
      Policies:
        - Version: "2012-10-17"
          Statement: 
          - Effect: "Allow"
            Action: 
              - "s3:ListAllMyBuckets"
            Resource: '*'
      DeploymentPreference:
          Type: Linear10PercentEvery1Minute
          Hooks:
            PreTraffic: !Ref preTrafficHook
      Events:
        Api:
          Type: Api
          Properties:
            Path: /test
            Method: get

  preTrafficHook:
    Type: AWS::Serverless::Function
    Properties:
      Handler: preTrafficHook.handler
      Policies:
        - Version: "2012-10-17"
          Statement: 
          - Effect: "Allow"
            Action: 
              - "codedeploy:PutLifecycleEventHookExecutionStatus"
            Resource:
              !Sub 'arn:aws:codedeploy:${AWS::Region}:${AWS::AccountId}:deploymentgroup:${ServerlessDeploymentApplication}/*'
        - Version: "2012-10-17"
          Statement: 
          - Effect: "Allow"
            Action: 
              - "lambda:InvokeFunction"
            Resource: !Ref returnS3Buckets.Version
      Runtime: nodejs6.10
      FunctionName: 'CodeDeployHook_preTrafficHook'
      DeploymentPreference:
        Enabled: false
      Timeout: 5
      Environment:
        Variables:
          NewVersion: !Ref returnS3Buckets.Version

This template creates two functions:

  • returnS3Buckets
  • preTrafficHook

The returnS3Buckets function is where your application logic lives. It’s a simple piece of code that uses the AWS SDK for JavaScript in Node.js to call the Amazon S3 listBuckets API action and return the number of buckets.

'use strict';

var AWS = require('aws-sdk');
var s3 = new AWS.S3();

exports.handler = (event, context, callback) => {
	console.log("I am here! " + context.functionName  +  ":"  +  context.functionVersion);

	s3.listBuckets(function (err, data){
		if(err){
			console.log(err, err.stack);
			callback(null, {
				statusCode: 500,
				body: "Failed!"
			});
		}
		else{
			var allBuckets = data.Buckets;

			console.log("Total buckets: " + allBuckets.length);
			callback(null, {
				statusCode: 200,
				body: allBuckets.length
			});
		}
	});	
}

Review the key parts of the SAM template that defines returnS3Buckets:

  • The AutoPublishAlias attribute instructs SAM to automatically publish a new version of the Lambda function for each new deployment and link it to the live alias.
  • The Policies attribute specifies additional policy statements that SAM adds onto the automatically generated IAM role for this function. The first statement provides the function with permission to call listBuckets.
  • The DeploymentPreference attribute configures the type of rollout pattern to use. In this case, you are shifting traffic in a linear fashion, moving 10% of traffic every minute to the new version. For more information about supported patterns, see Serverless Application Model: Traffic Shifting Configurations.
  • The Hooks attribute specifies that you want to execute the preTrafficHook Lambda function before CodeDeploy automatically begins shifting traffic. This hook should perform validation testing on the newly deployed Lambda version: it invokes the new version and checks the results. If the tests pass, the hook instructs CodeDeploy to proceed with the rollout via a call to the codedeploy.putLifecycleEventHookExecutionStatus API.
  • The Events attribute defines an API-based event source that can trigger this function. It accepts requests on the /test path using an HTTP GET method.

The preTrafficHook function that performs this validation is implemented as follows:
'use strict';

const AWS = require('aws-sdk');
const codedeploy = new AWS.CodeDeploy({apiVersion: '2014-10-06'});
var lambda = new AWS.Lambda();

exports.handler = (event, context, callback) => {

	console.log("Entering PreTraffic Hook!");
	
	// Read the DeploymentId & LifecycleEventHookExecutionId from the event payload
	var deploymentId = event.DeploymentId;
	var lifecycleEventHookExecutionId = event.LifecycleEventHookExecutionId;

	var functionToTest = process.env.NewVersion;
	console.log("Testing new function version: " + functionToTest);

	// Perform validation of the newly deployed Lambda version
	var lambdaParams = {
		FunctionName: functionToTest,
		InvocationType: "RequestResponse"
	};

	var lambdaResult = "Failed";
	lambda.invoke(lambdaParams, function(err, data) {
		if (err){	// an error occurred
			console.log(err, err.stack);
			lambdaResult = "Failed";
		}
		else{	// successful response
			var result = JSON.parse(data.Payload);
			console.log("Result: " +  JSON.stringify(result));

			// Check the response for valid results
			// The response will be a JSON payload with statusCode and body properties. ie:
			// {
			//		"statusCode": 200,
			//		"body": 51
			// }
			if(result.body == 9){	
				lambdaResult = "Succeeded";
				console.log ("Validation testing succeeded!");
			}
			else{
				lambdaResult = "Failed";
				console.log ("Validation testing failed!");
			}

			// Complete the PreTraffic Hook by sending CodeDeploy the validation status
			var params = {
				deploymentId: deploymentId,
				lifecycleEventHookExecutionId: lifecycleEventHookExecutionId,
				status: lambdaResult // status can be 'Succeeded' or 'Failed'
			};
			
			// Pass AWS CodeDeploy the prepared validation test results.
			codedeploy.putLifecycleEventHookExecutionStatus(params, function(err, data) {
				if (err) {
					// Validation failed.
					console.log('CodeDeploy Status update failed');
					console.log(err, err.stack);
					callback("CodeDeploy Status update failed");
				} else {
					// Validation succeeded.
					console.log('Codedeploy status updated successfully');
					callback(null, 'Codedeploy status updated successfully');
				}
			});
		}  
	});
}

The hook is hardcoded to check that the number of S3 buckets returned is 9.

Review the key parts of the SAM template that defines preTrafficHook:

  • The Policies attribute specifies additional policy statements that SAM adds onto the automatically generated IAM role for this function. The first statement provides permissions to call the CodeDeploy PutLifecycleEventHookExecutionStatus API action. The second statement provides permissions to invoke the specific version of the returnS3Buckets function to test.
  • This function has traffic shifting disabled by setting Enabled: false under its DeploymentPreference.
  • The FunctionName attribute explicitly tells CloudFormation what to name the function. Otherwise, CloudFormation creates the function with the default naming convention: [stackName]-[FunctionName]-[uniqueID].  Name the function with the “CodeDeployHook_” prefix because the CodeDeployServiceRole role only allows InvokeFunction on functions named with that prefix.
  • Set the Timeout attribute to allow enough time to complete your validation tests.
  • Use an environment variable to inject the ARN of the newest deployed version of the returnS3Buckets function. The ARN allows the function to know the specific version to invoke and perform validation testing on.

Deploy the function

Your SAM template is all set and the code is written—you’re ready to deploy the function for the first time. Here’s how to do it via the SAM CLI. To use the AWS CLI instead, replace “sam” with “aws cloudformation”.

First, package the function. This command returns a CloudFormation importable file, packaged.yaml.

sam package --template-file template.yaml --s3-bucket mybucket --output-template-file packaged.yaml

Now deploy everything:

sam deploy --template-file packaged.yaml --stack-name mySafeDeployStack --capabilities CAPABILITY_IAM

At this point, both Lambda functions have been deployed within the CloudFormation stack mySafeDeployStack. The returnS3Buckets function has been deployed as version 1:

SAM automatically created a few things, including the CodeDeploy application, with the deployment pattern that you specified (Linear10PercentEvery1Minute). There is currently one deployment group, with no action, because no deployments have occurred. SAM also created the IAM service role that this CodeDeploy application uses:

There is a single managed policy attached to this role, which allows CodeDeploy to invoke any Lambda function that begins with “CodeDeployHook_”.

An API has been set up called safeDeployStack. It targets your Lambda function with the /test resource using the GET method. When you test the endpoint, API Gateway executes the returnS3Buckets function and it returns the number of S3 buckets that you own. In this case, it’s 51.

Publish a new Lambda function version

Now implement the requirements change, which is to make returnS3Buckets count only buckets that begin with the letter “a”. The code now looks like the following (see returnS3BucketsNew.js in GitHub):

'use strict';

var AWS = require('aws-sdk');
var s3 = new AWS.S3();

exports.handler = (event, context, callback) => {
	console.log("I am here! " + context.functionName  +  ":"  +  context.functionVersion);

	s3.listBuckets(function (err, data){
		if(err){
			console.log(err, err.stack);
			callback(null, {
				statusCode: 500,
				body: "Failed!"
			});
		}
		else{
			var allBuckets = data.Buckets;

			console.log("Total buckets: " + allBuckets.length);
			//callback(null, allBuckets.length);

			//  New Code begins here
			var counter=0;
			for(var i  in allBuckets){
				if(allBuckets[i].Name[0] === "a")
					counter++;
			}
			console.log("Total buckets starting with a: " + counter);

			callback(null, {
				statusCode: 200,
				body: counter
			});
			
		}
	});	
}

Repackage and redeploy with the same two commands as earlier:

sam package --template-file template.yaml --s3-bucket mybucket --output-template-file packaged.yaml
	
sam deploy --template-file packaged.yaml --stack-name mySafeDeployStack --capabilities CAPABILITY_IAM

CloudFormation understands that this is a stack update instead of an entirely new stack. You can see that reflected in the CloudFormation console:

During the update, CloudFormation deploys the new Lambda function as version 2 and adds it to the “live” alias. There is no traffic routing there yet. CodeDeploy now takes over to begin the safe deployment process.

The first thing CodeDeploy does is invoke the preTrafficHook function. Verify that this happened by reviewing the Lambda logs and metrics:

The function should progress successfully, invoke Version 2 of returnS3Buckets, and finally invoke the CodeDeploy API with a success code. After this occurs, CodeDeploy begins the predefined rollout strategy. Open the CodeDeploy console to review the deployment progress (Linear10PercentEvery1Minute):

Verify the traffic shift

During the deployment, verify that the traffic shift has started to occur by running the test periodically. As the deployment shifts towards the new version, a larger percentage of the responses return 9 instead of 51. These numbers correspond to the total number of S3 buckets (51) and the number of buckets whose names start with “a” (9).

A minute later, you see 10% more traffic shifting to the new version. The whole process takes 10 minutes to complete. After completion, open the Lambda console and verify that the “live” alias now points to version 2:

After 10 minutes, the deployment is complete and CodeDeploy signals success to CloudFormation and completes the stack update.

Check the results

If you invoke the function alias manually, you see the results of the new implementation.

aws lambda invoke --function-name [lambda arn to live alias] out.txt

You can also execute the prod stage of your API and verify the results by issuing an HTTP GET to the invoke URL:
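
For example, using curl (the API ID and region below are placeholders; by default, SAM deploys the implicit API to a stage named Prod):

curl https://[api-id].execute-api.[region].amazonaws.com/Prod/test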

Summary

This post has shown you how you can safely automate your Lambda deployments using the Lambda traffic shifting feature. You used the Serverless Application Model (SAM) to define your Lambda functions and configured CodeDeploy to manage your deployment patterns. Finally, you used CloudFormation to automate the deployment and updates to your function and PreTraffic hook.

Now that you know all about this new feature, you’re ready to begin automating Lambda deployments with confidence that things will work as designed. I look forward to hearing about what you’ve built with the AWS Serverless Platform.

Pirates Taunt Amazon Over New “Turd Sandwich” Prime Video Quality

Post Syndicated from Andy original https://torrentfreak.com/pirates-taunt-amazon-over-new-turd-sandwich-prime-video-quality-180419/

Even though they generally aren’t paying for the content they consume, don’t fall into the trap of believing that all pirates are eternally grateful, even for poor quality media.

Without a doubt, some of the most quality-sensitive individuals are to be found in pirate communities, and they aren’t afraid to make their voices heard when release groups fail to come up with the best possible goods.

This week there’s been a sustained chorus of disapproval over the quality of pirate video releases sourced from Amazon Prime. The anger is usually directed at piracy groups who fail to capture content in the correct manner, but according to a number of observers, the problem is actually at Amazon’s end.

Discussions on Reddit, for example, report that episodes in a single TV series have been declining in filesize and bitrate, from 1.56 GB in 720p at a 3073 kb/s video bitrate for episode 1, down to 907 MB in 720p at just 1514 kb/s video bitrate for episode 10.

Numerous theories as to why this may be the case are being floated around, including that Amazon is trying to save on bandwidth expenses. While this is a possibility, the company hasn’t made any announcements to that end.

Indeed, one legitimate customer reported that he’d raised the quality issue with Amazon and they’d said that the problem was “probably on his end”.

“I have Amazon Prime Video and I noticed the quality was always great for their exclusive shows, so I decided to try buying the shows on Amazon instead of iTunes this year. I paid for season pass subscriptions for Legion, Billions and Homeland this year,” he wrote.

“Just this past weekend, I have noticed a significant drop in details compared to weeks before! So naturally I assumed it was an issue on my end. I started trying different devices, calling support, etc, but nothing really helped.

“Billions continued to look like a blurry mess, almost like I was watching a standard definition DVD instead of the crystal clear HD I paid for and have experienced in the past! And when I check the previous episodes, sure enough, they look fantastic again. What the heck??”

With Amazon distancing itself from the issues, piracy groups have already begun to dig in the knife. Release group DEFLATE has been particularly critical.

“Amazon, in their infinite wisdom, have decided to start fucking with the quality of their encodes. They’re now reaching Netflix’s subpar 1080p.H264 levels, and their H265 encodes aren’t even close to what Netflix produces,” the group said in a file attached to S02E07 of The Good Fight released on Sunday.

“Netflix is able to produce drastic visual improvements with their H265 encodes compared to H264 across every original. In comparison, Amazon can’t decide whether H265 or H264 is going to produce better results, and as a result we suffer for it.”

Arrr! The quality be fallin’

So what’s happening exactly?

A TorrentFreak source (who tells us he’s been working in the BluRay/DCP authoring business for the last 10 years) was kind enough to give us two opinions, one aimed at the techies and another at us mere mortals.

“In technical terms, it appears [Amazon has] increased the CRF [Constant Rate Factor] value they use when encoding for both the HEVC [H265] and H264 streams. Previously, their H264 streams were using CRF 18 and a max bitrate of 15Mbit/s, which usually resulted in file sizes of roughly 3GB, or around 10Mbit/s. Similarly with their HEVC streams, they were using CRF 20 and resulting in streams which were around the same size,” he explained.

“In the past week, the H264 streams have decreased by up to 50% for some streams. While there are no longer any x264 headers embedded in the H264 streams, the HEVC streams still retain those headers and the CRF value used has been increased, so it does appear this change has been done on purpose.”

In layman’s terms, our source believes that Amazon had previously been using an encoding profile that was “right on the edge of relatively good quality” which kept bitrates relatively low but high enough to ensure no perceivable loss of quality.

“H264 streams encoded with CRF 18 could provide an acceptable compromise between quality and file size, where the loss of detail is often negligible when watched at regular viewing distances, at a desk, or in a lounge room on a larger TV,” he explained.

“Recently, it appears these values have been intentionally changed in order to lower the bitrate and file sizes for reasons unknown. As a result, the quality of some streams has been reduced by up to 50% of their previous values. This has introduced a visual loss of quality, comparable to that of viewing something in standard definition versus high definition.”

With the situation failing to improve during the week, by the time piracy group DEFLATE released S03E14 of Supergirl on Tuesday their original criticism had transformed into flat-out insults.

“These are only being done in H265 because Amazon have shit the bed, and it’s a choice between a turd sandwich and a giant douche,” they wrote, offering these images as illustrative of the problem and these indicating what should be achievable.

With DEFLATE advising customers to start complaining to Amazon, the memes have already begun, with unfavorable references to the now-defunct group YIFY (which was often chastised for its low-quality rips) and even a spin on one of the most well-known anti-piracy campaigns.

You wouldn’t download stream….

TorrentFreak contacted Amazon Prime for comment on both the recent changes and growing customer complaints but at the time of publication we were yet to receive a response.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Google Search Receives Fewer Takedown Notices Than Before

Post Syndicated from Ernesto original https://torrentfreak.com/google-search-receives-fewer-takedown-notices-than-before-180414/

In recent years Google has had to cope with a continuous increase in takedown requests from copyright holders, which target pirate sites in search results.

Just a few years ago the search engine removed ‘only’ a few thousand URLs per day. This has since grown to millions and has kept growing, until recently.

Around a year ago Google was receiving a billion takedown requests per year, and for a while the rate stabilized at roughly 20 million requests per week. By October last year, Google search had processed over three billion DMCA requests since it started counting.

After that, it appears that things calmed down a little. Where Google’s weekly takedown chart went up year after year, it’s now trending in a downward direction.

During the past half year, Google received ‘only’ 375 million takedown requests. That translates to roughly 15 million per week, or 750 million per year. This is a 25% decrease compared to the average in 2016.

Does this mean that copyright holders can no longer find enough pirated content via the search engine? We doubt it. But it’s clear that some of the big reporting agencies are sending in fewer complaints.

Degban, for example, which at one point accounted for more than 10% of the weekly number of DMCA requests, has disappeared completely. Other big players, such as the Mexican anti-piracy outfit APDIF and Remove Your Media, have clearly lowered their volumes.

APDIF’s weekly DMCA volume

Of all the big players, UK Music Group BPI has been most consistent. Their average hasn’t dropped much in recent years, but is certainly not rising either.

It’s too early to tell whether this trend will hold, but according to the numbers we see now, Google will for the first time have a significant decrease in the number of takedown requests this year.

Despite the decrease, Google is under quite a bit of pressure from copyright holders to improve its takedown efforts. Most would like Google to delist pirate site domains entirely.

While the search engine isn’t willing to go that far, it does give a lower ranking to sites for which it receives a large volume of takedown requests. In addition, the company recently started accepting ‘prophylactic’ DMCA requests, for content that is not indexed yet.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

COPPA Compliance

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/04/coppa_complianc.html

Interesting research: “‘Won’t Somebody Think of the Children?’ Examining COPPA Compliance at Scale“:

Abstract: We present a scalable dynamic analysis framework that allows for the automatic evaluation of the privacy behaviors of Android apps. We use our system to analyze mobile apps’ compliance with the Children’s Online Privacy Protection Act (COPPA), one of the few stringent privacy laws in the U.S. Based on our automated analysis of 5,855 of the most popular free children’s apps, we found that a majority are potentially in violation of COPPA, mainly due to their use of third-party SDKs. While many of these SDKs offer configuration options to respect COPPA by disabling tracking and behavioral advertising, our data suggest that a majority of apps either do not make use of these options or incorrectly propagate them across mediation SDKs. Worse, we observed that 19% of children’s apps collect identifiers or other personally identifiable information (PII) via SDKs whose terms of service outright prohibit their use in child-directed apps. Finally, we show that efforts by Google to limit tracking through the use of a resettable advertising ID have had little success: of the 3,454 apps that share the resettable ID with advertisers, 66% transmit other, non-resettable, persistent identifiers as well, negating any intended privacy-preserving properties of the advertising ID.

MPAA Quietly Shut Down Its ‘Legal’ Movie Search Engine

Post Syndicated from Ernesto original https://torrentfreak.com/mpaa-quietly-shut-down-its-legal-movie-search-engine-180411/

During the fall of 2014, Hollywood launched WhereToWatch, its very own search engine for movies and TV-shows.

The site enabled people to check if and where the latest entertainment was available, hoping to steer U.S. visitors away from pirate sites.

Aside from the usual critics, the launch received a ton of favorable press. This was soon followed up by another release highlighting some of the positive responses and praise from the press.

“The initiative marks a further attempt by the MPAA to combat rampant online piracy by reminding consumers of legal means to watch movies and TV shows,” the LA Times wrote, for example.

Over the past several years, the site hasn’t appeared in the news much, but it did help thousands of people find legal sources for the latest entertainment. However, those who try to access it today will notice that WhereToWatch has been abandoned, quietly.

The MPAA pulled the plug on the service a few months ago. And where the mainstream media covered its launch in detail, the shutdown received zero mentions. So why did the site fold?

According to MPAA Vice President of Corporate Communications, Chris Ortman, it was no longer needed as there are many similar search engines out there.

“Given the many search options commercially available today, which can be found on the MPAA website, WheretoWatch.com was discontinued at the conclusion of 2017,” Ortman informs TF.

“There are more than 140 lawful online platforms in the United States for accessing film and television content, and more than 460 around the world,” he adds.

The MPAA lists several of these alternative search engines on its new website. The old WhereToWatch domain now forwards to the MPAA’s online magazine ‘The Credits,’ which features behind-the-scenes stories and industry profiles.

While the MPAA is right that there are alternative search engines, many of these were already available when WhereToWatch launched. In fact, the site relied on the competing service GoWatchIt for its search results.

Perhaps the lack of interest from the U.S. public played a role as well. The site never really took off and according to traffic estimates from SimilarWeb and Alexa, most of the visitors came from Iran, where the site was unusable due to a geo-block.

After searching long and hard we were able to track down a former WhereToWatch user on Reddit. This person just started to get into the service and was disappointed to see it go.

“So, does anyone know of better places or simply other places where this information lives in an easily accessible place?” he or she asked.

One person responded by recommending Icefilms.info, a pirate site. This is a response the MPAA would cringe at, but luckily, most people mentioned justwatch.com as the best alternative.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

MPA Reveals Scale of Worldwide Pirate Site Blocking

Post Syndicated from Andy original https://torrentfreak.com/mpa-reveals-scale-of-worldwide-pirate-site-blocking-180410/

Few people following the controversial topic of Internet piracy will be unaware of the site-blocking phenomenon. It’s now one of the main weapons in the entertainment industries’ arsenal and it’s affecting dozens of countries.

While general figures can be culled from the hundreds of news reports covering the issue, the manner in which blocking is handled in several regions means that updates aren’t always provided. New sites are regularly added to blocklists without fanfare, meaning that the public is kept largely in the dark.

Now, however, a submission to the Canadian Radio-television and Telecommunications Commission (CRTC) by Motion Picture Association Canada provides a more detailed overview. It was presented in support of the proposed blocking regime in Canada, so while the key figures are no doubt accurate, some of the supporting rhetoric should be viewed in context.

“Over the last decade, at least 42 countries have either adopted and implemented, or are legally obligated to adopt and implement, measures to ensure that ISPs take steps to disable access to copyright infringing websites, including throughout the European Union, the United Kingdom, Australia, and South Korea,” the submission reads.

The 42 blocking-capable countries referenced by the Hollywood group include the members of the European Union plus the following: Argentina, Australia, Iceland, India, Israel, Liechtenstein, Malaysia, Mexico, Norway, Russia, Singapore, South Korea, and Thailand.

While all countries have their own unique sets of legislation, countries within the EU are covered by the requirements of Article 8.3 of the InfoSoc Directive, which provides that “Member States shall ensure that rightholders are in a position to apply for an injunction against intermediaries whose services are used by a third party to infringe a copyright or related right.”

That doesn’t mean that all countries are actively blocking, however. While Bulgaria, Croatia, Cyprus, Czech Republic, Estonia, Hungary, Latvia, Liechtenstein, Lithuania, Luxembourg, Malta, Poland, Romania, Slovakia, and Slovenia have the legal basis to block infringing sites, none have yet done so.

In a significant number of other EU countries, however, blocking activity is prolific.

“To date, in at least 17 European countries, over 1,800 infringing sites and over 5,300 domains utilized by such sites have been blocked, including in the following four countries where the positive impact of site-blocking over time has been demonstrated,” MPA Canada notes.

Major blocking nations in the EU

At this point, it’s worth pointing out that authority to block sites is currently being obtained in two key ways, either through the courts or via an administrative process.

In the examples above, the UK and Denmark are dealt with via the former, with Italy and Portugal handled via the latter. At least as far as the volume of sites is concerned, court processes – which can be expensive – tend to yield lower site blocking levels than those carried out through an administrative process. Indeed, the MPAA has praised Portugal’s super-streamlined efforts as something to aspire to.

Outside Europe, the same two processes are also in use. For example, Australia, Argentina, and Singapore utilize the judicial route while South Korea, Mexico, Malaysia and Indonesia have opted for administrative remedies.

“Across 10 of these countries, over 1,100 infringing sites and over 1,500 domains utilized by such sites have been blocked,” MPA Canada reveals.

To date, South Korea has blocked 460 sites and 547 domains, while Australia has blocked 91 sites and 355 domains. In the case of the latter, “research has confirmed the increasingly positive impact that site-blocking has, as a greater number of sites are blocked over time,” the Hollywood group notes.

Although by no means comprehensive, MPA Canada lists the following “Notorious Sites” as subject to blocking in multiple countries via both judicial and administrative means. Most will be familiar, with the truly notorious The Pirate Bay heading the pile. Several no longer exist in their original form but in many cases, clones are blocked as if they still represent the original target.


The methods used to block the sites vary from country to country, depending on what courts deem fit and on ISPs’ technical capabilities. Three main tools are in use: DNS blocking, IP address blocking, and URL blocking, the last of which can also involve Deep Packet Inspection.

The MPA submission (pdf) is strongly in favor of adding Canada to the list of site-blocking countries detailed above. The Hollywood group believes that the measures are both effective and proportionate, citing reduced usage of blocked sites, reduced traffic to pirate sites in general, and increased visits to legitimate platforms.

“There is every reason to believe that the website blocking measures [presented to the CRTC] will lead to the same beneficial results in Canada,” MPA Canada states.

While plenty of content creators and distributors are in favor of the proposals, all signs suggest they will have a battle on their hands, with even some ISPs coming out in opposition.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Publisher Gets Carte Blanche to Seize New Sci-Hub Domains

Post Syndicated from Ernesto original https://torrentfreak.com/publisher-gets-carte-blanche-to-seize-new-sci-hub-domains-180410/

While Sci-Hub is loved by thousands of researchers and academics around the world, copyright holders are doing everything in their power to wipe it off the web.

Following a $15 million defeat against Elsevier last June, the American Chemical Society (ACS) won a default judgment of $4.8 million in copyright damages a few months later.

The publisher was further granted a broad injunction, requiring various third-party services to stop providing access to the site. This includes domain registries, hosting companies and search engines.

Soon after the order was signed, several of Sci-Hub’s domain names became unreachable as domain registries and Cloudflare complied with the court order. Still, Sci-Hub remained available all this time, with help from several newly registered domain names.

Frustrated by Sci-Hub’s resilience, ACS recently went back to court asking for an amended injunction. The publisher requested the authority to seize any and all Sci-Hub domain names, including those that will be registered in the future.

“Plaintiff has been forced to engage in a game of ‘whac-a-mole’ whereby new ‘sci-hub’ domain names emerge,” ACS informed the court.

“Further complicating matters, some registries, registrars, and Internet service providers have refused to disable newer Sci-Hub domain names that were not specifically identified in the Complaint or the injunction.”

Soon after the request was submitted, US District Court Judge Leonie Brinkema agreed to the amended language.

The amended injunction now requires search engines, hosting companies, domain registrars, and other service or software providers to cease facilitating access to Sci-Hub. This includes, but is not limited to, the following domain names.

‘sci-hub.ac, scihub.biz, sci-hub.bz, sci-hub.cc, sci-hub.cf, sci-hub.cn, sci-hub.ga, sci-hub.gq, scihub.hk, sci-hub.is, sci-hub.la, sci-hub.name, sci-hub.nu, sci-hub.nz, sci-hub.onion, scihub22266oqcxt.onion, sci-hub.tw, and sci-hub.ws.’

From the injunction

The new injunction makes ACS’ enforcement efforts much more effective. It effectively means that third-party services can no longer refuse to comply because a Sci-Hub domain is not listed in the complaint or injunction.

This already appears to have had some effect, as several domain names including sci-hub.la and sci-hub.tv became inaccessible soon after the paperwork was signed. Still, it is unlikely that it will help to shut down the site completely.

Several service providers are not receptive to US Court orders. One example is Iceland’s domain registry ISNIC and indeed, at the time of writing, Sci-Hub.is is still widely available.

Seizing .onion domain names, which are used on the Tor network, may also prove to be a challenge. After all, there is no central registration organization involved.

For now, Sci-Hub founder and operator Alexandra Elbakyan appears determined to keep the site online, whatever it takes. While it may be a hassle for users to find the latest working domain names, the new court order is not the end of the “whac-a-mole” just yet.

A copy of the amended injunction is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Roku Bans Popular Social IPTV Linking Service cCloud TV

Post Syndicated from Andy original https://torrentfreak.com/roku-bans-popular-social-iptv-linking-service-ccloud-tv-180409/

Despite being one of the more popular set-top box platforms, until last year Roku managed to stay completely out of the piracy conversation.

However, due to abuse of its system by third-parties, last June the Superior Court of Justice of the City of Mexico banned the importation and distribution of Roku devices in the country.

The decision followed a complaint filed by cable TV provider Cablevision, which said that some Roku channels and their users were infringing its distribution rights.

Since then, Roku has been fighting to have the ban lifted, previously informing TF that it expressly prohibits copyright infringement of any kind. That led to several more legal processes, yet last month, after considerable effort, the ban was upheld, much to Roku’s disappointment.

“It is necessary for Roku to make adjustments to its software, as other online content distribution platforms do, so that violations of copyrighted content do not take place,” Cablevision said.

Then, at the end of March, Roku suddenly banned the USTVnow channel from its platform, citing a third-party copyright complaint.

In a series of emails with TF, the company declined to offer further details but there is plenty of online speculation that the decision was a move towards the “adjustments” demanded by Cablevision. Today yet more fuel is being poured onto that same fire with Roku’s decision to ban the popular cCloud TV service from its platform.

For those unfamiliar with cCloud TV, it’s a video streaming platform that relies on users to contribute media links found on the web, whether they’re movie and TV shows or live sporting events.

“Project cCloud TV is known as the ‘Popcorn Time for Live TV’. The project started with 50 channels and has grown over time and now has over 4000 channels from all around the world,” its founder ‘Bane’ told TF back in 2016.

“The project was inspired by Popcorn Time and its simplicity for streaming torrents. The service works based on media links that can be found anywhere on the web and the cCloud project makes it easier for users to stream.”

Aside from the vast array of content cCloud offers, its versatility is almost unrivaled. In addition to working via most modern web browsers, it’s also accessible using smartphones, tablets, Plex media server, Kodi, VLC, and (until recently at least) Roku.

But cCloud and USTVnow aren’t the only services suffering bans at Roku.

As highlighted by CordCuttersNews, other channels are also suffering similar fates, such as XTV, which was previously replaced with an FBI warning.

cCloud has had problems on Kodi too. Back in September 2017, TVAddons announced that it had been forced to remove the cCloud addon from its site.

“cCloud TV has been removed from our web site due to a complaint made by Bell, Rogers, Videotron and TVA on June 12th, 2017 as part of their lawsuit against our web site,” the site announced.

“Prior to hearing of the lawsuit, we had never received a single complaint relating to the cCloud TV addon for Kodi. cCloud TV for Kodi was developed by podgod, and was basically an interface for the community-based web service that goes by the same name.”

Last week, TVAddons went on to publish a “blacklist” that lists addons that have the potential to deliver content not authorized by rightsholders. Among many others, the list contains cCloud, meaning that potential users will now have to obtain it directly from the Kodi Bae Repository on GitHub instead.

At the time of publication, Roku had not responded to TorrentFreak’s request for comment.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

China’s Website and VPN Blocking Hurts Business, US Says

Post Syndicated from Ernesto original https://torrentfreak.com/chinas-website-and-vpn-blocking-hurts-business-us-says-180407/

The Chinese government is known to keep a tight grip on the websites its citizens are allowed to see on the Internet.

The so-called ‘Great Firewall’ blocks pirate sites, but also a wide variety of other websites which the government believes could have a negative influence on society.

While the exact scope of the blocking effort is unknown, it’s certain that thousands of websites are affected.

The US Government, however, is not happy with this type of censorship. In its latest Trade Barriers report, the Office of the United States Trade Representative (USTR) notes that it has a detrimental impact on businesses around the world.

“China continues to engage in extensive blocking of legitimate websites, imposing significant costs on both suppliers and users of web-based services and products,” the report reads.

The Chinese blocking efforts are affecting billions of dollars in business according to the US. The services that are affected include app stores, news sites, as well as communication services.

While many of these are targeted intentionally, some are hit by over-blocking. This happens when a blocked site shares an IP-address with other sites, which are then censored as collateral damage.

“While becoming more sophisticated over time, the technical means of blocking, dubbed the Great Firewall, still often appears to affect sites that may not be the intended target, but that may share the same Internet Protocol address,” USTR writes.

According to industry figures, twelve of the top thirty most popular sites on the Internet are currently censored in China. And while it used to be relatively easy to bypass these measures with a VPN, that is changing too.

Starting this month, all unauthorized VPN services are banned. Companies can only operate a VPN if they lease state-approved services via the Government. This is hurting even more businesses, according to the US. Not just in their pockets, but also in terms of privacy.

“In the past, consumers and business have been able to avoid government-run filtering through the use of VPN services, but a crackdown in 2017 has all but eliminated that option, with popular VPN applications now banned,” USTR writes.

“This development has had a particularly dire effect on foreign businesses, which routinely use VPN services to connect to locations and services outside of China, and which depend on VPN technology to ensure confidentiality of communications.”

Ironically, US companies are assisting the Chinese Government to keep their Great Firewall up. For example, last year VPN applications started to disappear from Apple’s iOS store following pressure from Chinese authorities.

It’s clear that the United States is not happy with China’s censorship regime. However, it’s unlikely that we’ll see a reversal anytime soon. As long as China is willing to jail its citizens for operating VPN services, there’s still a long way to go.

A copy of USTR’s 2018 National Trade Estimate Report on Foreign Trade Barriers is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

User Authentication Best Practices Checklist

Post Syndicated from Bozho original https://techblog.bozho.net/user-authentication-best-practices-checklist/

User authentication is functionality that every web application shares. We should have perfected it a long time ago, having implemented it so many times. And yet there are so many mistakes made all the time.

Part of the reason for that is that the list of things that can go wrong is long. You can store passwords incorrectly, you can have a vulnerable password reset flow, you can expose your session to a CSRF attack, your session can be hijacked, etc. So I’ll try to compile a list of best practices regarding user authentication. The OWASP Top 10 is always something you should read, every year. But that might not be enough.

So, let’s start. I’ll try to be concise, but I’ll cover as many of the related pitfalls as I can – e.g. what could go wrong with the user session after login:

  • Store passwords with bcrypt/scrypt/PBKDF2. No MD5 or plain SHA, as they are not suitable for password storage. A long, per-user salt is mandatory (the aforementioned algorithms have it built in). If you don’t do this and someone gets hold of your database, they’ll be able to extract the passwords of all your users – and then try those passwords on other websites. (A minimal sketch of password hashing follows after this list.)
  • Use HTTPS. Period. (Otherwise user credentials can leak through unprotected networks.) Force HTTPS if the user opens the plain-text version.
  • Mark cookies as secure. Makes cookie theft harder.
  • Use CSRF protection (e.g. CSRF one-time tokens that are verified with each request). Frameworks have such functionality built-in.
  • Disallow framing (X-Frame-Options: DENY). Otherwise your website may be included in another website in a hidden iframe and “abused” through javascript.
  • Have a same-origin policy
  • Logout – let your users log out by deleting all cookies and invalidating the session. This makes usage of shared computers safer (yes, users should ideally use private browsing sessions, but not all of them are that savvy)
  • Session expiry – don’t have forever-lasting sessions. If the user closes your website, their session should expire after a while. “A while” may still be a big number depending on the service provided. For AJAX-heavy websites you can have regular AJAX polling that keeps the session alive while the page stays open.
  • Remember me – implementing “remember me” (on this machine) functionality is actually hard due to the risks of a stolen persistent cookie. Spring-security uses this approach, which I think should be followed if you wish to implement more persistent logins.
  • Forgotten password flow – the forgotten password flow should rely on sending a one-time (or expiring) link to the user and asking for a new password when it’s opened. Auth0 explains it in this post and Postmark gives some best practices. How the link is formed is a separate discussion and there are several approaches. Store a password-reset token in the user profile table and then send it as a parameter in the link. Or do not store anything in the database, but send a few params: userId:expiresTimestamp:hmac(userId+expiresTimestamp). That way you have expiring links (rather than one-time links). The HMAC relies on a secret key, so the links can’t be spoofed. It seems there’s no consensus, as the OWASP guide has a slightly different approach. (A sketch of the HMAC-signed expiring link follows after this list.)
  • One-time login links – this is an option used by Slack, which sends one-time login links instead of asking users for passwords. It relies on the fact that your email is well guarded and you have access to it all the time. If your service is not accessed too often, you can use that approach instead of (rather than in addition to) passwords.
  • Limit login attempts – brute-force through a web UI should not be possible; therefore you should block login attempts once there are too many of them. One approach is to block them based on IP. The other is to block them based on the account being attempted. (Spring example here). Which one is better – I don’t know. Both can actually be combined. Instead of fully blocking the attempts, you may add a captcha after, say, the 5th attempt. But don’t add the captcha for the first attempt – it is bad user experience.
  • Don’t leak information through error messages – you shouldn’t allow attackers to figure out whether an email is registered or not. If an email is not found, report just “Incorrect credentials” upon login. On password reset, it may be something like “If your email is registered, you should have received a password reset email”. This is often at odds with usability – people often don’t remember the email they used to register, and the ability to check a number of them before getting in might be important. So this rule is not absolute, though it’s desirable, especially for more critical systems.
  • Make sure you use JWT only if it’s really necessary and be careful of the pitfalls.
  • Consider using a 3rd party authentication – OpenID Connect, OAuth by Google/Facebook/Twitter (but be careful with OAuth flaws as well). There’s an associated risk with relying on a 3rd party identity provider, and you still have to manage cookies, logout, etc., but some of the authentication aspects are simplified.
  • For high-risk or sensitive applications use 2-factor authentication. There’s a caveat with Google Authenticator though – if you lose your phone, you lose your accounts (unless there’s a manual process to restore it). That’s why Authy seems like a good solution for storing 2FA keys.
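
To make two of the items above concrete – password storage and the expiring reset link – here is a minimal Python sketch. It is my illustration rather than part of the original checklist: it assumes the third-party bcrypt package, and SECRET_KEY, the token format, and the example.com URL are placeholders you would replace.

    # Minimal sketch (not production code): bcrypt password storage and an
    # expiring, HMAC-signed password-reset link as described in the list above.
    # Assumes the third-party "bcrypt" package; SECRET_KEY and the URL are
    # illustrative placeholders.
    import hashlib
    import hmac
    import time

    import bcrypt

    SECRET_KEY = b"change-me"  # hypothetical server-side secret used to sign reset links

    def hash_password(plain):
        # bcrypt generates a per-user salt and embeds it in the returned hash
        return bcrypt.hashpw(plain.encode("utf-8"), bcrypt.gensalt())

    def verify_password(plain, stored_hash):
        return bcrypt.checkpw(plain.encode("utf-8"), stored_hash)

    def make_reset_link(user_id, ttl_seconds=3600):
        # Expiring link of the form userId:expiresTimestamp:hmac(userId + expiresTimestamp)
        expires = int(time.time()) + ttl_seconds
        payload = "%s:%s" % (user_id, expires)
        signature = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
        return "https://example.com/reset?token=%s:%s" % (payload, signature)

    def verify_reset_token(token):
        user_id, expires, signature = token.split(":")
        payload = "%s:%s" % (user_id, expires)
        expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
        # compare_digest prevents the check from leaking timing information
        return hmac.compare_digest(signature, expected) and int(expires) > time.time()

Note that bcrypt embeds the per-user salt in the hash it returns, and hmac.compare_digest is used so that the signature check doesn’t leak timing information.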

I’m sure I’m missing something. And you see it’s complicated. Sadly, we’re still at the point where the most common functionality – authenticating users – is so tricky and cumbersome that you almost always get at least some of it wrong.

The post User Authentication Best Practices Checklist appeared first on Bozho's tech blog.

Rotate Amazon RDS database credentials automatically with AWS Secrets Manager

Post Syndicated from Apurv Awasthi original https://aws.amazon.com/blogs/security/rotate-amazon-rds-database-credentials-automatically-with-aws-secrets-manager/

Recently, we launched AWS Secrets Manager, a service that makes it easier to rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. You can configure Secrets Manager to rotate secrets automatically, which can help you meet your security and compliance needs. Secrets Manager offers built-in integrations for MySQL, PostgreSQL, and Amazon Aurora on Amazon RDS, and can rotate credentials for these databases natively. You can control access to your secrets by using fine-grained AWS Identity and Access Management (IAM) policies. To retrieve secrets, employees replace plaintext secrets with a call to Secrets Manager APIs, eliminating the need to hard-code secrets in source code or update configuration files and redeploy code when secrets are rotated.

In this post, I introduce the key features of Secrets Manager. I then show you how to store a database credential for a MySQL database hosted on Amazon RDS and how your applications can access this secret. Finally, I show you how to configure Secrets Manager to rotate this secret automatically.

Key features of Secrets Manager

Secrets Manager’s key features include the ability to:

  • Rotate secrets safely. You can configure Secrets Manager to rotate secrets automatically without disrupting your applications. Secrets Manager offers built-in integrations for rotating credentials for Amazon RDS databases for MySQL, PostgreSQL, and Amazon Aurora. You can extend Secrets Manager to meet your custom rotation requirements by creating an AWS Lambda function to rotate other types of secrets. For example, you can create an AWS Lambda function to rotate OAuth tokens used in a mobile application. Users and applications retrieve the secret from Secrets Manager, eliminating the need to email secrets to developers or update and redeploy applications after AWS Secrets Manager rotates a secret.
  • Secure and manage secrets centrally. You can store, view, and manage all your secrets. By default, Secrets Manager encrypts these secrets with encryption keys that you own and control. Using fine-grained IAM policies, you can control access to secrets. For example, you can require developers to provide a second factor of authentication when they attempt to retrieve a production database credential. You can also tag secrets to help you discover, organize, and control access to secrets used throughout your organization.
  • Monitor and audit easily. Secrets Manager integrates with AWS logging and monitoring services to enable you to meet your security and compliance requirements. For example, you can audit AWS CloudTrail logs to see when Secrets Manager rotated a secret or configure AWS CloudWatch Events to alert you when an administrator deletes a secret.
  • Pay as you go. Pay for the secrets you store in Secrets Manager and for the use of these secrets; there are no long-term contracts or licensing fees.

Get started with Secrets Manager

Now that you’re familiar with the key features, I’ll show you how to store the credential for a MySQL database hosted on Amazon RDS. To demonstrate how to retrieve and use the secret, I use a Python application running on Amazon EC2 that requires this database credential to access the MySQL instance. Finally, I show how to configure Secrets Manager to rotate this database credential automatically. Let’s get started.

Phase 1: Store a secret in Secrets Manager

  1. Open the Secrets Manager console and select Store a new secret.
     
    Secrets Manager console interface
     
  2. I select Credentials for RDS database because I’m storing credentials for a MySQL database hosted on Amazon RDS. For this example, I store the credentials for the database superuser. I start by securing the superuser because it’s the most powerful database credential and has full access over the database.
     
    Store a new secret interface with Credentials for RDS database selected
     

    Note: For this example, you need permissions to store secrets in Secrets Manager. To grant these permissions, you can use the AWSSecretsManagerReadWriteAccess managed policy. Read the AWS Secrets Manager Documentation for more information about the minimum IAM permissions required to store a secret.

  3. Next, I review the encryption setting and choose to use the default encryption settings. Secrets Manager will encrypt this secret using the Secrets Manager DefaultEncryptionKey in this account. Alternatively, I can choose to encrypt using a customer master key (CMK) that I have stored in AWS KMS.
     
    Select the encryption key interface
     
  4. Next, I view the list of Amazon RDS instances in my account and select the database this credential accesses. For this example, I select the DB instance mysql-rds-database, and then I select Next.
     
    Select the RDS database interface
     
  5. In this step, I specify values for Secret Name and Description. For this example, I use Applications/MyApp/MySQL-RDS-Database as the name and enter a description of this secret, and then select Next.
     
    Secret Name and description interface
     
  6. For the next step, I keep the default setting Disable automatic rotation because my secret is used by my application running on Amazon EC2. I’ll enable rotation after I’ve updated my application (see Phase 2 below) to use Secrets Manager APIs to retrieve secrets. I then select Next.

    Note: If you’re storing a secret that you’re not using in your application, select Enable automatic rotation. See our AWS Secrets Manager getting started guide on rotation for details.

     
    Configure automatic rotation interface
     

  7. Review the information on the next screen and, if everything looks correct, select Store. We’ve now successfully stored a secret in Secrets Manager.
  8. Next, I select See sample code.
     
    The See sample code button
     
  9. Take note of the code samples provided. I will use this code to update my application to retrieve the secret using Secrets Manager APIs.
     
    Python sample code
     

Phase 2: Update an application to retrieve secret from Secrets Manager

Now that I have stored the secret in Secrets Manager, I update my application to retrieve the database credential from Secrets Manager instead of hard-coding this information in a configuration file or source code. For this example, I show how to configure a Python application to retrieve this secret from Secrets Manager.

  1. I connect to my Amazon EC2 instance via Secure Shell (SSH).
  2. Previously, I configured my application to retrieve the database user name and password from the configuration file. Below is the source code for my application.
    import MySQLdb
    import config

    def no_secrets_manager_sample():

        # Get the user name, password, and database connection information from a config file.
        database = config.database
        user_name = config.user_name
        password = config.password

        # Use the user name, password, and database connection information to connect to the database
        db = MySQLdb.connect(database.endpoint, user_name, password, database.db_name, database.port)

  3. I use the sample code from Phase 1 above and update my application to retrieve the user name and password from Secrets Manager. This code sets up the client and retrieves and decrypts the secret Applications/MyApp/MySQL-RDS-Database. I’ve added comments to the code to make it easier to understand. (A sketch of how the retrieved secret can be used to open the database connection follows after this list.)
    # Use the code snippet provided by Secrets Manager.
    import boto3
    from botocore.exceptions import ClientError

    def get_secret():
        # Define the secret you want to retrieve
        secret_name = "Applications/MyApp/MySQL-RDS-Database"
        # Define the Secrets Manager endpoint your code should use.
        endpoint_url = "https://secretsmanager.us-east-1.amazonaws.com"
        region_name = "us-east-1"

        # Set up the client
        session = boto3.session.Session()
        client = session.client(
            service_name='secretsmanager',
            region_name=region_name,
            endpoint_url=endpoint_url
        )

        # Use the client to retrieve the secret
        try:
            get_secret_value_response = client.get_secret_value(
                SecretId=secret_name
            )
        # Error handling to make it easier for your code to tolerate faults
        except ClientError as e:
            if e.response['Error']['Code'] == 'ResourceNotFoundException':
                print("The requested secret " + secret_name + " was not found")
            elif e.response['Error']['Code'] == 'InvalidRequestException':
                print("The request was invalid due to:", e)
            elif e.response['Error']['Code'] == 'InvalidParameterException':
                print("The request had invalid params:", e)
        else:
            # Decrypted secret using the associated KMS CMK
            # Depending on whether the secret was a string or binary, one of these fields will be populated
            if 'SecretString' in get_secret_value_response:
                secret = get_secret_value_response['SecretString']
            else:
                binary_secret_data = get_secret_value_response['SecretBinary']

        # Your code goes here.

  4. Applications require permissions to access Secrets Manager. My application runs on Amazon EC2 and uses an IAM role to obtain access to AWS services. I will attach the following policy to my IAM role. This policy uses the GetSecretValue action to grant my application permission to read the secret from Secrets Manager. This policy also uses the resource element to limit my application to reading only the Applications/MyApp/MySQL-RDS-Database secret from Secrets Manager. You can visit the AWS Secrets Manager Documentation to understand the minimum IAM permissions required to retrieve a secret.
    {
        "Version": "2012-10-17",
        "Statement": {
            "Sid": "RetrieveDbCredentialFromSecretsManager",
            "Effect": "Allow",
            "Action": "secretsmanager:GetSecretValue",
            "Resource": "arn:aws:secretsmanager:::secret:Applications/MyApp/MySQL-RDS-Database"
        }
    }
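
For completeness, here is a small sketch of the “Your code goes here.” step in the sample above: it parses the JSON SecretString and uses it to open the database connection. This is my illustration rather than code from the walkthrough; the key names (username, password, host, port, dbname) follow the structure Secrets Manager typically uses for RDS credentials, so verify them against your own secret before relying on them.

    import json

    import boto3
    import MySQLdb

    def connect_with_secrets_manager(secret_name, region_name="us-east-1"):
        # Retrieve the secret, parse the JSON payload, and connect to the database
        client = boto3.session.Session().client(
            service_name='secretsmanager',
            region_name=region_name
        )
        secret = json.loads(client.get_secret_value(SecretId=secret_name)['SecretString'])

        # Use the retrieved values instead of a config file or hard-coded credentials
        return MySQLdb.connect(
            host=secret['host'],
            user=secret['username'],
            passwd=secret['password'],
            db=secret['dbname'],
            port=int(secret['port'])
        )

    db = connect_with_secrets_manager("Applications/MyApp/MySQL-RDS-Database")

Because the credentials are fetched at call time, a rotated password is picked up the next time this function runs, without redeploying the application.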

Phase 3: Enable Rotation for Your Secret

Rotating secrets periodically is a security best practice because it reduces the risk of misuse of secrets. Secrets Manager makes it easy to follow this security best practice and offers built-in integrations for rotating credentials for MySQL, PostgreSQL, and Amazon Aurora databases hosted on Amazon RDS. When you enable rotation, Secrets Manager creates a Lambda function and attaches an IAM role to this function to execute rotations on a schedule you define.

Note: Configuring rotation is a privileged action that requires several IAM permissions and you should only grant this access to trusted individuals. To grant these permissions, you can use the AWS IAMFullAccess managed policy.

Next, I show you how to configure Secrets Manager to rotate the secret Applications/MyApp/MySQL-RDS-Database automatically.

  1. From the Secrets Manager console, I go to the list of secrets and choose the secret I created in the first step Applications/MyApp/MySQL-RDS-Database.
     
    List of secrets in the Secrets Manager console
     
  2. I scroll to Rotation configuration, and then select Edit rotation.
     
    Rotation configuration interface
     
  3. To enable rotation, I select Enable automatic rotation. I then choose how frequently I want Secrets Manager to rotate this secret. For this example, I set the rotation interval to 60 days.
     
    Edit rotation configuration interface
     
  4. Next, Secrets Manager requires permissions to rotate this secret on your behalf. Because I’m storing the superuser database credential, Secrets Manager can use this credential to perform rotations. Therefore, I select Use the secret that I provided in step 1, and then select Next.
     
    Select which secret to use in the Edit rotation configuration interface
     
  5. The banner on the next screen confirms that I have successfully configured rotation and that the first rotation is in progress, which enables you to verify that rotation is functioning as expected. Secrets Manager will rotate this credential automatically every 60 days. (A small sketch for checking the rotation configuration programmatically follows after this list.)
     
    Confirmation banner message
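
If you prefer to confirm the rotation settings from code rather than the console, the following sketch (my addition, not part of the original walkthrough) calls the DescribeSecret API via boto3 and prints the rotation status; the secret name and region are the example values used above.

    import boto3

    client = boto3.session.Session().client(
        service_name='secretsmanager',
        region_name='us-east-1'
    )

    # DescribeSecret returns the rotation configuration and the rotation Lambda ARN
    details = client.describe_secret(SecretId="Applications/MyApp/MySQL-RDS-Database")

    print("Rotation enabled:", details.get('RotationEnabled', False))
    print("Rotation interval (days):", details.get('RotationRules', {}).get('AutomaticallyAfterDays'))
    print("Rotation Lambda ARN:", details.get('RotationLambdaARN'))
    print("Last rotated:", details.get('LastRotatedDate'))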
     

Summary

I introduced AWS Secrets Manager, explained the key benefits, and showed you how to help meet your compliance requirements by configuring AWS Secrets Manager to rotate database credentials automatically on your behalf. Secrets Manager helps you protect access to your applications, services, and IT resources without the upfront investment and ongoing maintenance costs of operating your own secrets management infrastructure. To get started, visit the Secrets Manager console. To learn more, visit the Secrets Manager documentation.

If you have comments about this post, submit them in the Comments section below. If you have questions about anything in this post, start a new thread on the Secrets Manager forum.

Want more AWS Security news? Follow us on Twitter.

Linux kernel lockdown and UEFI Secure Boot

Post Syndicated from Matthew Garrett original https://mjg59.dreamwidth.org/50577.html

David Howells recently published the latest version of his kernel lockdown patchset. This is intended to strengthen the boundary between root and the kernel by imposing additional restrictions that prevent root from modifying the kernel at runtime. It’s not the first feature of this sort – /dev/mem no longer allows you to overwrite arbitrary kernel memory, and you can configure the kernel so only signed modules can be loaded. But the present state of things is that these security features can be easily circumvented (by using kexec to modify the kernel security policy, for instance).

Why do you want lockdown? If you’ve got a setup where you know that your system is booting a trustworthy kernel (you’re running a system that does cryptographic verification of its boot chain, or you built and installed the kernel yourself, for instance) then you can trust the kernel to keep secrets safe from even root. But if root is able to modify the running kernel, that guarantee goes away. As a result, it makes sense to extend the security policy from the boot environment up to the running kernel – it’s really just an extension of configuring the kernel to require signed modules.

The patchset itself isn’t hugely conceptually controversial, although there’s disagreement over the precise form of certain restrictions. But one patch has, because it associates whether or not lockdown is enabled with whether or not UEFI Secure Boot is enabled. There’s some backstory that’s important here.

Most kernel features get turned on or off by either build-time configuration or by passing arguments to the kernel at boot time. There’s two ways that this patchset allows a bootloader to tell the kernel to enable lockdown mode – it can either pass the lockdown argument on the kernel command line, or it can set the secure_boot flag in the bootparams structure that’s passed to the kernel. If you’re running in an environment where you’re able to verify the kernel before booting it (either through cryptographic validation of the kernel, or knowing that there’s a secret tied to the TPM that will prevent the system booting if the kernel’s been tampered with), you can turn on lockdown.

There’s a catch on UEFI systems, though – you can build the kernel so that it looks like an EFI executable, and then run it directly from the firmware. The firmware doesn’t know about Linux, so can’t populate the bootparam structure, and there’s no mechanism to enforce command lines so we can’t rely on that either. The controversial patch simply adds a kernel configuration option that automatically enables lockdown when UEFI secure boot is enabled and otherwise leaves it up to the user to choose whether or not to turn it on.

Why do we want lockdown enabled when booting via UEFI secure boot? UEFI secure boot is designed to prevent the booting of any bootloaders that the owner of the system doesn’t consider trustworthy[1]. But a bootloader is only software – the only thing that distinguishes it from, say, Firefox is that Firefox is running in user mode and has no direct access to the hardware. The kernel does have direct access to the hardware, and so there’s no meaningful distinction between what grub can do and what the kernel can do. If you can run arbitrary code in the kernel then you can use the kernel to boot anything you want, which defeats the point of UEFI Secure Boot. Linux distributions don’t want their kernels to be used as part of an attack chain against other distributions or operating systems, so they enable lockdown (or equivalent functionality) for kernels booted this way.

So why not enable it everywhere? There’s a couple of reasons. The first is that some of the features may break things people need – for instance, some strange embedded apps communicate with PCI devices by mmap()ing resources directly from sysfs[2]. This is blocked by lockdown, which would break them. Distributions would then have to ship an additional kernel that had lockdown disabled (it’s not possible to just have a command line argument that disables it, because an attacker could simply pass that), and users would have to disable secure boot to boot that anyway. It’s easier to just tie the two together.

The second is that it presents a promise of security that isn’t really there if your system didn’t verify the kernel. If an attacker can replace your bootloader or kernel then the ability to modify your kernel at runtime is less interesting – they can just wait for the next reboot. Appearing to give users safety assurances that are much less strong than they seem to be isn’t good for keeping users safe.

So, what about people whose work is impacted by lockdown? Right now there’s two ways to get stuff blocked by lockdown unblocked: either disable secure boot[3] (which will disable it until you enable secure boot again) or press alt-sysrq-x (which will disable it until the next boot). Discussion has suggested that having an additional secure variable that disables lockdown without disabling secure boot validation might be helpful, and it’s not difficult to implement that so it’ll probably happen.

Overall: the patchset isn’t controversial, just the way it’s integrated with UEFI secure boot. The reason it’s integrated with UEFI secure boot is because that’s the policy most distributions want, since the alternative is to enable it everywhere even when it doesn’t provide real benefits but does provide additional support overhead. You can use it even if you’re not using UEFI secure boot. We should have just called it securelevel.

[1] Of course, if the owner of a system isn’t allowed to make that determination themselves, the same technology is restricting the freedom of the user. This is abhorrent, and sadly it’s the default situation in many devices outside the PC ecosystem – most of them not using UEFI. But almost any security solution that aims to prevent malicious software from running can also be used to prevent any software from running, and the problem here is the people unwilling to provide that policy to users rather than the security features.
[2] This is how X.org used to work until the advent of kernel modesetting
[3] If your vendor doesn’t provide a firmware option for this, run sudo mokutil --disable-validation


Backblaze Announces B2 Compute Partnerships

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/introducing-cloud-compute-services/

Backblaze Announces B2 Compute Partnerships

In 2015, we announced Backblaze B2 Cloud Storage — the most affordable, high performance storage cloud on the planet. The decision to release B2 as a service was in direct response to customers asking us if they could use the same cloud storage infrastructure we use for our Computer Backup service. With B2, we entered a market in direct competition with Amazon S3, Google Cloud Services, and Microsoft Azure Storage. Today, we have over 500 petabytes of data from customers in over 150 countries. At $0.005 / GB / month for storage (1/4th of S3) and $0.01 / GB for downloads (1/5th of S3), it turns out there’s a healthy market for cloud storage that’s easy and affordable.

As B2 has grown, customers wanted to use our cloud storage for a variety of use cases that required not only storage but compute. We’re happy to say that through partnerships with Packet & ServerCentral, today we’re announcing that compute is now available for B2 customers.

Cloud Compute and Storage

Backblaze has directly connected B2 with the compute servers of Packet and ServerCentral, thereby allowing near-instant (< 10 ms) data transfers between services. Also, transferring data between B2 and both our compute partners is free.

  • Storing data in B2 and want to run an AI analysis on it? — There are no fees to move the data to our compute partners.
  • Generating data in an application? — Run the application with one of our partners and store it in B2.
  • Transfers are free and you’ll save more than 50% off of the equivalent set of services from AWS.

These partnerships enable B2 customers to use compute, give our compute partners’ customers access to cloud storage, and introduce new customers to industry-leading storage and compute — all with high-performance, low-latency, and low-cost.

Is This a Big Deal? We Think So

Compute is one of the most requested services from our customers. Why? Because it unlocks a number of use cases for them. Let’s look at three popular examples:

Transcoding Media Files

B2 has earned wide adoption in the Media & Entertainment (“M&E”) industry. Our affordable storage and download pricing make B2 great for a wide variety of M&E use cases. But many M&E workflows require compute. Content syndicators, like American Public Television, need the ability to transcode files to meet localization and distribution management requirements.

There are a multitude of reasons that transcoding is needed — thumbnail and proxy generation enable M&E professionals to work efficiently. Without compute, the act of transcoding files remains cumbersome. Either the files need to be brought down from the cloud, transcoded, and then pushed back up, or they must be kept locally until the project is complete. Both scenarios are inefficient.

Starting today, any content producer can spin up compute with one of our partners, pay by the hour for their transcode processing, and return the new media files to B2 for storage and distribution. The company saves money, moves faster, and ensures their files are safe and secure.

Disaster Recovery

Backblaze’s heritage is based on providing outstanding backup services. When you have incredibly affordable cloud storage, it ends up being a great destination for your backup data.

Most enterprises have virtual machines (“VMs”) running in their infrastructure and those VMs need to be backed up. In a disaster scenario, a business wants to know they can get back up and running quickly.

With all data stored in B2, a business can get up and running quickly. Simply restore your backed up VM to one of our compute providers, and your business will be able to get back online.

Since B2 does not place restrictions, delays, or penalties on getting data out, customers can get back up and running quickly and affordably.

Saving $74 Million (aka “The Dropbox Effect”)

Ten years ago, Backblaze decided that S3 was too costly a platform to build its cloud storage business. Instead, we created the Backblaze Storage Pod and our own cloud storage infrastructure. That decision enabled us to offer our customers storage at a previously unavailable price point and maintain those prices for over a decade. It also laid the foundation for Netflix Open Connect and Facebook Open Compute.

Dropbox recently migrated the majority of their cloud services off of AWS and onto Dropbox’s own infrastructure. By leaving AWS, Dropbox was able to build out their own data centers and still save over $74 Million. They achieved those savings by avoiding the fees AWS charges for storing and downloading data, which, incidentally, are five times higher than Backblaze B2.

For Dropbox, being able to realize savings was possible because they have access to enough capital and expertise that they can build out their own infrastructure. For companies that have such resources and scale, that’s a great answer.

“Before this offering, the economics of the cloud would have made our business simply unviable.” — Gabriel Menegatti, SlicingDice

The question Backblaze and our compute partners pondered was “how can we democratize the Dropbox effect for our storage and compute customers? How can we help customers do more and pay less?” The answer we came up with was to connect Backblaze’s B2 storage with strategic compute partners and remove any transfer fees between them. You may not save $74 million as Dropbox did, but you can choose the optimal providers for your use case and realize significant savings in the process.

This Sounds Good — Tell Me More About Your Partners

We’re very fortunate to be launching our compute program with two fantastic partners in Packet and ServerCentral. These partners allow us to offer a range of computing services.

Packet

We recommend Packet for customers that need on-demand, high performance, bare metal servers available by the hour. They also have robust offerings for private / customized deployments. Their offerings end up costing 50-75% of the equivalent offerings from EC2.

To get started with Packet and B2, visit our partner page on Packet.net.

ServerCentral

ServerCentral is the right partner for customers that have business and IT challenges that require more than “just” hardware. They specialize in fully managed, custom cloud solutions that solve complex business and IT challenges. ServerCentral also has expertise in managed network solutions to address global connectivity and content delivery.

To get started with ServerCentral and B2, visit our partner page on ServerCentral.com.

What’s Next?

We’re excited to find out. The combination of B2 and compute unlocks use cases that were previously impossible or at least unaffordable.

“The combination of performance and price offered by this partnership enables me to create an entirely new business line. Before this offering, the economics of the cloud would have made our business simply unviable,” noted Gabriel Menegatti, co-founder at SlicingDice, a serverless data warehousing service. “Knowing that transfers between compute and B2 are free means I don’t have to worry about my business being successful. And, with download pricing from B2 at just $0.01 GB, I know I’m avoiding a 400% tax from AWS on data I retrieve.”

What can you do with B2 & compute? Please share your ideas with us in the comments. And, for those attending NAB 2018 in Las Vegas next week, please come by and say hello!

The post Backblaze Announces B2 Compute Partnerships appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

UK IPTV Provider ACE Calls it Quits, Cites Mounting Legal Pressure

Post Syndicated from Andy original https://torrentfreak.com/uk-iptv-provider-ace-calls-it-quits-cites-mounting-legal-pressure-180402/

Terms including “Kodi box” are now in common usage in the UK and thanks to continuing coverage in the tabloid media, more and more people are learning that free content is just a few clicks away.

In parallel, premium IPTV services are also on the up. In basic terms, these provide live TV and sports through an Internet connection in a consumer-friendly way. When bundled with beautiful interfaces and fully functional Electronic Program Guides (EPG), they’re almost indistinguishable from services offered by Sky and BTSport, for example.

These come at a price, typically up to £10 per month or £20 for a three-month package, but for the customer this represents good value for money. Many providers offer several thousand channels in decent quality and reliability is much better than free streams. This kind of service was offered by prominent UK provider ACE TV but an announcement last December set alarm bells ringing.

“It saddens me to announce this, but due to pressure from the authorities in the UK, we are no longer selling new subscriptions. This obviously includes trials,” ACE said in a statement.

ACE insisted that it would continue as a going concern, servicing existing customers. However, it did keep its order books open for a while longer, giving people one last chance to subscribe to the service for anything up to a year. And with that ACE continued more quietly in the background, albeit with a disabled Facebook page.

But things were not well in ACE land. Like all major IPTV providers delivering services to the UK, ACE was subjected to blocking action by the English Premier League and UEFA. High Court injunctions allow ISPs in the UK to block their pirate streams in real-time, meaning that matches were often rendered inaccessible to ACE’s customers.

While this blocking can be mitigated when the customer uses a VPN, most don’t want to go to the trouble. Some IPTV providers have engaged in a game of cat-and-mouse with the blocking efforts, some with an impressive level of success. However, it appears that the nuisance eventually took its toll on ACE.

“The ISPs in the UK and across Europe have recently become much more aggressive in blocking our service while football games are in progress,” ACE said in a statement last month.

“In order to get ourselves off of the ISP blacklist we are going to black out the EPL games for all users (including VPN users) starting on Monday. We believe that this will enable us to rebuild the bypass process and successfully provide you with all EPL games.”

People familiar with the blocking process inform TF that this is unlikely to have worked.

Although nobody outside the EPL’s partners knows exactly how the system works, it appears that anti-piracy companies simply subscribe to IPTV services themselves and extract the IP addresses serving the content. ISPs then block them. No pause would’ve helped the situation.

Then, on March 24, another announcement indicated that ACE probably wouldn’t make it very far into 2019.

“It is with sorrow that we announce that we are no longer accepting renewals, upgrades to existing subscriptions or the purchase of new credits. We plan to support existing subscriptions until they expire,” the team wrote.

“EPL games including highlights continue to be blocked and are not expected to be reinstated before the end of the season.”

The suggestion was that ACE would keep going, at least for a while, but chat transcripts with the company obtained by TF last month indicated that ACE would probably shut down, sooner rather than later. Less than a week on, that proved to be the case.

On or around March 29, ACE began sending emails out to customers, announcing the end of the company.

“We recently announced that Ace was no longer accepting renewals or offering new reseller credits but planned to support existing subscription. Due to mounting legal pressure in the UK we have been forced to change our plans and we are now announcing that Ace will close down at the end of March,” the email read.

“This means that from April 1st onwards the Ace service will no longer work.”

April 1 was yesterday and it turns out it wasn’t a joke. Customers who paid in advance no longer have a service and those who paid a year up front are particularly annoyed. So-called ‘re-sellers’ of ACE are fuming more than most.

Re-sellers effectively act as sales agents for IPTV providers, buying access to the service at a reduced rate and making a small profit on each subscriber they sign up. They get a nice web interface to carry out the transactions and it’s something that anyone can do.

However, this generally requires investment from the re-seller in order to buy ‘credits’ up front, which are used to sell services to new customers. Those who invested money in this way with ACE are now in trouble.

“If anyone from ACE is reading here, yer a bunch of fuckin arseholes. I hope your next shite is a hedgehog!!” one shouted on Reddit. “Being a reseller for them and losing hundreds a pounds is bad enough!!”

While the loss of a service is probably a shock to more recent converts to the world of IPTV, those with experience of any kind of pirate TV product should already be well aware that this is nothing out of the ordinary.

For those who bought hacked or cloned satellite cards in the 1990s, to those who used ‘chipped’ cable boxes a little later on, the free rides all come to an end at some point. It’s just a question of riding the wave when it arrives and paying attention to the next big thing, without investing too much money at the wrong time.

For ACE’s former customers, it’s simply a case of looking for a new provider. There are plenty of them, some with zero intent of shutting down. There are rumors that ACE might ‘phoenix’ themselves under another name but that’s also par for the course when people feel they’re owed money and suspicions are riding high.

“Please do not ask if we are rebranding/setting up a new service, the answer is no,” ACE said in a statement.

And so the rollercoaster continues…

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Why Did The World’s Largest Streaming Site Suddenly Shut Down?

Post Syndicated from Andy original https://torrentfreak.com/why-did-the-worlds-largest-streaming-site-suddenly-shut-down-180401/

With sites like The Pirate Bay still going great guns in the background, streaming sites are now all the rage. With their Netflix style interfaces and almost instant streaming, these platforms provide the kind of instant fix impatient pirates long for.

One of the most successful was 123Movies, which over the past 18 months and several rebrandings (123movieshub, GoMovies) managed to build a steady base of millions of users.

Had such a site made its base in the US or Europe, it’s likely that authorities would’ve been breathing down its neck somewhat sooner. However, the skyrocketing platform was allegedly based in Vietnam, a country not exactly known for its staunch support of intellectual property rights. Nevertheless, the tentacles of Hollywood and its friends in government are never far away.

In March 2017, US Ambassador to Vietnam Ted Osius called on the local Government to criminally prosecute the people behind movie streaming site 123movies, Kisscartoon, and a Putlocker variant.

Osius had a meeting with Truong Minh Tuan, Vietnam’s Minister of Information and Communications, after which the Minister assured the Ambassador that Vietnam wanted to protect copyrights. He reportedly told Osius that a decision would soon be made on how to deal with the pirate streaming sites. Perhaps coincidentally, perhaps not, during the discussions 123Movies suffered a significant period of downtime.

Almost exactly a year later, the MPAA piled on the pressure again when it branded 123Movies as the “most popular illegal site in the world”, noting that its 98 million monthly visitors were being serviced from Vietnam.

Then, around March 19, 2018, 123Movies announced that it would be shutting down for good. A notice on the site was accompanied by a countdown timer, predicting the end of the site in five days. When the timer ran out, so did the site and it remains down to this day. But was its closure entirely down to the MPAA?

For the past couple of years, Vietnam has been seeking to overhaul its intellectual property laws, not least due to pressure from countries like the United States. Then, last October, Vietnamese Ambassador Duong Chi Dung was voted in as chairman of the World Intellectual Property Organization (WIPO) General Assembly for the 2018-19 tenure.

It was the first time in 12 years that the Asia-Pacific region had had one of its representatives serving as chairperson of the WIPO General Assembly. Quite an honor considering the diplomat enjoyed the backing of 191 member nations during the Assembly’s 49th session in Geneva, Switzerland.

Then in February, local media began publishing stories detailing how Vietnam was improving its stance towards intellectual property. Citing the sixth annual International IP Index released that month by the US Chamber of Commerce Global Innovation Policy Center (GIPC), it was noted that Vietnam’s score was on the increase.

“Vietnam has taken some positive steps forward towards strengthening its IP framework to compete more closely with its Southeast Asian peers, increasing its score,” said Patrick Kilbride, vice president of GIPC.

“With continued investment in strong IP rights, Vietnam can harness this positive momentum to become a leader in the region, stimulate its domestic capacity for innovation, and enhance its global competitiveness.”

The Vietnam government was also credited with passing legislation to “strengthen the criminal standards for IP infringement”, a move set to “strengthen the enforcement environment” in the country.

Amid the positive developments, it was noted that Vietnam has a way to go. In early March, a report in Vietnam News cited a deputy chief inspector of the Ministry of Science and Technology as saying that while an intellectual property court is “in sight”, it isn’t yet clear when one will appear.

“There needs to be an intellectual property court in Vietnam, but we don’t know when it will be established,” Nguyễn Như Quỳnh said. That, it appears, is happily being exploited, both intentionally and by those who don’t know any better.

“Several young people are making tonnes of money out of their online businesses without having to have capital, just a few tricks to increase the number of ‘fans’ on their Facebook pages,” she said. “But a lot of them sell fake stuff, which is considered an infringement.”

Come April 10, 2018, there will be new IP regulations in place in Vietnam concerning local and cross-border copyright protection. Additionally, amendments made last year to the Penal Code, which took effect this year, mean that IP infringements carried out by businesses will now be subject to criminal prosecution.

“Article 225 of the Penal Code stipulates that violations of IPR and related rights by private individuals carries a non-custodial sentence of three years or a jail term of up to three years,” Vietnamnet.vn reports.

“Businesses found guilty will be fined VND300 million to VND1 billion (US$13,000-43,800) for the first offense. If the offense is repeated, the penalty will be a fine of VND3 billion ($130,000) or suspension of operations for up to two years.”

The threshold for criminality appears to be quite low. Previously, infringements had to be carried out “on a commercial scale” to qualify but now all that is required is an illicit profit of around US$500.

How this soup of intellectual property commitments, legislative change, hopes, dreams and promises will affect the apparent rise and fall of streaming platforms in Vietnam is unclear. All that being said, it seems likely that all of these factors are playing their part to ratchet up the pressure.

And, with the US currently playing hardball with China over a lack of respect for IP rights, Vietnam will be keen to be viewed as a cooperative nation.

As for 123Movies, it’s unknown whether it will reappear anytime soon, if at all, given the apparent shifting enthusiasm towards protecting IP in Vietnam. Perhaps against the odds its sister site, Animehub, which was launched in December 2017, is still online. But that could be gone in the blink of an eye too, if recent history is anything to go by.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.