
Steam Censors MEGA.nz Links in Chats and Forum Posts

Post Syndicated from Ernesto original https://torrentfreak.com/steam-censors-mega-nz-links-in-chats-and-forum-posts-180421/

With more than 150 million registered accounts, Steam is much more than just a game distribution platform.

For many people, it’s also a social hangout and a communication channel.

Steam’s instant messaging tool, for example, is widely used for chats with friends. About games of course, but also to discuss lots of other stuff.

While Valve doesn’t mind people socializing on its platform, there are certain things the company doesn’t want Steam users to share. This includes links to the cloud hosting service Mega.

Users who’d like to show off some gaming footage, or even a collection of cat pictures they stored on Mega, are unable to do so. As it turns out, Steam actively censors these types of links from forum posts and chats.

In forum posts, these offending links are replaced by the text {LINK REMOVED} and private chats get the same treatment. Instead of the Mega link, people on the other end only get a mention that a link was removed.

Mega link removed from chat

While Mega operates as a regular company offering cloud hosting services, Steam flags the site as “potentially malicious.”

“The site could contain malicious content or be known for stealing user credentials,” Steam’s link checker warns.

Potentially malicious…

It’s unclear what “malicious” means in this context. Mega has never been flagged by Google’s Safe Browsing program, which is regarded as one of the industry standards for detecting malware and other unwanted software.

What’s more likely is that Mega’s piracy stigma has something to do with the censoring. As it turns out, Steam also censors 4shared.com, as well as Pirate Bay’s former .se domain name.

Other “malicious sites” which get the same treatment are more game-oriented, such as cheathappens.com and the CSGO Skin Screenshot site metjm.net. While it’s understandable that some game developers don’t like these, “malicious” is a rather broad term in this regard.

Mega, for its part, denies doing anything wrong. Mega Chairman Stephen Hall tells TorrentFreak that the company swiftly removes any malicious content once it receives an abuse notice.

“It is crazy for sites to block Mega links as we respond very quickly to disable any links that are reported as malware, generally much quicker than our competitors,” Hall says.

Valve did not immediately reply to our request for clarification, so the precise reason for the link censoring remains unknown.

That said, when something’s censored, the public tends to work around any restrictions. Mega links are still being shared on Steam, with a slightly altered URL. In addition, Mega’s backup domain Mega.co.nz still works fine too.


Russia Blacklists 250 Pirate Sites For Displaying Gambling Ads

Post Syndicated from Andy original https://torrentfreak.com/russia-blacklists-250-pirate-sites-for-displaying-gambling-ads-180421/

Blocking alleged pirate sites is usually a question of proving that they’re involved in infringement and then applying to the courts for an injunction.

In Europe, the process is becoming easier, largely thanks to an EU ruling that permits blocking on copyright grounds.

As reported over the past several years, Russia is taking its blocking processes very seriously. Copyright holders can now have sites blocked in just a few days if they can show that their operators are unresponsive to takedown demands.

This week, however, Russian authorities have again shown that copyright infringement doesn’t have to be the only Achilles’ heel of pirate sites.

Back in 2006, online gambling was completely banned in Russia. Three years later in 2009, land-based gambling was also made illegal in all but four specified regions. Then, in 2012, the Russian Supreme Court ruled that ISPs must block access to gambling sites, something they had previously refused to do.

That same year, telecoms watchdog Rozcomnadzor began publishing a list of banned domains and within those appeared some of the biggest names in gambling. Many shut down access to customers located in Russia but others did not. In response, Rozcomnadzor also began targeting sites that simply offered information on gambling.

Fast forward more than six years and Russia is still taking a hard line against gambling operators. However, it now finds itself in a position where the existence of gambling material can also assist the state in its quest to take down pirate sites.

Following a complaint from the Federal Tax Service of Russia, Rozcomnadzor has again added a large number of ‘pirate’ sites to the country’s official blocklist after they advertised gambling-related products and services.

“Rozcomnadzor, at the request of the Federal Tax Service of Russia, added more than 250 pirate online cinemas and torrent trackers to the unified register of banned information, which hosted illegal advertising of online casinos and bookmakers,” the telecoms watchdog reported.

Almost immediately, 200 of the sites were blocked by local ISPs since they failed to remove the advertising when told to do so. For the remaining 50 sites, breathing space is still available. Their bans can be suspended if the offending ads are removed within a timeframe specified by the authorities, which has not yet run out.

“Information on a significant number of pirate resources with illegal advertising was received by Rozcomnadzor from citizens and organizations through a hotline that operates on the site of the Unified Register of Prohibited Information, all of which were sent to the Federal Tax Service for making decisions on restricting access,” the watchdog revealed.

Links between pirate sites and gambling companies have traditionally been close, with advertising for many top-tier brands appearing on portals large and small. However, in recent times the prevalence of gambling ads has diminished, in part due to campaigns conducted in the United States, Europe, and the UK.

For pirate site operators in Russia, the decision to carry gambling ads now comes with the added risk of being blocked. Only time will tell whether any reduction in traffic is considered serious enough to warrant a gambling boycott of their own.


The End of Google Cloud Messaging, and What it Means for Your Apps

Post Syndicated from Zach Barbitta original https://aws.amazon.com/blogs/messaging-and-targeting/the-end-of-google-cloud-messaging-and-what-it-means-for-your-apps/

On April 10, 2018, Google announced the deprecation of its Google Cloud Messaging (GCM) platform. Specifically, the GCM server and client APIs are deprecated and will be removed as soon as April 11, 2019.  What does this mean for you and your applications that use Amazon Simple Notification Service (Amazon SNS) or Amazon Pinpoint?

First, nothing will break now or after April 11, 2019. GCM device tokens are completely interchangeable with the newer Firebase Cloud Messaging (FCM) device tokens. If you have existing GCM tokens, you’ll still be able to use them to send notifications. This statement is also true for GCM tokens that you generate in the future.

On the back end, we’ve already migrated Amazon SNS and Amazon Pinpoint to the server endpoint for FCM (https://fcm.googleapis.com/fcm/send). As a developer, you don’t need to make any changes as a result of this deprecation.

We created the following mini-FAQ to address some of the questions you may have as a developer who uses Amazon SNS or Amazon Pinpoint.

If I migrate to FCM from GCM, can I still use Amazon Pinpoint and Amazon SNS?

Yes. Your ability to connect to your applications and send messages through both Amazon SNS and Amazon Pinpoint doesn’t change. We’ll update the documentation for Amazon SNS and Amazon Pinpoint soon to reflect these changes.

If I don’t migrate to FCM from GCM, can I still use Amazon Pinpoint and Amazon SNS?

Yes. If you do nothing, your existing credentials and GCM tokens will still be valid. All applications that you previously set up to use Amazon Pinpoint or Amazon SNS will continue to work normally. When you call the API for Amazon Pinpoint or Amazon SNS, we initiate a request to the FCM server endpoint directly.
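For example, here is a minimal sketch using the AWS SDK for JavaScript; the platform application ARN and device token below are hypothetical placeholders. It shows that the same SNS calls work regardless of whether a token was issued by GCM or FCM:

// Sketch: register a GCM/FCM device token with Amazon SNS and publish a
// notification to it. The ARN and token values are hypothetical placeholders.
const AWS = require('aws-sdk');
const sns = new AWS.SNS({ region: 'us-east-1' });

async function notifyDevice(deviceToken) {
  // Create (or retrieve) a platform endpoint for this device token.
  const { EndpointArn } = await sns.createPlatformEndpoint({
    PlatformApplicationArn: 'arn:aws:sns:us-east-1:123456789012:app/GCM/MyApp',
    Token: deviceToken,
  }).promise();

  // Publish a message; behind the scenes, SNS delivers it through the FCM
  // server endpoint, whether the token originally came from GCM or FCM.
  await sns.publish({
    TargetArn: EndpointArn,
    MessageStructure: 'json',
    Message: JSON.stringify({
      default: 'Hello!',
      GCM: JSON.stringify({ notification: { title: 'Hello', body: 'Your token still works' } }),
    }),
  }).promise();
}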

What are the differences between Amazon SNS and Amazon Pinpoint?

Amazon SNS makes it easy for developers to set up, operate, and send notifications at scale, affordably and with a high degree of flexibility. Amazon Pinpoint has many of the same messaging capabilities as Amazon SNS, with the same levels of scalability and flexibility.

The main difference between the two services is that Amazon Pinpoint provides both transactional and targeted messaging capabilities. By using Amazon Pinpoint, marketers and developers can not only send transactional messages to their customers, but can also segment their audiences, create campaigns, and analyze both application and message metrics.

How do I migrate from GCM to FCM?

For more information about migrating from GCM to FCM, see Migrate a GCM Client App for Android to Firebase Cloud Messaging on the Google Developers site.

If you have any questions, please post them in the comments section, or in the Amazon Pinpoint or Amazon SNS forums.

Implementing safe AWS Lambda deployments with AWS CodeDeploy

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/implementing-safe-aws-lambda-deployments-with-aws-codedeploy/

This post courtesy of George Mao, AWS Senior Serverless Specialist – Solutions Architect

AWS Lambda and AWS CodeDeploy recently made it possible to automatically shift incoming traffic between two function versions based on a preconfigured rollout strategy. This new feature allows you to gradually shift traffic to the new function. If there are any issues with the new code, you can quickly roll back and control the impact to your application.

Previously, you had to manually move 100% of traffic from the old version to the new version. Now, you can have CodeDeploy automatically execute pre- or post-deployment tests and automate a gradual rollout strategy. Traffic shifting is built right into the AWS Serverless Application Model (SAM), making it easy to define and deploy your traffic shifting capabilities. SAM is an extension of AWS CloudFormation that provides a simplified way of defining serverless applications.

In this post, I show you how to use SAM, CloudFormation, and CodeDeploy to accomplish an automated rollout strategy for safe Lambda deployments.

Scenario

For this walkthrough, you write a Lambda application that returns a count of the S3 buckets that you own. You deploy it and use it in production. Later on, you receive new requirements: your Lambda application should count only the buckets whose names begin with the letter “a”.

Before you make the change, you need to be sure that your new Lambda application works as expected. If it does have issues, you want to minimize the number of impacted users and roll back easily. To accomplish this, you create a deployment process that publishes the new Lambda function, but does not send any traffic to it. You use CodeDeploy to execute a PreTraffic test to ensure that your new function works as expected. After the test succeeds, CodeDeploy automatically shifts traffic gradually to the new version of the Lambda function.

Your Lambda function is exposed as a REST service via an Amazon API Gateway deployment. This makes it easy to test and integrate.

Prerequisites

To execute the SAM and CloudFormation deployment, you must have the following IAM permissions:

  • cloudformation:*
  • lambda:*
  • codedeploy:*
  • iam:create*

You may use the AWS SAM Local CLI or the AWS CLI to package and deploy your Lambda application. If you choose to use SAM Local, be sure to install it onto your system. For more information, see AWS SAM Local Installation.

All of the code used in this post can be found in this GitHub repository: https://github.com/aws-samples/aws-safe-lambda-deployments.

Walkthrough

For this post, use SAM to define your resources because it comes with built-in CodeDeploy support for safe Lambda deployments.  The deployment is handled and automated by CloudFormation.

SAM allows you to define your Serverless applications in a simple and concise fashion, because it automatically creates all necessary resources behind the scenes. For example, if you do not define an execution role for a Lambda function, SAM automatically creates one. SAM also creates the CodeDeploy application necessary to drive the traffic shifting, as well as the IAM service role that CodeDeploy uses to execute all actions.

Create a SAM template

To get started, write your SAM template and call it template.yaml.

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: An example SAM template for Lambda Safe Deployments.

Resources:

  returnS3Buckets:
    Type: AWS::Serverless::Function
    Properties:
      Handler: returnS3Buckets.handler
      Runtime: nodejs6.10
      AutoPublishAlias: live
      Policies:
        - Version: "2012-10-17"
          Statement: 
          - Effect: "Allow"
            Action: 
              - "s3:ListAllMyBuckets"
            Resource: '*'
      DeploymentPreference:
          Type: Linear10PercentEvery1Minute
          Hooks:
            PreTraffic: !Ref preTrafficHook
      Events:
        Api:
          Type: Api
          Properties:
            Path: /test
            Method: get

  preTrafficHook:
    Type: AWS::Serverless::Function
    Properties:
      Handler: preTrafficHook.handler
      Policies:
        - Version: "2012-10-17"
          Statement: 
          - Effect: "Allow"
            Action: 
              - "codedeploy:PutLifecycleEventHookExecutionStatus"
            Resource:
              !Sub 'arn:aws:codedeploy:${AWS::Region}:${AWS::AccountId}:deploymentgroup:${ServerlessDeploymentApplication}/*'
        - Version: "2012-10-17"
          Statement: 
          - Effect: "Allow"
            Action: 
              - "lambda:InvokeFunction"
            Resource: !Ref returnS3Buckets.Version
      Runtime: nodejs6.10
      FunctionName: 'CodeDeployHook_preTrafficHook'
      DeploymentPreference:
        Enabled: false
      Timeout: 5
      Environment:
        Variables:
          NewVersion: !Ref returnS3Buckets.Version

This template creates two functions:

  • returnS3Buckets
  • preTrafficHook

The returnS3Buckets function is where your application logic lives. It’s a simple piece of code that uses the AWS SDK for JavaScript in Node.js to call the Amazon S3 listBuckets API action and return the number of buckets.

'use strict';

var AWS = require('aws-sdk');
var s3 = new AWS.S3();

exports.handler = (event, context, callback) => {
	console.log("I am here! " + context.functionName  +  ":"  +  context.functionVersion);

	s3.listBuckets(function (err, data){
		if(err){
			console.log(err, err.stack);
			callback(null, {
				statusCode: 500,
				body: "Failed!"
			});
		}
		else{
			var allBuckets = data.Buckets;

			console.log("Total buckets: " + allBuckets.length);
			callback(null, {
				statusCode: 200,
				body: allBuckets.length
			});
		}
	});	
}

Review the key parts of the SAM template that defines returnS3Buckets:

  • The AutoPublishAlias attribute instructs SAM to automatically publish a new version of the Lambda function for each new deployment and link it to the live alias.
  • The Policies attribute specifies additional policy statements that SAM adds onto the automatically generated IAM role for this function. The first statement provides the function with permission to call listBuckets.
  • The DeploymentPreference attribute configures the type of rollout pattern to use. In this case, you are shifting traffic in a linear fashion, moving 10% of traffic every minute to the new version. For more information about supported patterns, see Serverless Application Model: Traffic Shifting Configurations.
  • The Hooks attribute specifies that you want to execute the preTrafficHook Lambda function before CodeDeploy automatically begins shifting traffic. The hook should perform validation testing on the newly deployed Lambda version: it invokes the new function and checks the results. If the tests pass, it instructs CodeDeploy to proceed with the rollout via a call to the codedeploy.putLifecycleEventHookExecutionStatus API, as the code after this list shows.
  • The Events attribute defines an API-based event source that can trigger this function. It accepts requests on the /test path using an HTTP GET method.

The preTrafficHook function code looks like the following:
'use strict';

const AWS = require('aws-sdk');
const codedeploy = new AWS.CodeDeploy({apiVersion: '2014-10-06'});
var lambda = new AWS.Lambda();

exports.handler = (event, context, callback) => {

	console.log("Entering PreTraffic Hook!");
	
	// Read the DeploymentId & LifecycleEventHookExecutionId from the event payload
	var deploymentId = event.DeploymentId;
	var lifecycleEventHookExecutionId = event.LifecycleEventHookExecutionId;

	var functionToTest = process.env.NewVersion;
	console.log("Testing new function version: " + functionToTest);

	// Perform validation of the newly deployed Lambda version
	var lambdaParams = {
		FunctionName: functionToTest,
		InvocationType: "RequestResponse"
	};

	var lambdaResult = "Failed";
	lambda.invoke(lambdaParams, function(err, data) {
		if (err){	// an error occurred
			console.log(err, err.stack);
			lambdaResult = "Failed";
		}
		else{	// successful response
			var result = JSON.parse(data.Payload);
			console.log("Result: " +  JSON.stringify(result));

			// Check the response for valid results
			// The response will be a JSON payload with statusCode and body properties. ie:
			// {
			//		"statusCode": 200,
			//		"body": 51
			// }
			if(result.body == 9){	
				lambdaResult = "Succeeded";
				console.log ("Validation testing succeeded!");
			}
			else{
				lambdaResult = "Failed";
				console.log ("Validation testing failed!");
			}

			// Complete the PreTraffic Hook by sending CodeDeploy the validation status
			var params = {
				deploymentId: deploymentId,
				lifecycleEventHookExecutionId: lifecycleEventHookExecutionId,
				status: lambdaResult // status can be 'Succeeded' or 'Failed'
			};
			
			// Pass AWS CodeDeploy the prepared validation test results.
			codedeploy.putLifecycleEventHookExecutionStatus(params, function(err, data) {
				if (err) {
					// Validation failed.
					console.log('CodeDeploy Status update failed');
					console.log(err, err.stack);
					callback("CodeDeploy Status update failed");
				} else {
					// Validation succeeded.
					console.log('Codedeploy status updated successfully');
					callback(null, 'Codedeploy status updated successfully');
				}
			});
		}  
	});
}

The hook is hardcoded to check that the number of S3 buckets returned is 9.

Review the key parts of the SAM template that defines preTrafficHook:

  • The Policies attribute specifies additional policy statements that SAM adds onto the automatically generated IAM role for this function. The first statement provides permissions to call the CodeDeploy PutLifecycleEventHookExecutionStatus API action. The second statement provides permissions to invoke the specific version of the returnS3Buckets function to test.
  • This function has traffic shifting disabled by setting Enabled: false under its DeploymentPreference.
  • The FunctionName attribute explicitly tells CloudFormation what to name the function. Otherwise, CloudFormation creates the function with the default naming convention: [stackName]-[FunctionName]-[uniqueID].  Name the function with the “CodeDeployHook_” prefix because the CodeDeployServiceRole role only allows InvokeFunction on functions named with that prefix.
  • Set the Timeout attribute to allow enough time to complete your validation tests.
  • Use an environment variable to inject the ARN of the newest deployed version of the returnS3Buckets function. The ARN allows the function to know the specific version to invoke and perform validation testing on.

Deploy the function

Your SAM template is all set and the code is written—you’re ready to deploy the function for the first time. Here’s how to do it via the SAM CLI; replace sam with aws cloudformation to use the AWS CLI instead.

First, package the function. This command returns a CloudFormation importable file, packaged.yaml.

sam package --template-file template.yaml --s3-bucket mybucket --output-template-file packaged.yaml

Now deploy everything:

sam deploy --template-file packaged.yaml --stack-name mySafeDeployStack --capabilities CAPABILITY_IAM

At this point, both Lambda functions have been deployed within the CloudFormation stack mySafeDeployStack. The returnS3Buckets function has been deployed as version 1:

SAM automatically created a few things, including the CodeDeploy application, with the deployment pattern that you specified (Linear10PercentEvery1Minute). There is currently one deployment group, with no action, because no deployments have occurred. SAM also created the IAM service role that this CodeDeploy application uses:

There is a single managed policy attached to this role, which allows CodeDeploy to invoke any Lambda function that begins with “CodeDeployHook_”.
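Simplified, the key statement in that managed policy looks something like the following sketch (for illustration; not the verbatim policy document):

{
  "Effect": "Allow",
  "Action": ["lambda:InvokeFunction"],
  "Resource": ["arn:aws:lambda:*:*:function:CodeDeployHook_*"]
}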

An API called safeDeployStack has been set up. It targets your Lambda function with the /test resource using the GET method. When you test the endpoint, API Gateway executes the returnS3Buckets function, which returns the number of S3 buckets that you own. In this case, it’s 51.

Publish a new Lambda function version

Now implement the requirements change, which is to make returnS3Buckets count only buckets that begin with the letter “a”. The code now looks like the following (see returnS3BucketsNew.js in GitHub):

'use strict';

var AWS = require('aws-sdk');
var s3 = new AWS.S3();

exports.handler = (event, context, callback) => {
	console.log("I am here! " + context.functionName  +  ":"  +  context.functionVersion);

	s3.listBuckets(function (err, data){
		if(err){
			console.log(err, err.stack);
			callback(null, {
				statusCode: 500,
				body: "Failed!"
			});
		}
		else{
			var allBuckets = data.Buckets;

			console.log("Total buckets: " + allBuckets.length);
			//callback(null, allBuckets.length);

			//  New Code begins here
			var counter = 0;
			for (var i = 0; i < allBuckets.length; i++) {
				if (allBuckets[i].Name[0] === "a")
					counter++;
			}
			console.log("Total buckets starting with a: " + counter);

			callback(null, {
				statusCode: 200,
				body: counter
			});
			
		}
	});	
}

Repackage and redeploy with the same two commands as earlier:

sam package --template-file template.yaml --s3-bucket mybucket --output-template-file packaged.yaml

sam deploy --template-file packaged.yaml --stack-name mySafeDeployStack --capabilities CAPABILITY_IAM

CloudFormation understands that this is a stack update instead of an entirely new stack. You can see that reflected in the CloudFormation console:

During the update, CloudFormation deploys the new Lambda function as version 2 and adds it to the “live” alias. There is no traffic routing there yet. CodeDeploy now takes over to begin the safe deployment process.

The first thing CodeDeploy does is invoke the preTrafficHook function. Verify that this happened by reviewing the Lambda logs and metrics:

The function should progress successfully, invoke Version 2 of returnS3Buckets, and finally invoke the CodeDeploy API with a success code. After this occurs, CodeDeploy begins the predefined rollout strategy. Open the CodeDeploy console to review the deployment progress (Linear10PercentEvery1Minute):

Verify the traffic shift

During the deployment, verify that the traffic shift has started to occur by running the test periodically. As the deployment shifts towards the new version, a larger percentage of the responses return 9 instead of 51: the count of buckets beginning with “a” versus the total bucket count.

A minute later, you see 10% more traffic shifting to the new version. The whole process takes 10 minutes to complete. After completion, open the Lambda console and verify that the “live” alias now points to version 2:

After 10 minutes, the deployment is complete and CodeDeploy signals success to CloudFormation and completes the stack update.

Check the results

If you invoke the function alias manually, you see the results of the new implementation.

aws lambda invoke --function-name [lambda arn to live alias] out.txt

You can also execute the prod stage of your API and verify the results by issuing an HTTP GET to the invoke URL:

Summary

This post has shown you how you can safely automate your Lambda deployments using the Lambda traffic shifting feature. You used the Serverless Application Model (SAM) to define your Lambda functions and configured CodeDeploy to manage your deployment patterns. Finally, you used CloudFormation to automate the deployment and updates to your function and PreTraffic hook.

Now that you know all about this new feature, you’re ready to begin automating Lambda deployments with confidence that things will work as designed. I look forward to hearing about what you’ve built with the AWS Serverless Platform.

New – Registry of Open Data on AWS (RODA)

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-registry-of-open-data-on-aws-roda/

Almost a decade ago, my colleague Deepak Singh introduced the AWS Public Datasets in his post Paging Researchers, Analysts, and Developers. I’m happy to report that Deepak is still an important part of the AWS team and that the Public Datasets program is still going strong!

Today we are announcing a new take on open and public data, the Registry of Open Data on AWS, or RODA. This registry includes existing Public Datasets and allows anyone to add their own datasets so that they can be accessed and analyzed on AWS.

Inside the Registry
The home page lists all of the datasets in the registry:

Entering a search term shrinks the list so that only the matching datasets are displayed:

Each dataset has an associated detail page, including usage examples, license info, and the information needed to locate and access the dataset on AWS:

In this case, I can access the data with a simple CLI command:

I could also access it programmatically, or download data to my EC2 instance.

Adding to the Repository
If you have a dataset that is publicly available and would like to add it to RODA, you can simply send us a pull request. Head over to the open-data-registry repo, read the CONTRIBUTING document, and create a YAML file that describes your dataset, using one of the existing files in the datasets directory as a model:
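A hypothetical entry might look something like the sketch below; the field names follow the repo’s existing examples, but the CONTRIBUTING document is the authoritative reference:

# Hypothetical dataset entry for illustration only
Name: Example Ocean Temperature Readings
Description: Daily sea-surface temperature readings (fictional example)
Documentation: https://example.com/ocean-temps/docs
Contact: datasets@example.com
UpdateFrequency: New data is added monthly
License: CC-BY 4.0
Tags:
  - oceans
  - climate
Resources:
  - Description: Raw readings in CSV form
    ARN: arn:aws:s3:::example-ocean-temps
    Region: us-east-1
    Type: S3 Bucket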

We’ll review pull requests regularly; you can “star” or watch the repo in order to track additions and changes.

Impress Me
I am looking forward to an inrush of new datasets, along with some blog posts and apps that show how to use the data in powerful and interesting ways. Let me know what you come up with.

Jeff;

Achieving Major Stability and Performance Improvements in Yahoo Mail with a Novel Redux Architecture

Post Syndicated from mikesefanov original https://yahooeng.tumblr.com/post/173062946866

yahoodevelopers:

By Mohit Goenka, Gnanavel Shanmugam, and Lance Welsh

At Yahoo Mail, we’re constantly striving to upgrade our product experience. We do this not only by adding new features based on our members’ feedback, but also by providing the best technical solutions to power the most engaging experiences. As such, we’ve recently introduced a number of novel and unique revisions to the way in which we use Redux that have resulted in significant stability and performance improvements. Developers may find our methods useful in achieving similar results in their apps.

Improvements to product metrics

Last year Yahoo Mail implemented a brand new architecture using Redux. Since then, we have transformed the overall architecture to reduce latencies in various operations, reduce JavaScript exceptions, and better synchronize state. As a result, the product is much faster and more stable.

Stability improvements (reduction in errors):

  • when checking for new emails – 20%
  • when reading emails – 30%
  • when sending emails – 20%

Performance improvements:

  • 10% improvement in page load performance
  • 40% improvement in frame rendering time

We have also reduced API calls by approximately 20%.

How we use Redux in Yahoo Mail

Redux architecture is reliant on one large store that represents the application state. In a Redux cycle, action creators dispatch actions to change the state of the store. React Components then respond to those state changes. We’ve made some modifications on top of this architecture that are atypical in the React-Redux community.

For instance, when fetching data over the network, the traditional methodology is to use Thunk middleware. Yahoo Mail fetches data over the network from our API. Thunks would create an unnecessary and undesirable dependency between the action creators and our API. If and when the API changes, the action creators must then also change. To keep these concerns separate, we dispatch the action payload from the action creator and store it in the Redux state for later processing by “action syncers”. Action syncers use the payload information from the store to make requests to the API and process responses. In other words, the action syncers form an API layer by interacting with the store.

An additional benefit to keeping the concerns separate is that the API layer can change as the backend changes, thereby preventing such changes from bubbling back up into the action creators and components. This also allowed us to optimize the API calls by batching, deduping, and processing the requests only when the network is available. We applied similar strategies for handling other side effects like route handling and instrumentation. Overall, action syncers helped us to reduce our API calls by ~20% and bring down API errors by 20-30%.
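A minimal sketch of the idea follows; the function, state shape, and action names are assumptions for illustration, not Yahoo’s actual code:

// Hypothetical "action syncer": it watches the store for queued request
// payloads and owns all API interaction, so action creators never depend
// on the API directly.
function createActionSyncer(store, api) {
  let inFlight = false;
  store.subscribe(async () => {
    const { pendingRequests } = store.getState(); // assumed state shape
    if (inFlight || pendingRequests.length === 0) return;
    inFlight = true;
    // Dedupe queued requests by key and send them as one batch.
    const batch = [...new Map(pendingRequests.map(r => [r.key, r])).values()];
    const responses = await api.send(batch);
    store.dispatch({ type: 'REQUESTS_SYNCED', payload: responses });
    inFlight = false;
  });
}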

Another change to the normal Redux architecture was made to avoid unnecessary props. The React-Redux community has learned to avoid passing unnecessary props from high-level components through multiple layers down to lower-level components (prop drilling) for rendering. We have introduced action enhancer middleware to avoid passing additional unnecessary props that are purely used when dispatching actions. Action enhancers add data to the action payload so that data does not have to come from the component when dispatching the action. This saves the component from having to receive that data through props and has improved frame rendering by ~40%. The use of action enhancers also avoids writing utility functions to add commonly-used data to each action from action creators.
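An action enhancer can be expressed as ordinary Redux middleware. Here is a minimal sketch, again with assumed names and state shape:

// Hypothetical "action enhancer" middleware: it injects commonly-needed
// data into the action payload so components don't have to receive that
// data through props just to include it when dispatching.
const actionEnhancer = store => next => action => {
  const { session } = store.getState(); // assumed state shape
  return next({
    ...action,
    payload: { ...action.payload, accountId: session.accountId },
  });
};

// Installed like any other middleware:
// const store = createStore(rootReducer, applyMiddleware(actionEnhancer));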


In our new architecture, the store reducers accept the dispatched action via action enhancers to update the state. The store then updates the UI, completing the action cycle. Action syncers then initiate the call to the backend APIs to synchronize local changes.

Conclusion

Our novel use of Redux in Yahoo Mail has led to significant user-facing benefits through a more performant application. It has also reduced development cycles for new features due to its simplified architecture. We’re excited to share our work with the community and would love to hear from anyone interested in learning more.

Oblivious DNS

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/04/oblivious_dns.html

Interesting idea:

…we present Oblivious DNS (ODNS), which is a new design of the DNS ecosystem that allows current DNS servers to remain unchanged and increases privacy for data in motion and at rest. In the ODNS system, both the client is modified with a local resolver, and there is a new authoritative name server for .odns. To prevent an eavesdropper from learning information, the DNS query must be encrypted; the client generates a request for www.foo.com, generates a session key k, encrypts the requested domain, and appends the TLD domain .odns, resulting in {www.foo.com}k.odns. The client forwards this, with the session key encrypted under the .odns authoritative server’s public key ({k}PK) in the “Additional Information” record of the DNS query to the recursive resolver, which then forwards it to the authoritative name server for .odns. The authoritative server decrypts the session key with his private key, and then subsequently decrypts the requested domain with the session key. The authoritative server then forwards the DNS request to the appropriate name server, acting as a recursive resolver. While the name servers see incoming DNS requests, they do not know which clients they are coming from; additionally, an eavesdropper cannot connect a client with her corresponding DNS queries.
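To make the flow concrete, here is a rough sketch of the client-side query construction in Node.js. The cipher choices, encodings, and names are illustrative assumptions; the description above does not pin down a wire format:

// Rough sketch of ODNS client-side query construction, as described above.
// Cipher and encoding choices here are assumptions for illustration only.
const crypto = require('crypto');

function buildOdnsQuery(domain, odnsServerPublicKey) {
  const sessionKey = crypto.randomBytes(16); // session key k
  const iv = crypto.randomBytes(12);

  // {www.foo.com}k : the requested domain encrypted under k
  const cipher = crypto.createCipheriv('aes-128-gcm', sessionKey, iv);
  const encryptedDomain = Buffer.concat([cipher.update(domain, 'utf8'), cipher.final()]);
  const qname = encryptedDomain.toString('base64url') + '.odns';

  // {k}PK : the session key encrypted under the .odns authoritative server's
  // public key, carried in the query's "Additional Information" record
  const encryptedSessionKey = crypto.publicEncrypt(odnsServerPublicKey, sessionKey);

  return { qname, additionalRecord: encryptedSessionKey.toString('base64url'), iv };
}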

News article.

Hollywood Studios Get ISP Blocking Order Against Rarbg in India

Post Syndicated from Ernesto original https://torrentfreak.com/hollywood-studios-score-blocking-order-against-rarbg-in-india-180417/

While the major Hollywood studios are very reluctant to bring a pirate site blocking case to their home turf, they are very active abroad.

The companies are the driving force behind lawsuits in Europe and Australia, and are also active in India, where they booked a new success last week.

Website blocking is by no means a new phenomenon in India. The country is known for so-called John Doe orders, where a flurry of websites are temporarily blocked to protect the release of a specific title.

The major Hollywood studios are taking a different approach. Disney Enterprises, Twentieth Century Fox, Paramount Pictures, Columbia Pictures, Universal, and Warner Bros. are requesting blockades, accusing sites of being structural copyright infringers.

One of the most recent targets is the popular torrent site Rarbg. The Hollywood studios describe Rarbg as a ‘habitual’ copyright infringer and demand that several Internet providers block access to the site.

“It is submitted that the Defendant Website aids and facilitates the accessibility and availability of infringing material, and induce third parties, intentionally and/or knowingly, to infringe through their websites by various means,” the movie studios allege.

The complaint filed at the High Court of Delhi lists more than 20 Internet providers as co-defendants, and also includes India’s Department of Telecommunications and Department of Electronics and Information Technology in the mix.

The two Government departments are added because they have the power to enforce blocking orders. Specifically, the Hollywood studios note that the Department of Technology’s license agreement with ISPs requires these companies to ensure that copyright infringing content is not carried on their networks.

“It is submitted that the DoT itself acknowledges the fact that service providers have an obligation to ensure that no violation of third party intellectual property rights takes place through their networks and that effective protection is provided to right holders of such intellectual property,” the studios write.

Last week the court granted an injunction that requires local Internet providers including Bharti Airtel, Reliance Communications, Telenor, You Broadband, and Vodafone to block Rarbg.

Blocking order

As requested, the Department of Telecommunications and Department of Electronics and Information Technology are directed to notify all local internet and telecom service providers that they must block the torrent site as well.

The order is preliminary and can still be contested in court. However, given the history of similar blocking efforts around the world, it is likely that it will be upheld.

While there’s not much coverage on the matter, this isn’t the first blocking request the companies have filed in India. Last October, a similar case was filed against another popular torrent site, 1337x.to, with success.

TorrentFreak reached out to the law firm representing the Hollywood studios to get a broader overview of the blocking plans in India. At the time of writing, we have yet to hear back.

A copy of the order obtained by Disney Enterprises, Twentieth Century Fox, Paramount Pictures, Columbia Pictures, Universal, Warner Bros and the local Disney-owned media conglomerate UTV Software is available here (pdf).


TV Broadcaster Wants App Stores Blocked to Prevent Piracy

Post Syndicated from Andy original https://torrentfreak.com/tv-broadcaster-wants-app-stores-blocked-to-prevent-piracy-180416/

After first targeting torrent and regular streaming platforms with blocking injunctions, last year Village Roadshow and studios including Disney, Universal, Warner Bros, Twentieth Century Fox, and Paramount began looking at a new threat.

The action targeted HDSubs+, a reasonably popular IPTV service that provides hundreds of otherwise premium live channels, movies, and sports for a relatively small monthly fee. The application was filed during October 2017 and targeted Australia’s largest ISPs.

In parallel, Hong Kong-based broadcaster Television Broadcasts Limited (TVB) launched a similar action, demanding that the same ISPs (including Telstra, Optus, TPG, and Vocus, plus subsidiaries) block several ‘pirate’ IPTV services, named in court as A1, BlueTV, EVPAD, FunTV, MoonBox, Unblock, and hTV5.

Due to the similarity of the cases, both applications were heard in Federal Court in Sydney on Friday. Neither case is as straightforward as blocking a torrent or basic streaming portal, so both applicants are having to deal with additional complexities.

The TVB case is of particular interest. Up to a couple of dozen URLs maintain the services, which are used to provide the content, an EPG (electronic program guide), updates and sundry other features. While most of these appear to fit the description of an “online location” designed to assist copyright infringement, where the Android-based software for the IPTV services is hosted provides an interesting dilemma.

ComputerWorld reports that the apps – which offer live broadcasts, video-on-demand, and catch-up TV – are hosted on as-yet-unnamed sites which are functionally similar to Google Play or Apple’s App Store. They’re repositories of applications that also carry non-infringing apps, such as those for Netflix and YouTube.

Nevertheless, despite clear knowledge of this dual use, TVB wants to have these app marketplaces blocked by Australian ISPs, which would not only render the illicit apps inaccessible to the public but all of the non-infringing ones too. Part of its argument that this action would be reasonable appears to be that legal apps – such as Netflix’s for example – can also be freely accessed elsewhere.

It will be up to Justice Nicholas to decide whether the “primary purpose” of these marketplaces is to infringe or facilitate the infringement of TVB’s copyrights. However, TVB also appears to have another problem which is directly connected to the copyright status in Australia of its China-focused live programming.

Justice Nicholas questioned whether watching a stream in Australia of TVB’s live Chinese broadcasts would amount to copyright infringement because no copy of that content is being made.

“If most of what is occurring here is a reproduction of broadcasts that are not protected by copyright, then the primary purpose is not to facilitate copyright infringement,” Justice Nicholas said.

One of the problems appears to be that China is not a party to the 1961 Rome Convention for the Protection of Performers, Producers of Phonograms and Broadcasting Organisations. However, TVB is arguing that it should still receive protection because it airs pre-recorded content and the live broadcasts are also archived for re-transmission via catch-up services.

The question over whether unchoreographed live broadcasts receive protection has been raised in other regions but in most cases, a workaround has been found. The presence of broadcaster logos on screen (which receive copyright protection) is a factor and it’s been reported that broadcasters are able to record the ‘live’ action and transmit a copy just a couple of seconds later, thereby broadcasting an already-copyrighted work.

While TVB attempts to overcome its issues, Village Roadshow is facing some of its own in its efforts to take down HDSubs+.

It appears that at least partly in response to the Roadshow legal action, the service has undergone some modifications, including a change of brand to ‘Press Play Extra’. As reported by ZDNet, there have been structural changes too, which means that Roadshow can no longer “see under the hood”.

According to Justice Nicholas, there is no evidence that the latest version of the app infringes copyright but according to counsel for Village Roadshow, the new app is merely transitional and preparing for a possible future change.

“We submit the difference to be drawn is reactive to my clients serving on the operators a notice,” counsel for Roadshow argued, with an expert describing the new app as “almost like a placeholder.”

In short, Roadshow still wants all of the target domains in its original application blocked because the company believes there’s a good chance they’ll be reactivated in the future.

None of the ISPs involved in either case turned up to the hearings on Friday, which removes one layer of complexity from what have thus far been less than straightforward cases.


Google Search Receives Fewer Takedown Notices Than Before

Post Syndicated from Ernesto original https://torrentfreak.com/google-search-receives-fewer-takedown-notices-than-before-180414/

In recent years Google has had to cope with a continuous increase in takedown requests from copyright holders, which target pirate sites in search results.

Just a few years ago the search engine removed ‘only’ a few thousand URLs per day. This has since grown to millions and has kept growing, until recently.

Around a year ago, Google was receiving takedown requests at a rate of a billion per year, and for a while the volume stabilized at roughly 20 million requests per week. By October last year, Google search had processed over three billion DMCA requests since it started counting.

After that, it appears that things calmed down a little. Where Google’s weekly takedown chart went up year after year, it’s now trending in a downward direction.

During the past half year, Google received ‘only’ 375 million takedown requests. That translates to roughly 15 million per week, or 750 million per year. This is a 25% decrease compared to the average in 2016.

Does this mean that copyright holders can no longer find enough pirated content via the search engine? We doubt it. But it’s clear that some of the big reporting agencies are sending in fewer complaints.

Degban, for example, which was at one point good for more than 10% of the weekly number of DMCA requests, has disappeared completely. Other big players, such as the Mexican anti-piracy outfit APDIF and Remove Your Media, have clearly lowered their volumes.

APDIF’s weekly DMCA volume

Of all the big players, UK music group BPI has been the most consistent. Their average hasn’t dropped much in recent years, but it is certainly not rising either.

It’s too early to tell whether this trend will hold, but according to the numbers we see now, Google will for the first time have a significant decrease in the number of takedown requests this year.

Despite the decrease, Google is under quite a bit of pressure from copyright holders to improve its takedown efforts. Most would like Google to delist pirate site domains entirely.

While the search engine isn’t willing to go that far, it does give a lower ranking to sites for which it receives a large volume of takedown requests. In addition, the company recently started accepting ‘prophylactic’ DMCA requests, for content that is not indexed yet.


MPAA and RIAA Still Can’t Go After Megaupload

Post Syndicated from Ernesto original https://torrentfreak.com/mpaa-and-riaa-still-cant-go-after-megaupload-180414/

Well over six years have passed since Megaupload was shut down, but there is still little progress in the criminal proceedings against its founders.

The United States wants New Zealand to extradite the men but has thus far failed to achieve that goal. Dotcom and his former colleagues are using all legal means to prevent this eventuality, and a final conclusion has yet to be reached.

While all parties await the outcome, the criminal case in the United States remains pending. The same goes for the lawsuits filed by the MPAA and RIAA in 2014.

Since the civil cases may influence the criminal proceedings, Megaupload’s legal team previously managed to put these cases on hold, and last week another extension was granted.

Previous extensions didn’t always go this smoothly. Last year there were concerns that the long delays could result in the destruction of evidence, as some of Megaupload’s hard drives were starting to fail.

However, after the parties agreed on a solution to back-up and restore the files, this is no longer an issue.

“With the preservation order in place, and there being no other objection, Defendant Megaupload hereby moves the Court to enter the attached proposed order, continuing the stay in this case for an additional six months,” Megaupload’s legal team recently informed the court.

Without any objections from the MPAA and RIAA, U.S. District Court Judge Liam O’Grady swiftly granted Megaupload’s request to stay both lawsuits until October this year.

While the US Government hopes to have Dotcom in custody by that time, the entrepreneur has different plans. Following a win at the Human Rights Tribunal in New Zealand, he hopes to put the criminal case behind him soon.

If that indeed happens, the MPAA and RIAA might have their turn.

The latest stay order


IP Address Fail: ISP Doesn’t Have to Hand ‘Pirates’ Details to Copyright Trolls

Post Syndicated from Andy original https://torrentfreak.com/ip-address-fail-isp-doesnt-have-to-hand-pirates-details-to-copyright-trolls-180414/

On October 27, 2016, UK-based Copyright Management Services (CMS) filed a case against the Sweden-based ISP Tele2.

CMS, run by Patrick Achache of German-based anti-piracy outfit MaverickEye (which in turn is deeply involved with infamous copyright troll outfit Guardaley), claimed that Tele2 customers had infringed its clients’ copyrights on the movies Cell and IT by sharing them via BitTorrent.

Since Tele2 had the personal details of the customers behind those IP addresses, CMS asked the Patent and Market Court to prevent the ISP from deleting the data before it could be handed over. Once in its possession, CMS would carry out the usual process of writing to customers and demanding cash settlements to make supposed lawsuits go away.

Tele2 complained that it could not hand over the details of customers using NAT addresses since it simply doesn’t hold that information. The ISP also said it could not hand over details of customers if IP address information had previously been deleted.

Taking these objections into consideration, in November 2017 the Court approved an interim order in respect of the remaining IP addresses. But there were significant problems which led the ISP to appeal.

According to tests carried out by Tele2, many of the IP addresses in the case did not relate to Sweden or indeed Tele2. In fact, some IP addresses belonged to foreign companies or mere affiliates of the ISP.

“Tele2 thus lacks the actual ability to provide information regarding a large part of the IP addresses covered by the submission,” the Court of Appeal noted in a decision published this week.

The problem appears to lie with the way the MaverickEye monitoring system attributed monitored IP addresses to Tele2.

The Court notes that the company relied on the RIPE Database which stated that the IP addresses in question were allocated to the “geographic area of Sweden”. According to Tele2, however, that wasn’t the case and as such, it had no information to hand over.

CMS, on the other hand, maintained that according to RIPE’s records, Tele2 was indeed the controller of the IP addresses in question so must hand over the information as requested.

While the Patent and Market Court said that Tele2 didn’t object to the MaverickEye monitoring software in terms of the data it collects on file-sharers, it noted that CMS had failed to initiate an investigation in respect of the IP addresses allegedly not belonging to Tele2.

“CMS has not invoked any investigation showing how the identification of the IP addresses in question is made in this case or who at Maverickeye UG was responsible for this,” the Court writes.

“Nor did CMS use the opportunity to hear representatives of Tele2 or others with Tele2 in mind to discover if the company has access to any of the current IP addresses and, if so, which.”

Considering the above, the Court notes that Tele2’s statement, that it doesn’t have access to the data, must stand.

“In these circumstances, CMS, against Tele2’s appeal, has not shown that Tele2 holds the information requested by the disclosure order. CMS’ application for a disclosure order should therefore be rejected,” the Court concludes.

The decision cannot be appealed so Copyright Management Services won’t get its hands on the personal details of the people behind the IP addresses, at least through this process.

The decision (Swedish, pdf)


‘Pirate’ Android App Store Operator Avoids Prison

Post Syndicated from Ernesto original https://torrentfreak.com/pirate-android-app-store-operator-avoids-prison-180413/

Assisted by police in France and the Netherlands, the FBI took down the “pirate” Android stores Appbucket, Applanet and SnappzMarket in the summer of 2012.

During the years that followed several people connected to the Android app sites were arrested and indicted, and slowly but surely these cases are reaching their conclusions.

This week the Northern District Court of Georgia announced the sentencing of one of the youngest defendants. Aaron Buckley was fifteen when he started working on Applanet, and still a teenager when armed agents raided his house.

Years have passed and a lot has changed since then, Buckley’s attorney informed the court before sentencing. The former pirate, who pleaded guilty to Conspiracy to Commit Copyright Infringement and Criminal Copyright Infringement, is a completely different person today.

Similar to many people who have a run-in with the law, life wasn’t always easy on him. Computers offered a welcome escape but also dragged Buckley into trouble, something he deeply regrets now.

Following the indictment, things started to change. The Applanet operator picked up his life, away from the computer, and also got involved in community work. Among other things, he plays a leading role in a popular support community for LGBT teenagers.

Given the tough circumstances of his personal life, which we won’t elaborate on, his attorney requested a downward departure from the regular sentencing guidelines, to allow for lesser punishment.

After considering all the options, District Court Judge Timothy C. Batten agreed to a lower sentence. Unlike some other pirate app store operators, who must spend years in prison, Buckley will not be incarcerated.

Instead, the Applanet operator, who is now in his mid-twenties, will be put on probation for three years, including a year of home confinement.

The sentence (pdf)

In addition, he has to perform 20 hours of community service and work towards passing a General Educational Development (GED) exam.

It’s tough to live with the prospect of possibly spending years in jail, especially when that prospect hangs over you for the better part of a decade. Given the circumstances, this sentence must be a huge relief.

TorrentFreak contacted Buckley, who informed us that he is happy with the outcome and ready to work on a bright future.

“I really respect the government and the judge in their sentencing and am extremely grateful that they took into account all concerns of my health and life situation in regards to possible sentences,” he tells us.

“I am just glad to have another chance to use my time and skills to hopefully contribute to society in a more positive way as much as I am capable thanks to the outcome of the case.”

Time to move on.


AWS AppSync – Production-Ready with Six New Features

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-appsync-production-ready-with-six-new-features/

If you build (or want to build) data-driven web and mobile apps and need real-time updates and the ability to work offline, you should take a look at AWS AppSync. Announced in preview form at AWS re:Invent 2017 and described in depth here, AWS AppSync is designed for use in iOS, Android, JavaScript, and React Native apps. AWS AppSync is built around GraphQL, an open, standardized query language that makes it easy for your applications to request the precise data that they need from the cloud.

I’m happy to announce that the preview period is over and that AWS AppSync is now generally available and production-ready, with six new features that will simplify and streamline your application development process:

Console Log Access – You can now see the CloudWatch Logs entries that are created when you test your GraphQL queries, mutations, and subscriptions from within the AWS AppSync Console.

Console Testing with Mock Data – You can now create and use mock context objects in the console for testing purposes.

Subscription Resolvers – You can now create resolvers for AWS AppSync subscription requests, just as you can already do for query and mutate requests.

Batch GraphQL Operations for DynamoDB – You can now make use of DynamoDB’s batch operations (BatchGetItem and BatchWriteItem) across one or more tables in your resolver functions.

CloudWatch Support – You can now use Amazon CloudWatch Metrics and CloudWatch Logs to monitor calls to the AWS AppSync APIs.

CloudFormation Support – You can now define your schemas, data sources, and resolvers using AWS CloudFormation templates.

A Brief AppSync Review
Before diving in to the new features, let’s review the process of creating an AWS AppSync API, starting from the console. I click Create API to begin:

I enter a name for my API and (for demo purposes) choose to use the Sample schema:

The schema defines a collection of GraphQL object types. Each object type has a set of fields, with optional arguments:

If I was creating an API of my own I would enter my schema at this point. Since I am using the sample, I don’t need to do this. Either way, I click on Create to proceed:

The GraphQL schema type defines the entry points for the operations on the data. All of the data stored on behalf of a particular schema must be accessible using a path that begins at one of these entry points. The console provides me with an endpoint and key for my API:

It also provides me with guidance and a set of fully functional sample apps that I can clone:

When I clicked Create, AWS AppSync created a pair of Amazon DynamoDB tables for me. I can click Data Sources to see them:

I can also see and modify my schema, issue queries, and modify an assortment of settings for my API.

Let’s take a quick look at each new feature…

Console Log Access
The AWS AppSync Console already allows me to issue queries and to see the results, and now provides access to relevant log entries. In order to see the entries, I must enable logs (as detailed below), open up the LOGS, and check the checkbox. Here’s a simple mutation query that adds a new event. I enter the query and click the arrow to test it:

I can click VIEW IN CLOUDWATCH for a more detailed view:

To learn more, read Test and Debug Resolvers.

Console Testing with Mock Data
You can now create a context object in the console and have it passed to one of your resolvers for testing purposes. I’ll add a testResolver item to my schema:

Then I locate it on the right-hand side of the Schema page and click Attach:

I choose a data source (this is for testing and the actual source will not be accessed), and use the Put item mapping template:

Then I click Select test context, choose Create New Context, assign a name to my test context, and click Save (as you can see, the test context contains the arguments from the query along with values to be returned for each field of the result):
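As a rough sketch of what such a test context looks like (assuming a putPost-style resolver; the exact fields depend on your query and resolver), the console stores it as JSON along these lines:

  {
    "arguments": {
      "id": "1",
      "title": "First Post"
    },
    "result": {
      "id": "1",
      "title": "First Post"
    }
  }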

After I save the new Resolver, I click Test to see the request and the response:

Subscription Resolvers
Your AWS AppSync application can monitor changes to any data source using the @aws_subscribe GraphQL schema directive and defining a Subscription type. The AWS AppSync client SDK connects to AWS AppSync using MQTT over Websockets and the application is notified after each mutation. You can now attach resolvers (which convert GraphQL payloads into the protocol needed by the underlying storage system) to your subscription fields and perform authorization checks when clients attempt to connect. This allows you to perform the same fine grained authorization routines across queries, mutations, and subscriptions.
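For example, using the putPost mutation and Post type from the CloudFormation sample later in this post, a subscription that notifies clients about each new post would look roughly like this:

  type Subscription {
    addedPost: Post
    @aws_subscribe(mutations: ["putPost"])
  }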

To learn more about this feature, read Real-Time Data.

Batch GraphQL Operations
Your resolvers can now make use of DynamoDB batch operations that span one or more tables in a region. This allows you to use a list of keys in a single query, read records from multiple tables, write records in bulk to multiple tables, and conditionally write or delete related records across multiple tables.
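As a sketch of the shape involved (the Posts table name and keys here are placeholders; see the tutorial linked below for the authoritative format), a BatchGetItem request mapping template looks something like this:

  {
    "version": "2018-05-29",
    "operation": "BatchGetItem",
    "tables": {
      "Posts": {
        "keys": [
          { "id": { "S": "1" } },
          { "id": { "S": "2" } }
        ],
        "consistentRead": true
      }
    }
  }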

In order to use this feature, the IAM role that you use to access your tables must grant access to DynamoDB’s BatchGetItem and BatchWriteItem operations.
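A minimal policy statement granting that access might look like this (the region, account ID, and table ARN are placeholders):

  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "dynamodb:BatchGetItem",
          "dynamodb:BatchWriteItem"
        ],
        "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/Posts"
      }
    ]
  }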

To learn more, read the DynamoDB Batch Resolvers tutorial.

CloudWatch Logs Support
You can now tell AWS AppSync to log API requests to CloudWatch Logs. Click on Settings and Enable logs, then choose the IAM role and the log level:
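If you manage your API with CloudFormation (see the next section), a sketch of the equivalent setting is the LogConfig property on the GraphQLApi resource, assuming a logging role defined elsewhere in the template; the field log level can be NONE, ERROR, or ALL:

  AppSyncGraphQLApi:
    Type: "AWS::AppSync::GraphQLApi"
    Properties:
      Name: "PostApi"
      AuthenticationType: "API_KEY"
      LogConfig:
        CloudWatchLogsRoleArn: !GetAtt AppSyncLogRole.Arn
        FieldLogLevel: "ALL"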

CloudFormation Support
You can use the following CloudFormation resource types in your templates to define AWS AppSync resources:

AWS::AppSync::GraphQLApi – Defines an AppSync API, including its name and authentication type.

AWS::AppSync::ApiKey – Defines an API key that clients can use to access the API.

AWS::AppSync::GraphQLSchema – Defines a GraphQL schema.

AWS::AppSync::DataSource – Defines a data source.

AWS::AppSync::Resolver – Defines a resolver by referencing a schema and a data source, and includes request and response mapping templates.

Here’s a simple schema definition in YAML form:

  AppSyncSchema:
    Type: "AWS::AppSync::GraphQLSchema"
    DependsOn:
      - AppSyncGraphQLApi
    Properties:
      ApiId: !GetAtt AppSyncGraphQLApi.ApiId
      Definition: |
        schema {
          query: Query
          mutation: Mutation
        }
        type Query {
          singlePost(id: ID!): Post
          allPosts: [Post]
        }
        type Mutation {
          putPost(id: ID!, title: String!): Post
        }
        type Post {
          id: ID!
          title: String!
        }
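
And here’s a sketch of a resolver for the putPost mutation defined above (the AppSyncPostDataSource data source is a placeholder, assumed to be defined elsewhere in the template):

  AppSyncPutPostResolver:
    Type: "AWS::AppSync::Resolver"
    DependsOn:
      - AppSyncSchema
    Properties:
      ApiId: !GetAtt AppSyncGraphQLApi.ApiId
      TypeName: "Mutation"
      FieldName: "putPost"
      DataSourceName: !GetAtt AppSyncPostDataSource.Name
      RequestMappingTemplate: |
        {
          "version": "2017-02-28",
          "operation": "PutItem",
          "key": {
            "id": $util.dynamodb.toDynamoDBJson($ctx.args.id)
          },
          "attributeValues": {
            "title": $util.dynamodb.toDynamoDBJson($ctx.args.title)
          }
        }
      ResponseMappingTemplate: "$util.toJson($ctx.result)"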

Available Now
These new features are available now and you can start using them today!

Jeff;

The answers to your questions for Eben Upton

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/eben-q-a-1/

Before Easter, we asked you to tell us your questions for a live Q & A with Raspberry Pi Trading CEO and Raspberry Pi creator Eben Upton. The variety of questions and comments you sent was wonderful, and while we couldn’t get to them all, we picked a handful of the most common to grill him on.

You can watch the video below — though due to this being the first pancake of our live Q&A videos, the sound is a bit iffy — or read Eben’s answers to the first five questions today. We’ll follow up with the rest in the next few weeks!

Video: Live Q&A with Eben Upton, creator of the Raspberry Pi

Any plans for 64-bit Raspbian?

Raspbian is effectively 32-bit Debian built for the ARMv6 instruction-set architecture supported by the ARM11 processor in the first-generation Raspberry Pi. So maybe the question should be: “Would we release a version of our operating environment that was built on top of 64-bit ARM Debian?”

And the answer is: “Not yet.”

When we released the Raspberry Pi 3 Model B+, we released an operating system image on the same day; the wonderful thing about that image is that it runs on every Raspberry Pi ever made. It even runs on the alpha boards from way back in 2011.

That deep backwards compatibility is really important for us, in large part because we don’t want to orphan our customers. If someone spent $35 on an older-model Raspberry Pi five or six years ago, they still spent $35, so it would be wrong for us to throw them under the bus.

So, if we were going to do a 64-bit version, we’d want to keep doing the 32-bit version, and then that would mean our efforts would be split across the two versions; and remember, we’re still a very small engineering team. Never say never, but it would be a big step for us.

For people wanting a 64-bit operating system, there are plenty of good third-party images out there, including SUSE Linux Enterprise Server.

Given that the 3B+ includes 5GHz wireless and Power over Ethernet (PoE) support, why would manufacturers continue to use the Compute Module?

It’s a form-factor thing.

Very large numbers of people are using the bigger product in an industrial context, and it’s well engineered for that: it has module certification, wireless on board, and now PoE support. But there are use cases that can’t accommodate this form factor. For example, NEC displays: we’ve had this great relationship with NEC for a couple of years now where a lot of their displays have a socket in the back that you can put a Compute Module into. That wouldn’t work with the 3B+ form factor.

An NEC display with a Raspberry Pi Compute Module slotted into the back

What are some industrial uses/products Raspberry Pi is used with?

The NEC displays are a good example of the broader trend of using Raspberry Pi in digital signage.

A Raspberry Pi running the wait time signage at The Wizarding World of Harry Potter, Universal Studios.
Image c/o thelonelyredditor1

If you see a monitor at a station, or an airport, or a recording studio, and you look behind it, it’s amazing how often you’ll find a Raspberry Pi sitting there. The original Raspberry Pi was particularly strong for multimedia use cases, so we saw uptake in signage very early on.

The Los Alamos Raspberry Pi supercomputer, an array of many Raspberry Pis

Another great example is the Los Alamos National Laboratory building supercomputers out of Raspberry Pis. Many high-end supercomputers now are built using white-box hardware — just regular PCs connected together using some networking fabric — and a collection of Raspberry Pi units can serve as a scale model of that. The Raspberry Pi has less processing power, less memory, and less networking bandwidth than the PC, but it has a balanced amount of each. So if you don’t want to let your apprentice supercomputer engineers loose on your expensive supercomputer, a cluster of Raspberry Pis is a good alternative.

Why is there no power button on the Raspberry Pi?

“Once you start, where do you stop?” is a question we ask ourselves a lot.

There are a whole bunch of useful things that we haven’t included in the Raspberry Pi by default. We don’t have a power button, we don’t have a real-time clock, and we don’t have an analogue-to-digital converter — those are probably the three most common requests. And the issue with them is that they each cost a bit of money, they’re each only useful to a minority of users, and even that minority often can’t agree on exactly what they want. Some people would like a power button that is literally a physical analogue switch between the 5V input and the rest of the board, while others would like something a bit more like a PC power button, which is partway between a physical switch and a ‘shutdown’ button. There’s no consensus about what sort of power button we should add.

So the answer is: accessories. By leaving a feature off the board, we’re not taxing the majority of people who don’t want the feature. And of course, we create an opportunity for other companies in the ecosystem to create and sell accessories to those people who do want them.

The Adafruit Push-button Power Switch Breakout is one of many accessories that fill in the gaps for makers.

We have this neat way of figuring out which features to include by default: we divide a component’s cost by the fraction of people who want it. If you have a 20 cent component that’s going to be used by a fifth of people, we treat it as a $1 component, and it has to fight its way in against the $1 components that will be used by almost everybody.

Do you think that Raspberry Pi is the future of the Internet of Things?

Absolutely, Raspberry Pi is the future of the Internet of Things!

In practice, most of the viable early IoT use cases are in the commercial and industrial spaces rather than the consumer space. Maybe in ten years’ time, IoT will be about putting 10-cent chips into light switches, but right now there’s so much money to be saved by putting automation into factories that you don’t need 10-cent components to address the market. Last year, roughly 2 million $35 Raspberry Pi units went into commercial and industrial applications, and many of those are what you’d call IoT applications.

So I think we’re the future of a particular slice of IoT. And we have ten years to get our price point down to 10 cents 🙂

The post The answers to your questions for Eben Upton appeared first on Raspberry Pi.

Reddit Copyright Complaints Jump 138% But Almost Half Get Rejected

Post Syndicated from Andy original https://torrentfreak.com/reddit-copyright-complaints-jump-138-but-almost-half-get-rejected-180411/

So-called ‘transparency reports’ are becoming increasingly popular with Internet-based platforms and their users. Among other things, they provide much-needed insight into how outsiders attempt to censor content published online and what actions are taken in response.

Google first started publishing its report in 2010, Twitter followed in 2012, and they’ve now been joined by a multitude of major companies including Microsoft, Facebook and Cloudflare.

As one of the world’s most recognized sites, Reddit joined the transparency party fairly late, publishing its first report in early 2015. While light on detail, it revealed that in the previous year the site received just 218 requests to remove content, 81% of which were DMCA-style copyright notices. A significant 62% of those copyright-related requests were rejected.

Over time, Reddit’s reporting has become a little more detailed. Last April it revealed that in 2016, the platform received ‘just’ 3,294 copyright removal requests for the entire year. However, what really caught the eye was how many notices were rejected: Reddit removed content in just 610 instances, a rejection rate of 81%.

Having been a year since Reddit’s last report, the company has just published its latest edition, covering the period January 1, 2017 to December 31, 2017.

“Reddit publishes this transparency report every year as part of our ongoing commitment to keep you aware of the trends on the various requests regarding private Reddit user account information or removal of content posted to Reddit,” the company said in a statement.

“Reddit believes that maintaining this transparency is extremely important. We want you to be aware of this information, consider it carefully, and ask questions to keep us accountable.”

The detailed report covers a wide range of topics, including government requests for the preservation or production of user information (there were 310) and even an instruction to monitor one Reddit user’s activities in real time via a so-called ‘Trap and Trace’ order.

In copyright terms, there has been significant movement. In 2017, Reddit received 7,825 notifications of alleged copyright infringement under the Digital Millennium Copyright Act, up roughly 138% over the 3,294 notifications received in 2016.

For a platform of Reddit’s unquestionable size, these volumes are not big. While the massive percentage increase is notable, the 2017 total still works out to only around 20 complaints per day. For comparison, Google receives millions every week.

But perhaps most telling is that those 7,800-plus DMCA-style takedown notices resulted in Reddit carrying out just 4,352 removals. This means that for whatever reasons (Reddit doesn’t specify), 3,473 requests were denied, a rejection rate of 44.38%. Google, on the other hand, removes around 90% of the content reported to it.

DMCA notices can be declared invalid for a number of reasons, from incorrect formatting through to flat-out abuse. In many cases, copyright law is incorrectly applied and it’s not unknown for complainants to attempt a DMCA takedown to stifle speech or perceived competition.

Reddit says it tries to take all things into consideration before removing content.

“Reddit reviews each DMCA takedown notice carefully, and removes content where a valid report is received, as required by the law,” the company says.

“Reddit considers whether the reported content may fall under an exception listed in the DMCA, such as ‘fair use,’ and may ask for clarification that will assist in the review of the removal request.”

Considering the number of community-focused “subreddits” dedicated to piracy (not just general discussion, but actual links to content), the low number of copyright notices received by Reddit continues to baffle.

There are sections in existence right now offering many links to movies and TV shows hosted on various file-hosting sites. They’re the type of links that are targeted all the time whenever they appear in Google search, but copyright owners don’t appear to notice or care about them on Reddit.

Finally, it would be nice if Reddit could provide more information in next year’s report, including detail on why so many requests are rejected. Perhaps regular submission of notices to the Lumen Database would be something Reddit would consider for the future.

Reddit’s Transparency Report for 2017 can be found here.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.