Tag Archives: making

OMG The Stupid It Burns

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/04/omg-stupid-it-burns.html

This article, pointed out by @TheGrugq, is stupid enough that it’s worth rebutting.

The article starts with the question “Why did the lessons of Stuxnet, Wannacry, Heartbleed and Shamoon go unheeded?”. It then proceeds to ignore the lessons of those things.
Some of the actual lessons should be things like how Stuxnet crossed air gaps, how Wannacry spread through flat Windows networking, how Heartbleed comes from technical debt, and how Shamoon furthers state aims by causing damage.
But this article doesn’t cover the technical lessons. Instead, it thinks the lesson should be the moral lesson, that we should take these things more seriously. But that’s stupid. It’s the sort of lesson taught by people who know nothing about the topic. When you have nothing of value to contribute to a topic you can always take the moral high road and criticize everyone for being morally weak for not taking it more seriously. Obviously, since doctors haven’t cured cancer yet, it’s because they don’t take the problem seriously.
The article continues to ignore the lesson of these cyber attacks and instead regales us with a list of military lessons from WW I and WW II. This makes the same flaw that many in the military make, trying to understand cyber through analogies with the real world. It’s not that such lessons could have no value, it’s that this article contains a poor list of them. It seems to consist of a random list of events that appeal to the author rather than events that have bearing on cybersecurity.
Then, in case we don’t get the point, the article bullies us with hyperbole, cliches, buzzwords, bombastic language, famous quotes, and citations. It’s hard to see how most of them actually apply to the text. Rather, it seems like they are included simply because he really really likes them.
The article invests much effort in discussing the buzzword “OODA loop”. Most attacks in cyberspace don’t have one. Instead, attackers flail around, trying lots of random things, overcoming defense with brute-force rather than an understanding of what’s going on. That’s obviously the case with Wannacry: it was an accident, with the perpetrator experimenting with what would happen if they added the ETERNALBLUE exploit to their existing ransomware code. The consequence was beyond anybody’s ability to predict.
You might claim that this is just the first stage, that they’ll loop around, observe Wannacry’s effects, orient themselves, decide, then act upon what they learned. Nope. Wannacry burned the exploit. It’s essentially removed any vulnerable systems from the public Internet, thereby making it impossible to use what they learned. It’s still active a year later, with infected systems behind firewalls busily scanning the Internet so that if you put a new system online that’s vulnerable, it’ll be taken offline within a few hours, before any other evildoer can take advantage of it.
See what I’m doing here? Learning the actual lessons of things like Wannacry? The thing the above article fails to do??
The article has a humorous paragraph on “defense in depth”, misunderstanding the term. To be fair, it’s the cybersecurity industry’s fault: they adopted and then redefined the term. That’s why there are two separate articles on Wikipedia: one for the old military term (as used in this article) and one for the new cybersecurity term.
As used in the cybersecurity industry, “defense in depth” means having multiple layers of security. Many organizations put all their defensive efforts on the perimeter, and none inside a network. The idea of “defense in depth” is to put more defenses inside the network. For example, instead of just one firewall at the edge of the network, put firewalls inside the network to segment different subnetworks from each other, so that a ransomware infection in the customer support computers doesn’t spread to sales and marketing computers.
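As a minimal sketch of that idea, assuming two hypothetical internal subnets (say, 10.0.20.0/24 for customer support and 10.0.30.0/24 for sales and marketing), a segmentation rule on an internal Linux firewall might look like this:
iptables -A FORWARD -s 10.0.20.0/24 -d 10.0.30.0/24 -j DROP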
The article talks about exploiting WiFi chips to bypass defense in depth measures like browser sandboxes. This is conflating different types of attacks. A WiFi attack is usually considered a local attack, from somebody next to you in a bar, rather than a remote attack from a server in Russia. Moreover, far from disproving “defense in depth”, such WiFi attacks highlight the need for it. Namely, phones need to be designed so that successful exploitation of other microprocessors (namely, the WiFi, Bluetooth, and cellular baseband chips) can’t directly compromise the host system. In other words, once exploited with “Broadpwn”, a hacker would need to extend the exploit chain with another vulnerability in the host’s Broadcom WiFi driver rather than immediately exploiting a DMA attack across PCIe. This suggests that if PCIe is used to interface with peripherals in the phone, an IOMMU should be used, for “defense in depth”.
Cybersecurity is a young field. There are lots of useful things that outsider non-techies can teach us. Lessons from military history would be well-received.
But that’s not this story. Instead, this story is by an outsider telling us we don’t know what we are doing, that they do, who then proceeds to prove they don’t know what they are doing. Their argument is based on moral suasion and on bullying us with what appears on the surface to be intellectual rigor, but which is in fact devoid of anything smart.
My fear, here, is that I’m going to be in a meeting where somebody has read this pretentious garbage, explaining to me why “defense in depth” is wrong and how we need to OODA faster. I’d rather nip this in the bud, pointing out that if you found anything interesting in that article, you are wrong.

Russia Blacklists 250 Pirate Sites For Displaying Gambling Ads

Post Syndicated from Andy original https://torrentfreak.com/russia-blacklists-250-pirate-sites-for-displaying-gambling-ads-180421/

Blocking alleged pirate sites is usually a question of proving that they’re involved in infringement and then applying to the courts for an injunction.

In Europe, the process is becoming easier, largely thanks to an EU ruling that permits blocking on copyright grounds.

As reported over the past several years, Russia is taking its blocking processes very seriously. Copyright holders can now have sites blocked in just a few days if they can show that their operators are unresponsive to takedown demands.

This week, however, Russian authorities have again shown that copyright infringement doesn’t have to be the only Achilles’ heel of pirate sites.

Back in 2006, online gambling was completely banned in Russia. Three years later in 2009, land-based gambling was also made illegal in all but four specified regions. Then, in 2012, the Russian Supreme Court ruled that ISPs must block access to gambling sites, something they had previously refused to do.

That same year, telecoms watchdog Rozcomnadzor began publishing a list of banned domains and within those appeared some of the biggest names in gambling. Many shut down access to customers located in Russia but others did not. In response, Rozcomnadzor also began targeting sites that simply offered information on gambling.

Fast forward more than six years and Russia is still taking a hard line against gambling operators. However, it now finds itself in a position where the existence of gambling material can also assist the state in its quest to take down pirate sites.

Following a complaint from the Federal Tax Service of Russia, Rozcomnadzor has again added a large number of ‘pirate’ sites to the country’s official blocklist after they advertised gambling-related products and services.

“Rozcomnadzor, at the request of the Federal Tax Service of Russia, added more than 250 pirate online cinemas and torrent trackers to the unified register of banned information, which hosted illegal advertising of online casinos and bookmakers,” the telecoms watchdog reported.

Almost immediately, 200 of the sites were blocked by local ISPs after they failed to remove the advertising when told to do so. The remaining 50 sites still have breathing space: their bans can be suspended if the offending ads are removed within a timeframe specified by the authorities, which has not yet run out.

“Information on a significant number of pirate resources with illegal advertising was received by Rozcomnadzor from citizens and organizations through a hotline that operates on the site of the Unified Register of Prohibited Information, all of which were sent to the Federal Tax Service for making decisions on restricting access,” the watchdog revealed.

Links between pirate sites and gambling companies have traditionally been close over the years, with advertising for many top-tier brands appearing on portals large and small. However, in recent times the prevalence of gambling ads has diminished, in part due to campaigns conducted in the United States, Europe, and the UK.

For pirate site operators in Russia, the decision to carry gambling ads now comes with the added risk of being blocked. Only time will tell whether any reduction in traffic is considered serious enough to warrant a gambling boycott of their own.

RDS for Oracle: Extending Outbound Network Access to use SSL/TLS

Post Syndicated from Surya Nallu original https://aws.amazon.com/blogs/architecture/rds-for-oracle-extending-outbound-network-access-to-use-ssltls/

In December 2016, we launched the Outbound Network Access functionality for Amazon RDS for Oracle, enabling customers to use their RDS for Oracle database instances to communicate with external web endpoints using the utl_http and utl_tcp packages, and to send emails through utl_smtp. We extended the functionality by adding the option of using custom DNS servers, allowing such outbound network access to make use of any DNS server a customer chooses. These releases enabled HTTP, TCP and SMTP communication originating from RDS for Oracle instances – limited to non-secure (non-SSL) channels.

To overcome this limitation, we recently published a whitepaper that guides you through the process of creating customized Oracle wallet bundles on your RDS for Oracle instances. By making use of such wallets, you can now extend the Outbound Network Access capability so that external communications happen over secure (SSL/TLS) connections. This opens up new use cases for your RDS for Oracle instances.

With the right set of certificates imported into your RDS for Oracle instances (through Oracle wallets), your database instances can now:

  • Communicate with an HTTPS endpoint: Using utl_http, access a resource such as https://status.aws.amazon.com/robots.txt (see the sketch after this list)
  • Download files from Amazon S3 securely: Using a presigned URL from Amazon S3, you can now download any file over SSL
  • Extend Oracle database links to use SSL: Database links between RDS for Oracle instances can now use SSL, as long as the instances have the SSL option installed
  • Send email over SMTPS: You can now integrate with Amazon SES to send emails from your database instances, or with any other generic SMTPS provider that can be integrated in the same way
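
As a rough illustration of the first item, here is a minimal PL/SQL sketch of an HTTPS request through utl_http. The wallet path and password below are hypothetical placeholders, and the wallet itself is assumed to have already been created and uploaded following the whitepaper:

DECLARE
  l_req  utl_http.req;
  l_resp utl_http.resp;
  l_line VARCHAR2(32767);
BEGIN
  -- Point utl_http at the wallet holding the trusted CA certificates
  -- (the directory path and password here are placeholders).
  utl_http.set_wallet('file:/path/to/wallet', 'WalletPassword123');

  l_req  := utl_http.begin_request('https://status.aws.amazon.com/robots.txt');
  l_resp := utl_http.get_response(l_req);

  -- Read the response body line by line until the end of the stream.
  BEGIN
    LOOP
      utl_http.read_line(l_resp, l_line, TRUE);
      dbms_output.put_line(l_line);
    END LOOP;
  EXCEPTION
    WHEN utl_http.end_of_body THEN
      utl_http.end_response(l_resp);
  END;
END;
/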

These are just a few high-level examples of the new use cases that the whitepaper opens up. As a reminder, always ensure that security best practices are in place when making use of Outbound Network Access (as detailed in the whitepaper).

About the Author

Surya Nallu is a Software Development Engineer on the Amazon RDS for Oracle team.

Hackspace magazine 6: Paper Engineering

Post Syndicated from Andrew Gregory original https://www.raspberrypi.org/blog/hackspace-magazine-6/

HackSpace magazine is back with our brand-new issue 6, available for you on shop shelves, in your inbox, and on our website right now.

Inside Hackspace magazine 6

Paper is probably the first thing you ever used for making, and for good reason: in no other medium can you iterate through 20 designs at the cost of only a few pennies. We’ve roped in Rob Ives to show us how to make a barking paper dog with moveable parts and a cam mechanism. Even better, the magazine includes this free paper automaton for you to make yourself. That’s right: free!

At the other end of the scale, there’s the forge, where heat, light, and noise combine to create immutable steel. We speak to Alec Steele, YouTuber, blacksmith, and philosopher, about his amazingly beautiful Damascus steel creations, and about why there’s no difference between grinding a knife and blowing holes in a mountain to build a road through it.

Do it yourself

You’ve heard of reading glasses — how about glasses that read for you? Using a camera, optical character recognition software, and a text-to-speech engine (and of course a Raspberry Pi to hold it all together), reader Andrew Lewis has hacked together his own system to help deal with age-related macular degeneration.

It’s the definition of hacking: here’s a problem, there’s no solution in the shops, so you go and build it yourself!

Radio

60 years ago, the cutting edge of home hacking was the transistor radio. Before the internet was dreamt of, the transistor radio made the world smaller and brought people together. Nowadays, the components you need to build a radio are cheap and easily available, so if you’re in any way electronically inclined, building a radio is an ideal excuse to dust off your soldering iron.

Tutorials

If you’re a 12-month subscriber (if you’re not, you really should be), you’ve no doubt been thinking of all sorts of things to do with the Adafruit Circuit Playground Express we gave you for free. How about a sewable circuit for a canvas bag? Use the accelerometer to detect patterns of movement — walking, for example — and flash a series of lights in response. It’s clever, fun, and an easy way to add some programmable fun to your shopping trips.


We’re also making gin, hacking a children’s toy car to unlock more features, and getting started with robot sumo to fill the void left by the cancellation of Robot Wars.

All this, plus an 11-metre tall mechanical miner, in HackSpace magazine issue 6 — subscribe here from just £4 an issue or get the PDF version for free. You can also find HackSpace magazine in WHSmith, Tesco, Sainsbury’s, and independent newsagents in the UK. If you live in the US, check out your local Barnes & Noble, Fry’s, or Micro Center next week. We’re also shipping to stores in Australia, Hong Kong, Canada, Singapore, Belgium, and Brazil, so be sure to ask your local newsagent whether they’ll be getting HackSpace magazine.

Married Torrent Tracker Couple Settle With BREIN

Post Syndicated from Ernesto original https://torrentfreak.com/married-torrent-tracker-couple-settles-with-brein-180420/

Dutch anti-piracy group BREIN has targeted operators and uploaders of pirate sites for more than a decade.

The group’s main goal is to shut the sites down. Instead of getting embroiled in dozens of lengthy court battles, it prefers to settle the matter with those responsible.

This week, BREIN announced another victory against a small torrent site, Snuffelland. The private tracker was targeted at a Dutch audience and the anti-piracy group managed to track down its operators.

According to BREIN, the site was run by a married couple from the town of Montfort, a 65-year-old man and a 51-year-old woman. In addition, the group also identified one of the uploaders, a 60-year-old man from Heukelum.

All three are unemployed and their financial position was taken into account in determining the scale of the settlement. The couple agreed to pay 2,500 euros and the uploader settled for 650 euros, with a threat of further penalties if they are caught again.

The private tracker itself was shut down and replaced by a message that was provided by BREIN.

“Making copyright-protected works available infringes the copyrights of the entitled rightsholder. Downloading from unauthorized sources is also prohibited in the Netherlands,” the message reads.

“For providers of legal content, snuffelland.org refers you to thecontentmap.nl and film.nl,” it adds.

These types of shutdowns are nothing new. BREIN has taken down hundreds of smaller sites in the past. However, only in recent years has the group started to publish these settlement details.

That serves as a deterrent, but it also provides some more insight into how the group prefers to resolve these cases, which appears to be relatively gently. In this case, it also disproves the notion that torrent sites are run by youngsters.

Implementing safe AWS Lambda deployments with AWS CodeDeploy

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/implementing-safe-aws-lambda-deployments-with-aws-codedeploy/

This post courtesy of George Mao, AWS Senior Serverless Specialist – Solutions Architect

AWS Lambda and AWS CodeDeploy recently made it possible to automatically shift incoming traffic between two function versions based on a preconfigured rollout strategy. This new feature allows you to gradually shift traffic to the new function. If there are any issues with the new code, you can quickly roll back and control the impact to your application.

Previously, you had to manually move 100% of traffic from the old version to the new version. Now, you can have CodeDeploy automatically execute pre- or post-deployment tests and automate a gradual rollout strategy. Traffic shifting is built right into the AWS Serverless Application Model (SAM), making it easy to define and deploy your traffic shifting capabilities. SAM is an extension of AWS CloudFormation that provides a simplified way of defining serverless applications.
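
Under the hood, this feature builds on Lambda alias routing configuration. As a hedged sketch of the equivalent manual step, assuming a function named myFunction (a placeholder) with a “live” alias on version 1 and a newly published version 2, you could weight 10% of traffic to version 2 like this:

aws lambda update-alias --function-name myFunction --name live --routing-config '{"AdditionalVersionWeights":{"2":0.1}}'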

In this post, I show you how to use SAM, CloudFormation, and CodeDeploy to accomplish an automated rollout strategy for safe Lambda deployments.

Scenario

For this walkthrough, you write a Lambda application that returns a count of the S3 buckets that you own. You deploy it and use it in production. Later on, you receive requirements that tell you that you need to change your Lambda application to count only buckets that begin with the letter “a”.

Before you make the change, you need to be sure that your new Lambda application works as expected. If it does have issues, you want to minimize the number of impacted users and roll back easily. To accomplish this, you create a deployment process that publishes the new Lambda function, but does not send any traffic to it. You use CodeDeploy to execute a PreTraffic test to ensure that your new function works as expected. After the test succeeds, CodeDeploy automatically shifts traffic gradually to the new version of the Lambda function.

Your Lambda function is exposed as a REST service via an Amazon API Gateway deployment. This makes it easy to test and integrate.

Prerequisites

To execute the SAM and CloudFormation deployment, you must have the following IAM permissions:

  • cloudformation:*
  • lambda:*
  • codedeploy:*
  • iam:create*

You may use the AWS SAM Local CLI or the AWS CLI to package and deploy your Lambda application. If you choose to use SAM Local, be sure to install it onto your system. For more information, see AWS SAM Local Installation.

All of the code used in this post can be found in this GitHub repository: https://github.com/aws-samples/aws-safe-lambda-deployments.

Walkthrough

For this post, use SAM to define your resources because it comes with built-in CodeDeploy support for safe Lambda deployments.  The deployment is handled and automated by CloudFormation.

SAM allows you to define your Serverless applications in a simple and concise fashion, because it automatically creates all necessary resources behind the scenes. For example, if you do not define an execution role for a Lambda function, SAM automatically creates one. SAM also creates the CodeDeploy application necessary to drive the traffic shifting, as well as the IAM service role that CodeDeploy uses to execute all actions.

Create a SAM template

To get started, write your SAM template and call it template.yaml.

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: An example SAM template for Lambda Safe Deployments.

Resources:

  returnS3Buckets:
    Type: AWS::Serverless::Function
    Properties:
      Handler: returnS3Buckets.handler
      Runtime: nodejs6.10
      AutoPublishAlias: live
      Policies:
        - Version: "2012-10-17"
          Statement: 
          - Effect: "Allow"
            Action: 
              - "s3:ListAllMyBuckets"
            Resource: '*'
      DeploymentPreference:
          Type: Linear10PercentEvery1Minute
          Hooks:
            PreTraffic: !Ref preTrafficHook
      Events:
        Api:
          Type: Api
          Properties:
            Path: /test
            Method: get

  preTrafficHook:
    Type: AWS::Serverless::Function
    Properties:
      Handler: preTrafficHook.handler
      Policies:
        - Version: "2012-10-17"
          Statement: 
          - Effect: "Allow"
            Action: 
              - "codedeploy:PutLifecycleEventHookExecutionStatus"
            Resource:
              !Sub 'arn:aws:codedeploy:${AWS::Region}:${AWS::AccountId}:deploymentgroup:${ServerlessDeploymentApplication}/*'
        - Version: "2012-10-17"
          Statement: 
          - Effect: "Allow"
            Action: 
              - "lambda:InvokeFunction"
            Resource: !Ref returnS3Buckets.Version
      Runtime: nodejs6.10
      FunctionName: 'CodeDeployHook_preTrafficHook'
      DeploymentPreference:
        Enabled: false
      Timeout: 5
      Environment:
        Variables:
          NewVersion: !Ref returnS3Buckets.Version

This template creates two functions:

  • returnS3Buckets
  • preTrafficHook

The returnS3Buckets function is where your application logic lives. It’s a simple piece of code that uses the AWS SDK for JavaScript in Node.js to call the Amazon S3 listBuckets API action and return the number of buckets.

'use strict';

var AWS = require('aws-sdk');
var s3 = new AWS.S3();

exports.handler = (event, context, callback) => {
	console.log("I am here! " + context.functionName  +  ":"  +  context.functionVersion);

	s3.listBuckets(function (err, data){
		if(err){
			console.log(err, err.stack);
			callback(null, {
				statusCode: 500,
				body: "Failed!"
			});
		}
		else{
			var allBuckets = data.Buckets;

			console.log("Total buckets: " + allBuckets.length);
			callback(null, {
				statusCode: 200,
				body: allBuckets.length
			});
		}
	});	
}

Review the key parts of the SAM template that defines returnS3Buckets:

  • The AutoPublishAlias attribute instructs SAM to automatically publish a new version of the Lambda function for each new deployment and link it to the live alias.
  • The Policies attribute specifies additional policy statements that SAM adds onto the automatically generated IAM role for this function. The first statement provides the function with permission to call listBuckets.
  • The DeploymentPreference attribute configures the type of rollout pattern to use. In this case, you are shifting traffic in a linear fashion, moving 10% of traffic every minute to the new version. For more information about supported patterns, see Serverless Application Model: Traffic Shifting Configurations.
  • The Hooks attribute specifies that you want to execute the preTrafficHook Lambda function before CodeDeploy automatically begins shifting traffic. This function should perform validation testing on the newly deployed Lambda version. This function invokes the new Lambda function and checks the results. If you’re satisfied with the tests, instruct CodeDeploy to proceed with the rollout via an API call to: codedeploy.putLifecycleEventHookExecutionStatus.
  • The Events attribute defines an API-based event source that can trigger this function. It accepts requests on the /test path using an HTTP GET method.
Next, review the code for the preTrafficHook function itself:

'use strict';

const AWS = require('aws-sdk');
const codedeploy = new AWS.CodeDeploy({apiVersion: '2014-10-06'});
var lambda = new AWS.Lambda();

exports.handler = (event, context, callback) => {

	console.log("Entering PreTraffic Hook!");
	
	// Read the DeploymentId & LifecycleEventHookExecutionId from the event payload
    var deploymentId = event.DeploymentId;
	var lifecycleEventHookExecutionId = event.LifecycleEventHookExecutionId;

	var functionToTest = process.env.NewVersion;
	console.log("Testing new function version: " + functionToTest);

	// Perform validation of the newly deployed Lambda version
	var lambdaParams = {
		FunctionName: functionToTest,
		InvocationType: "RequestResponse"
	};

	var lambdaResult = "Failed";
	lambda.invoke(lambdaParams, function(err, data) {
		if (err){	// an error occurred
			console.log(err, err.stack);
			lambdaResult = "Failed";
		}
		else{	// successful response
			var result = JSON.parse(data.Payload);
			console.log("Result: " +  JSON.stringify(result));

			// Check the response for valid results
			// The response will be a JSON payload with statusCode and body properties. ie:
			// {
			//		"statusCode": 200,
			//		"body": 51
			// }
			if(result.body == 9){	
				lambdaResult = "Succeeded";
				console.log ("Validation testing succeeded!");
			}
			else{
				lambdaResult = "Failed";
				console.log ("Validation testing failed!");
			}

			// Complete the PreTraffic Hook by sending CodeDeploy the validation status
			var params = {
				deploymentId: deploymentId,
				lifecycleEventHookExecutionId: lifecycleEventHookExecutionId,
				status: lambdaResult // status can be 'Succeeded' or 'Failed'
			};
			
			// Pass AWS CodeDeploy the prepared validation test results.
			codedeploy.putLifecycleEventHookExecutionStatus(params, function(err, data) {
				if (err) {
					// Validation failed.
					console.log('CodeDeploy Status update failed');
					console.log(err, err.stack);
					callback("CodeDeploy Status update failed");
				} else {
					// Validation succeeded.
					console.log('Codedeploy status updated successfully');
					callback(null, 'Codedeploy status updated successfully');
				}
			});
		}  
	});
}

The hook is hardcoded to check that the number of S3 buckets returned is 9.

Review the key parts of the SAM template that defines preTrafficHook:

  • The Policies attribute specifies additional policy statements that SAM adds onto the automatically generated IAM role for this function. The first statement provides permissions to call the CodeDeploy PutLifecycleEventHookExecutionStatus API action. The second statement provides permissions to invoke the specific version of the returnS3Buckets function to test.
  • This function has traffic shifting features disabled by setting Enabled: false under its DeploymentPreference.
  • The FunctionName attribute explicitly tells CloudFormation what to name the function. Otherwise, CloudFormation creates the function with the default naming convention: [stackName]-[FunctionName]-[uniqueID].  Name the function with the “CodeDeployHook_” prefix because the CodeDeployServiceRole role only allows InvokeFunction on functions named with that prefix.
  • Set the Timeout attribute to allow enough time to complete your validation tests.
  • Use an environment variable to inject the ARN of the newest deployed version of the returnS3Buckets function. The ARN allows the function to know the specific version to invoke and perform validation testing on.

Deploy the function

Your SAM template is all set and the code is written—you’re ready to deploy the function for the first time. Here’s how to do it via the SAM CLI. Replace “sam” with “aws cloudformation” to use the AWS CLI instead.

First, package the function. This command returns a CloudFormation importable file, packaged.yaml.

sam package --template-file template.yaml --s3-bucket mybucket --output-template-file packaged.yaml

Now deploy everything:

sam deploy --template-file packaged.yaml --stack-name mySafeDeployStack --capabilities CAPABILITY_IAM

At this point, both Lambda functions have been deployed within the CloudFormation stack mySafeDeployStack. The returnS3Buckets function has been deployed as version 1:

SAM automatically created a few things, including the CodeDeploy application, with the deployment pattern that you specified (Linear10PercentEvery1Minute). There is currently one deployment group, with no action, because no deployments have occurred. SAM also created the IAM service role that this CodeDeploy application uses:

There is a single managed policy attached to this role, which allows CodeDeploy to invoke any Lambda function that begins with “CodeDeployHook_”.

An API has been set up called safeDeployStack. It targets your Lambda function with the /test resource using the GET method. When you test the endpoint, API Gateway executes the returnS3Buckets function and it returns the number of S3 buckets that you own. In this case, it’s 51.

Publish a new Lambda function version

Now implement the requirements change, which is to make returnS3Buckets count only buckets that begin with the letter “a”. The code now looks like the following (see returnS3BucketsNew.js in GitHub):

'use strict';

var AWS = require('aws-sdk');
var s3 = new AWS.S3();

exports.handler = (event, context, callback) => {
	console.log("I am here! " + context.functionName  +  ":"  +  context.functionVersion);

	s3.listBuckets(function (err, data){
		if(err){
			console.log(err, err.stack);
			callback(null, {
				statusCode: 500,
				body: "Failed!"
			});
		}
		else{
			var allBuckets = data.Buckets;

			console.log("Total buckets: " + allBuckets.length);
			//callback(null, allBuckets.length);

			//  New Code begins here
			var counter=0;
			for(var i  in allBuckets){
				if(allBuckets[i].Name[0] === "a")
					counter++;
			}
			console.log("Total buckets starting with a: " + counter);

			callback(null, {
				statusCode: 200,
				body: counter
			});
			
		}
	});	
}

Repackage and redeploy with the same two commands as earlier:

sam package --template-file template.yaml --s3-bucket mybucket --output-template-file packaged.yaml
	
sam deploy --template-file packaged.yaml --stack-name mySafeDeployStack --capabilities CAPABILITY_IAM

CloudFormation understands that this is a stack update instead of an entirely new stack. You can see that reflected in the CloudFormation console:

During the update, CloudFormation deploys the new Lambda function as version 2 and adds it to the “live” alias. There is no traffic routing there yet. CodeDeploy now takes over to begin the safe deployment process.

The first thing CodeDeploy does is invoke the preTrafficHook function. Verify that this happened by reviewing the Lambda logs and metrics:

The function should progress successfully, invoke Version 2 of returnS3Buckets, and finally invoke the CodeDeploy API with a success code. After this occurs, CodeDeploy begins the predefined rollout strategy. Open the CodeDeploy console to review the deployment progress (Linear10PercentEvery1Minute):

Verify the traffic shift

During the deployment, verify that the traffic shift has started to occur by running the test periodically. As the deployment shifts towards the new version, a larger percentage of the responses return 9 instead of 51. These numbers match the total count of S3 buckets (51) and the count of buckets whose names begin with “a” (9).

A minute later, you see 10% more traffic shifting to the new version. The whole process takes 10 minutes to complete. After completion, open the Lambda console and verify that the “live” alias now points to version 2:

After 10 minutes, the deployment is complete and CodeDeploy signals success to CloudFormation and completes the stack update.

Check the results

If you invoke the function alias manually, you see the results of the new implementation.

aws lambda invoke --function-name [lambda arn to live alias] out.txt

You can also execute the prod stage of your API and verify the results by issuing an HTTP GET to the invoke URL:
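
For example, with a placeholder API ID and region in the invoke URL:

curl https://[api-id].execute-api.us-east-1.amazonaws.com/Prod/test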

Summary

This post has shown you how you can safely automate your Lambda deployments using the Lambda traffic shifting feature. You used the Serverless Application Model (SAM) to define your Lambda functions and configured CodeDeploy to manage your deployment patterns. Finally, you used CloudFormation to automate the deployment and updates to your function and PreTraffic hook.

Now that you know all about this new feature, you’re ready to begin automating Lambda deployments with confidence that things will work as designed. I look forward to hearing about what you’ve built with the AWS Serverless Platform.

IsoHunt Founder Returns With New Search Tool

Post Syndicated from Ernesto original https://torrentfreak.com/isohunt-founder-returns-with-new-search-tool-180419/

Of all the major torrent sites that dominated the Internet at the beginning of this decade, only a few remain.

One of the sites that fell prey to ever-increasing pressure from the entertainment industry was isoHunt.

Founded by the Canadian entrepreneur Gary Fung, the site was one of the early pioneers in the world of torrents, paving the way for many others. However, this spotlight also attracted the attention of the major movie studios.

After a lengthy legal battle, isoHunt’s founder eventually shut down the site in late 2013. This happened after Fung signed a settlement agreement with Hollywood for no less than $110 million, on paper at least.

Launching a new torrent search engine was never really an option, but Fung decided not to let his expertise go to waste. He focused his time and efforts on a new search project instead, which was unveiled to the public this week.

The new app called “WonderSwipe” has just been added to Apple’s iOS store. It’s a mobile search app that ties into Google’s backend, but with a different user interface. While it has nothing to do with file-sharing, we decided to reach out to isoHunt’s founder to find out more.

Fung tells us that he got the idea for the app because he was frustrated with Google’s default search options on the mobile platform.

“I find myself barely do any search on the smartphone, most of the time waiting until I get to my desktop. I ask why?” Fung tells us.

One of the main issues he identified is the fact that swiping is not an option. Instead, people end up browsing through dozens of mobile browser tabs. So, Fung took Google’s infrastructure and search power, making it swipeable.

“From a UI design perspective, I find swiping through photos on the first iPhone one of the most extraordinary advances in computing. It’s so easy that babies would be doing it before they even learn how to flip open a book!

“Bringing that ease of use to the central way of conducting mobile search and research is the initial eureka I had in starting work on WonderSwipe,” Fung adds.

That was roughly three years ago, and a few hours ago WonderSwipe finally made its way into the App store. Android users will have to wait for now, but the application will eventually be available on that platform as well.

In addition to swiping through search results, the app also promises faster article loading and browsing, a reader mode with condensed search results, and a hands-free mode with automated browsing where summaries are read out loud.

Of course, WonderSwipe is nothing like isoHunt ever was, apart from the fact that Google is a search engine that also links to torrents, indirectly.

This similarity was also brought up during the lawsuit with the MPAA, when Fung’s legal team likened isoHunt to Google in court. However, the Canadian entrepreneur doesn’t expect that Hollywood will have an issue with WonderSwipe in particular.

“isoHunt was similar to Google in how it worked as a search engine, but not in scope. Torrents are a small subset of all the webpages Google indexes,” Fung says.

“WonderSwipe’s aim is to find answers in all webpages, powered by Google search results. It presents results in extracted text and summaries with no connection to BitTorrent clients. As such, WonderSwipe can be bigger than isoHunt has ever been.”

Ironically, in recent years Hollywood has often criticized Google for linking to pirated content in its search results. These results will also be available through WonderSwipe.

However, Fung says that any copyright issues with WonderSwipe will have to be dealt with on the search engine level, by Google.

“If there are links to pirated content, tell search engines so they can take them down!” he says.

WonderSwipe is totally free and Fung tells us that he plans to monetize it with in-app purchases for pro features, and non-intrusive advertising that won’t slow down swiping or search results. More details on the future plans for the app are available here.

Backblaze at NAB 2018 in Las Vegas

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/backblaze-at-nab-2018-in-las-vegas/

Backblaze just returned from exhibiting at NAB in Las Vegas, April 9-12, where the response to our recent announcements was tremendous. In case you missed the news, Backblaze B2 Cloud Storage continues to extend its lead as the most affordable, high performance cloud on the planet.

Backblaze’s News at NAB

The Backblaze booth just before opening

What We Were Asked at NAB

Our booth was busy from start to finish with attendees interested in learning more about Backblaze and B2 Cloud Storage. Here are the questions we were asked most often in the booth.

Q. How long has Backblaze been in business?
A. The company was founded in 2007. Today, we have over 500 petabytes of data from customers in over 150 countries.

Q. Where is your data stored?
A. We have data centers in California and Arizona and expect to expand to Europe by the end of the year.

Q. How can your services be so inexpensive?
A. Backblaze’s goal from the beginning was to offer cloud backup and storage that was easy to use and affordable. All the existing options were simply too expensive to be viable, so we created our own infrastructure. Our purpose-built storage system — the Backblaze Storage Pod — is recognized as one of the most cost-efficient storage platforms available.

Q. Tell me about your hardware.
A. Backblaze’s Storage Pods hold 60 HDDs each, containing as much as 720TB of data per pod, stored using Reed-Solomon error correction. Storage Pods are grouped twenty at a time into a Vault, with the corresponding drives across those twenty pods forming a Tome.

Q. Where do you fit in the data workflow?
A. People typically use B2 for archiving completed projects. All data is readily available for download from B2, making it more convenient than offline storage. In addition, DAM and MAM systems such as CatDV, axle ai, Cantemo, and others have integrated with B2 to store raw images behind the proxies.

Q. Who uses B2 in the M&E business?
A. KLRU-TV, the PBS station in Austin, Texas, uses B2 to archive its entire 43-year catalog of Austin City Limits episodes and related materials. WunderVu, the production house for Pixvana, uses B2 to back up and archive the local storage systems on which they build virtual reality experiences for their customers.

Q. You’re the company that publishes the hard drive stats, right?
A. Yes, we are!

Were You at NAB?

If you were, we hope you stopped by the Backblaze booth to say hello. We’d like to hear what you saw at the show that was interesting or exciting. Please tell us in the comments.

AIY Projects 2: Google’s AIY Projects Kits get an upgrade

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/google-aiy-projects-2/

After the outstanding success of their AIY Projects Voice and Vision Kits, Google has announced the release of upgraded kits, complete with Raspberry Pi Zero WH, Camera Module, and preloaded SD card.

Google’s AIY Projects Kits

Google launched the AIY Projects Voice Kit last year, first as a cover gift with The MagPi magazine and later as a standalone product.

Makers needed to provide their own Raspberry Pi for the original kit. The new kits include everything you need, from Pi to SD card.

Within a DIY cardboard box, makers were able to assemble their own voice-activated AI assistant akin to the Amazon Alexa, Apple’s Siri, and Google’s own Google Home Assistant. The Voice Kit was an instant hit that spurred no end of maker videos and tutorials, including our own free tutorial for controlling a robot using voice commands.

Later in the year, the team followed up the success of the Voice Kit with the AIY Projects Vision Kit — the same cardboard box hosting a camera perfect for some pretty nifty image recognition projects.

For more on the AIY Voice Kit, here’s our release video hosted by the rather delightful Rob Zwetsloot.

AIY Projects adds natural human interaction to your Raspberry Pi

AIY Projects 2

So what’s new with version 2 of the AIY Projects Voice Kit? The kit now includes the recently released Raspberry Pi Zero WH, our Zero W with added pre-soldered header pins for instant digital making accessibility. Purchasers of the kits will also get a micro SD card with preloaded OS to help them get started without having to set the card up themselves.

Everything you need to build your own Raspberry Pi-powered Google voice assistant

In the newly upgraded AIY Projects Vision Kit v1.2, makers are also treated to an official Raspberry Pi Camera Module v2, the latest model of our add-on camera.

“Everything you need to get started is right there in the box,” explains Billy Rutledge, Google’s Director of AIY Projects. “We knew from our research that even though makers are interested in AI, many felt that adding it to their projects was too difficult or required expensive hardware.”

Google is also hard at work producing AIY Projects companion apps for Android, iOS, and Chrome. The Android app is available now to coincide with the launch of the upgraded kits, with the other two due for release soon. The app supports wireless setup of the AIY Kit, though avid coders will still be able to hack theirs to better suit their projects.

Google has also updated the AIY Projects website with an AIY Models section highlighting a range of neural network projects for the kits.

Get your kit

The updated Voice and Vision Kits were announced last night, and in the US they are available now from Target. UK-based makers should be able to get their hands on them this summer — keep an eye on our social channels for updates and links.

Notes on setting up Raspberry Pi 3 as WiFi hotspot

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/04/notes-on-setting-up-raspberry-pi-3-as.html

I want to sniff the packets for IoT devices. There are a number of ways of doing this, but one straightforward mechanism is configuring a “Raspberry Pi 3 B” as a WiFi hotspot, then running tcpdump on it to record all the packets that pass through it. Google gives lots of results on how to do this, but they all demand that you have the precise hardware, WiFi hardware, and software that the authors do, so that’s a pain.

I got it working using the instructions here. There are a few additional notes, which is why I’m writing this blogpost, so I remember them.
https://www.raspberrypi.org/documentation/configuration/wireless/access-point.md

I’m using the RPi-3-B and not the RPi-3-B+, and the latest version of Raspbian at the time of this writing, “Raspbian Stretch Lite 2018-3-13”.

Some things didn’t work as described. The first problem was that it couldn’t find the package “hostapd”. The solution was to run “apt-get update” a second time.

The second problem was an error message about NAT not working when trying to set the masquerade rule. That’s because the ‘upgrade’ updates the kernel, making the running system out-of-date with the files on the disk. The solution is to make sure you reboot after upgrading.

Thus, what you do at the start is:

apt-get update
apt-get upgrade
apt-get update
shutdown -r now

Then it’s just “apt-get install tcpdump” and start capturing on wlan0. This will get the non-monitor-mode Ethernet frames, which is what I want.
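
For example, to write the captured packets to a file for later analysis (the filename here is arbitrary):

tcpdump -i wlan0 -w iot-capture.pcap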

WHOIS Limits Under GDPR Will Make Pirates Harder to Catch, Groups Fear

Post Syndicated from Andy original https://torrentfreak.com/whois-limits-under-gdpr-will-make-pirates-harder-to-catch-groups-fear-180413/

The General Data Protection Regulation (GDPR) is a regulation in EU law covering data protection and privacy for all individuals within the European Union.

As more and more personal data is gathered, stored and (ab)used online, the aim of the GDPR is to protect EU citizens from breaches of privacy. The regulation applies to all companies processing the personal data of subjects residing in the Union, no matter where in the world the company is located.

Penalties for non-compliance can be severe. While there is a tiered approach according to severity, organizations can be fined up to 4% of annual global turnover or €20 million, whichever is greater. Needless to say, the regulations will need to be taken seriously.

Among those affected are domain name registries and registrars who publish the personal details of domain name owners in the public WHOIS database. In a full entry, a person or organization’s name, address, telephone numbers and email addresses can often be found.

This raises a serious issue. While registries and registrars are instructed and contractually obliged to publish data in the WHOIS database by global domain name authority ICANN, in millions of cases this conflicts with the requirements of the GDPR, which prevents the details of private individuals being made freely available on the Internet.

As explained in detail by the EFF, ICANN has been trying to resolve this clash. Its proposed interim model for GDPR compliance (pdf) envisions registrars continuing to collect full WHOIS data but not necessarily publishing it, to “allow the existing data to be preserved while the community discussions continue on the next generation of WHOIS.”

But the proposed changes, which will inevitably restrict free access to WHOIS information, have plenty of people spooked, including thousands of companies belonging to entertainment industry groups such as the MPAA, IFPI, RIAA and the Copyright Alliance.

In a letter sent to Vice President Andrus Ansip of the European Commission, these groups and dozens of others warn that restricted access to WHOIS will have a serious effect on their ability to protect their intellectual property rights from “cybercriminals” which pose a threat to their businesses.

Signed by 50 organizations involved in IP protection and other areas of online security, the letter expresses concern that in attempting to comply with the GDPR, ICANN is on a course to “over-correct” while disregarding proportionality, accountability and transparency.

A small sample of the groups calling on ICANN

“We strongly assert that this model does not properly account for the critical public and legitimate interests served by maintaining a sufficient amount of data publicly available while respecting privacy interests of registrants by instituting a tiered or layered access system for the vast majority of personal data as defined by the GDPR,” the groups write.

The letter focuses on two aspects of “over-correction”, the first being ICANN’s proposal that no personal data whatsoever of a domain name registrant will be made available “without appropriate consideration or balancing of the countervailing interests in public disclosure of a limited amount of such data.”

In response to ICANN’s proposal that only the province/state and country of a domain name registrant be made publicly available, the groups advise the organization that publishing “a natural person registrant’s e-mail address” in a publicly accessible WHOIS directory will not constitute a breach of the GDPR.

“[W]e strongly believe that the continued public availability of the registrant’s e-mail address – specifically the e-mail address that the registrant supplies to the registrar at the time the domain name is purchased and which e-mail address the registrar is required to validate – is critical for several reasons,” the groups write.

“First, it is the data element that is typically the most important to have readily available for law enforcement, consumer protection, particularly child protection, intellectual property enforcement and cybersecurity/anti-malware purposes.

“Second, the public accessibility of the registrant’s e-mail address permits a broad array of threats and illegal activities to be addressed quickly and the damage from such threats mitigated and contained in a timely manner, particularly where the abusive/illegal activity may be spawned from a variety of different domain names on different generic Top Level Domains,” they add.

The groups also argue that since making email addresses available is effectively required in light of Article 5.1(c) of the ECD, “there is no legitimate justification to discontinue public availability of the registrant’s e-mail address in the WHOIS directory and especially not in light of other legitimate purposes.”

The EFF, on the other hand, says that being able to contact a domain owner wouldn’t necessarily require an email address to be made public.

“There are other cases in which it makes sense to allow members of the public to contact the owner of a domain, without having to obtain a court order,” EFF writes.

“But this could be achieved very simply if ICANN were simply to provide something like a CAPTCHA-protected contact form, which would deliver email to the appropriate contact point with no need to reveal the registrant’s actual email address.”

The groups’ second main concern is that ICANN reportedly makes no distinction between name registrants that are “natural persons versus those that are legal entities” and intends to treat them all as if they are subject to the GDPR, despite the fact that the regulation only applies to data associated with an “identified or identifiable natural person”.

They say it is imperative that EU Data Protection Authorities are made to understand that when registrants obtain a domain for illegal purposes, they often register it as a “natural person” even when registering as a legal person (legal entity) would be more appropriate, despite the latter granting them less privacy.

“Consequently, the test for differentiating between a legal and natural person should not merely be the legal status of the registrant, but also whether the registrant is, in fact, acting as a legal or natural person vis-à-vis the use of the domain name,” the groups note.

“We therefore urge that ICANN be given appropriate guidance as to the importance of maintaining a distinction between natural person and legal person registrants and keeping as much data about legal person domain name registrants as publicly accessible as possible,” they conclude.

What will happen with WHOIS on May 25 still isn’t clear. It wasn’t until October 2017 that ICANN finally determined that it would be affected by the GDPR, meaning that it’s been scrambling ever since to meet the compliance date. And it still is, according to the latest available documentation (pdf).

Build a house in Minecraft using Python

Post Syndicated from Rob Zwetsloot original https://www.raspberrypi.org/blog/build-minecraft-house-using-python/

In this tutorial from The MagPi issue 68, Steve Martin takes us through the process of house-building in Minecraft Pi. Get your copy of The MagPi in stores now, or download it as a free PDF here.

Minecraft Pi is provided for free as part of the Raspbian operating system. To start your Minecraft: Pi Edition adventures, try our free tutorial Getting started with Minecraft.

Writing programs that create things in Minecraft is not only a great way to learn how to code, but it also means that you have a program that you can run again and again to make as many copies of your Minecraft design as you want. You never need to worry about your creation being destroyed by your brother or sister ever again — simply rerun your program and get it back! Whilst it might take a little longer to write the program than to build one house, once it’s finished you can build as many houses as you want.

Co-ordinates in Minecraft

Let’s start with a review of the coordinate system that Minecraft uses to know where to place blocks. If you are already familiar with this, you can skip to the next section. Otherwise, read on.

Plan view of our house design

Minecraft shows us a three-dimensional (3D) view of the world. Imagine that the room you are in is the Minecraft world and you want to describe your location within that room. You can do so with three numbers, as follows:

  • How far across the room are you? As you move from side to side, you change this number. We can consider this value to be our X coordinate.
  • How high off the ground are you? If you are upstairs, or if you jump, this value increases. We can consider this value to be our Y coordinate.
  • How far into the room are you? As you walk forwards or backwards, you change this number. We can consider this value to be our Z coordinate.

You might have done graphs in school with X going across the page and Y going up the page. Coordinates in Minecraft are very similar, except that we have an extra value, Z, for our third dimension. Don’t worry if this still seems a little confusing: once we start to build our house, you will see how these three dimensions work in Minecraft.

Designing our house

It is a good idea to start with a rough design for our house. This will help us to work out the values for the coordinates when we are adding doors and windows to our house. You don’t have to plan every detail of your house right away. It is always fun to enhance it once you have got the basic design written. The image above shows the plan view of the house design that we will be creating in this tutorial. Note that because this is a plan view, it only shows the X and Z co-ordinates; we can’t see how high anything is. Hopefully, you can imagine the house extending up from the screen.

We will build our house close to where the Minecraft player is standing. This is a good idea when creating something in Minecraft with Python, as it saves us from having to walk around the Minecraft world to try to find our creation.

Starting our program

Type in the code as you work through this tutorial. You can use any editor you like; we would suggest either Python 3 (IDLE) or Thonny Python IDE, both of which you can find on the Raspberry Pi menu under Programming. Start by selecting the File menu and creating a new file. Save the file with a name of your choice; it must end with .py so that the Raspberry Pi knows that it is a Python program.

It is important to enter the code exactly as it is shown in the listing. Pay particular attention to both the spelling and capitalisation (upper- or lower-case letters) used. You may find that when you run your program the first time, it doesn’t work. This is very common and just means there’s a small error somewhere. The error message will give you a clue about where the error is.

It is good practice to start all of your Python programs with the first line shown in our listing. All other lines that start with a # are comments. These are ignored by Python, but they are a good way to remind us what the program is doing.

The two lines starting with from tell Python about the Minecraft API; this is a code library that our program will be using to talk to Minecraft. The line starting mc = creates a connection between our Python program and the game. Then we get the player’s location broken down into three variables: x, y, and z.
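
As an illustrative sketch (not the tutorial's exact listing), the opening lines typically look like this when using the standard mcpi library that ships with Raspbian:

#!/usr/bin/python3
# Build a house in Minecraft: Pi Edition.
from mcpi.minecraft import Minecraft
from mcpi import block

# Create a connection to the running Minecraft game.
mc = Minecraft.create()

# Get the player's position as whole-number tile coordinates.
pos = mc.player.getTilePos()
x, y, z = pos.x, pos.y, pos.z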

Building the shell of our house

To help us build our house, we define three variables that specify its width, height, and depth. Defining these variables makes it easy for us to change the size of our house later; it also makes the code easier to understand when we are setting the co-ordinates of the Minecraft bricks. For now, we suggest that you use the same values that we have; you can go back and change them once the house is complete and you want to alter its design.

It’s now time to start placing some bricks. We create the shell of our house with just two lines of code! These lines of code each use the setBlocks command to create a complete block of bricks. This function takes the following arguments:

setBlocks(x1, y1, z1, x2, y2, z2, block-id, data)

x1, y1, and z1 are the coordinates of one corner of the block of bricks that we want to create; x2, y2, and z2 are the coordinates of the other corner. The block-id is the type of block that we want to use. Some blocks require another value called data; we will see this being used later, but you can ignore it for now.

We have to work out the values that we need to use in place of x1, y1, z1 and x2, y2, z2 for our walls. Note that what we want is a larger outer block made of bricks, filled with a slightly smaller block of air blocks. Yes, in Minecraft even air is actually just another type of block.
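
As a hedged sketch of those two lines (the sizes below are example values, and the block constants come from the mcpi block module):

# Example dimensions for the house; change these later if you like.
width = 10
height = 5
depth = 8

# Outer shell: a solid cuboid of brick blocks.
mc.setBlocks(x, y, z, x + width, y + height, z + depth, block.BRICK_BLOCK.id)

# Hollow it out by filling a slightly smaller cuboid with air blocks.
mc.setBlocks(x + 1, y, z + 1, x + width - 1, y + height - 1, z + depth - 1, block.AIR.id)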

Once you have typed in the two lines that create the shell of your house, you are almost ready to run your program. Before doing so, you must have Minecraft running and displaying the contents of your world. Do not have a world loaded with things that you have created, as they may get destroyed by the house that we are building. Go to a clear area in the Minecraft world before running the program. When you run your program, check for any errors in the 'console' window and fix them, rerunning the code until you've corrected them all.

You should see a block of bricks now, as shown above. You may have to turn the player around in the Minecraft world before you can see your house.

Adding the floor and door

Now, let’s make our house a bit more interesting! Add the lines for the floor and door. Note that the floor extends beyond the boundary of the wall of the house; can you see how we achieve this?

Hint: look closely at how we calculate the x and z attributes as compared to when we created the house shell above. Also note that we use a value of y-1 to create the floor below our feet.

Minecraft doors are two blocks high, so we have to create them in two parts. This is where we have to use the data argument. A value of 0 is used for the lower half of the door, and a value of 8 is used for the upper half (the part with the windows in it). These values will create an open door. If we add 4 to each of these values, a closed door will be created.
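
Putting that together, a sketch of the floor and door might look like the following; the exact offsets are our own assumptions based on the plan rather than the official listing:

# Floor: one block below our feet (y - 1), extending beyond the walls.
mc.setBlocks(x - 3, y - 1, z - 3, x + width + 3, y - 1, z + depth + 3, block.WOOD_PLANKS.id)

# Door: two blocks high, so it is placed in two parts.
mc.setBlock(x + 4, y, z, block.DOOR_WOOD.id, 0)      # lower half, data 0 = open
mc.setBlock(x + 4, y + 1, z, block.DOOR_WOOD.id, 8)  # upper half, data 8 = open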

Before you run your program again, move to a new location in Minecraft to build the house away from the previous one. Then run it to check that the floor and door are created; you will need to fix any errors again. Even if your program runs without errors, check that the floor and door are positioned correctly. If they aren't, then you will need to check that the arguments to setBlock and setBlocks are exactly as shown in the listing.

Adding windows

Hopefully you will agree that your house is beginning to take shape! Now let’s add some windows. Looking at the plan for our house, we can see that there is a window on each side; see if you can follow along. Add the four lines of code, one for each window.
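
As an illustration, the four lines could look something like this, with one glass window per wall; the positions are assumptions, so adjust them to match your own plan:

# One window in each of the four walls, made from glass blocks.
mc.setBlocks(x + 2, y + 2, z, x + 3, y + 3, z, block.GLASS.id)                  # front
mc.setBlocks(x + 2, y + 2, z + depth, x + 3, y + 3, z + depth, block.GLASS.id)  # back
mc.setBlocks(x, y + 2, z + 2, x, y + 3, z + 3, block.GLASS.id)                  # left
mc.setBlocks(x + width, y + 2, z + 2, x + width, y + 3, z + 3, block.GLASS.id)  # right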

Now you can move to yet another location and run the program again; you should have a window on each side of the house. Our house is starting to look pretty good!

Adding a roof

The final stage is to add a roof to the house. To do this we are going to use wooden stairs. We will do this inside a loop so that if you change the width of your house, more layers are added to the roof. Enter the rest of the code. Be careful with the indentation: I recommend using spaces and avoiding the use of tabs. After the if statement, you need to indent the code even further. Each indentation level needs four spaces, so below the line with if on it, you will need eight spaces.

Since some of these code lines are lengthy and indented a lot, you may well find that the text wraps around as you reach the right-hand side of your editor window — don’t worry about this. You will have to be careful to get those indents right, however.
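
For reference, here is a hedged reconstruction of the kind of loop involved (not the official listing); STAIRS_WOOD is the standard mcpi stair block, and the data value sets which way each stair faces:

# Build the roof from wooden stairs, one layer per pass through the loop.
for i in range(int(width / 2) + 1):
    # A row of stairs along one side of the roof, facing inwards.
    mc.setBlocks(x + i, y + height + i, z, x + i, y + height + i, z + depth, block.STAIRS_WOOD.id, 0)
    # A matching row along the other side, facing the opposite way.
    mc.setBlocks(x + width - i, y + height + i, z, x + width - i, y + height + i, z + depth, block.STAIRS_WOOD.id, 1)
    if i == int(width / 2):
        # Cap the ridge of the roof with a row of wooden planks.
        mc.setBlocks(x + i, y + height + i, z, x + width - i, y + height + i, z + depth, block.WOOD_PLANKS.id)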

Now move somewhere new in your world and run the complete program. Iron out any last bugs, then admire your house! Does it look how you expect? Can you make it better?

Customising your house

Now you can start to customise your house. It is a good idea to use Save As in the menu to save a new version of your program. Then you can keep different designs, or refer back to your previous program if you get to a point where you don’t understand why your new one doesn’t work.

Consider these changes:

  • Change the size of your house. Can you also move the door and windows so that they stay in proportion?
  • Change the materials used for the house. An ice house placed in an area of snow would look really cool!
  • Add a back door to your house. Or make the front door a double-width door!

We hope that you have enjoyed writing this program to build a house. Now you can easily add a house to your Minecraft world whenever you want to by simply running this program.

Get the complete code for this project here.

Continue your Minecraft journey

Minecraft Pi’s programmable interface is an ideal platform for learning Python. If you’d like to try more of our free tutorials, check out:

You may also enjoy Martin O’Hanlon’s and David Whale’s Adventures in Minecraft, and the Hacking and Making in Minecraft MagPi Essentials guide, which you can download for free or buy in print here.

The post Build a house in Minecraft using Python appeared first on Raspberry Pi.

Artefacts in the classroom with Museum in a Box

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/museum-in-a-box/

Museum in a Box bridges the gap between museums and schools by creating a more hands-on approach to conservation education through 3D printing and digital making.

Artefacts in the classroom with Museum in a Box || Raspberry Pi Stories

Fantastic collections and where to find them

Large, impressive statues are truly a sight to behold. Take, for example, the 2.4m Hoa Hakananai’a at the British Museum. Its tall stature looms over you as you read its plaque and learn of the statue’s journey from Easter Island to the UK under the care of the crew of HMS Topaze in 1868, and you can’t help but wonder how it made it here in one piece.

Hoa Hakananai’a Captain Cook British Museum

But unless you live near a big city where museums are plentiful, you’re unlikely to see the likes of Hoa Hakananai’a in person. Instead, you have to content yourself with online photos or videos of world-famous artefacts.

And that only accounts for the objects that are on display: conservators estimate that only approximately 5 to 10% of museums’ overall collections are actually on show across the globe. The rest is boxed up in storage, inaccessible to the public due to risk of damage, or simply due to lack of space.

Museum in a Box

Museum in a Box aims to “put museum collections and expert knowledge into your hand, wherever you are in the world,” through modern maker practices such as 3D printing and digital making. With the help of the ‘Scan the World’ movement, an “ambitious initiative whose mission is to archive objects of cultural significance using 3D scanning technologies”, the Museum in a Box team has been able to print small, handheld replicas of some of the world’s most recognisable statues and sculptures.

Museum in a Box Raspberry Pi

Each 3D print is fitted with an NFC tag so that it can trigger audio playback from a Raspberry Pi that sits snugly within the laser-cut housing of a ‘brain box’. Thus the print can talk directly to us through the magic of wireless technology, replacing the dense, dry text of a museum plaque with engaging speech.
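
Museum in a Box's own software isn't reproduced here, but the general pattern (an NFC reader spots a tag, and the Pi plays the matching audio) can be sketched in a few lines of Python. Everything below is an assumption for illustration: the nfcpy reader, the UID-to-track table, and omxplayer as the audio player.

import subprocess
import nfc  # the nfcpy library

# Hypothetical mapping from NFC tag UIDs to audio commentary files.
TRACKS = {
    '04a224d2e65b80': 'hoa_hakananaia.mp3',
}

def on_connect(tag):
    # Look up the tag's UID and play the matching track, if there is one.
    track = TRACKS.get(tag.identifier.hex())
    if track:
        subprocess.run(['omxplayer', track])
    return True  # keep the connection until the tag is removed

clf = nfc.ContactlessFrontend('usb')
while True:
    clf.connect(rdwr={'on-connect': on_connect})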

Museum in a Box Raspberry Pi

The Museum in a Box team headed by CEO George Oates (featured in the video above) makes use of these 3D-printed figures alongside original artefacts, postcards, and more to bridge the gap between large, crowded, distant museums and local schools. Modelled after the museum handling collections that used to be sent to schools, Museum in a Box is a cheaper, more accessible alternative. Moreover, it not only allows for hands-on learning, but also encourages children to get directly involved by hacking its technology! With NFC technology readily available to the public, students can curate their own collections about their local area, record their own messages, and send their own box-sized museums on to schools in other towns or countries. In this way, Museum in a Box enables students to explore, and expand the reach of, their own histories.

Moving forward

With the technology perfected and interest in the project ever-growing, Museum in a Box has a busy year ahead. Supporting the new ‘Unstacked’ learning initiative, the team will soon be delivering ten boxes to the Smithsonian Libraries. The team has curated two collections specifically for this: an exploration of Asian Pacific American experiences of migration to the USA throughout the 20th century, and a look into the history of science.

Smithsonian Library Museum in a Box Raspberry Pi

The team will also be making a box for the British Museum to support their Iraq Scheme initiative, and another box will be heading to the V&A to support their See Red programme. While primarily installed in the Lansbury Micro Museum, the box will also take to the road to visit the local Spotlight high school.

Museum in a Box at Raspberry Fields

Lastly, by far the most exciting thing the Museum in a Box team will be doing this year — in our opinion at least — is showcasing at Raspberry Fields! This is our brand-new festival of digital making that’s taking place on 30 June and 1 July 2018 here in Cambridge, UK. Find more information about it and get your ticket here.

The post Artefacts in the classroom with Museum in a Box appeared first on Raspberry Pi.

Piracy & Money Are Virtually Inseparable & People Probably Don’t Care Anymore

Post Syndicated from Andy original https://torrentfreak.com/piracy-money-are-virtually-inseparable-people-probably-dont-care-anymore-180408/

Long before peer-to-peer file-sharing networks were a twinkle in developers’ eyes, piracy of software and games flourished under the radar. Cassettes, floppy discs and CDs were the physical media of choice, while the BBS became the haunt of the need-it-now generation.

Sharing was the name of the game. When someone had game ‘X’ on tape, it was freely shared with friends and associates because when they got game ‘Y’, the favor had to be returned. The content itself became the currency and for most, the thought of asking for money didn’t figure into the equation.

Even when P2P networks first took off, money wasn’t really a major part of the equation. Sure, the people running Kazaa and the like were generating money from advertising but for millions of users, sharing content between friends and associates was still the name of the game.

Even when the torrent site scene began to gain traction, money wasn’t the driving force. Everything was so new that developers were much more concerned with getting half written/half broken tracker scripts to work than anything else. Having people care enough to simply visit the sites and share something with others was the real payoff. Ironically, it was a reward that money couldn’t buy.

But as the scene began to develop, so did the influx of minor and even major businessmen. The ratio economy of the private tracker scene meant that bandwidth could essentially be converted to cash, something which gave site operators revenue streams that had never previously existed. That was both good and bad for the scene.

The fact is that running a torrent site costs money and if time is factored in too, that becomes lots of money. If site admins have to fund everything themselves, a tipping point is eventually reached. If the site becomes unaffordable, it closes, meaning that everyone loses. So, by taking in some donations or offering users other perks in exchange for financial assistance, the whole thing remains viable.

Counter-intuitively, the success of such a venture then becomes the problem, at least as far as maintaining the old “sharing is caring” philosophy goes. A well-run private site, with enthusiastic donors, has the potential to bring in quite a bit of cash. Initially, the excess can be saved away for that rainy day when things aren’t so good. Having a few thousand in the bank when chaos rains down is rarely a bad thing.

But what happens when a site does really well and is making money hand over fist? What happens when advertisers on public sites begin to queue up, offering lots of cash to get involved? Is a site operator really expected to turn down the donations and tell the advertisers to go away? Amazingly, some do. Less amazingly, most don’t.

Although there are some notable exceptions, particularly in the niche private tracker scene, these days most ‘pirate’ sites are in it for the money.

In the current legal climate, some probably consider this their well-earned ‘danger money’ yet others are so far away from the sharing ethos it hurts. Quite often, these sites are incapable of taking in a new member due to alleged capacity issues yet a sizeable ‘donation’ miraculously solves the problem and gets the user in. It’s like magic.

As it happens, two threads on Reddit this week sparked this little rant. Both discuss whether someone should consider paying $20 and 37 euros respectively to get invitations to a pair of torrent sites.

Ask a purist and the answer is always ‘NO’, whether that’s buying an invitation from the operator of a torrent site or from someone selling invites for profit.

Aside from the fact that no one on these sites has paid content owners a dime, sites that demand cash for entry are doing so for one reason and one reason only – profit. Ridiculous when it’s the users of those sites that are paying to distribute the content.

On the other hand, others see no wrong in it.

They argue that paying a relatively small amount to access huge libraries of content is preferable to spending hundreds of dollars on a legitimate service that doesn’t carry all the content they need. Others don’t bother making any excuses at all, spending sizable sums with pirate IPTV/VOD services that dispose of sharing morals by engaging in a different business model altogether.

But the bottom line, whether we like it or not, is that money and Internet piracy have become so intertwined, so enmeshed in each other’s existence, that it’s become virtually impossible to separate them.

Even those running the handful of non-profit sites still around today would be forced to reconsider if they had to start all over again in today’s climate. The risk model is entirely different and quite often, only money tips those scales.

The same holds true for the people putting together the next big streaming portals. These days it’s about getting as many eyeballs on content as possible, making the money, and getting out the other end unscathed.

This is not what most early pirates envisioned. This is certainly not what the early sharing masses wanted. Yet arguably, through the influx of business people and the desire to generate profit among the general population, the pirating masses have never had it so good.

As revealed in a recent study, volumes of piracy are on the up and it is now possible – still possible – to access almost any item of content on pirate sites, despite the so-called “follow the money” approach championed by the authorities.

While ‘Sharing is Caring’ still lives today, it’s slowly being drowned out and at this point, there’s probably no way back. The big question is whether anyone cares anymore and the answer to that is “probably not”.

So, if the driving force isn’t sharing or love, it’ll probably have to be money. And that works everywhere else, doesn’t it?

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

If YouTube-Ripping Sites Are Illegal, What About Tools That Do a Similar Job?

Post Syndicated from Andy original https://torrentfreak.com/if-youtube-ripping-sites-are-illegal-what-about-tools-that-do-a-similar-job-180407/

In 2016, the International Federation of the Phonographic Industry published research which claimed that half of 16 to 24-year-olds use stream-ripping tools to copy music from sites like YouTube.

While this might not have surprised those who regularly participate in the activity, IFPI said that volumes had become so vast that stream-ripping had overtaken pirate site music downloads. That was a big statement.

Probably not coincidentally, just two weeks later IFPI, RIAA, and BPI announced legal action against the world’s largest YouTube ripping site, YouTube-MP3.

“YTMP3 rapidly and seamlessly removes the audio tracks contained in videos streamed from YouTube that YTMP3’s users access, converts those audio tracks to an MP3 format, copies and stores them on YTMP3’s servers, and then distributes copies of the MP3 audio files from its servers to its users in the United States, enabling its users to download those MP3 files to their computers, tablets, or smartphones,” the complaint read.

The labels sued YouTube-MP3 for direct infringement, contributory infringement, vicarious infringement, inducing others to infringe, plus circumvention of technological measures on top. The case was big and one that would’ve been intriguing to watch play out in court, but that never happened.

A year later, in September 2017, YouTube-MP3 settled out of court. No details were made public but YouTube-MP3 apparently took all the blame and the court was asked to rule in favor of the labels on all counts.

This certainly gave the impression that what YouTube-MP3 did was illegal and a strong message was sent out to other companies thinking of offering a similar service. However, other onlookers clearly saw the labels’ lawsuit as something to be studied and learned from.

One of those was the operator of NotMP3downloader.com, a site that offers Free MP3 Recorder for YouTube, a tool offering similar functionality to YouTube-MP3 while supposedly avoiding the same legal pitfalls.

Part of that involves audio being processed on the user’s machine – not by stream-ripping as such – but by stream-recording. A subtle difference perhaps, but the site’s operator thinks it’s important.

“After examining the claims made by the copyright holders against youtube-mp3.org, we identified that the charges were based on the three main points. [None] of them are applicable to our product,” he told TF this week.

The first point involves YouTube-MP3’s acts of conversion, storage and distribution of content it had previously culled from YouTube. Copies of unlicensed tracks were clearly held on its own servers, a potent direct infringement risk.

“We don’t have any servers to download, convert or store a copyrighted or any other content from YouTube. Therefore, we do not violate any law or prohibition implied in this part,” NotMP3downloader’s operator explains.

Then there’s the act of “stream-ripping” itself. While YouTube-MP3 downloaded digital content from YouTube using its own software, NotMP3downloader claims to do things differently.

“Our software doesn’t download any streaming content directly, but only launches a web browser with the video specified by a user. The capturing happens from a local machine’s sound card and doesn’t deal with any content streamed through a network,” its operator notes.

This part also seems quite important. YouTube-MP3 was accused of unlawfully circumventing technological measures implemented by YouTube to prevent people downloading or copying content. By opening up YouTube’s own website and viewing content in the way the site demands, NotMP3downloader says it does not “violate the website’s integrity nor performs direct download of audio or video files.”

Like the Betamax video recorder before it that enabled recording from analog TV, NotMP3downloader enables a user to record a YouTube stream on their local machine. This, its makers claim, means the software is completely legal and defeats all the claims made by the labels in the YouTube-MP3 lawsuit.

“What YouTube does is broadcasting content through the Internet. Thus, there is nothing wrong if users are allowed to watch such content later as they may want,” the NotMP3downloader team explain.

“It is worth noting that in Sony Corp. of America v. Universal City Studios, Inc. (464 U.S. 417) the United States Supreme Court held that such practice, also known as time-shifting, was lawful, representing fair use under the US Copyright Act and causing no substantial harm to the copyright holder.”

While software that can record video and sound locally is nothing new, the developments in the YouTube-MP3 case and this response from NotMP3downloader raise interesting questions.

We put some of them to none other than former RIAA Executive Vice President, Neil Turkewitz, who now works as President of Turkewitz Consulting Group.

Turkewitz stressed that he doesn’t speak for the industry as a whole or indeed the RIAA but it’s clear that his passion for protecting creators persists. He told us that in this instance, reliance on the Betamax decision is “misplaced”.

“The content is different, the activity is different, and the function is different,” Turkewitz told TF.

“The Sony decision must be understood in its context — the time shifting of audiovisual programming being broadcast from point to multipoint. The making available of content by a point-to-point interactive service like YouTube isn’t broadcasting — or at a minimum, is not a form of broadcasting akin to that considered by the Supreme Court in Sony.

“More fundamentally, broadcasting (right of communication to the public) is one of only several rights implicated by the service. And of course, issues of liability will be informed by considerations of purpose, effect and perceived harm. A court’s judgment will also be affected by whether it views the ‘innovation’ as an attempt to circumvent the requirements of law. The decision of the Supreme Court in ABC v. Aereo is certainly instructive in that regard.”

And there are other issues too. While YouTube itself is yet to take any legal action to deter users from downloading rather than merely streaming content, its terms of service are quite specific and seem to cover all eventualities.

“[Y]ou agree not to access Content for any reason other than your personal, non-commercial use solely as intended through and permitted by the normal functionality of the Service, and solely for Streaming,” YouTube’s ToS reads.

“‘Streaming’ means a contemporaneous digital transmission of the material by YouTube via the Internet to a user operated Internet enabled device in such a manner that the data is intended for real-time viewing and not intended to be downloaded (either permanently or temporarily), copied, stored, or redistributed by the user.

“You shall not copy, reproduce, distribute, transmit, broadcast, display, sell, license, or otherwise exploit any Content for any other purposes without the prior written consent of YouTube or the respective licensors of the Content.”

In this respect, it seems that a user doing anything but real-time streaming of YouTube content is breaching YouTube’s terms of service. The big question then, of course, is whether providing a tool specifically for that purpose represents an infringement of copyright.

The people behind Free MP3 Recorder believe that the “scope of application depends entirely on the end users’ intentions” which seems like a fair argument at first view. But, as usual, copyright law is incredibly complex and there are plenty of opposing views.

We asked the BPI, which took action against YouTube-MP3, for its take on this type of tool. The official response was “No comment”, which doesn’t really clarify the position, at least for now.

Needless to say, the Betamax decision – relevant or not – doesn’t apply in the UK. But that only adds more parameters into the mix – and perhaps more opportunities for lawyers to make money arguing for and against tools like this in the future.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Tinkernut’s hidden Coke bottle spy cam

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/tinkernuts-spy-cam/

Go undercover and keep an eye on your stuff with this brilliant secret Coke bottle spy cam from Tinkernut!

Secret Coke Bottle SPY CAM! – Weekend Hacker #1803

SPECIAL NOTE: The full tutorial will be available next week. April Fools! What a terrible day. So many pranks. You can’t believe anything you read. People invading your space. The mental and physical anguish of enduring the day. It’s time to fight back! Let’s catch the perps in action by making a device that always watches.

Keeping tabs

A Raspberry Pi Zero W, a small camera, and a rechargeable lithium polymer (LiPo) battery constitute the bulk of this project’s tech. A pair of 3D-printed parts and gelatine-solidified Coke Zero make up the fake fizzy body.

Tinkernut Coke bottle Raspberry Pi Spy Cam

“So let’s make this video as short as possible and just buy a cheap pre-made spy cam off of Amazon. Just kidding,” Tinkernut jokes in the tutorial video for the project, before going through the step-by-step process of using the Raspberry Pi to “DIY this the right way”.

After accessing the Zero W from his laptop via SSH, Tinkernut opted for using the rpi_camera_surveillance_system Python script written by GitHub user RuiSantosdotme to control the spy cam. Luckily, this meant no additional library setup, and basically no lag on the video feed.

What we want to do is create a script that activates the camera and serves it to a web page so that we can access it from any web browser. There are plenty of different ways to do this (Motion, Raspivid, etc), but I found a simple Python script that does everything I need it to do and doesn’t require any extra software or libraries to install. The best thing about it is that the lag time is practically unnoticeable.

With the code in place, every boot-up of the Raspberry Pi automatically launches both the script and a web page of the live video, allowing for constant monitoring of potential sneaks and thieves.
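
The script itself is on GitHub, but the core technique (picamera writing MJPEG frames into a buffer that a small standard-library HTTP server relays to browsers) looks roughly like this sketch; the port and resolution are arbitrary choices, and error handling is omitted:

import io
from http.server import BaseHTTPRequestHandler, HTTPServer
from threading import Condition
import picamera

class StreamingOutput:
    # picamera writes each MJPEG frame into this buffer.
    def __init__(self):
        self.frame = None
        self.buffer = io.BytesIO()
        self.condition = Condition()

    def write(self, buf):
        if buf.startswith(b'\xff\xd8'):  # JPEG magic bytes: a new frame begins
            self.buffer.truncate()
            with self.condition:
                self.frame = self.buffer.getvalue()
                self.condition.notify_all()
            self.buffer.seek(0)
        return self.buffer.write(buf)

class StreamingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve an endless multipart stream that browsers render as live video.
        self.send_response(200)
        self.send_header('Content-Type', 'multipart/x-mixed-replace; boundary=FRAME')
        self.end_headers()
        while True:
            with output.condition:
                output.condition.wait()
                frame = output.frame
            self.wfile.write(b'--FRAME\r\n')
            self.send_header('Content-Type', 'image/jpeg')
            self.send_header('Content-Length', str(len(frame)))
            self.end_headers()
            self.wfile.write(frame)
            self.wfile.write(b'\r\n')

output = StreamingOutput()
with picamera.PiCamera(resolution='640x480', framerate=24) as camera:
    camera.start_recording(output, format='mjpeg')
    try:
        HTTPServer(('', 8000), StreamingHandler).serve_forever()
    finally:
        camera.stop_recording()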

Tinkernut Coke bottle Raspberry Pi Spy Cam

The project is powered by a 1500mAh LiPo battery and the Adafruit LiPo charger. It also includes a simple on/off switch, which Tinkernut wired to the charger and to the Pi’s PP1 and PP6 connector pads.

Tinkernut Coke bottle Raspberry Pi Spy Cam

Tinkernut decided to use a Coke Zero bottle for the build, incorporating 3D-printed parts to house the Pi, and a mix of Coke and gelatine to create a realistic-looking filling for the bottle. However, the setup can be transferred to pretty much any hollow item in your home, say, a cookie jar or a cracker box. So get creative and get spying!

A complete spy cam how-to

If you’d like to make your own secret spy cam, you can find a tutorial for Tinkernut’s build at hackster.io, or follow along with his video below. Also make sure to subscribe to his YouTube channel to be updated on all his newest builds — they’re rather splendid.

BUILD: Coke Bottle SPY CAM! – Tinkernut Workbench

Learn how to take a regular Coke Zero bottle, cram a Raspberry Pi and webcam inside of it, and have it still look like a regular Coke Zero bottle. Why would you want to do this? To spy on those irritating April Fooligans!!!

And if you’re interested in more spy-themed digital making projects, check out our complete 007 how-to guide for links to tutorials such as our Sense HAT puzzle box, Parent detector, and Laser tripwire.

The post Tinkernut’s hidden Coke bottle spy cam appeared first on Raspberry Pi.

New – Machine Learning Inference at the Edge Using AWS Greengrass

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-machine-learning-inference-at-the-edge-using-aws-greengrass/

What happens when you combine the Internet of Things, Machine Learning, and Edge Computing? Before I tell you, let’s review each one and discuss what AWS has to offer.

Internet of Things (IoT) – Devices that connect the physical world and the digital one. The devices, often equipped with one or more types of sensors, can be found in factories, vehicles, mines, fields, homes, and so forth. Important AWS services include AWS IoT Core, AWS IoT Analytics, AWS IoT Device Management, and Amazon FreeRTOS, along with others that you can find on the AWS IoT page.

Machine Learning (ML) – Systems that can be trained using an at-scale dataset and statistical algorithms, and used to make inferences from fresh data. At Amazon we use machine learning to drive the recommendations that you see when you shop, to optimize the paths in our fulfillment centers, fly drones, and much more. We support leading open source machine learning frameworks such as TensorFlow and MXNet, and make ML accessible and easy to use through Amazon SageMaker. We also provide Amazon Rekognition for images and for video, Amazon Lex for chatbots, and a wide array of language services for text analysis, translation, speech recognition, and text to speech.

Edge Computing – The power to have compute resources and decision-making capabilities in disparate locations, often with intermittent or no connectivity to the cloud. AWS Greengrass builds on AWS IoT, giving you the ability to run Lambda functions and keep device state in sync even when not connected to the Internet.

ML Inference at the Edge
Today I would like to toss all three of these important new technologies into a blender! You can now perform Machine Learning inference at the edge using AWS Greengrass. This allows you to use the power of the AWS cloud (including fast, powerful instances equipped with GPUs) to build, train, and test your ML models before deploying them to small, low-powered, intermittently-connected IoT devices running in those factories, vehicles, mines, fields, and homes that I mentioned.

Here are a few of the many ways that you can put Greengrass ML Inference to use:

Precision Farming – With an ever-growing world population and unpredictable weather that can affect crop yields, the opportunity to use technology to increase yields is immense. Intelligent devices that are literally in the field can process images of soil, plants, pests, and crops, taking local corrective action and sending status reports to the cloud.

Physical Security – Smart devices (including the AWS DeepLens) can process images and scenes locally, looking for objects, watching for changes, and even detecting faces. When something of interest or concern arises, the device can pass the image or the video to the cloud and use Amazon Rekognition to take a closer look.

Industrial Maintenance – Smart, local monitoring can increase operational efficiency and reduce unplanned downtime. The monitors can run inference operations on power consumption, noise levels, and vibration data to flag anomalies, predict failures, and detect faulty equipment.

Greengrass ML Inference Overview
There are several different aspects to this new AWS feature. Let’s take a look at each one:

Machine Learning Models – Precompiled TensorFlow and MXNet libraries, optimized for production use on the NVIDIA Jetson TX2 and Intel Atom devices, and development use on 32-bit Raspberry Pi devices. The optimized libraries can take advantage of GPU and FPGA hardware accelerators at the edge in order to provide fast, local inferences.

Model Building and Training – The ability to use Amazon SageMaker and other cloud-based ML tools to build, train, and test your models before deploying them to your IoT devices. To learn more about SageMaker, read Amazon SageMaker – Accelerated Machine Learning.

Model Deployment – SageMaker models can (if you give them the proper IAM permissions) be referenced directly from your Greengrass groups. You can also make use of models stored in S3 buckets. You can add a new machine learning resource to a group with a couple of clicks in the Greengrass console.
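
The same attachment can also be scripted. Here is a minimal sketch using the Greengrass API via boto3, assuming an S3-hosted model archive; the bucket, key, resource names, and destination path are all placeholders:

import boto3

gg = boto3.client('greengrass')

# Register an ML model archive from S3 as a Greengrass group resource.
# Lambda functions in the group can then read it from the destination path.
response = gg.create_resource_definition(
    Name='MLModelResources',  # placeholder name
    InitialVersion={
        'Resources': [{
            'Id': 'my-model',
            'Name': 'MyModel',
            'ResourceDataContainer': {
                'S3MachineLearningModelResourceData': {
                    'S3Uri': 's3://my-bucket/models/model.tar.gz',  # placeholder
                    'DestinationPath': '/ml/model',
                },
            },
        }],
    },
)
print(response['LatestVersionArn'])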

These new features are available now and you can start using them today! To learn more, read Perform Machine Learning Inference.

Jeff;


New – Encryption of Data in Transit for Amazon EFS

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-encryption-of-data-in-transit-for-amazon-efs/

Amazon Elastic File System was designed to be the file system of choice for cloud-native applications that require shared access to file-based storage. We launched EFS in mid-2016 and have added several important features since then including on-premises access via Direct Connect and encryption of data at rest. We have also made EFS available in additional AWS Regions, most recently US West (Northern California). As was the case with EFS itself, these enhancements were made in response to customer feedback, and reflect our desire to serve an ever-widening customer base.

Encryption in Transit
Today we are making EFS even more useful with the addition of support for encryption of data in transit. When used in conjunction with the existing support for encryption of data at rest, you now have the ability to protect your stored files using a defense-in-depth security strategy.

In order to make it easy for you to implement encryption in transit, we are also releasing an EFS mount helper. The helper (available in source code and RPM form) takes care of setting up a TLS tunnel to EFS, and also allows you to mount file systems by ID. The two features are independent; you can use the helper to mount file systems by ID even if you don’t make use of encryption in transit. The helper also supplies a recommended set of default options to the actual mount command.

Setting up Encryption
I start by installing the EFS mount helper on my Amazon Linux instance:

$ sudo yum install -y amazon-efs-utils

Next, I visit the EFS Console and capture the file system ID.

Then I specify the ID (and the TLS option) to mount the file system:

$ sudo mount -t efs fs-92758f7b -o tls /mnt/efs

And that’s it! The encryption is transparent and has an almost negligible impact on data transfer speed.
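
To make the encrypted mount persistent across reboots, the mount helper also understands /etc/fstab. A line along these lines should do it (the ID and mount point match the example above; treat the exact option set as an assumption and check the EFS documentation for your distribution):

fs-92758f7b /mnt/efs efs _netdev,tls 0 0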

Available Now
You can start using encryption in transit today in all AWS Regions where EFS is available.

The mount helper is available for Amazon Linux. If you are running another distribution of Linux you will need to clone the GitHub repo and build your own RPM, as described in the README.

Jeff;