
[$] Rhashtables: under the hood

Post Syndicated from corbet original https://lwn.net/Articles/751974/rss

The first article in this series described
the interface to the “rhashtable”
resizable hash-table abstraction in Linux 4.15. While a knowledge of
the interface can result in successful use of rhashtables, it often
helps to understand what is going on “under the hood”, particularly when
those details leak out through the interface, as is occasionally the
case with rhashtable. The centerpiece for understanding the
implementation is knowing exactly how the table is resized. So this
follow-on article will explain that operation; it will also present the
configuration parameters that were skimmed over last time and discuss
how they affect the implementation.

Russia Blacklists 250 Pirate Sites For Displaying Gambling Ads

Post Syndicated from Andy original https://torrentfreak.com/russia-blacklists-250-pirate-sites-for-displaying-gambling-ads-180421/

Blocking alleged pirate sites is usually a question of proving that they’re involved in infringement and then applying to the courts for an injunction.

In Europe, the process is becoming easier, largely thanks to an EU ruling that permits blocking on copyright grounds.

As reported over the past several years, Russia is taking its blocking processes very seriously. Copyright holders can now have sites blocked in just a few days if they can show that the sites’ operators are unresponsive to takedown demands.

This week, however, Russian authorities have again shown that copyright infringement doesn’t have to be the only Achilles’ heel of pirate sites.

Back in 2006, online gambling was completely banned in Russia. Three years later in 2009, land-based gambling was also made illegal in all but four specified regions. Then, in 2012, the Russian Supreme Court ruled that ISPs must block access to gambling sites, something they had previously refused to do.

That same year, telecoms watchdog Rozcomnadzor began publishing a list of banned domains and within those appeared some of the biggest names in gambling. Many shut down access to customers located in Russia but others did not. In response, Rozcomnadzor also began targeting sites that simply offered information on gambling.

Fast forward more than six years and Russia is still taking a hard line against gambling operators. However, it now finds itself in a position where the existence of gambling material can also assist the state in its quest to take down pirate sites.

Following a complaint from the Federal Tax Service of Russia, Rozcomnadzor has again added a large number of ‘pirate’ sites to the country’s official blocklist after they advertised gambling-related products and services.

“Rozkomnadzor, at the request of the Federal Tax Service of Russia, added more than 250 pirate online cinemas and torrent trackers to the unified register of banned information, which hosted illegal advertising of online casinos and bookmakers,” the telecoms watchdog reported.

Almost immediately, 200 of the sites were blocked by local ISPs because they failed to remove the advertising when told to do so. The remaining 50 sites still have some breathing space: their bans will be suspended if the offending ads are removed within a timeframe specified by the authorities, which has not yet expired.

“Information on a significant number of pirate resources with illegal advertising was received by Rozcomnadzor from citizens and organizations through a hotline that operates on the site of the Unified Register of Prohibited Information, all of which were sent to the Federal Tax Service for making decisions on restricting access,” the watchdog revealed.

Links between pirate sites and gambling companies have traditionally been close, with advertising for many top-tier brands appearing on portals large and small. However, in recent times the prevalence of gambling ads has diminished, in part due to campaigns conducted in the United States, Europe, and the UK.

For pirate site operators in Russia, the decision to carry gambling ads now comes with the added risk of being blocked. Only time will tell whether any reduction in traffic is considered serious enough to warrant a gambling boycott of their own.

Implement continuous integration and delivery of serverless AWS Glue ETL applications using AWS Developer Tools

Post Syndicated from Prasad Alle original https://aws.amazon.com/blogs/big-data/implement-continuous-integration-and-delivery-of-serverless-aws-glue-etl-applications-using-aws-developer-tools/

AWS Glue is an increasingly popular way to develop serverless ETL (extract, transform, and load) applications for big data and data lake workloads. Organizations that transform their ETL applications to cloud-based, serverless ETL architectures need a seamless, end-to-end continuous integration and continuous delivery (CI/CD) pipeline: from source code, to build, to deployment, to product delivery. Having a good CI/CD pipeline can help your organization discover bugs before they reach production and deliver updates more frequently. It can also help developers write quality code and automate the ETL job release management process, mitigate risk, and more.

AWS Glue is a fully managed data catalog and ETL service. It simplifies and automates the difficult and time-consuming tasks of data discovery, conversion, and job scheduling. AWS Glue crawls your data sources and constructs a data catalog using pre-built classifiers for popular data formats and data types, including CSV, Apache Parquet, JSON, and more.
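
If you want to drive the same crawler workflow from code rather than the console, a minimal Node.js sketch along the following lines starts a crawler and then lists the tables it added to the Data Catalog. The crawler and database names are placeholders, not resources created by this solution.

'use strict';

var AWS = require('aws-sdk');
var glue = new AWS.Glue();

// Start a crawler that populates the Data Catalog (the name is hypothetical)
glue.startCrawler({ Name: 'my-demo-crawler' }, function (err) {
    if (err) return console.log(err, err.stack);
    console.log('Crawler started');
});

// Later, list the tables the crawler discovered in a catalog database (also hypothetical)
glue.getTables({ DatabaseName: 'my_demo_database' }, function (err, data) {
    if (err) return console.log(err, err.stack);
    data.TableList.forEach(function (table) {
        console.log(table.Name);
    });
});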

When you are developing ETL applications using AWS Glue, you might come across some of the following CI/CD challenges:

  • Iterative development with unit tests
  • Continuous integration and build
  • Pushing the ETL pipeline to a test environment
  • Pushing the ETL pipeline to a production environment
  • Testing ETL applications using real data (live test)
  • Exploring and validating data

In this post, I walk you through a solution that implements a CI/CD pipeline for serverless AWS Glue ETL applications supported by AWS Developer Tools (including AWS CodePipeline, AWS CodeCommit, and AWS CodeBuild) and AWS CloudFormation.

Solution overview

The following diagram shows the pipeline workflow:

This solution uses AWS CodePipeline, which lets you orchestrate and automate the test and deploy stages for ETL application source code. The solution consists of a pipeline that contains the following stages:

1.) Source Control: In this stage, the AWS Glue ETL job source code and the AWS CloudFormation template file for deploying the ETL jobs are both committed to version control. I chose to use AWS CodeCommit for version control.

To get the ETL job source code and AWS CloudFormation template, download the gluedemoetl.zip file. This solution is developed based on a previous post, Build a Data Lake Foundation with AWS Glue and Amazon S3.

2.) LiveTest: In this stage, all resources—including AWS Glue crawlers, jobs, S3 buckets, roles, and other resources that are required for the solution—are provisioned, deployed, live tested, and cleaned up.

The LiveTest stage includes the following actions:

  • Deploy: In this action, all the resources that are required for this solution (crawlers, jobs, buckets, roles, and so on) are provisioned and deployed using an AWS CloudFormation template.
  • AutomatedLiveTest: In this action, all the AWS Glue crawlers and jobs are executed and data exploration and validation tests are performed. These validation tests include, but are not limited to, record counts in both raw tables and transformed tables in the data lake and any other business validations. I used AWS CodeBuild for this action.
  • LiveTestApproval: This action is included for the cases in which a pipeline administrator approval is required to deploy/promote the ETL applications to the next stage. The pipeline pauses in this action until an administrator manually approves the release.
  • LiveTestCleanup: In this action, all the LiveTest stage resources, including test crawlers, jobs, roles, and so on, are deleted using the AWS CloudFormation template. This action helps minimize cost by ensuring that the test resources exist only for the duration of the AutomatedLiveTest and LiveTestApproval actions.

3.) DeployToProduction: In this stage, all the resources are deployed using the AWS CloudFormation template to the production environment.

Try it out

This pipeline takes approximately 20 minutes to complete the LiveTest stage (up to the LiveTestApproval action, in which manual approval is required).

To get started with this solution, choose Launch Stack:

This creates the CI/CD pipeline with all of its stages, as described earlier. It performs an initial commit of the sample AWS Glue ETL job source code to trigger the first release change.

In the AWS CloudFormation console, choose Create. After the template finishes creating resources, you see the pipeline name on the stack Outputs tab.
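
If you prefer to script this step, a minimal Node.js sketch along these lines reads the pipeline name from the stack outputs; the stack name used here is a placeholder for whatever you named your stack.

'use strict';

var AWS = require('aws-sdk');
var cloudformation = new AWS.CloudFormation();

// Print the stack outputs; one of them contains the pipeline name
cloudformation.describeStacks({ StackName: 'glue-cicd-demo' }, function (err, data) {
    if (err) return console.log(err, err.stack);
    data.Stacks[0].Outputs.forEach(function (output) {
        console.log(output.OutputKey + ' = ' + output.OutputValue);
    });
});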

After that, open the CodePipeline console and select the newly created pipeline. Initially, your pipeline’s CodeCommit stage shows that the source action failed.

Allow a few minutes for your new pipeline to detect the initial commit applied by the CloudFormation stack creation. As soon as the commit is detected, your pipeline starts. You will see the successful stage completion status as soon as the CodeCommit source stage runs.
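
You can also poll the pipeline status programmatically. The sketch below, which assumes a placeholder pipeline name, prints the latest status of each stage using the CodePipeline GetPipelineState API:

'use strict';

var AWS = require('aws-sdk');
var codepipeline = new AWS.CodePipeline();

// Report the latest status of each stage in the pipeline (the name is hypothetical)
codepipeline.getPipelineState({ name: 'glue-cicd-demo-pipeline' }, function (err, data) {
    if (err) return console.log(err, err.stack);
    data.stageStates.forEach(function (stage) {
        var status = stage.latestExecution ? stage.latestExecution.status : 'Not started';
        console.log(stage.stageName + ': ' + status);
    });
});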

In the CodeCommit console, choose Code in the navigation pane to view the solution files.

Next, you can watch the pipeline move through the Deploy and AutomatedLiveTest actions of the LiveTest stage, until it finally reaches the LiveTestApproval action.

At this point, if you check the AWS CloudFormation console, you can see that a new template has been deployed as part of the LiveTest deploy action.

At this point, make sure that the AWS Glue crawlers and the AWS Glue job ran successfully. Also check whether the corresponding databases and external tables have been created in the AWS Glue Data Catalog. Then validate the data using Amazon Athena, as shown following.

Open the AWS Glue console, and choose Databases in the navigation pane. You will see the following databases in the Data Catalog:

Open the Amazon Athena console, and run the following queries. Verify that the record counts match.

SELECT count(*) FROM "nycitytaxi_gluedemocicdtest"."data";
SELECT count(*) FROM "nytaxiparquet_gluedemocicdtest"."datalake";
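
If you want to automate this comparison rather than eyeballing the results, a sketch along the following lines runs both counts through the Athena API and checks that they match. The S3 output location is a placeholder you would replace with your own bucket; the database and table names come from the queries above.

'use strict';

const AWS = require('aws-sdk');
const athena = new AWS.Athena();

// Run a query, wait for it to complete, and return the single COUNT(*) value
async function countRows(query) {
    const start = await athena.startQueryExecution({
        QueryString: query,
        ResultConfiguration: { OutputLocation: 's3://my-athena-results-bucket/' } // placeholder bucket
    }).promise();

    let state = 'QUEUED';
    while (state === 'QUEUED' || state === 'RUNNING') {
        await new Promise(function (resolve) { setTimeout(resolve, 2000); });
        const execution = await athena.getQueryExecution({ QueryExecutionId: start.QueryExecutionId }).promise();
        state = execution.QueryExecution.Status.State;
    }
    if (state !== 'SUCCEEDED') throw new Error('Query finished in state ' + state);

    const results = await athena.getQueryResults({ QueryExecutionId: start.QueryExecutionId }).promise();
    // Row 0 is the header row; row 1 holds the count
    return Number(results.ResultSet.Rows[1].Data[0].VarCharValue);
}

(async function () {
    const rawCount = await countRows('SELECT count(*) FROM "nycitytaxi_gluedemocicdtest"."data"');
    const lakeCount = await countRows('SELECT count(*) FROM "nytaxiparquet_gluedemocicdtest"."datalake"');
    console.log('raw: ' + rawCount + ', datalake: ' + lakeCount);
    console.log(rawCount === lakeCount ? 'Record counts match' : 'Record counts do NOT match');
})();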

The following shows the raw data:

The following shows the transformed data:

The pipeline pauses at this action until the release is approved. After validating the data, manually approve the revision on the LiveTestApproval action on the CodePipeline console.

Add comments as needed, and choose Approve.

The LiveTestApproval action now appears as Approved on the console.

After the revision is approved, the pipeline proceeds to use the AWS CloudFormation template to destroy the resources that were deployed in the LiveTest deploy action. This helps reduce cost and ensures a clean test environment on every deployment.

Production deployment is the final stage. In this stage, all the resources—AWS Glue crawlers, AWS Glue jobs, Amazon S3 buckets, roles, and so on—are provisioned and deployed to the production environment using the AWS CloudFormation template.

After successfully running the whole pipeline, feel free to experiment with it by changing the source code stored on AWS CodeCommit. For example, if you modify the AWS Glue ETL job to generate an error, it should make the AutomatedLiveTest action fail. Or if you change the AWS CloudFormation template to make its creation fail, it should affect the LiveTest deploy action. The objective of the pipeline is to ensure that all changes deployed to production work as expected.

Conclusion

In this post, you learned how easy it is to implement CI/CD at scale for serverless AWS Glue ETL solutions with AWS Developer Tools like AWS CodePipeline and AWS CodeBuild. Implementing such solutions can help you accelerate ETL development and testing at your organization.

If you have questions or suggestions, please comment below.

Additional Reading

If you found this post useful, be sure to check out Implement Continuous Integration and Delivery of Apache Spark Applications using AWS and Build a Data Lake Foundation with AWS Glue and Amazon S3.

About the Authors

Prasad Alle is a Senior Big Data Consultant with AWS Professional Services. He spends his time leading and building scalable, reliable big data, machine learning, artificial intelligence, and IoT solutions for AWS enterprise and strategic customers. His interests extend to technologies such as advanced edge computing and machine learning at the edge. In his spare time, he enjoys spending time with his family.

Luis Caro is a Big Data Consultant for AWS Professional Services. He works with our customers to provide guidance and technical assistance on big data projects, helping them improve the value of their solutions when using AWS.

RDS for Oracle: Extending Outbound Network Access to use SSL/TLS

Post Syndicated from Surya Nallu original https://aws.amazon.com/blogs/architecture/rds-for-oracle-extending-outbound-network-access-to-use-ssltls/

In December 2016, we launched the Outbound Network Access functionality for Amazon RDS for Oracle, enabling customers to use their RDS for Oracle database instances to communicate with external web endpoints using the utl_http and utl_tcp packages, and to send emails through utl_smtp. We extended the functionality by adding the option of using custom DNS servers, allowing such outbound network access to use any DNS server a customer chooses. These releases enabled HTTP, TCP, and SMTP communication originating from RDS for Oracle instances, but only over non-secure (non-SSL) channels.

To overcome this limitation for SSL connections, we recently published a whitepaper that guides you through the process of creating customized Oracle wallet bundles on your RDS for Oracle instances. By using such wallets, you can extend the Outbound Network Access capability so that external communications happen over secure (SSL/TLS) connections. This opens up new use cases for your RDS for Oracle instances.

With the right set of certificates imported into your RDS for Oracle instances (through Oracle wallets), your database instances can now:

  • Communicate with an HTTPS endpoint: Using utl_http, access a resource such as https://status.aws.amazon.com/robots.txt
  • Download files from Amazon S3 securely: Using a presigned URL from Amazon S3, you can now download any file over SSL
  • Extend Oracle database links to use SSL: Database links between RDS for Oracle instances can now use SSL, as long as the instances have the SSL option installed
  • Send email over SMTPS: You can now integrate with Amazon SES, or any other generic SMTPS provider that supports integration, to send emails from your database instances

These are just a few high-level examples of new use cases that the whitepaper opens up. As a reminder, always ensure that security best practices are in place when making use of Outbound Network Access (these are detailed in the whitepaper).

About the Author

Surya Nallu is a Software Development Engineer on the Amazon RDS for Oracle team.

Welcome Victoria — Sales Development Representative

Post Syndicated from Yev original https://www.backblaze.com/blog/welcome-victoria-sales-development-representative/

Ever since we introduced our Groups feature, Backblaze for Business has been growing at a rapid rate! We’ve been staffing up in order to support the product and the newest addition to the sales team, Victoria, joins us as a Sales Development Representative! Let’s learn a bit more about Victoria, shall we?

What is your Backblaze Title?
Sales Development Representative.

Where are you originally from?
Harrisburg, North Carolina.

What attracted you to Backblaze?
The leaders and family-style culture.

What do you expect to learn while being at Backblaze?
How to sell, sell, sell!

Where else have you worked?
The North Carolina Autism Society, an ophthalmologist’s office, home health care, and another tech startup.

Where did you go to school?
The University of North Carolina Chapel Hill and Duke University’s Fuqua School of Business.

What’s your dream job?
Fighter pilot, professional snowboarder or killer whale trainer.

Favorite place you’ve traveled?
Hawaii and Banff.

Favorite hobby?
Basketball and cars.

Of what achievement are you most proud?
Missionary work and helping patients feel better.

Star Trek or Star Wars?
Neither, but probably Star Wars.

Coke or Pepsi?
Neither, bubble tea.

Favorite food?
Snow crab legs.

Why do you like certain things?
Because God made me that way.

Anything else you’d like to tell us?
I’m a germophobe, drink a lot of water and unfortunately, am introverted.

Being on the phones all day is a good way to build up those extroversion skills! Welcome to the team and we hope you enjoy learning how to sell, sell, sell!

Securing Elections

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/04/securing_electi_1.html

Elections serve two purposes. The first, and obvious, purpose is to accurately choose the winner. But the second is equally important: to convince the loser. To the extent that an election system is not transparently and auditably accurate, it fails in that second purpose. Our election systems are failing, and we need to fix them.

Today, we conduct our elections on computers. Our registration lists are in computer databases. We vote on computerized voting machines. And our tabulation and reporting is done on computers. We do this for a lot of good reasons, but a side effect is that elections now have all the insecurities inherent in computers. The only way to reliably protect elections from both malice and accident is to use something that is not hackable or unreliable at scale; the best way to do that is to back up as much of the system as possible with paper.

Recently, there have been two graphic demonstrations of how bad our computerized voting system is. In 2007, the states of California and Ohio conducted audits of their electronic voting machines. Expert review teams found exploitable vulnerabilities in almost every component they examined. The researchers were able to undetectably alter vote tallies, erase audit logs, and load malware on to the systems. Some of their attacks could be implemented by a single individual with no greater access than a normal poll worker; others could be done remotely.

Last year, the Defcon hackers’ conference sponsored a Voting Village. Organizers collected 25 pieces of voting equipment, including voting machines and electronic poll books. By the end of the weekend, conference attendees had found ways to compromise every piece of test equipment: to load malicious software, compromise vote tallies and audit logs, or cause equipment to fail.

It’s important to understand that these were not well-funded nation-state attackers. These were not even academics who had been studying the problem for weeks. These were bored hackers, with no experience with voting machines, playing around between parties one weekend.

It shouldn’t be any surprise that voting equipment, including voting machines, voter registration databases, and vote tabulation systems, are that hackable. They’re computers — often ancient computers running operating systems no longer supported by the manufacturers — and they don’t have any magical security technology that the rest of the industry isn’t privy to. If anything, they’re less secure than the computers we generally use, because their manufacturers hide any flaws behind the proprietary nature of their equipment.

We’re not just worried about altering the vote. Sometimes causing widespread failures, or even just sowing mistrust in the system, is enough. And an election whose results are not trusted or believed is a failed election.

Voting systems have another requirement that makes security even harder to achieve: the requirement for a secret ballot. Because we have to securely separate the election-roll system that determines who can vote from the system that collects and tabulates the votes, we can’t use the security systems available to banking and other high-value applications.

We can securely bank online, but can’t securely vote online. If we could do away with anonymity — if everyone could check that their vote was counted correctly — then it would be easy to secure the vote. But that would lead to other problems. Before the US had the secret ballot, voter coercion and vote-buying were widespread.

We can’t, so we need to accept that our voting systems are insecure. We need an election system that is resilient to the threats. And for many parts of the system, that means paper.

Let’s start with the voter rolls. We know they’ve already been targeted. In 2016, someone changed the party affiliation of hundreds of voters before the Republican primary. That’s just one possibility. A well-executed attack that deletes, for example, one in five voters at random — or changes their addresses — would cause chaos on election day.

Yes, we need to shore up the security of these systems. We need better computer, network, and database security for the various state voter organizations. We also need to better secure the voter registration websites, with better design and better internet security. We need better security for the companies that build and sell all this equipment.

Multiple, unchangeable backups are essential. A record of every addition, deletion, and change needs to be stored on a separate system, on write-only media like a DVD. Copies of that DVD, or — even better — a paper printout of the voter rolls, should be available at every polling place on election day. We need to be ready for anything.

Next, the voting machines themselves. Security researchers agree that the gold standard is a voter-verified paper ballot. The easiest (and cheapest) way to achieve this is through optical-scan voting. Voters mark paper ballots by hand; they are fed into a machine and counted automatically. That paper ballot is saved, and serves as a final true record in a recount in case of problems. Touch-screen machines that print a paper ballot to drop in a ballot box can also work for voters with disabilities, as long as the ballot can be easily read and verified by the voter.

Finally, the tabulation and reporting systems. Here again we need more security in the process, but we must always use those paper ballots as checks on the computers. A manual, post-election, risk-limiting audit varies the number of ballots examined according to the margin of victory. Conducting this audit after every election, before the results are certified, gives us confidence that the election outcome is correct, even if the voting machines and tabulation computers have been tampered with. Additionally, we need better coordination and communications when incidents occur.

It’s vital to agree on these procedures and policies before an election. Before the fact, when anyone can win and no one knows whose votes might be changed, it’s easy to agree on strong security. But after the vote, someone is the presumptive winner — and then everything changes. Half of the country wants the result to stand, and half wants it reversed. At that point, it’s too late to agree on anything.

The politicians running in the election shouldn’t have to argue their challenges in court. Getting elections right is in the interest of all citizens. Many countries have independent election commissions that are charged with conducting elections and ensuring their security. We don’t do that in the US.

Instead, we have representatives from each of our two parties in the room, keeping an eye on each other. That provided acceptable security against 20th-century threats, but is totally inadequate to secure our elections in the 21st century. And the belief that the diversity of voting systems in the US provides a measure of security is a dangerous myth, because a few districts can be decisive and there are so few voting-machine vendors.

We can do better. In 2017, the Department of Homeland Security declared elections to be critical infrastructure, allowing the department to focus on securing them. On 23 March, Congress allocated $380m to states to upgrade election security.

These are good starts, but don’t go nearly far enough. The constitution delegates elections to the states but allows Congress to “make or alter such Regulations”. In 1845, Congress set a nationwide election day. Today, we need it to set uniform and strict election standards.

This essay originally appeared in the Guardian.

Hackspace magazine 6: Paper Engineering

Post Syndicated from Andrew Gregory original https://www.raspberrypi.org/blog/hackspace-magazine-6/

HackSpace magazine is back with our brand-new issue 6, available for you on shop shelves, in your inbox, and on our website right now.

Inside Hackspace magazine 6

Paper is probably the first thing you ever used for making, and for good reason: in no other medium can you iterate through 20 designs at the cost of only a few pennies. We’ve roped in Rob Ives to show us how to make a barking paper dog with moveable parts and a cam mechanism. Even better, the magazine includes this free paper automaton for you to make yourself. That’s right: free!

At the other end of the scale, there’s the forge, where heat, light, and noise combine to create immutable steel. We speak to Alec Steele, YouTuber, blacksmith, and philosopher, about his amazingly beautiful Damascus steel creations, and about why there’s no difference between grinding a knife and blowing holes in a mountain to build a road through it.

Do it yourself

You’ve heard of reading glasses — how about glasses that read for you? Using a camera, optical character recognition software, and a text-to-speech engine (and of course a Raspberry Pi to hold it all together), reader Andrew Lewis has hacked together his own system to help deal with age-related macular degeneration.

It’s the definition of hacking: here’s a problem, there’s no solution in the shops, so you go and build it yourself!

Radio

60 years ago, the cutting edge of home hacking was the transistor radio. Before the internet was dreamt of, the transistor radio made the world smaller and brought people together. Nowadays, the components you need to build a radio are cheap and easily available, so if you’re in any way electronically inclined, building a radio is an ideal excuse to dust off your soldering iron.

Tutorials

If you’re a 12-month subscriber (if you’re not, you really should be), you’ve no doubt been thinking of all sorts of things to do with the Adafruit Circuit Playground Express we gave you for free. How about a sewable circuit for a canvas bag? Use the accelerometer to detect patterns of movement — walking, for example — and flash a series of lights in response. It’s clever, fun, and an easy way to add some programmable fun to your shopping trips.


We’re also making gin, hacking a children’s toy car to unlock more features, and getting started with robot sumo to fill the void left by the cancellation of Robot Wars.

All this, plus an 11-metre tall mechanical miner, in HackSpace magazine issue 6 — subscribe here from just £4 an issue or get the PDF version for free. You can also find HackSpace magazine in WHSmith, Tesco, Sainsbury’s, and independent newsagents in the UK. If you live in the US, check out your local Barnes & Noble, Fry’s, or Micro Center next week. We’re also shipping to stores in Australia, Hong Kong, Canada, Singapore, Belgium, and Brazil, so be sure to ask your local newsagent whether they’ll be getting HackSpace magazine.

The End of Google Cloud Messaging, and What it Means for Your Apps

Post Syndicated from Zach Barbitta original https://aws.amazon.com/blogs/messaging-and-targeting/the-end-of-google-cloud-messaging-and-what-it-means-for-your-apps/

On April 10, 2018, Google announced the deprecation of its Google Cloud Messaging (GCM) platform. Specifically, the GCM server and client APIs are deprecated and will be removed as soon as April 11, 2019.  What does this mean for you and your applications that use Amazon Simple Notification Service (Amazon SNS) or Amazon Pinpoint?

First, nothing will break now or after April 11, 2019. GCM device tokens are completely interchangeable with the newer Firebase Cloud Messaging (FCM) device tokens. If you have existing GCM tokens, you’ll still be able to use them to send notifications. This statement is also true for GCM tokens that you generate in the future.

On the back end, we’ve already migrated Amazon SNS and Amazon Pinpoint to the server endpoint for FCM (https://fcm.googleapis.com/fcm/send). As a developer, you don’t need to make any changes as a result of this deprecation.

We created the following mini-FAQ to address some of the questions you may have as a developer who uses Amazon SNS or Amazon Pinpoint.

If I migrate to FCM from GCM, can I still use Amazon Pinpoint and Amazon SNS?

Yes. Your ability to connect to your applications and send messages through both Amazon SNS and Amazon Pinpoint doesn’t change. We’ll update the documentation for Amazon SNS and Amazon Pinpoint soon to reflect these changes.

If I don’t migrate to FCM from GCM, can I still use Amazon Pinpoint and Amazon SNS?

Yes. If you do nothing, your existing credentials and GCM tokens will still be valid. All applications that you previously set up to use Amazon Pinpoint or Amazon SNS will continue to work normally. When you call the API for Amazon Pinpoint or Amazon SNS, we initiate a request to the FCM server endpoint directly.
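
For example, publishing a push notification to an existing platform endpoint through Amazon SNS looks the same whether the underlying device token came from GCM or FCM. The endpoint ARN below is a placeholder; note that SNS continues to use "GCM" as the payload key for this platform.

'use strict';

var AWS = require('aws-sdk');
var sns = new AWS.SNS();

var params = {
    TargetArn: 'arn:aws:sns:us-east-1:123456789012:endpoint/GCM/my-app/example-endpoint-id', // placeholder
    MessageStructure: 'json',
    Message: JSON.stringify({
        default: 'Hello from Amazon SNS',
        GCM: JSON.stringify({
            notification: { title: 'Hello', body: 'Delivered through the FCM endpoint' }
        })
    })
};

sns.publish(params, function (err, data) {
    if (err) return console.log(err, err.stack);
    console.log('Message ID: ' + data.MessageId);
});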

What are the differences between Amazon SNS and Amazon Pinpoint?

Amazon SNS makes it easy for developers to set up, operate, and send notifications at scale, affordably and with a high degree of flexibility. Amazon Pinpoint has many of the same messaging capabilities as Amazon SNS, with the same levels of scalability and flexibility.

The main difference between the two services is that Amazon Pinpoint provides both transactional and targeted messaging capabilities. By using Amazon Pinpoint, marketers and developers can not only send transactional messages to their customers, but can also segment their audiences, create campaigns, and analyze both application and message metrics.
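
As a rough sketch of what a transactional push message looks like with Amazon Pinpoint, the snippet below sends a direct message to a single GCM/FCM device token through the SendMessages API. The application ID and device token are placeholders.

'use strict';

var AWS = require('aws-sdk');
var pinpoint = new AWS.Pinpoint();

var params = {
    ApplicationId: '1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d',        // placeholder Pinpoint project ID
    MessageRequest: {
        Addresses: {
            'example-device-token': { ChannelType: 'GCM' }     // GCM and FCM tokens are interchangeable here
        },
        MessageConfiguration: {
            GCMMessage: {
                Title: 'Hello',
                Body: 'Sent through Amazon Pinpoint to an FCM endpoint'
            }
        }
    }
};

pinpoint.sendMessages(params, function (err, data) {
    if (err) return console.log(err, err.stack);
    console.log(JSON.stringify(data.MessageResponse.Result, null, 2));
});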

How do I migrate from GCM to FCM?

For more information about migrating from GCM to FCM, see Migrate a GCM Client App for Android to Firebase Cloud Messaging on the Google Developers site.

If you have any questions, please post them in the comments section, or in the Amazon Pinpoint or Amazon SNS forums.

Implementing safe AWS Lambda deployments with AWS CodeDeploy

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/implementing-safe-aws-lambda-deployments-with-aws-codedeploy/

This post courtesy of George Mao, AWS Senior Serverless Specialist – Solutions Architect

AWS Lambda and AWS CodeDeploy recently made it possible to automatically shift incoming traffic between two function versions based on a preconfigured rollout strategy. This new feature allows you to gradually shift traffic to the new function. If there are any issues with the new code, you can quickly rollback and control the impact to your application.

Previously, you had to manually move 100% of traffic from the old version to the new version. Now, you can have CodeDeploy automatically execute pre- or post-deployment tests and automate a gradual rollout strategy. Traffic shifting is built right into the AWS Serverless Application Model (SAM), making it easy to define and deploy your traffic shifting capabilities. SAM is an extension of AWS CloudFormation that provides a simplified way of defining serverless applications.

In this post, I show you how to use SAM, CloudFormation, and CodeDeploy to accomplish an automated rollout strategy for safe Lambda deployments.

Scenario

For this walkthrough, you write a Lambda application that returns a count of the S3 buckets that you own. You deploy it and use it in production. Later on, you receive requirements that tell you that you need to change your Lambda application to count only buckets that begin with the letter “a”.

Before you make the change, you need to be sure that your new Lambda application works as expected. If it does have issues, you want to minimize the number of impacted users and roll back easily. To accomplish this, you create a deployment process that publishes the new Lambda function, but does not send any traffic to it. You use CodeDeploy to execute a PreTraffic test to ensure that your new function works as expected. After the test succeeds, CodeDeploy automatically shifts traffic gradually to the new version of the Lambda function.

Your Lambda function is exposed as a REST service via an Amazon API Gateway deployment. This makes it easy to test and integrate.

Prerequisites

To execute the SAM and CloudFormation deployment, you must have the following IAM permissions:

  • cloudformation:*
  • lambda:*
  • codedeploy:*
  • iam:create*

You may use the AWS SAM Local CLI or the AWS CLI to package and deploy your Lambda application. If you choose to use SAM Local, be sure to install it onto your system. For more information, see AWS SAM Local Installation.

All of the code used in this post can be found in this GitHub repository: https://github.com/aws-samples/aws-safe-lambda-deployments.

Walkthrough

For this post, use SAM to define your resources because it comes with built-in CodeDeploy support for safe Lambda deployments.  The deployment is handled and automated by CloudFormation.

SAM allows you to define your Serverless applications in a simple and concise fashion, because it automatically creates all necessary resources behind the scenes. For example, if you do not define an execution role for a Lambda function, SAM automatically creates one. SAM also creates the CodeDeploy application necessary to drive the traffic shifting, as well as the IAM service role that CodeDeploy uses to execute all actions.

Create a SAM template

To get started, write your SAM template and call it template.yaml.

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: An example SAM template for Lambda Safe Deployments.

Resources:

  returnS3Buckets:
    Type: AWS::Serverless::Function
    Properties:
      Handler: returnS3Buckets.handler
      Runtime: nodejs6.10
      AutoPublishAlias: live
      Policies:
        - Version: "2012-10-17"
          Statement: 
          - Effect: "Allow"
            Action: 
              - "s3:ListAllMyBuckets"
            Resource: '*'
      DeploymentPreference:
          Type: Linear10PercentEvery1Minute
          Hooks:
            PreTraffic: !Ref preTrafficHook
      Events:
        Api:
          Type: Api
          Properties:
            Path: /test
            Method: get

  preTrafficHook:
    Type: AWS::Serverless::Function
    Properties:
      Handler: preTrafficHook.handler
      Policies:
        - Version: "2012-10-17"
          Statement: 
          - Effect: "Allow"
            Action: 
              - "codedeploy:PutLifecycleEventHookExecutionStatus"
            Resource:
              !Sub 'arn:aws:codedeploy:${AWS::Region}:${AWS::AccountId}:deploymentgroup:${ServerlessDeploymentApplication}/*'
        - Version: "2012-10-17"
          Statement: 
          - Effect: "Allow"
            Action: 
              - "lambda:InvokeFunction"
            Resource: !Ref returnS3Buckets.Version
      Runtime: nodejs6.10
      FunctionName: 'CodeDeployHook_preTrafficHook'
      DeploymentPreference:
        Enabled: false
      Timeout: 5
      Environment:
        Variables:
          NewVersion: !Ref returnS3Buckets.Version

This template creates two functions:

  • returnS3Buckets
  • preTrafficHook

The returnS3Buckets function is where your application logic lives. It’s a simple piece of code that uses the AWS SDK for JavaScript in Node.js to call the Amazon S3 listBuckets API action and return the number of buckets.

'use strict';

var AWS = require('aws-sdk');
var s3 = new AWS.S3();

exports.handler = (event, context, callback) => {
	console.log("I am here! " + context.functionName  +  ":"  +  context.functionVersion);

	s3.listBuckets(function (err, data){
		if(err){
			console.log(err, err.stack);
			callback(null, {
				statusCode: 500,
				body: "Failed!"
			});
		}
		else{
			var allBuckets = data.Buckets;

			console.log("Total buckets: " + allBuckets.length);
			callback(null, {
				statusCode: 200,
				body: allBuckets.length
			});
		}
	});	
}

Review the key parts of the SAM template that defines returnS3Buckets:

  • The AutoPublishAlias attribute instructs SAM to automatically publish a new version of the Lambda function for each new deployment and link it to the live alias.
  • The Policies attribute specifies additional policy statements that SAM adds onto the automatically generated IAM role for this function. The first statement provides the function with permission to call listBuckets.
  • The DeploymentPreference attribute configures the type of rollout pattern to use. In this case, you are shifting traffic in a linear fashion, moving 10% of traffic every minute to the new version. For more information about supported patterns, see Serverless Application Model: Traffic Shifting Configurations.
  • The Hooks attribute specifies that you want to execute the preTrafficHook Lambda function before CodeDeploy automatically begins shifting traffic. This function should perform validation testing on the newly deployed Lambda version. This function invokes the new Lambda function and checks the results. If you’re satisfied with the tests, instruct CodeDeploy to proceed with the rollout via an API call to: codedeploy.putLifecycleEventHookExecutionStatus.
  • The Events attribute defines an API-based event source that can trigger this function. It accepts requests on the /test path using an HTTP GET method.
'use strict';

const AWS = require('aws-sdk');
const codedeploy = new AWS.CodeDeploy({apiVersion: '2014-10-06'});
var lambda = new AWS.Lambda();

exports.handler = (event, context, callback) => {

	console.log("Entering PreTraffic Hook!");
	
	// Read the DeploymentId & LifecycleEventHookExecutionId from the event payload
    var deploymentId = event.DeploymentId;
	var lifecycleEventHookExecutionId = event.LifecycleEventHookExecutionId;

	var functionToTest = process.env.NewVersion;
	console.log("Testing new function version: " + functionToTest);

	// Perform validation of the newly deployed Lambda version
	var lambdaParams = {
		FunctionName: functionToTest,
		InvocationType: "RequestResponse"
	};

	var lambdaResult = "Failed";
	lambda.invoke(lambdaParams, function(err, data) {
		if (err){	// an error occurred
			console.log(err, err.stack);
			lambdaResult = "Failed";
		}
		else{	// successful response
			var result = JSON.parse(data.Payload);
			console.log("Result: " +  JSON.stringify(result));

			// Check the response for valid results
			// The response will be a JSON payload with statusCode and body properties. ie:
			// {
			//		"statusCode": 200,
			//		"body": 51
			// }
			if(result.body == 9){	
				lambdaResult = "Succeeded";
				console.log ("Validation testing succeeded!");
			}
			else{
				lambdaResult = "Failed";
				console.log ("Validation testing failed!");
			}

			// Complete the PreTraffic Hook by sending CodeDeploy the validation status
			var params = {
				deploymentId: deploymentId,
				lifecycleEventHookExecutionId: lifecycleEventHookExecutionId,
				status: lambdaResult // status can be 'Succeeded' or 'Failed'
			};
			
			// Pass AWS CodeDeploy the prepared validation test results.
			codedeploy.putLifecycleEventHookExecutionStatus(params, function(err, data) {
				if (err) {
					// Validation failed.
					console.log('CodeDeploy Status update failed');
					console.log(err, err.stack);
					callback("CodeDeploy Status update failed");
				} else {
					// Validation succeeded.
					console.log('Codedeploy status updated successfully');
					callback(null, 'Codedeploy status updated successfully');
				}
			});
		}  
	});
}

The hook is hardcoded to check that the number of S3 buckets returned is 9.

Review the key parts of the SAM template that defines preTrafficHook:

  • The Policies attribute specifies additional policy statements that SAM adds onto the automatically generated IAM role for this function. The first statement provides permissions to call the CodeDeploy PutLifecycleEventHookExecutionStatus API action. The second statement provides permissions to invoke the specific version of the returnS3Buckets function to test.
  • This function has traffic shifting disabled by setting the Enabled property of its DeploymentPreference to false.
  • The FunctionName attribute explicitly tells CloudFormation what to name the function. Otherwise, CloudFormation creates the function with the default naming convention: [stackName]-[FunctionName]-[uniqueID].  Name the function with the “CodeDeployHook_” prefix because the CodeDeployServiceRole role only allows InvokeFunction on functions named with that prefix.
  • Set the Timeout attribute to allow enough time to complete your validation tests.
  • Use an environment variable to inject the ARN of the newest deployed version of the returnS3Buckets function. The ARN allows the function to know the specific version to invoke and perform validation testing on.

Deploy the function

Your SAM template is all set and the code is written—you’re ready to deploy the function for the first time. Here’s how to do it via the SAM CLI. Replace “sam” with “aws cloudformation” to use the AWS CLI instead.

First, package the function. This command returns a CloudFormation importable file, packaged.yaml.

sam package --template-file template.yaml --s3-bucket mybucket --output-template-file packaged.yaml

Now deploy everything:

sam deploy --template-file packaged.yaml --stack-name mySafeDeployStack --capabilities CAPABILITY_IAM

At this point, both Lambda functions have been deployed within the CloudFormation stack mySafeDeployStack. The returnS3Buckets function has been deployed as version 1:

SAM automatically created a few things, including the CodeDeploy application, with the deployment pattern that you specified (Linear10PercentEvery1Minute). There is currently one deployment group, with no action, because no deployments have occurred. SAM also created the IAM service role that this CodeDeploy application uses:

There is a single managed policy attached to this role, which allows CodeDeploy to invoke any Lambda function that begins with “CodeDeployHook_”.

An API has been set up called safeDeployStack. It targets your Lambda function with the /test resource using the GET method. When you test the endpoint, API Gateway executes the returnS3Buckets function and it returns the number of S3 buckets that you own. In this case, it’s 51.

Publish a new Lambda function version

Now implement the requirements change, which is to make returnS3Buckets count only buckets that begin with the letter “a”. The code now looks like the following (see returnS3BucketsNew.js in GitHub):

'use strict';

var AWS = require('aws-sdk');
var s3 = new AWS.S3();

exports.handler = (event, context, callback) => {
	console.log("I am here! " + context.functionName  +  ":"  +  context.functionVersion);

	s3.listBuckets(function (err, data){
		if(err){
			console.log(err, err.stack);
			callback(null, {
				statusCode: 500,
				body: "Failed!"
			});
		}
		else{
			var allBuckets = data.Buckets;

			console.log("Total buckets: " + allBuckets.length);
			//callback(null, allBuckets.length);

			//  New Code begins here
			var counter=0;
			for(var i  in allBuckets){
				if(allBuckets[i].Name[0] === "a")
					counter++;
			}
			console.log("Total buckets starting with a: " + counter);

			callback(null, {
				statusCode: 200,
				body: counter
			});
			
		}
	});	
}

Repackage and redeploy with the same two commands as earlier:

sam package --template-file template.yaml --s3-bucket mybucket --output-template-file packaged.yaml
	
sam deploy --template-file packaged.yaml --stack-name mySafeDeployStack --capabilities CAPABILITY_IAM

CloudFormation understands that this is a stack update instead of an entirely new stack. You can see that reflected in the CloudFormation console:

During the update, CloudFormation deploys the new Lambda function as version 2 and adds it to the “live” alias. There is no traffic routing there yet. CodeDeploy now takes over to begin the safe deployment process.

The first thing CodeDeploy does is invoke the preTrafficHook function. Verify that this happened by reviewing the Lambda logs and metrics:

The function should progress successfully, invoke Version 2 of returnS3Buckets, and finally invoke the CodeDeploy API with a success code. After this occurs, CodeDeploy begins the predefined rollout strategy. Open the CodeDeploy console to review the deployment progress (Linear10PercentEvery1Minute):

Verify the traffic shift

During the deployment, verify that the traffic shift has started to occur by running the test periodically. As the deployment shifts toward the new version, a larger percentage of the responses return 9 instead of 51: the count of buckets whose names begin with “a” and the total bucket count, respectively.

A minute later, you see 10% more traffic shifting to the new version. The whole process takes 10 minutes to complete. After completion, open the Lambda console and verify that the “live” alias now points to version 2:

After 10 minutes, the deployment is complete and CodeDeploy signals success to CloudFormation and completes the stack update.

Check the results

If you invoke the function alias manually, you see the results of the new implementation.

aws lambda invoke --function-name [lambda arn to live alias] out.txt

You can also execute the prod stage of your API and verify the results by issuing an HTTP GET to the invoke URL:
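
A minimal sketch of that check in Node.js, assuming a placeholder invoke URL for the prod stage, looks like this:

'use strict';

const https = require('https');

// Placeholder invoke URL; use the one API Gateway generated for your deployment
const invokeUrl = 'https://abc123example.execute-api.us-east-1.amazonaws.com/Prod/test';

https.get(invokeUrl, function (res) {
    let body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
        console.log('Status ' + res.statusCode + ', bucket count: ' + body);
    });
}).on('error', function (err) {
    console.log(err);
});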

Summary

This post has shown you how you can safely automate your Lambda deployments using the Lambda traffic shifting feature. You used the Serverless Application Model (SAM) to define your Lambda functions and configured CodeDeploy to manage your deployment patterns. Finally, you used CloudFormation to automate the deployment and updates to your function and PreTraffic hook.

Now that you know all about this new feature, you’re ready to begin automating Lambda deployments with confidence that things will work as designed. I look forward to hearing about what you’ve built with the AWS Serverless Platform.

Confused About the Hybrid Cloud? You’re Not Alone

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/confused-about-the-hybrid-cloud-youre-not-alone/

Hybrid Cloud. What is it?

Do you have a clear understanding of the hybrid cloud? If you don’t, it’s not surprising.

Hybrid cloud has been applied to a wider and more varied range of IT solutions than almost any other recent data management term. About the only thing that’s clear about the hybrid cloud is that the term wasn’t invented by customers, but by vendors who wanted to hawk whatever solution du jour they happened to be pushing.

Let’s be honest. We’re in an industry that loves hype. We can’t resist grafting hyper, multi, ultra, super, and other prefixes onto the beginnings of words to entice customers with something new and shiny. The alphabet soup of cloud-related terms can include various options for where the cloud is located (on-premises, off-premises), whether the resources are private or shared in some degree (private, community, public), what type of services are offered (storage, computing), and what type of orchestrating software is used to manage the workflow and the resources. With so many moving parts, it’s no wonder potential users are confused.

Let’s take a step back, try to clear up the misconceptions, and come up with a basic understanding of what the hybrid cloud is. To be clear, this is our viewpoint. Others are free to do what they like, so bear that in mind.

So, What is the Hybrid Cloud?

To get beyond the hype, let’s start with Forrester Research‘s idea of the hybrid cloud: “One or more public clouds connected to something in my data center. That thing could be a private cloud; that thing could just be traditional data center infrastructure.”

To put it simply, a hybrid cloud is a mash-up of on-premises and off-premises IT resources.

To expand on that a bit, we can say that the hybrid cloud refers to a cloud environment made up of a mixture of on-premises private cloud[1] resources combined with third-party public cloud resources that use some kind of orchestration[2] between them. The advantage of the hybrid cloud model is that it allows workloads and data to move between private and public clouds in a flexible way as demands, needs, and costs change, giving businesses greater flexibility and more options for data deployment and use.

In other words, if you have some IT resources in-house that you are replicating or augmenting with an external vendor, congrats, you have a hybrid cloud!

Private Cloud vs. Public Cloud

The cloud is really just a collection of purpose built servers. In a private cloud, the servers are dedicated to a single tenant or a group of related tenants. In a public cloud, the servers are shared between multiple unrelated tenants (customers). A public cloud is off-site, while a private cloud can be on-site or off-site — or on-prem or off-prem.

As an example, let’s look at a hybrid cloud meant for data storage, a hybrid data cloud. A company might set up a rule that says all accounting files that have not been touched in the last year are automatically moved off-prem to cloud storage to save cost and reduce the amount of storage needed on-site. The files are still available; they are just no longer stored on your local systems. The rules can be defined to fit an organization’s workflow and data retention policies.
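
That kind of placement rule is easy to express in code. The following Node.js sketch is purely illustrative and not tied to any particular storage product: it decides where a file should live based on how long ago it was last touched, using the one-year threshold from the example above.

'use strict';

// Illustrative hybrid-cloud placement rule: anything untouched for more than a year
// moves from on-premises storage to public cloud storage.
const ONE_YEAR_MS = 365 * 24 * 60 * 60 * 1000;

function storageTier(file, now) {
    const idleTime = (now || Date.now()) - file.lastAccessed.getTime();
    return idleTime > ONE_YEAR_MS ? 'public cloud' : 'on-premises';
}

// Example usage with two hypothetical accounting files
const files = [
    { name: 'invoices-2015.xlsx', lastAccessed: new Date('2016-01-10') },
    { name: 'payroll-current.xlsx', lastAccessed: new Date() }
];

files.forEach(function (f) {
    console.log(f.name + ' -> ' + storageTier(f));
});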

The hybrid cloud concept also contains cloud computing. For example, at the end of the quarter, order processing application instances can be spun up off-premises in a hybrid computing cloud as needed to add to on-premises capacity.

Hybrid Cloud Benefits

If we accept that the hybrid cloud combines the best elements of private and public clouds, then the benefits of hybrid cloud solutions are clear, and we can identify the primary two benefits that result from the blending of private and public clouds.

Diagram of the Components of the Hybrid Cloud

Benefit 1: Flexibility and Scalability

Undoubtedly, the primary advantage of the hybrid cloud is its flexibility. It takes time and money to manage in-house IT infrastructure, and adding capacity requires advance planning.

The cloud is ready and able to provide IT resources whenever needed on short notice. The term cloud bursting refers to the on-demand and temporary use of the public cloud when demand exceeds resources available in the private cloud. For example, some businesses experience seasonal spikes that can put an extra burden on private clouds. These spikes can be taken up by a public cloud. Demand also can vary with geographic location, events, or other variables. The public cloud provides the elasticity to deal with these and other anticipated and unanticipated IT loads. The alternative would be fixed cost investments in on-premises IT resources that might not be efficiently utilized.

For a data storage user, the on-premises private cloud storage provides, among other benefits, the highest speed access. For data that is not frequently accessed, or not needed with the absolute lowest levels of latency, it makes sense for the organization to move it to a location that is secure, but less expensive. The data is still readily available, and the public cloud provides a better platform for sharing the data with specific clients, users, or with the general public.

Benefit 2: Cost Savings

The public cloud component of the hybrid cloud provides cost-effective IT resources without incurring capital expenses and labor costs. IT professionals can determine the best configuration, service provider, and location for each service, thereby cutting costs by matching the resource with the task best suited to it. Services can be easily scaled, redeployed, or reduced when necessary, saving costs through increased efficiency and avoiding unnecessary expenses.

Comparing Private vs Hybrid Cloud Storage Costs

To get an idea of the difference in storage costs between a purely on-premises solution and one that uses a hybrid of private and public storage, we’ll present two scenarios. For each scenario we’ll use data storage amounts of 100 terabytes, 1 petabyte, and 2 petabytes. Each table uses the same format; all we’ve done is change how the data is distributed between the private (on-premises) cloud and the public (off-premises) cloud. We are using the costs for our own B2 Cloud Storage in this example. The math can be adapted for any set of numbers you wish to use.

Scenario 1: 100% of data in on-premises storage

Total data stored                         100 TB       1,000 TB      2,000 TB
Data stored on-premises (100%)            100 TB       1,000 TB      2,000 TB
Monthly on-premises cost
  Low ($12/TB/month)                      $1,200       $12,000       $24,000
  High ($20/TB/month)                     $2,000       $20,000       $40,000

Scenario 2: 20% of data on-premises with 80% in public cloud storage (B2)

Total data stored                         100 TB       1,000 TB      2,000 TB
Data stored on-premises (20%)             20 TB        200 TB        400 TB
Data stored in cloud (80%)                80 TB        800 TB        1,600 TB
Monthly on-premises cost
  Low ($12/TB/month)                      $240         $2,400        $4,800
  High ($20/TB/month)                     $400         $4,000        $8,000
Monthly public cloud cost
  Low ($5/TB/month, B2)                   $400         $4,000        $8,000
  High ($20/TB/month)                     $1,600       $16,000       $32,000
Total monthly cost (on-premises + cloud)
  Low                                     $640         $6,400        $12,800
  High                                    $2,000       $20,000       $40,000

As can be seen in the numbers above, using a hybrid cloud solution and storing 80% of the data in the cloud with a provider such as Backblaze B2 can result in significant savings over storing only on-premises. For other cost scenarios, see the B2 Cost Calculator.
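
The arithmetic behind these tables is simple enough to check in a few lines of code. The sketch below uses the same per-terabyte rates as above ($12 to $20 per TB per month on-premises, $5 per TB per month for B2) and the same 20/80 split between on-premises and cloud storage:

'use strict';

// Monthly storage cost for an all-on-premises deployment versus a 20/80 hybrid split,
// using the same per-terabyte rates as the tables above.
function monthlyCosts(totalTB) {
    const onPremLow = 12, onPremHigh = 20, b2 = 5;           // $/TB/month
    const onPremTB = totalTB * 0.2, cloudTB = totalTB * 0.8;

    return {
        allOnPremLow:  totalTB * onPremLow,
        allOnPremHigh: totalTB * onPremHigh,
        hybridLow:     onPremTB * onPremLow  + cloudTB * b2,
        hybridHigh:    onPremTB * onPremHigh + cloudTB * onPremHigh
    };
}

[100, 1000, 2000].forEach(function (tb) {
    console.log(tb + ' TB:', monthlyCosts(tb));
});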

When Hybrid Might Not Always Be the Right Fit

There are circumstances where the hybrid cloud might not be the best solution. Smaller organizations operating on a tight IT budget might best be served by a purely public cloud solution. The cost of setting up and running private servers is substantial.

An application that requires the highest possible speed might not be suitable for hybrid, depending on the specific cloud implementation. While latency is a factor in data storage for some users, it is less of a factor for uploading and downloading data than it is for organizations using the hybrid cloud for computing. Because Backblaze recognized the importance of speed and low latency for customers wishing to use computing on data stored in B2, we directly connected our data centers with those of our computing partners, ensuring that latency would not be an issue even for a hybrid cloud computing solution.

It is essential to have a good understanding of workloads and their essential characteristics in order to make the hybrid cloud work well for you. Each application needs to be examined for the right mix of private cloud, public cloud, and traditional IT resources that fit the particular workload in order to benefit most from a hybrid cloud architecture.

The Hybrid Cloud Can Be a Win-Win Solution

From the high altitude perspective, any solution that enables an organization to respond in a flexible manner to IT demands is a win. Avoiding big upfront capital expenses for in-house IT infrastructure will appeal to the CFO. Being able to quickly spin up IT resources as they’re needed will appeal to the CTO and VP of Operations.

Should You Go Hybrid?

We’ve arrived at the bottom line and the question is, should you or your organization embrace hybrid cloud infrastructures?

According to 451 Research, by 2019, 69% of companies will operate in hybrid cloud environments, and 60% of workloads will be running in some form of hosted cloud service (up from 45% in 2017). That indicates that the benefits of the hybrid cloud appeal to a broad range of companies.

In Two Years, More Than Half of Workloads Will Run in Cloud

Clearly, depending on an organization’s needs, there are advantages to a hybrid solution. While it might have been possible to dismiss the hybrid cloud in the early days of the cloud as nothing more than a buzzword, that’s no longer true. The hybrid cloud has evolved beyond the marketing hype to offer real solutions for an increasingly complex and challenging IT environment.

If an organization approaches the hybrid cloud with sufficient planning and a structured approach, a hybrid cloud can deliver on-demand flexibility, empower legacy systems and applications with new capabilities, and become a catalyst for digital transformation. The result can be an elastic and responsive infrastructure that has the ability to quickly respond to changing demands of the business.

As data management professionals increasingly recognize the advantages of the hybrid cloud, we can expect more and more of them to embrace it as an essential part of their IT strategy.

Tell Us What You’re Doing with the Hybrid Cloud

Are you currently embracing the hybrid cloud, are you still undecided, or are you hanging back because you’re satisfied with how things work today? Maybe you’ve gone totally hybrid. We’d love to hear your comments below on how you’re dealing with the hybrid cloud.


[1] Private cloud can be on-premises or a dedicated off-premises facility.

[2] Hybrid cloud orchestration solutions are often proprietary, vertical, and task dependent.

The post Confused About the Hybrid Cloud? You’re Not Alone appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Pirates Taunt Amazon Over New “Turd Sandwich” Prime Video Quality

Post Syndicated from Andy original https://torrentfreak.com/pirates-taunt-amazon-over-new-turd-sandwich-prime-video-quality-180419/

Even though they generally aren’t paying for the content they consume, don’t fall into the trap of believing that pirates are grateful for whatever quality of media they can get.

Without a doubt, some of the most quality-sensitive individuals are to be found in pirate communities and they aren’t scared to make their voices known when release groups fail to come up with the best possible goods.

This week there’s been a sustained chorus of disapproval over the quality of pirate video releases sourced from Amazon Prime. The anger is usually directed at piracy groups who fail to capture content in the correct manner but according to a number of observers, the problem is actually at Amazon’s end.

Discussions on Reddit, for example, report that episodes in a single TV series have been declining in filesize and bitrate, from 1.56 GB in 720p at a 3073 kb/s video bitrate for episode 1, down to 907 MB in 720p at just 1514 kb/s video bitrate for episode 10.
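
As a rough sanity check on those figures, file size is essentially bitrate multiplied by duration. The sketch below assumes a roughly 60-minute episode and a constant audio track of about 384 kb/s; neither assumption comes from the Reddit reports, but with them the reported sizes line up closely with the reported video bitrates.

# Estimate file size (in decimal GB) from video bitrate, assumed audio bitrate,
# and assumed running time. These assumptions are illustrative, not from the post.
def estimated_size_gb(video_kbps, audio_kbps=384, minutes=60):
    total_bits = (video_kbps + audio_kbps) * 1000 * minutes * 60
    return total_bits / 8 / 1e9

print(round(estimated_size_gb(3073), 2))  # ~1.56, close to episode 1's 1.56 GB
print(round(estimated_size_gb(1514), 2))  # ~0.85, in the ballpark of episode 10's 907 MB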

Numerous theories as to why this may be the case are being floated around, including that Amazon is trying to save on bandwidth expenses. While this is a possibility, the company hasn’t made any announcements to that end.

Indeed, one legitimate customer reported that he’d raised the quality issue with Amazon and they’d said that the problem was “probably on his end”.

“I have Amazon Prime Video and I noticed the quality was always great for their exclusive shows, so I decided to try buying the shows on Amazon instead of iTunes this year. I paid for season pass subscriptions for Legion, Billions and Homeland this year,” he wrote.

“Just this past weekend, I have noticed a significant drop in details compared to weeks before! So naturally I assumed it was an issue on my end. I started trying different devices, calling support, etc, but nothing really helped.

“Billions continued to look like a blurry mess, almost like I was watching a standard definition DVD instead of the crystal clear HD I paid for and have experienced in the past! And when I check the previous episodes, sure enough, they look fantastic again. What the heck??”

With Amazon distancing itself from the issues, piracy groups have already begun to dig in the knife. Release group DEFLATE has been particularly critical.

“Amazon, in their infinite wisdom, have decided to start fucking with the quality of their encodes. They’re now reaching Netflix’s subpar 1080p.H264 levels, and their H265 encodes aren’t even close to what Netflix produces,” the group said in a file attached to S02E07 of The Good Fight released on Sunday.

“Netflix is able to produce drastic visual improvements with their H265 encodes compared to H264 across every original. In comparison, Amazon can’t decide whether H265 or H264 is going to produce better results, and as a result we suffer for it.”

Arrr! The quality be fallin’

So what’s happening exactly?

A TorrentFreak source (who tells us he’s been working in the Blu-ray/DCP authoring business for the last 10 years) was kind enough to give us two opinions, one aimed at the techies and another at us mere mortals.

“In technical terms, it appears [Amazon has] increased the CRF [Constant Rate Factor] value they use when encoding for both the HEVC [H265] and H264 streams. Previously, their H264 streams were using CRF 18 and a max bitrate of 15Mbit/s, which usually resulted in file sizes of roughly 3GB, or around 10Mbit/s. Similarly with their HEVC streams, they were using CRF 20 and resulting in streams which were around the same size,” he explained.

“In the past week, the H264 streams have decreased by up to 50% for some streams. While there are no longer any x264 headers embedded in the H264 streams, the HEVC streams still retain those headers and the CRF value used has been increased, so it does appear this change has been done on purpose.”

In layman’s terms, our source believes that Amazon had previously been using an encoding profile that was “right on the edge of relatively good quality” which kept bitrates relatively low but high enough to ensure no perceivable loss of quality.

“H264 streams encoded with CRF 18 could provide an acceptable compromise between quality and file size, where the loss of detail is often negligible when watched at regular viewing distances, at a desk, or in a lounge room on a larger TV,” he explained.

“Recently, it appears these values have been intentionally changed in order to lower the bitrate and file sizes for reasons unknown. As a result, the quality of some streams has been reduced by up to 50% of their previous values. This has introduced a visual loss of quality, comparable to that of viewing something in standard definition versus high definition.”
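
For readers who want to see what that trade-off looks like in practice, here is a minimal sketch (not Amazon’s actual pipeline; the file names and the CRF 23 comparison point are assumptions) that re-encodes a source at the older CRF 18 profile and at a higher CRF, driving ffmpeg’s libx264 options from Python:

import subprocess

# Illustrative only: encode the same source at two CRF values to compare the
# quality/size trade-off described above. CRF 18 with a 15 Mbit/s cap
# approximates the older profile; a higher CRF shrinks the file but loses detail.
def encode_h264(src, dst, crf, maxrate="15M", bufsize="30M"):
    subprocess.run([
        "ffmpeg", "-i", src,
        "-c:v", "libx264", "-crf", str(crf),
        "-maxrate", maxrate, "-bufsize", bufsize,
        "-c:a", "copy",
        dst,
    ], check=True)

encode_h264("source.mkv", "crf18.mkv", crf=18)  # larger file, detail largely preserved
encode_h264("source.mkv", "crf23.mkv", crf=23)  # noticeably smaller, visibly softer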

With the situation failing to improve during the week, by the time piracy group DEFLATE released S03E14 of Supergirl on Tuesday their original criticism had transformed into flat-out insults.

“These are only being done in H265 because Amazon have shit the bed, and it’s a choice between a turd sandwich and a giant douche,” they wrote, offering these images as illustrative of the problem and these indicating what should be achievable.

With DEFLATE advising customers to start complaining to Amazon, the memes have already begun, with unfavorable references to the now-defunct group YIFY (which was often chastised for its low-quality rips) and even a spin on one of the most well-known anti-piracy campaigns.

You wouldn’t download stream….

TorrentFreak contacted Amazon Prime for comment on both the recent changes and growing customer complaints but at the time of publication we were yet to receive a response.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Colour sensing with a Raspberry Pi

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/colour-sensing-raspberry-pi/

In their latest video and tutorial, Electronics Hub shows you how to detect colour using a Raspberry Pi and a TCS3200 colour sensor.

Raspberry Pi Color Sensor (TCS3200) Interface | Color Detector

A simple Raspberry Pi based project using TCS3200 Color Sensor. The project demonstrates how to interface a Color Sensor (like TCS3200) with Raspberry Pi and implement a simple Color Detector using Raspberry Pi.

What is a TCS3200 colour sensor?

Colour sensors sense reflected light from nearby objects. The bright light of the TCS3200’s on-board white LEDs hits an object’s surface and is reflected back. The sensor has an 8×8 array of photodiodes, which are covered by either a red, blue, green, or clear filter. The type of filter determines what colour a diode can detect. Then the overall colour of an object is determined by how much light of each colour it reflects. (For example, a red object reflects mostly red light.)

Colour sensing with the TCS3200 Color Sensor and a Raspberry Pi

As Electronics Hub explains:

TCS3200 is one of the easily available colour sensors that students and hobbyists can work on. It is basically a light-to-frequency converter, i.e. based on colour and intensity of the light falling on it, the frequency of its output signal varies.

I’ll save you a physics lesson here, but you can find a detailed explanation of colour sensing and the TCS3200 on the Electronics Hub blog.

Raspberry Pi colour sensor

The TCS3200 colour sensor is connected to several of the onboard General Purpose Input Output (GPIO) pins on the Raspberry Pi.

Colour sensing with the TCS3200 Color Sensor and a Raspberry Pi

These connections allow the Raspberry Pi 3 to run one of two Python scripts that Electronics Hub has written for the project. The first displays the RAW RGB values read by the sensor. The second detects the primary colours red, green, and blue, and it can be expanded for more colours with the help of the first script.
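
As a rough illustration of what such a script does (this is a simplified sketch, not Electronics Hub’s code; the GPIO pin numbers, timing window, and filter mapping details are assumptions), reading the TCS3200 comes down to selecting a colour filter with the S2/S3 pins and counting pulses on the OUT pin:

import time
import RPi.GPIO as GPIO

# Assumed wiring (BCM numbering); adjust to match your own setup.
S2, S3, OUT = 23, 24, 25

GPIO.setmode(GPIO.BCM)
GPIO.setup([S2, S3], GPIO.OUT)
GPIO.setup(OUT, GPIO.IN)

# (S2, S3) levels select which filtered photodiodes drive the OUT frequency.
FILTERS = {"red": (0, 0), "blue": (0, 1), "green": (1, 1)}

def read_frequency(colour, sample_time=0.1):
    s2, s3 = FILTERS[colour]
    GPIO.output(S2, s2)
    GPIO.output(S3, s3)
    time.sleep(0.01)                      # let the output settle
    count, end = 0, time.time() + sample_time
    while time.time() < end:              # count rising edges on OUT
        if GPIO.wait_for_edge(OUT, GPIO.RISING, timeout=10) is not None:
            count += 1
    return count / sample_time            # pulses per second, proportional to intensity

readings = {c: read_frequency(c) for c in FILTERS}
print("Raw pulse rates:", readings)
print("Dominant colour:", max(readings, key=readings.get))
GPIO.cleanup()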

Colour sensing with the TCS3200 Color Sensor and a Raspberry Pi

Electronics Hub’s complete build uses a breadboard for simple prototyping

Use it in your projects

This colour sensing setup is a simple means of adding a new dimension to your builds. Why not build a candy-sorting robot that organises your favourite sweets by colour? Or add colour sensing to your line-following buggy to allow for multiple path options!

If your Raspberry Pi project uses colour sensing, we’d love to see it, so be sure to share it in the comments!

The post Colour sensing with a Raspberry Pi appeared first on Raspberry Pi.

Hollywood Studios Get ISP Blocking Order Against Rarbg in India

Post Syndicated from Ernesto original https://torrentfreak.com/hollywood-studios-score-blocking-order-against-rarbg-in-india-180417/

While the major Hollywood studios are very reluctant to bring a pirate site blocking case to their home turf, they are very active abroad.

The companies are the driving force behind lawsuits in Europe, Australia, and are also active in India, where they booked a new success last week.

Website blocking is by no means a new phenomenon in India. The country is known for so-called John Doe orders, where a flurry of websites are temporarily blocked to protect the release of a specific title.

The major Hollywood studios are taking a different approach. Disney Enterprises, Twentieth Century Fox, Paramount Pictures, Columbia Pictures, Universal, and Warner Bros. are requesting blockades, accusing sites of being structural copyright infringers.

One of the most recent targets is the popular torrent site Rarbg. The Hollywood studios describe Rarbg as a ‘habitual’ copyright infringer and demand that several Internet providers block access to the site.

“It is submitted that the Defendant Website aids and facilitates the accessibility and availability of infringing material, and induce third parties, intentionally and/or knowingly, to infringe through their websites by various means,” the movie studios allege.

The complaint filed at the High Court of Delhi lists more than 20 Internet providers as co-defendants, and also includes India’s Department of Telecommunications and Department of Electronics and Information Technology in the mix.

The two Government departments are added because they have the power to enforce blocking orders. Specifically, the Hollywood studios note that the Department of Telecommunications’ license agreement with ISPs requires these companies to ensure that copyright-infringing content is not carried on their networks.

“It is submitted that the DoT itself acknowledges the fact that service providers have an obligation to ensure that no violation of third party intellectual property rights takes place through their networks and that effective protection is provided to right holders of such intellectual property,” the studios write.

Last week the court granted an injunction that requires local Internet providers including Bharti Airtel, Reliance Communications, Telenor, You Broadband, and Vodafone to block Rarbg.

Blocking order

As requested, the Department of Telecommunications and Department of Electronics and Information Technology are directed to notify all local internet and telecom service providers that they must block the torrent site as well.

The order is preliminary and can still be contested in court. However, given the history of similar blocking efforts around the world, it is likely that it will be upheld.

While there’s not much coverage on the matter, this isn’t the first blocking request the companies have filed in India. Last October, a similar case was filed against another popular torrent site, 1337x.to, with success.

TorrentFreak reached out to the law firm representing the Hollywood studios to get a broader overview of the blocking plans in India. At the time of writing, we have yet to hear back.

A copy of the order obtained by Disney Enterprises, Twentieth Century Fox, Paramount Pictures, Columbia Pictures, Universal, Warner Bros. and the local Disney-owned media conglomerate UTV Software is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Introducing Microsoft Azure Sphere

Post Syndicated from corbet original https://lwn.net/Articles/751994/rss

Microsoft has issued a press release describing the security dangers involved with the Internet of things (“a weaponized stove, baby monitors that spy, the contents of your refrigerator being held for ransom”) and introducing “Microsoft Azure Sphere” as a combination of hardware and software to address the problem. “Unlike the RTOSes common to MCUs today, our defense-in-depth IoT OS offers multiple layers of security. It combines security innovations pioneered in Windows, a security monitor, and a custom Linux kernel to create a highly-secured software environment and a trustworthy platform for new IoT experiences.”

Russia’s Encryption War: 1.8m Google & Amazon IPs Blocked to Silence Telegram

Post Syndicated from Andy original https://torrentfreak.com/russias-encryption-war-1-8m-google-amazon-ips-blocked-to-silence-telegram-180417/

The rules in Russia are clear. Entities operating an encrypted messaging service need to register with the authorities. They also need to hand over their encryption keys so that if law enforcement sees fit, users can be spied on.

Free cross-platform messaging app Telegram isn’t playing ball. An impressive 200,000,000 people used the software in March (including a growing number for piracy purposes) and founder Pavel Durov says he will not compromise their security, despite losing a lawsuit against the Federal Security Service which compels him to do so.

“Telegram doesn’t have shareholders or advertisers to report to. We don’t do deals with marketers, data miners or government agencies. Since the day we launched in August 2013 we haven’t disclosed a single byte of our users’ private data to third parties,” Durov said.

“Above all, we at Telegram believe in people. We believe that humans are inherently intelligent and benevolent beings that deserve to be trusted; trusted with freedom to share their thoughts, freedom to communicate privately, freedom to create tools. This philosophy defines everything we do.”

But by not handing over its keys, Telegram is in trouble with Russia. The FSB says it needs access to Telegram messages to combat terrorism so, in response to its non-compliance, telecoms watchdog Rozcomnadzor filed a lawsuit to degrade Telegram via web-blocking. Last Friday, that process ended in the state’s favor.

After an 18-minute hearing, a Moscow court gave the go-ahead for Telegram to be banned in Russia. The hearing was scheduled just the day before, giving Telegram little time to prepare. In protest, its lawyers didn’t even turn up to argue the company’s position.

Instead, Durov took to his VKontakte account to announce that Telegram would take counter-measures.

“Telegram will use built-in methods to bypass blocks, which do not require actions from users, but 100% availability of the service without a VPN is not guaranteed,” Durov wrote.

Telegram can appeal the blocking decision but Russian authorities aren’t waiting around for a response. They are clearly prepared to match Durov’s efforts, no matter what the cost.

In instructions sent out yesterday nationwide, Rozcomnadzor ordered ISPs to block Telegram. The response was immediate and massive. Telegram was using both Amazon and Google to provide service to its users so, within hours, huge numbers of IP addresses belonging to both companies were targeted.

Initially, 655,352 Amazon IP addresses were placed on Russia’s nationwide blacklist. It was later reported that a further 131,000 IP addresses were added to that total. But the Russians were just getting started.

Servers.ru reports that a further 1,048,574 IP addresses belonging to Google were also targeted Monday. Rozcomnadzor said the court ruling against Telegram compelled it to take whatever action is needed to take Telegram down, but with at least 1,834,996 addresses now confirmed blocked, it remains unclear what effect the measures have had on the service.

Friday’s court ruling states that restrictions against Telegram can be lifted provided that the service hands over its encryption keys to the FSB. However, Durov responded by insisting that “confidentiality is not for sale, and human rights should not be compromised because of fear or greed.”

But of course, money is still part of the Telegram equation. While its business model in terms of privacy stands in stark contrast to that of Facebook, Telegram is also involved in the world’s biggest initial coin offering (ICO). According to media reports, it has raised $1.7 billion in pre-sales thus far.

This week’s action against Telegram is the latest in Russia’s war on ‘unauthorized’ encryption.

At the end of March, authorities suggested that around 15 million IP addresses (13.5 million belonging to Amazon) could be blocked to target chat software Zello. While those measures were averted, a further 500 domains belonging to Google were caught in the dragnet.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Microsoft Denies Piracy Extortion Claims, Returns Fire

Post Syndicated from Ernesto original https://torrentfreak.com/microsoft-denies-piracy-extortion-claims-returns-fire-180416/

For many years, Microsoft and the Software Alliance (BSA) have carried out piracy investigations into organizations large and small.

Companies accused of using Microsoft software without permission usually get a letter asking them to pay up, or face legal consequences.

This also happened to Hanna Instruments, a Rhode Island-based company that sells analytical instruments. Last year, the company was accused of using Microsoft Office products without a proper license.

In a letter, BSA’s lawyers informed Hanna that it would face up to $4,950,000 in damages if the case went to court. Instead, however, they offered to settle the matter for $72,074.

Adding some extra pressure, BSA also warned that Microsoft could get a court order that would allow U.S. marshals to raid the company’s premises.

Where most of these cases are resolved behind closed doors, this one escalated. After being repeatedly contacted by BSA’s lawyers, Hanna decided to take the matter to court, claiming that Microsoft and BSA were trying to ‘extort’ money on ‘baseless’ accusations.

“BSA, Microsoft, and their counsel have, without supplying one scintilla of evidence, issued a series of letters for the sole purpose of extorting inflated monetary damages,” the company informed the court.

Late last week Microsoft and BSA replied to the complaint. While the two companies admit that they reached out to Hanna and offered a settlement, they deny several other allegations, including the extortion claims.

Instead, the companies submit a counterclaim, backing up their copyright infringement accusations and demanding damages.

“Hanna has engaged and continues to engage in the unauthorized installation, reproduction, and distribution and other unlawful use of Microsoft Software on computers on its premises and has used unlicensed copies of Microsoft Software to conduct its business,” they write.

According to Microsoft and BSA, the Rhode Island company still uses unauthorized product keys to activate and install unlicensed Microsoft software.

Turning Hanna’s own evidence against itself, they argue that two product keys were part of a batch issued for an educational program in China — not for commercial use in the United States.

Microsoft / BSA counterclaim

Another key could be traced back to what appears to be a counterfeit store which Microsoft has since shut down.

“The materials provided by Hanna also indicate that it purchased at least one copy of Microsoft Software from BuyCheapSoftware.com, a now-defunct website that was sued by Microsoft for selling stolen, abused, and otherwise unauthorized decoupled product keys,” Microsoft and BSA write.

According to Hanna, BSA previously failed to provide evidence to prove that the company was using unlicensed keys. However, the counterclaim suggests that the initial accusations had merit.

Whether BSA’s tactic of bringing up millions of dollars in damages and a possible raid by the U.S. Marshals is the best strategy to resolve such a matter is up for debate, of course.

It could very well be that Hanna was duped into buying counterfeit software, without knowing it. Perhaps this will come out as the case progresses. That said, it could also help if both sides simply have a good conversation to see if they can make peace, without threats.

Microsoft and BSA’s reply and counterclaim is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Now You Can Create Encrypted Amazon EBS Volumes by Using Your Custom Encryption Keys When You Launch an Amazon EC2 Instance

Post Syndicated from Nishit Nagar original https://aws.amazon.com/blogs/security/create-encrypted-amazon-ebs-volumes-custom-encryption-keys-launch-amazon-ec2-instance-2/

Amazon Elastic Block Store (EBS) offers an encryption solution for your Amazon EBS volumes so you don’t have to build, maintain, and secure your own infrastructure for managing encryption keys for block storage. Amazon EBS encryption uses AWS Key Management Service (AWS KMS) customer master keys (CMKs) when creating encrypted Amazon EBS volumes, providing you all the benefits associated with using AWS KMS. You can specify either an AWS managed CMK or a customer-managed CMK to encrypt your Amazon EBS volume. If you use a customer-managed CMK, you retain granular control over your encryption keys, such as having AWS KMS rotate your CMK every year. To learn more about creating CMKs, see Creating Keys.

In this post, we demonstrate how to create an encrypted Amazon EBS volume using a customer-managed CMK when you launch an EC2 instance from the EC2 console, AWS CLI, and AWS SDK.

Creating an encrypted Amazon EBS volume from the EC2 console

Follow these steps to launch an EC2 instance from the EC2 console with Amazon EBS volumes that are encrypted by customer-managed CMKs:

  1. Sign in to the AWS Management Console and open the EC2 console.
  2. Select Launch instance, and then, in Step 1 of the wizard, select an Amazon Machine Image (AMI).
  3. In Step 2 of the wizard, select an instance type, and then provide additional configuration details in Step 3. For details about configuring your instances, see Launching an Instance.
  4. In Step 4 of the wizard, specify additional EBS volumes that you want to attach to your instances.
  5. To create an encrypted Amazon EBS volume, first add a new volume by selecting Add new volume. Leave the Snapshot column blank.
  6. In the Encrypted column, select your CMK from the drop-down menu. You can also paste the full Amazon Resource Name (ARN) of your custom CMK key ID in this box. To learn more about finding the ARN of a CMK, see Working with Keys.
  7. Select Review and Launch. Your instance will launch with an additional Amazon EBS volume with the key that you selected. To learn more about the launch wizard, see Launching an Instance with Launch Wizard.

Creating Amazon EBS encrypted volumes from the AWS CLI or SDK

You also can use RunInstances to launch an instance with additional encrypted Amazon EBS volumes by setting Encrypted to true and adding a KmsKeyId entry with your key’s ID or ARN in the BlockDeviceMapping object, as shown in the following command:

$> aws ec2 run-instances --image-id ami-b42209de --count 1 --instance-type m4.large --region us-east-1 --block-device-mappings file://mapping.json

In this example, mapping.json describes the properties of the EBS volume that you want to create:


{
    "DeviceName": "/dev/sda1",
    "Ebs": {
        "DeleteOnTermination": true,
        "VolumeSize": 100,
        "VolumeType": "gp2",
        "Encrypted": true,
        "KmsKeyId": "arn:aws:kms:us-east-1:012345678910:key/abcd1234-a123-456a-a12b-a123b4cd56ef"
    }
}

You can also launch instances with additional encrypted EBS data volumes via an Auto Scaling group or Spot Fleet by creating a launch template that contains the above BlockDeviceMapping. For example:

$> aws ec2 create-launch-template --launch-template-name MyLTName --region us-east-1 --launch-template-data file://launch-template-data.json

Here, launch-template-data.json wraps the AMI ID, instance type, and the same BlockDeviceMappings shown above, because create-launch-template takes its settings through --launch-template-data rather than as individual flags.

To learn more about launching an instance with the AWS CLI or SDK, see the AWS CLI Command Reference.
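
For the SDK route mentioned above, here is a minimal sketch using boto3 (the Python SDK). The AMI ID, key ARN, and volume settings are the illustrative values from the CLI example, not real resources:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-b42209de",
    InstanceType="m4.large",
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[{
        "DeviceName": "/dev/sda1",
        "Ebs": {
            "DeleteOnTermination": True,
            "VolumeSize": 100,
            "VolumeType": "gp2",
            "Encrypted": True,
            # Customer-managed CMK used to encrypt this EBS volume
            "KmsKeyId": "arn:aws:kms:us-east-1:012345678910:key/abcd1234-a123-456a-a12b-a123b4cd56ef",
        },
    }],
)
print("Launched:", response["Instances"][0]["InstanceId"])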

In this blog post, we’ve demonstrated a single-step, streamlined process for creating Amazon EBS volumes that are encrypted under your CMK when you launch your EC2 instance, thereby streamlining your instance launch workflow. To start using this functionality, navigate to the EC2 console.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Amazon EC2 forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

[$] The second half of the 4.17 merge window

Post Syndicated from corbet original https://lwn.net/Articles/751482/rss

By the time the 4.17 merge window was closed and 4.17-rc1 was
released, 11,769 non-merge changesets had been pulled into the
mainline repository. 4.17 thus looks to be a typically busy development
cycle, with a merge window only slightly busier than 4.16’s.
Some 6,000 of those changes were pulled after last week’s summary was written. There was a
lot of the usual maintenance work in those patches (over 10% of those
changes were to device-tree files, for example), but also some more
significant changes.