digiKam 5.6.0 is released

Post Syndicated from ris original https://lwn.net/Articles/726278/rss

The digiKam Team has released
version 5.6.0 of the digiKam Software Collection for photo management. “With this version the HTML gallery and the video slideshow tools are back, database shrinking (e.g. purging stale thumbnails) is also supported on MySQL, the grouping items feature has been improved, support for custom sidecar MIME types has been added, the geolocation bookmarks received fixes to be fully functional with bundles, and of course lots of bugs have been fixed.”

From Idea to Launch: Getting Your First Customers

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/how-to-get-your-first-customers/

After we decided to build an unlimited backup service and developed our own storage platform, the next step was to get customers and feedback. Not all customers are created equal. Let’s talk about the types, and when and how to attract them.

How to Get Your First Customers

First Step – Don’t Launch Publicly
Launch when you’re ready for the judgments of people who don’t know you at all. Until then, don’t launch. Sign up users and customers you know personally, those you can trust to cut you some slack (while still providing feedback), or at minimum those for whom you can set expectations. For months the Backblaze website was a single page with no ability to get the product and minimal info on what it would be. This is not to counter the Lean Startup ‘iterate quickly with customer feedback’ advice. Rather, it is an acknowledgement that different types of feedback are required at different stages of development.

Sign Up Your Friends
We knew all of our first customers; they were friends, family, and previous co-workers. Many knew what we were up to and were excited to help us. No magic marketing or tech savviness was required to reach them – we just asked that they try the service. We asked them to provide us feedback on their experience and collected it through email and conversations. While the feedback wasn’t unbiased, it was nonetheless wide-ranging, real, and often insightful. These people were willing to spend time carefully thinking about their feedback and delving deeper into the conversations.

Broaden to Beta
Unless you’re famous or your service costs $1 million per customer, you’ll probably need to expand quickly beyond your friends to build a business – and to get broader feedback. Our next step was to broaden the customer base to beta users.

Opening up the service in beta provides three benefits:

  1. Air cover for the early warts. There are going to be issues, bugs, unnecessarily complicated user flows, and poorly worded text. Beta tells people, “We don’t consider the product ‘done’ and you should expect some of these issues. Please be patient with us.”
  2. A request for feedback. Some people always provide feedback, but beta communicates that you want it.
  3. An awareness opportunity. Opening up in beta provides an early (but not only) opportunity to have an announcement and build awareness.

Pitching Beta to Press
Not all press cares about, or is even willing to cover, beta products. Much of the mainstream press wants to write about services that are fully live, have scale, and are important in the marketplace. However, there are a number of sites that like to cover the leading edge – and that means covering betas. TechCrunch, Ars Technica, and SimpleHelp covered our initial private beta launch. I’ll go into the details of how to work with the press to cover your announcements in a post next month.

Private vs. Public Beta
Both private and public beta provide all three of the benefits above. The difference between the two is that private betas are much more controlled, whereas public ones bring in more users. But this isn’t an either/or – I recommend doing both.

Private Beta
For our original beta in 2008, we decided that we were comfortable with about 1,000 users subscribing to our service. That would provide us with a healthy amount of feedback and some early adoption, while not overwhelming us or our server capacity, and, equally important, not causing cash flow issues from having to buy more equipment. So we decided to limit sign-ups to the first 1,000 people; after that, we would shut off sign-ups for a while.

But how do you even get 1,000 people to sign up for your service? In our case, we got some major publications to write about our beta. (Note: In a future post I’ll explain exactly how to find and reach out to writers. Sign up to receive all of the entrepreneurial posts in this series.)

Public Beta
For our original service (computer backup), we did not have a public beta; but when we launched Backblaze B2, we had a private and then a public beta. The private beta allowed us to work out early kinks, while the public beta brought us a more varied set of use cases. In public beta, there is no cap on the number of users that may try the service.

While this is a first-class problem to have, if your service is flooded and stops working, it’s still a problem. Think through what you will do if that happens. In our early days, when our system could get overwhelmed by volume, we kept a static web page hosted with a different provider that wouldn’t let customers sign up but would tell them when our service would be open again. When we reached a critical volume level we would redirect to it in order to at least provide status on when we could accept more customers.

Collect Feedback
Since one of the goals of betas is to get feedback, we made sure that we had our email addresses clearly presented on the site so users could send us thoughts. We were most interested in broad qualitative feedback on users’ experience, so all emails went to an internal mailing list that would be read by everyone at Backblaze.

For our B2 public and private betas, we also added an optional short survey to the sign-up process. In order to be considered for the private beta you had to fill out the survey, though we found that 80% of users continued to fill it out even when it was not required. This survey had both closed-ended questions (“How much data do you have?”) and open-ended ones (“What do you want to use cloud storage for?”).

BTW, despite us getting a lot of feedback now via our support team, Twitter, and marketing surveys, we are always open to more – you can email me directly at gleb.budman {at} backblaze.com.

Don’t Throw Away Users
Initially our backup service was available only on Windows, but we had an email sign-up list for people who wanted it for their Mac. This provided us with a sense of market demand and a ready list of folks who could be beta users and early adopters when we had a Mac version. Have a service targeted at doctors but lawyers are expressing interest? Capture that.

Product Launch

When
The first question is “when” to launch. Presuming your service is in ‘public beta’, what is the advantage of moving out of beta and into a “version 1.0”, “gold”, or “public availability”? That depends on your service and customer base. Some services fly through public beta. Gmail, on the other hand, was (in)famous for being in beta for 5 years, despite having over 100 million users.

The term beta says to users, “give us some leeway, but feel free to use the service”. That’s fine for many consumer apps and will have near zero impact on them. However, services aimed at businesses and government will often not be adopted with a beta label as the enterprise customers want to know the company feels the service is ‘ready’. While Backblaze started out as a purely consumer service, because it was a data backup service, it was important for customers to trust that the service was ready.

No product is bug-free. But from a product readiness perspective, the nomenclature should also be a reflection of the quality of the product. You can launch a product with one feature that works well out of beta. But a product with fifty features on which half the users will bump into problems should likely stay in beta. The customer feedback, surveys, and your own internal testing should guide you in determining this quality during the beta. Be careful about “we’ve only seen that one time” or “I haven’t been able to reproduce that on my machine”; those issues are likely to scale with customers when you launch.

How
Launching out of beta can be as simple as removing the beta label from the website/product. However, this can be a great time to reach out to press, write a blog post, and send an email announcement to your customers.

Consider thanking your beta testers somehow; can they get some feature turned on for free, an extension of their trial, or premium support? If nothing else, remember to thank them for their feedback. Users who signed up during your beta are likely the ones who will propel your service. They had the need and interest to both be early adopters and deal with bugs. They are likely the key to getting 1,000 true fans.

The Beginning
The title of this post was “Getting Your First Customers” because getting to launch may feel like the peak of your journey when you’re pre-launch, but it really is just the beginning. It’s a step along the journey of building your business. If your launch is wildly successful, enjoy it, work to build on the momentum, but don’t lose track of building your business. If your launch is a dud, go out for a coffee with your team, say “well that sucks”, and then get back to building your business. You can learn a tremendous amount from your early customers, and they can become your biggest fans, but the success of your business will depend on what you continue to do in the months and years after your launch.

The post From Idea to Launch: Getting Your First Customers appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Kim Dotcom Opposes US’s “Fugitive” Claims at Supreme Court

Post Syndicated from Ernesto original https://torrentfreak.com/kim-dotcom-opposes-uss-fugitive-claims-supreme-court-170622/

When Megaupload and Kim Dotcom were raided five years ago, the authorities seized millions of dollars in cash and other property.

The US government claimed the assets were obtained through copyright crimes, so it went after the bank accounts, cars, and other seized possessions of the Megaupload defendants.

Kim Dotcom and his colleagues were branded as “fugitives” and the Government won its case. Dotcom’s legal team quickly appealed this verdict, but lost once more at the Fourth Circuit appeals court.

A few weeks ago Dotcom and his former colleagues petitioned the Supreme Court to take on the case.

They don’t see themselves as “fugitives” and want the assets returned. The US Government opposed the request, but according to a new reply filed by Megaupload’s legal team, the US Government ignores critical questions.

The Government has a “vested financial stake” in maintaining the current situation, they write, which allows the authorities to use their “fugitive” claims as an offensive weapon.

“Far from being directed towards persons who have fled or avoided our country while claiming assets in it, fugitive disentitlement is being used offensively to strip foreigners of their assets abroad,” the reply brief (pdf) reads.

According to Dotcom’s lawyers there are several conflicting opinions from lower courts, which should be clarified by the Supreme Court. That Dotcom and his colleagues have decided to fight their extradition in New Zealand doesn’t warrant the seizure of their assets.

“Absent review, forfeiture of tens of millions of dollars will be a fait accompli without the merits being reached,” they write, adding that this is all the more concerning because the US Government’s criminal case may not be as strong as claimed.

“This is especially disconcerting because the Government’s criminal case is so dubious. When the Government characterizes Petitioners as ‘designing and profiting from a system that facilitated wide-scale copyright infringement,’ it continues to paint a portrait of secondary copyright infringement, which is not a crime.”

The defense team cites several issues that warrant review and urges the Supreme Court to hear the case. If not, the Government will effectively be able to use assets seizures as a pressure tool to urge foreign defendants to come to the US.

“If this stands, the Government can weaponize fugitive disentitlement in order to claim assets abroad,” the reply brief reads.

“It is time for the Court to speak to the Questions Presented. Over the past two decades it has never had a better vehicle to do so, nor is any such vehicle elsewhere in sight,” Dotcom’s lawyers add.

Whether the Supreme Court accepts or denies the case will likely be decided in the weeks to come.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

HiveMQ 3.2.5 released

Post Syndicated from The HiveMQ Team original http://www.hivemq.com/blog/hivemq-3-2-5-released/

The HiveMQ team is pleased to announce the availability of HiveMQ 3.2.5. This is a maintenance release for the 3.2 series and brings the following improvements:

  • Fixed an issue that caused cluster nodes to not be operational for a long time after start up
  • Fixed an issue that could cause wildcard (+ operator) subscriptions to get lost
  • Fixed an issue that could cause QoS=1 messages to get lost when using cleanSession=false and shared subscription groups
  • Fixed an issue that could cause the current session count metric to be incorrect
  • Fixed an issue that could lead to QoS=0 messages being resent incorrectly when using shared subscriptions
  • Fixed various issues that could cause false Exceptions to be logged
  • Fixed an issue that could lead to an increase in memory usage when using retained messages
  • Improved documentation
  • Improved logging
  • Performance improvements

You can download the new HiveMQ version here.

We strongly recommend upgrading if you are a HiveMQ 3.2.x user.

Have a great day,
The HiveMQ Team

How to Create an AMI Builder with AWS CodeBuild and HashiCorp Packer – Part 2

Post Syndicated from Heitor Lessa original https://aws.amazon.com/blogs/devops/how-to-create-an-ami-builder-with-aws-codebuild-and-hashicorp-packer-part-2/

Written by AWS Solutions Architects Jason Barto and Heitor Lessa

 
In Part 1 of this post, we described how AWS CodeBuild, AWS CodeCommit, and HashiCorp Packer can be used to build an Amazon Machine Image (AMI) from the latest version of Amazon Linux. In this post, we show how to use AWS CodePipeline, AWS CloudFormation, and Amazon CloudWatch Events to continuously ship new AMIs. We use Ansible by Red Hat to harden the OS on the AMIs through a well-known set of security controls outlined by the Center for Internet Security in its CIS Amazon Linux Benchmark.

You’ll find the source code for this post in our GitHub repo.

At the end of this post, we will have the following architecture:

Requirements

 
To follow along, you will need Git and a text editor. Make sure Git is configured to work with AWS CodeCommit, as described in Part 1.

Technologies

 
In addition to the services and products used in Part 1 of this post, we also use these AWS services and third-party software:

AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.

Amazon CloudWatch Events enables you to react selectively to events in the cloud and in your applications. Specifically, you can create CloudWatch Events rules that match event patterns, and take actions in response to those patterns.

AWS CodePipeline is a continuous integration and continuous delivery service for fast and reliable application and infrastructure updates. AWS CodePipeline builds, tests, and deploys your code every time there is a code change, based on release process models you define.

Amazon SNS is a fast, flexible, fully managed push notification service that lets you send individual messages or fan out messages to large numbers of recipients. Amazon SNS makes it simple and cost-effective to send push notifications to mobile device users or email recipients. The service can even send messages to other distributed services.

Ansible is a simple IT automation system that handles configuration management, application deployment, cloud provisioning, ad-hoc task-execution, and multinode orchestration.

Getting Started

 
We use CloudFormation to bootstrap the following infrastructure:

  • AWS CodeCommit repository – Git repository where the AMI builder code is stored.
  • S3 bucket – Build artifact repository used by AWS CodePipeline and AWS CodeBuild.
  • AWS CodeBuild project – Executes the AWS CodeBuild instructions contained in the build specification file.
  • AWS CodePipeline pipeline – Orchestrates the AMI build process, triggered by new changes in the AWS CodeCommit repository.
  • SNS topic – Notifies subscribed email addresses when an AMI build is complete.
  • CloudWatch Events rule – Defines how the AMI builder should send a custom event to notify an SNS topic.

AMI Builder launch templates are available for N. Virginia (us-east-1) and Ireland (eu-west-1).

After launching the CloudFormation template linked here, we will have a pipeline in the AWS CodePipeline console. (A Failed state at this stage simply means we don’t have any data in our newly created AWS CodeCommit Git repository yet.)

Next, we will clone the newly created AWS CodeCommit repository.

If this is your first time connecting to an AWS CodeCommit repository, please see the instructions in our documentation on Setup steps for HTTPS Connections to AWS CodeCommit Repositories.

To clone the AWS CodeCommit repository (console)

  1. From the AWS Management Console, open the AWS CloudFormation console.
  2. Choose the AMI-Builder-Blogpost stack, and then choose Outputs.
  3. Make a note of the Git repository URL.
  4. Use git to clone the repository.

For example: git clone https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/AMI-Builder_repo

To clone the AWS CodeCommit repository (CLI)

# Retrieve CodeCommit repo URL
git_repo=$(aws cloudformation describe-stacks --query 'Stacks[0].Outputs[?OutputKey==`GitRepository`].OutputValue' --output text --stack-name "AMI-Builder-Blogpost")

# Clone repository locally
git clone ${git_repo}

Bootstrap the Repo with the AMI Builder Structure

 
Now that our infrastructure is ready, download all the files and templates required to build the AMI.

Your local Git repo should have the following structure:

.
├── ami_builder_event.json
├── ansible
├── buildspec.yml
├── cloudformation
├── packer_cis.json

Next, push these changes to AWS CodeCommit, and then let AWS CodePipeline orchestrate the creation of the AMI:

git add .
git commit -m "My first AMI"
git push origin master

AWS CodeBuild Implementation Details

 
While we wait for the AMI to be created, let’s see what’s changed in our AWS CodeBuild buildspec.yml file:

...
phases:
  ...
  build:
    commands:
      ...
      - ./packer build -color=false packer_cis.json | tee build.log
  post_build:
    commands:
      - egrep "${AWS_REGION}\:\sami\-" build.log | cut -d' ' -f2 > ami_id.txt
      # Packer doesn't return non-zero status; we must do that if Packer build failed
      - test -s ami_id.txt || exit 1
      - sed -i.bak "s/<<AMI-ID>>/$(cat ami_id.txt)/g" ami_builder_event.json
      - aws events put-events --entries file://ami_builder_event.json
      ...
artifacts:
  files:
    - ami_builder_event.json
    - build.log
  discard-paths: yes

In the build phase, we capture Packer output into a file named build.log. In the post_build phase, we take the following actions:

  1. Look up the AMI ID created by Packer and save it to a temporary file (ami_id.txt).
  2. Force AWS CodeBuild to fail if the AMI ID (ami_id.txt) is not found. This is required because Packer doesn’t return a non-zero exit status if something goes wrong during the AMI creation process, so we have to tell AWS CodeBuild to stop by signaling that an error occurred.
  3. If an AMI ID is found, we update the ami_builder_event.json file and then notify CloudWatch Events that the AMI creation process is complete.
  4. CloudWatch Events publishes a message to an SNS topic. Anyone subscribed to the topic will be notified in email that an AMI has been created.

Lastly, the new artifacts phase instructs AWS CodeBuild to upload files built during the build process (ami_builder_event.json and build.log) to the S3 bucket specified in the Outputs section of the CloudFormation template. These artifacts can then be used as an input artifact in any later stage in AWS CodePipeline.
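
For example, once a build has run you can pull those artifacts down for inspection with the AWS CLI. This is only a sketch: the ArtifactBucket output key is an assumption about how the template names its outputs, and AWS CodePipeline generates its own object key names:

# Look up the artifact bucket name from the stack outputs (output key name is hypothetical)
bucket=$(aws cloudformation describe-stacks --stack-name "AMI-Builder-Blogpost" \
  --query 'Stacks[0].Outputs[?OutputKey==`ArtifactBucket`].OutputValue' --output text)

# List the stored artifacts; AWS CodePipeline stores them as zipped objects
aws s3 ls "s3://${bucket}/" --recursive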

For information about customizing the artifacts sequence of the buildspec.yml, see the Build Specification Reference for AWS CodeBuild.

CloudWatch Events Implementation Details

 
CloudWatch Events allows you to extend the AMI builder so that it not only sends email after the AMI has been created, but can hook up any of the supported targets to react to the AMI builder event. This event publication means you can decouple the actions you take after AMI completion from Packer itself and plug in other actions as you see fit.

For more information about targets in CloudWatch Events, see the CloudWatch Events API Reference.

In this case, CloudWatch Events should receive the following event, match it with a rule we created through CloudFormation, and publish a message to SNS so that you can receive an email.

Example CloudWatch custom event

[
        {
            "Source": "com.ami.builder",
            "DetailType": "AmiBuilder",
            "Detail": "{ \"AmiStatus\": \"Created\"}",
            "Resources": [ "ami-12cd5guf" ]
        }
]

CloudWatch Events rule

{
  "detail-type": [
    "AmiBuilder"
  ],
  "source": [
    "com.ami.builder"
  ],
  "detail": {
    "AmiStatus": [
      "Created"
    ]
  }
}
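
In this walkthrough the rule and its SNS target are created by the CloudFormation template, but as a rough AWS CLI sketch of the same wiring (the rule name, pattern file, and topic ARN below are all hypothetical):

# Create a rule that matches the custom event pattern shown above (saved locally as pattern.json)
aws events put-rule --name AmiBuilderCustomEvent --event-pattern file://pattern.json

# Send matching events to an SNS topic; the topic's access policy must also allow
# CloudWatch Events to publish to it, which the CloudFormation template takes care of
aws events put-targets --rule AmiBuilderCustomEvent \
  --targets 'Id=sns-email,Arn=arn:aws:sns:eu-west-1:123456789012:AmiBuilderNotify'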

Example SNS message sent in email

{
    "version": "0",
    "id": "f8bdede0-b9d7...",
    "detail-type": "AmiBuilder",
    "source": "com.ami.builder",
    "account": "<<aws_account_number>>",
    "time": "2017-04-28T17:56:40Z",
    "region": "eu-west-1",
    "resources": ["ami-112cd5guf "],
    "detail": {
        "AmiStatus": "Created"
    }
}

Packer Implementation Details

 
In addition to the build specification file, there are differences between the current version of the HashiCorp Packer template (packer_cis.json) and the one used in Part 1.

Variables

  "variables": {
    "vpc": "{{env `BUILD_VPC_ID`}}",
    "subnet": "{{env `BUILD_SUBNET_ID`}}",
         “ami_name”: “Prod-CIS-Latest-AMZN-{{isotime \”02-Jan-06 03_04_05\”}}”
  },
  • ami_name: Prefixes a name used by Packer to tag resources during the Builders sequence.
  • vpc and subnet: Environment variables defined by the CloudFormation stack parameters.

We no longer assume a default VPC is present and instead use the VPC and subnet specified in the CloudFormation parameters. CloudFormation configures the AWS CodeBuild project to use these values as environment variables. They are made available throughout the build process.

That allows for more flexibility should you need to change which VPC and subnet will be used by Packer to launch temporary resources.
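
As a quick sanity check, you can exercise the same mechanism locally before committing, assuming Packer is installed and the IDs below (placeholders) are replaced with your own:

# Provide the environment variables that AWS CodeBuild would normally inject
export BUILD_VPC_ID=vpc-0a1b2c3d
export BUILD_SUBNET_ID=subnet-4e5f6a7b

# Confirm the template parses and the variables resolve
packer validate packer_cis.json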

Builders

  "builders": [{
    ...
    "ami_name": “{{user `ami_name`| clean_ami_name}}”,
    "tags": {
      "Name": “{{user `ami_name`}}”,
    },
    "run_tags": {
      "Name": “{{user `ami_name`}}",
    },
    "run_volume_tags": {
      "Name": “{{user `ami_name`}}",
    },
    "snapshot_tags": {
      "Name": “{{user `ami_name`}}",
    },
    ...
    "vpc_id": "{{user `vpc` }}",
    "subnet_id": "{{user `subnet` }}"
  }],

We now have new tag properties (*_tags) and a new function (clean_ami_name), and we launch temporary resources in the VPC and subnet specified in the environment variables. AMI names can only contain a certain set of ASCII characters. If the ami_name input deviates from the expected characters (for example, includes whitespace or slashes), Packer’s clean_ami_name function will fix it.

For more information, see functions on the HashiCorp Packer website.

Provisioners

  "provisioners": [
    {
        "type": "shell",
        "inline": [
            "sudo pip install ansible"
        ]
    }, 
    {
        "type": "ansible-local",
        "playbook_file": "ansible/playbook.yaml",
        "role_paths": [
            "ansible/roles/common"
        ],
        "playbook_dir": "ansible",
        "galaxy_file": "ansible/requirements.yaml"
    },
    {
      "type": "shell",
      "inline": [
        "rm .ssh/authorized_keys ; sudo rm /root/.ssh/authorized_keys"
      ]
    }

We used the shell provisioner to apply OS patches in Part 1. Now, we use shell to install Ansible on the target machine and ansible-local to import, install, and execute Ansible roles to make our target machine conform to our standards.

Packer uses shell to remove temporary keys before it creates an AMI from the target and temporary EC2 instance.

Ansible Implementation Details

 
Ansible provides OS patching through a custom Common role that can be easily customized for other tasks.

The CIS Benchmark and CloudWatch Logs are implemented through two third-party Ansible roles that are defined in ansible/requirements.yaml, as seen in the Packer template.

The Ansible provisioner uses Ansible Galaxy to download these roles onto the target machine and execute them as instructed by ansible/playbook.yaml.
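
If you’d like to inspect those roles on your own machine before a build runs, a minimal sketch (assuming Ansible is installed locally and you are in the repository root) is:

# Download the third-party roles listed in the requirements file into the local roles directory
ansible-galaxy install -r ansible/requirements.yaml -p ansible/roles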

For information about how these components are organized, see the Playbook Roles and Include Statements in the Ansible documentation.

The following Ansible playbook (ansible/playbook.yaml) controls the execution order and custom properties:

---
- hosts: localhost
  connection: local
  gather_facts: true    # gather OS info that is made available for tasks/roles
  become: yes           # majority of CIS tasks require root
  vars:
    # CIS Controls whitepaper:  http://bit.ly/2mGAmUc
    # AWS CIS Whitepaper:       http://bit.ly/2m2Ovrh
    cis_level_1_exclusions:
    # 3.4.2 and 3.4.3 effectively blocks access to all ports to the machine
    ## This can break automation; ignoring it as there are stronger mechanisms than that
      - 3.4.2 
      - 3.4.3
    # CloudWatch Logs will be used instead of Rsyslog/Syslog-ng
    ## Same would be true if any other software doesn't support Rsyslog/Syslog-ng mechanisms
      - 4.2.1.4
      - 4.2.2.4
      - 4.2.2.5
    # Autofs is not installed in newer versions, let's ignore
      - 1.1.19
    # Cloudwatch Logs role configuration
    logs:
      - file: /var/log/messages
        group_name: "system_logs"
  roles:
    - common
    - anthcourtney.cis-amazon-linux
    - dharrisio.aws-cloudwatch-logs-agent

Both third-party Ansible roles can be easily configured through variables (vars). We use Ansible playbook variables to exclude CIS controls that don’t apply to our case and to instruct the CloudWatch Logs agent to stream the /var/log/messages log file to CloudWatch Logs.

If you need to add more OS or application logs, you can easily duplicate the playbook and make changes. The CloudWatch Logs agent will ship configured log messages to CloudWatch Logs.

For more information about parameters you can use to further customize the third-party roles, download the Ansible roles for the CloudWatch Logs agent and CIS Amazon Linux from the Galaxy website.

Committing Changes

 
Now that Ansible and CloudWatch Events are configured as a part of the build process, committing any changes to the AWS CodeCommit Git repository will trigger a new AMI build process that can be followed through the AWS CodePipeline console.

When the build is complete, an email will be sent to the email address you provided as a part of the CloudFormation stack deployment. The email serves as notification that an AMI has been built and is ready for use.
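
The CloudFormation stack subscribes the address you provided as a parameter. If you later want to notify additional recipients, one way is via the AWS CLI; the topic ARN and address below are placeholders:

# Subscribe another email address to the AMI builder SNS topic
aws sns subscribe --topic-arn arn:aws:sns:eu-west-1:123456789012:AmiBuilderNotify \
  --protocol email --notification-endpoint ops-team@example.com

The new address only starts receiving notifications after its owner clicks the confirmation link that SNS sends.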

Summary

 
We used AWS CodeCommit, AWS CodePipeline, AWS CodeBuild, Packer, and Ansible to build a pipeline that continuously builds new, hardened CIS AMIs. We used Amazon SNS so that email addresses subscribed to an SNS topic are notified upon completion of the AMI build.

By treating our AMI creation process as code, we can iterate and track changes over time. In this way, it’s no different from a software development workflow. With that in mind, software patches, OS configuration, and logs that need to be shipped to a central location are only a git commit away.

Next Steps

 
Here are some ideas to extend this AMI builder:

  • Hook up a Lambda function in CloudWatch Events to update EC2 Auto Scaling configuration upon completion of the AMI build.
  • Use AWS CodePipeline parallel steps to build multiple Packer images.
  • Add a commit ID as a tag for the AMI you created.
  • Create a scheduled Lambda function through CloudWatch Events to clean up old AMIs based on timestamp (name or additional tag).
  • Implement Windows support for the AMI builder.
  • Create a cross-account or cross-region AMI build.

CloudWatch Events allows the AMI builder to decouple AMI configuration and creation so that you can easily add your own logic using targets (AWS Lambda, Amazon SQS, Amazon SNS) to add events or recycle EC2 instances with the new AMI.

If you have questions or other feedback, feel free to leave it in the comments or contribute to the AMI Builder repo on GitHub.

Court Grants Subpoenas to Unmask ‘TVAddons’ and ‘ZemTV’ Operators

Post Syndicated from Ernesto original https://torrentfreak.com/court-grants-subpoenas-to-unmask-tvaddons-and-zemtv-operators-170621/

Earlier this month we broke the news that third-party Kodi add-on ZemTV and the TVAddons library were being sued in a federal court in Texas.

In a complaint filed by American satellite and broadcast provider Dish Network, both stand accused of copyright infringement, facing up to $150,000 for each offense.

While the allegations are serious, Dish doesn’t know the full identities of the defendants.

To find out more, the company requested a broad range of subpoenas from the court, targeting Amazon, Github, Google, Twitter, Facebook, PayPal, and several hosting providers.

From Dish’s request

This week the court granted the subpoenas, which means that they can be forwarded to the companies in question. Whether that will be enough to identify the people behind ‘TVAddons’ and ‘ZemTV’ remains to be seen, but Dish has cast its net wide.

For example, the subpoena directed at Google covers any type of information that can be used to identify the account holder of taacc14@gmail.com, which is believed to be tied to ZemTV.

The information requested from Google includes IP address logs with session date and timestamps, but also covers “all communications,” including GChat messages from 2014 onwards.

Similarly, Twitter is required to hand over information tied to the accounts of the users “TV Addons” and “shani_08_kodi” as well as other accounts linked to tvaddons.ag and streamingboxes.com. This also applies to the various tweets that were sent through these accounts.

The subpoena specifically mentions “all communications, including ‘tweets’, Twitter sent to or received from each Twitter Account during the time period of February 1, 2014 to present.”

From the Twitter subpoena

Similar subpoenas were granted for the other services, tailored towards the information Dish hopes to find there. For example, the broadcast provider also requests details of each transaction from PayPal, as well as all debits and credits to the accounts.

In some parts, the subpoenas appear to be quite broad. PayPal is asked to reveal information on any account with the credit card statement “Shani,” for example. Similarly, Github is required to hand over information on accounts that are ‘associated’ with the tvaddons.ag domain, which is referenced by many people who are not directly connected to the site.

The service providers in question still have the option to challenge the subpoenas or ask the court for further clarification. A full overview of all the subpoena requests is available here (Exhibit 2 and onwards), including all the relevant details. This also includes several letters to foreign hosting providers.

While Dish still appears to be keen to find out who is behind ‘TVAddons’ and ‘ZemTV,’ not much has been heard from the defendants in question.

ZemTV developer “Shani” shut down his addon soon after the lawsuit was announced, without mentioning it specifically. TVAddons, meanwhile, has been offline for well over a week, without any public notice about the reason for the prolonged downtime.

The court’s order granting the subpoenas and letters of request is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Sync vs. Backup vs. Storage

Post Syndicated from Yev original https://www.backblaze.com/blog/sync-vs-backup-vs-storage/

Cloud Sync vs. Cloud Backup vs. Cloud Storage

Google recently announced their new Backup and Sync feature for Google Drive, which allows users to select folders on their computer that they want to back up to their Google Drive account (note: these files count against your Google Drive storage limit). Whenever new backup services are announced, we get a lot of questions, so I thought we should take a minute to review the differences in cloud-based services.

What is the Cloud? Sync Vs Backup Vs Storage

There is still a lot of confusion in the space about what exactly the “cloud” is and how different services interact with it. When folks use a syncing and sharing service like Dropbox, Box, Google Drive, OneDrive or any of the others, they often assume those are acting as a cloud backup solution as well. Adding to the confusion, cloud storage services are often the backend for backup and sync services as well as standalone services. To help sort this out, we’ll define some of the terms below as they apply to a traditional computer set-up with a bunch of apps and data.

Cloud Sync (ex. Dropbox, iCloud Drive, OneDrive, Box, Google Drive) – these services sync folders on your computer to folders on other machines or to the cloud – allowing users to work from a folder or directory across devices. Typically these services have tiered pricing, meaning you pay for the amount of data you store with the service. Sometimes these services even have a rollback feature in case of data loss; of course, only files that are in the synced folders are available to be recovered.

Cloud Backup (ex. Backblaze Cloud Backup, Mozy, Carbonite) – these services work in the background automatically. The user does not need to take any action like setting up specific folders. Backup services typically back up any new or changed data on your computer to another location. Before the cloud took off, that location was primarily a CD or an external hard drive – but as cloud storage became more readily available it became the most popular storage medium. Typically these services have fixed pricing, and if there is a system crash or data loss, all backed up data is available for restore. In addition, these services have rollback features in case there is data loss / accidental file deletion.

Cloud Storage (ex. Backblaze B2, Amazon S3, Microsoft Azure) – these services are where many online backup and syncing and sharing services store data. Cloud storage providers typically serve as the endpoint for data storage. These services typically provide APIs, CLIs, and access points for individuals and developers to tie in their cloud storage offerings directly. These services are priced “per GB” meaning you pay for the amount of storage that you use. Since these services are designed for high-availability and durability, data can live solely on these services – though we still recommend having multiple copies of your data, just in case.
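
As a tiny illustration of the “per GB, API-driven” model, storing and retrieving a file on a cloud storage service like Amazon S3 comes down to a couple of CLI commands; the bucket and file names here are hypothetical:

# Upload a file to a storage bucket; you pay for the bytes stored
aws s3 cp photos-2017.zip s3://my-archive-bucket/photos-2017.zip

# Download it again later; egress is typically billed separately
aws s3 cp s3://my-archive-bucket/photos-2017.zip ./photos-2017.zip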

What Should You Use?

Backblaze strongly believes in a 3-2-1 Backup Strategy. A 3-2-1 strategy means having at least 3 total copies of your data, 2 of which are local but on different mediums (e.g. an external hard drive in addition to your computer’s local drive), and at least 1 copy offsite. The best setup is data on your computer, a copy on a hard drive that lives somewhere not inside your computer, and another copy with a cloud backup provider. Backblaze Cloud Backup is a great complement to other services, like Time Machine, Dropbox, and even the free tiers of cloud storage services.

What is The Difference Between Cloud Sync and Backup?

Let’s take a look at some sync setups that we see fairly frequently.

Example 1) Users have one folder on their computer that is designated for Dropbox, Google Drive, OneDrive, or one of the other syncing/sharing services. Users save or place data into those directories when they want them to appear on other devices. Often these users are using the free-tier of those syncing and sharing services and only have a few GB of data uploaded in them.

Example 2) Users are paying for extended storage for Dropbox, Google Drive, OneDrive, etc… and use those folders as the “Documents” folder – essentially working out of those directories. Files in that folder are available across devices, however, files outside of that folder (e.g. living on the computer’s desktop or anywhere else) are not synced or stored by the service.

What both examples are missing, however, is the backup of photos, movies, videos, and the rest of the data on the computer. That’s where cloud backup providers excel, by automatically backing up user data with little or no set-up, and no need for the dragging-and-dropping of files. Backblaze actually scans your hard drive to find all the data, regardless of where it might be hiding. The result is that all of the user’s data is kept in the Backblaze cloud, and the portion of the data that is synced is also kept in that provider’s cloud – giving the user another layer of redundancy. Best of all, Backblaze will actually back up your Dropbox, iCloud Drive, Google Drive, and OneDrive folders.

Data Recovery

The most important feature to think about is how easy it is to get your data back from all of these services. With sync and share services, retrieving a lot of data, especially if you are in a high-data tier, can be cumbersome and take a while. Generally, the sync and share services only allow customers to download files over the Internet. If you are trying to download more than a couple of gigabytes of data, the process can take time and can be fraught with errors.

With cloud storage services, you can usually only retrieve data over the Internet as well, and you pay for both the storage and the egress of the data, so retrieving a large amount of data can be both expensive and time consuming.

Cloud backup services will enable you to download files over the internet too and can also suffer from long download times. At Backblaze we never want our customers to feel like we’re holding their data hostage, which is why we have a lot of restore options, including our Restore Return Refund policy, which allows people to restore their data via a USB Hard Drive, and then return that drive to us for a refund. Cloud sync providers do not provide this capability.

One popular data recovery use case we’ve seen when a person has a lot of data to restore is to download just the files that are needed immediately, and then order a USB Hard Drive restore for the remaining files that are not as time sensitive. The user gets all their files back in a few days, and their network is spared the download charges.

The bottom line is that all of these services have merit for different use-cases. Have questions about which is best for you? Sound off in the comments below!

The post Sync vs. Backup vs. Storage appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

US Embassy Threatens to Close Domain Registry Over ‘Pirate Bay’ Domain

Post Syndicated from Andy original https://torrentfreak.com/us-embassy-threatens-to-close-domain-registry-over-pirate-bay-domain-170620/

Domains have become an integral part of the piracy wars and no one knows this better than The Pirate Bay.

The site has burned through numerous domains over the years, with copyright holders and authorities successfully pressurizing registries to destabilize the site.

The latest news on this front comes from the Central American country of Costa Rica, where the local domain registry is having problems with the United States government.

The drama is detailed in a letter to ICANN penned by Dr. Pedro León Azofeifa, President of the Costa Rican Academy of Science, which operates NIC Costa Rica, the registry in charge of local .CR domain names.

Azofeifa’s letter is addressed to ICANN board member Thomas Schneider and pulls no punches. It claims that for the past two years the United States Embassy in Costa Rica has been pressuring NIC Costa Rica to take action against a particular domain.

“Since 2015, the United Estates Embassy in Costa Rica, who represents the interests of the United States Department of Commerce, has frequently contacted our organization regarding the domain name thepiratebay.cr,” the letter to ICANN reads.

“These interactions with the United States Embassy have escalated with time and include great pressure since 2016 that is exemplified by several phone calls, emails, and meetings urging our ccTLD to take down the domain, even though this would go against our domain name policies.”

The letter states that following pressure from the US, the Costa Rican Ministry of Commerce carried out an investigation which concluded that not taking down the domain was in line with best practices that only require suspensions following a local court order. That didn’t satisfy the United States though, far from it.

“The representative of the United States Embassy, Mr. Kevin Ludeke, Economic Specialist, who claims to represent the interests of the US Department of Commerce, has mentioned threats to close our registry, with repeated harassment regarding our practices and operation policies,” the letter to ICANN reads.

Ludeke is indeed listed on the US Embassy site for Costa Rica. He’s also referenced in a 2008 diplomatic cable leaked previously by Wikileaks. Contacted via email, Ludeke did not immediately respond to TorrentFreak’s request for comment.

Extract from the letter to ICANN

Surprisingly, Azofeifa says the US representative then got personal, making negative comments towards his Executive Director, “based on no clear evidence or statistical data to support his claims, as a way to pressure our organization to take down the domain name without following our current policies.”

Citing the Tunis Agenda for the Information Society of 2005, Azofeifa asserts that “policy authority for Internet-related public policy issues is the sovereign right of the States,” which in Costa Rica’s case means that there must be “a final judgment from the Courts of Justice of the Republic of Costa Rica” before the registry will suspend a domain.

But it seems legal action was not the preferred route of the US Embassy. Demanding that NIC Costa Rica take unilateral action, Mr. Ludeke continued with “pressure and harassment to take down the domain name without its proper process and local court order.”

Azofeifa’s letter to ICANN, which is cc’d to Stafford Fitzgerald Haney, United States Ambassador to Costa Rica and various people in the Costa Rican Ministry of Commerce, concludes with a request for suggestions on how to deal with the matter.

While the response should prove very interesting, none of the parties involved appear to have noticed that ThePirateBay.cr isn’t officially connected to The Pirate Bay.

The domain and associated site appeared in the wake of the December 2014 shut down of The Pirate Bay, claiming to be the real deal and even going as far as making fake accounts in the names of famous ‘pirate’ groups including ettv and YIFY.

Today it acts as an unofficial and unaffiliated reverse proxy to The Pirate Bay while presenting the site’s content as its own. It’s also affiliated with a fake KickassTorrents site, Kickass.cd, which to this day claims that it’s a reincarnation of the defunct torrent giant.

But perhaps the most glaring issue in this worrying case is the apparent willingness of the United States to call out Costa Rica for not doing anything about a .CR domain run by third parties, when the real Pirate Bay’s .org domain is under United States’ jurisdiction.

Registered by the Public Interest Registry in Reston, Virginia, ThePirateBay.org is the famous site’s main domain. TorrentFreak asked PIR if anyone from the US government had ever requested action against the domain but at the time of publication, we had received no response.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Shelfchecker Smart Shelf: build a home library system

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/smart-shelf-home-library/

Are you tired of friends borrowing your books and never returning them? Maybe you’re sure you own 1984 but can’t seem to locate it? Do you find a strange satisfaction in using the supermarket self-checkout simply because of the barcode beep? With the ShelfChecker smart shelf from maker Annelynn described on Instructables, you can be your own librarian and never misplace your books again! Beep!

Harry Potter and the Aesthetically Pleasing Smart Shelf

The ShelfChecker smart shelf

Annelynn built her smart shelf utilising a barcode scanner, LDR light sensors, a Raspberry Pi, plus a few other peripherals and some Python scripts. She has created a fully integrated library checkout system with accompanying NeoPixel location notification for your favourite books.

This build allows you to issue your book-borrowing friends their own IDs and catalogue their usage of your treasured library. On top of that, you’ll be able to use LED NeoPixels to highlight your favourite books, registering their removal and return via light sensor tracking.

Using light sensors for book cataloguing

Once Annelynn had built the shelf, she drilled holes to fit the eight LDRs that would guard her favourite books, and separated them with corner brackets to prevent confusion.

Corner brackets keep the books in place without confusion between their respective light sensors

Due to the eight input channels of the MCP3008 analog-to-digital converter chip (available from Adafruit), the smart shelf can only keep track of eight of your favourite books. But this limitation won’t stop you from cataloguing your entire home library; it simply means you get to pick your ultimate favourites that will occupy the prime real estate on your wall.

Obviously, the light sensors sense light. So when you remove or insert a book, light floods or is blocked from that book’s sensor. The sensor sends this information to the Raspberry Pi. In response, an Arduino controls the NeoPixel strip along the ‘favourites’ shelf to indicate the book’s status.

The book you are looking for is temporarily unavailable

Code your own library

While keeping a close eye on your favourite books, the system also allows creation of a complete library catalogue system with the help of a MySQL database. Users of the library can log into the system with a barcode scanner, and take out or return books recorded in the database guided by an LCD screen attached to the Pi.

Beep!

I won’t go into an extensive how-to on creating MySQL databases here on the blog, because my glamorous assistant Janina has pulled up these MySQL tutorials to help you get started. Annelynn’s GitHub scripts are also packed with useful comments to keep you on track.

Raspberry Pi and books

We love books and libraries. And considering the growing number of Code Clubs and makerspaces appearing in libraries across the world, and the host of book-based Pi builds we’ve come across, the love seems to be mutual.

We’ve seen the Raspberry Pi introduced into the Wordery bookseller warehouse, a Pi-powered page-by-page book scanner by Jonathon Duerig, and these brilliant text-to-speech and page turner projects that use our Pis!

Did I say we love books? In fact we love them so much that members of our team have even written a few.*

If you’ve set up any sort of digital making event in a library, have in some way incorporated Raspberry Pi into your own personal book collection, or even managed to recreate the events of your favourite story using digital making, make sure to let us know in the comments below.

* Shameless plug**

Fancy adding some Pi to your home library? Check out these publications from the Raspberry Pi staff:

A Beginner’s Guide to Coding by Marc Scott

Adventures in Raspberry Pi by Carrie Anne Philbin

Getting Started with Raspberry Pi by Matt Richardson

Raspberry Pi User Guide by Eben Upton

The MagPi Magazine, Essentials Guides and Project Books

Make Your Own Game and Build Your Own Website by CoderDojo

** Shameless Pug

 

The post Shelfchecker Smart Shelf: build a home library system appeared first on Raspberry Pi.

Top 10 Most Pirated Movies of The Week on BitTorrent – 06/19/17

Post Syndicated from Ernesto original https://torrentfreak.com/top-10-pirated-movies-week-bittorrent-061917/

This week we have three newcomers in our chart.

Wonder Woman is the most downloaded movie.

The data for our weekly download chart is estimated by TorrentFreak, and is for informational and educational reference only. All the movies in the list are Web-DL/Webrip/HDRip/BDrip/DVDrip unless stated otherwise.

RSS feed for the weekly movie download chart.

This week’s most downloaded movies are:
Rank (Rank last week) Movie name IMDb Rating / Trailer
Most downloaded movies via torrents
1 (2) Wonder Woman (TC) 8.2 / trailer
2 (…) Power Rangers 6.5 / trailer
3 (1) The Fate of the Furious 6.7 / trailer
4 (…) Chips 5.8 / trailer
5 (5) The Boss Baby 6.5 / trailer
6 (4) John Wick: Chapter 2 8.0 / trailer
7 (3) Life 6.8 / trailer
8 (…) The Mummy 2017 (HDTS) 5.8 / trailer
9 (7) Logan 8.6 / trailer
10 (6) Pirates of the Caribbean: Dead Men Tell No Tales (TS) 7.1 / trailer

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

The Pirate Bay Isn’t Affected By Adverse Court Rulings – Everyone Else Is

Post Syndicated from Andy original https://torrentfreak.com/the-pirate-bay-isnt-affected-by-adverse-court-rulings-everyone-else-is-170618/

For more than a decade The Pirate Bay has been the world’s most controversial site. Delivering huge quantities of copyrighted content to the masses, the platform is revered and reviled across the copyright spectrum.

Its reputation is one of a defiant Internet swashbuckler, but due to changes in how the site has been run in more recent times, its current philosophy is more difficult to gauge. What has never been in doubt, however, is the site’s original intent to be as provocative as possible.

Through endless publicity stunts, some real, some just for the ‘lulz’, The Pirate Bay managed to attract a massive audience, all while incurring the wrath of every major copyright holder in the world.

Make no mistake, they all queued up to strike back, but every subsequent rightsholder action was met by a Pirate Bay middle finger, two fingers, or chin flick, depending on the mood of the day. This only served to further delight the masses, who happily spread the word while keeping their torrents flowing.

This vicious circle of being targeted by the entertainment industries, mocking them, and then reaping the traffic benefits, developed into the cheapest long-term marketing campaign the Internet had ever seen. But nothing is ever truly for free and there have been consequences.

After taunting Hollywood and the music industry with its refusals to capitulate, endless legal action that the site would have ordinarily been forced to participate in largely took place without The Pirate Bay being present. It doesn’t take a law degree to work out what happened in each and every one of those cases, whatever complex route they took through the legal system. No defense, no win.

For example, the web-blocking phenomenon across the UK, Europe, Asia and Australia was driven by the site’s absolute resilience and although there would clearly have been other scapegoats had The Pirate Bay disappeared, the site was the ideal bogeyman the copyright lobby required to move forward.

Filing blocking lawsuits while bringing hosts, advertisers, and ISPs on board for anti-piracy initiatives was also made easier with the ‘evil’ Pirate Bay still online. The platform’s immunity to every anti-piracy technique under the sun only strengthened the cases of those arguing for even more drastic measures.

Over a decade, this has meant a significant tightening of the sharing and streaming climate. Without any big legislative changes but plenty of case law against The Pirate Bay, web-blocking is now a walk in the park, ad hoc domain seizures are a fairly regular occurrence, and few companies want to host sharing sites. Advertisers and brands are also hesitant over where they place their ads. It’s a very different world to the one of 10 years ago.

While it would be wrong to attribute every tightening of the noose to the actions of The Pirate Bay, there’s little doubt that the site and its chaotic image played a huge role in where copyright enforcement is today. The platform set out to provoke and succeeded in every way possible, gaining supporters in their millions. It could also be argued it kicked a hole in a hornets’ nest, releasing the hell inside.

But perhaps the site’s most amazing achievement is the way it has managed to stay online, despite all the turmoil.

This week yet another ruling, this time from the powerful European Court of Justice, found that by offering links in the manner it does, The Pirate Bay and other sites are liable for communicating copyright works to the public. Of course, this prompted the usual swathe of articles claiming that this could be the final nail in the site’s coffin.

Wrong.

In common with every ruling, legal defeat, and legislative restriction put in place due to the site’s activities, this week’s decision from the ECJ will have zero effect on the Pirate Bay’s availability. For right or wrong, the site was breaking the law long before this ruling and will continue to do so until it decides otherwise.

What we have instead is a further tightened legal landscape that will have a lasting effect on everything BUT the site, including weaker torrent sites, Internet users, and user-uploaded content sites such as YouTube.

With The Pirate Bay carrying on regardless, that is nothing short of remarkable.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

“Kodi Boxes Are a Fire Risk”: Awful Timing or Opportunism?

Post Syndicated from Andy original https://torrentfreak.com/kodi-boxes-are-a-fire-risk-awful-timing-or-opportunism-170618/

Anyone who saw the pictures this week couldn’t have failed to be moved by the plight of Londoners caught up in the Grenfell Tower inferno. The apocalyptic images are likely to stay with people for years to come and the scars for those involved may never heal.

As the building continued to smolder and the death toll increased, UK tabloids provided wall-to-wall coverage of the disaster. On Thursday, however, The Sun took a short break to put out yet another sensationalized story about Kodi. Given the week’s events, it was bound to raise eyebrows.

“HOT GOODS: Kodi boxes are a fire hazard because thousands of IPTV devices nabbed by customs ‘failed UK electrical standards’,” the headline reads.

Another sensational ‘Kodi’ headline

“It’s estimated that thousands of Brits have bought so-called Kodi boxes which can be connected to telly sets to stream pay-per-view sport and films for free,” the piece continued.

“But they could be a fire hazard, according to the Federation Against Copyright Theft (FACT), which has been nabbing huge deliveries of the devices as they arrive in the UK.”

As the image below shows, “Kodi box” fire hazard claims appeared next to images from other news articles about the huge London fire. While all separate stories, the pairing is not a great look.

A ‘Kodi Box’, as depicted in The Sun

FACT chief executive Kieron Sharp told The Sun that his group had uncovered two parcels of 2,000 ‘Kodi’ boxes and found that they “failed electrical safety standards”, making them potentially dangerous. While that may well be the case, the big question is all about timing.

It’s FACT’s job to reduce copyright infringement on behalf of clients such as The Premier League, so it’s no surprise that they’re making a sustained effort to deter the public from buying these devices. That being said, it can’t have escaped FACT or The Sun that fire and death are extremely sensitive topics this week.

That leaves us with a few options including unfortunate opportunism or perhaps terrible timing, but let’s give the benefit of the doubt for a moment.

There’s a good argument that FACT and The Sun brought a valid issue to the public’s attention at a time when fire safety is on everyone’s lips. So, to give credit where it’s due, providing people with a heads-up about potentially dangerous devices is something that most people would welcome.

However, it’s difficult to offer congratulations on the PSA when the story as it appears in The Sun does nothing – absolutely nothing – to help people stay safe.

If some boxes are a risk (and that’s certainly likely, given the level of Far East imports coming into the UK), which ones are dangerous? Where were they manufactured? Who sold them? What are the serial numbers? Which devices do people need to get out of their houses?

Sadly, none of these questions were answered or even addressed in the article, making it little more than scaremongering. Only making matters worse, the piece notes that it isn’t even clear how many of the seized devices are indeed a fire risk and that more tests need to be done. Is this how we should tackle such an important issue during an extremely sensitive week?

Timing and lack of useful information aside, one then has to question the terminology employed in the article.

As a piece of computer software, Kodi cannot catch fire. So, what we’re actually talking about here is small computers coming into the country without passing safety checks. The presence of Kodi on the devices – if indeed Kodi was even installed pre-import – is absolutely irrelevant.

Anti-piracy groups warning people of the dangers associated with their piracy habits is nothing new. For years, Internet users have been told that their computers will become malware infested if they share files or stream infringing content. While in some cases that may be true, there’s rarely any effort by those delivering the warnings to inform people on how to stay safe.

A classic example can be found in the numerous reports put out by the Digital Citizens Alliance in the United States. The DCA has produced several, no doubt expensive, reports which claim to highlight the risks Internet users are exposed to on ‘pirate’ sites.

The DCA claims to do this in the interests of consumers but the group offers no practical advice on staying safe nor does it provide consumers with risk reduction strategies. Like many high-level ‘drug prevention’ documents shuffled around government, it could be argued that on a ‘street’ level their reports are next to useless.

Demonizing piracy is a well-worn and well-understood strategy but if warnings are to be interpreted as representing genuine concern for the welfare of people, they have to be a lot more substantial than mere scaremongering.

Anyone concerned about potentially dangerous devices can check out these useful guides from Electrical Safety First (pdf) and the Electrical Safety Council (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Pirate Bay Ruling is Bad News For Google & YouTube, Experts Say

Post Syndicated from Andy original https://torrentfreak.com/pirate-bay-ruling-is-bad-news-for-google-youtube-experts-says-170615/

After years of legal wrangling, yesterday the European Court of Justice handed down a decision in the case between Dutch anti-piracy outfit BREIN and ISPs Ziggo and XS4ALL.

BREIN had demanded that the ISPs block The Pirate Bay, but both providers dug in their heels, forcing the case through the Supreme Court and eventually the ECJ.

For BREIN, yesterday’s decision will have been worth the wait. Although The Pirate Bay does not provide the content that’s ultimately downloaded and shared by its users, the ECJ said that it plays an important role in how that content is presented.

“Whilst it accepts that the works in question are placed online by the users, the Court highlights the fact that the operators of the platform play an essential role in making those works available,” the Court said.

With that established, the all-important matter is whether, by providing such a platform, the operators of The Pirate Bay are effectively engaging in a “communication to the public” of copyrighted works. According to the ECJ, that’s indeed the case.

“The Court holds that the making available and management of an online sharing platform must be considered to be an act of communication for the purposes of the directive,” the ECJ said.

Add into the mix that The Pirate Bay generates profit from its activities and there’s a potent case for copyright liability.

While the case was about The Pirate Bay, ECJ rulings tend to have an effect far beyond individual cases. That’s certainly the opinion of Enzo Mazza, chief at Italian anti-piracy group FIMI.

“The ruling will have a major impact on the way that entities like Google operate, because it will expose them to a greater and more direct responsibility,” Mazza told La Repubblica.

“So far, Google has worked against piracy by eliminating illegal content after it gets reported. But that is not enough. It is a fairly ineffective intervention.”

Mazza says that platforms like Google, YouTube, and thousands of similar sites that help to organize and curate user-uploaded content are somewhat similar to The Pirate Bay. In any event, they are not neutral intermediaries, he insists.

The conclusion that the decision is bad for platforms like YouTube is shared by Fulvio Sarzana, a lawyer with Sarzana and Partners, a law firm specializing in Internet and copyright disputes.

“In the ruling, the Court has in fact attributed, for the first time, secondary liability to sharing platforms due to the violation of copyrights carried out by the users of a platform,” Sarzana informs TF.

“This will have consequences for video-sharing platforms and user-generated content sites like YouTube, but it excludes responsibility for platforms that play a purely passive role, without affecting users’ content. This is the case with cyberlockers, for example.”

Sarzana says that “unfortunate judgments” like this should be expected, until the approval of a new European copyright law. Enzo Mazza, on the other hand, feels that the copyright reform debate should take account of this ruling when formulating legislation to stop platforms like YouTube exploiting copyright works without an appropriate license.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

ISP Doesn’t Have to Expose Alleged BitTorrent Pirates, Finnish Court Rules

Post Syndicated from Ernesto original https://torrentfreak.com/isp-doesnt-have-to-expose-alleged-bittorrent-pirates-finnish-court-rules-170615/

Starting three years ago, copyright holders began sending out thousands of settlement letters to alleged pirates in Finland, a practice often described as copyright trolling.

This week, however, the local Market Court has put the brakes on these efforts, with a rather significant ruling.

In the case in question, filmmakers requested the personal information of hundreds of alleged BitTorrent users from Internet provider DNA. However, after a careful review by a panel of seven judges, the Court decided not to grant the request.

The rightsholders provided a detailed log from a BitTorrent monitoring tool as evidence. While the Court didn’t doubt that the pirated material had been shared, it questioned how significant the infringements were.

The provided list of IP-addresses and timestamps doesn’t show how much data was shared, or for how long.

The evidence included an overview of the total number of users sharing the same file in a single BitTorrent swarm. However, the fact that thousands of people were sharing the same file says nothing about the significance of individual infringements.

“[T]he applicant has not claimed or provided any explanation that would indicate that the distribution of its work, by an IP address in the application, would have repeatedly occurred or for a longer period of time,” the Market Court writes.

The verdict, first reported by Iltalehti, refers to a recent case at the European Court of Justice and stresses that the significance of an infringement must be weighed against the defendants’ privacy rights. In this case, the court decided that the evidence doesn’t warrant the exposure of the alleged pirates.

“Since the applicant has not provided sufficient proof of compliance with the conditions set out in Article 60a of the Copyright Act to adoption of an application, the application must be dismissed,” the Market Court writes.

The outcome is a clear victory for the accused BitTorrent users. Time will tell whether rightsholders will adapt their evidence to the ruling, or whether they will test their luck elsewhere. The current ruling can still be appealed.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Man Faces Prison For Sharing Pirated Deadpool Movie on Facebook

Post Syndicated from Ernesto original https://torrentfreak.com/man-faces-prison-for-sharing-pirated-deadpool-movie-on-facebook-170614/

With roughly two billion active users per month, Facebook is by far the largest social networking site around.

While most of the content posted to the site is relatively harmless, some people use it to share things they are not supposed to.

This is also what 21-year-old Trevon Maurice Franklin from Fresno, California, did early last year. Just a week after the box-office hit Deadpool premiered in theaters, he shared a pirated copy of the movie on the social network.

Franklin, who used the screen name “Tre-Von M. King,” saw his post go viral as it allegedly reached five million views. This didn’t go unnoticed by Twentieth Century Fox, and soon after the feds were involved as well.

The FBI began to investigate the possibly criminal Facebook post and decided to build a case. This eventually led to an indictment, and the alleged “pirate” was arrested soon after.

Facebook post from early 2016

The U.S. Attorney’s Office for the Central District of California, which released the news a few hours ago, states that Franklin faces up to three years in prison for the alleged copyright infringement.

“Franklin is charged in a one-count indictment returned by a federal grand jury on April 7 with reproducing and distributing a copyrighted work, a felony offense that carries a statutory maximum penalty of three years in federal prison,” the office wrote in a press release.

According to comments posted on Facebook last year, several people warned “Tre-Von M. King” that it wasn’t wise to post copyright-infringing material on Facebook. However, Franklin said he wasn’t worried that he would get in trouble.

Comment from early 2016

While the case is significant, there are also plenty of questions that remain unanswered.

Was the defendant involved in recording the copyright-infringing copy? Was it already widely available elsewhere? Are the reported five million “views” people who watched a large part of the movie, or is this just the number of people who might have seen it in their feeds?

Thus far we have not seen a copy of the indictment in the court records, but a follow-up may be warranted when it becomes available.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Making Waves: print out sound waves with the Raspberry Pi

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/printed-sound-wave/

For fun, Eunice Lee, Matthew Zhang, and Bomani McClendon have worked together to create Waves, an audiovisual project that records people’s spoken responses to personal questions and prints them in the form of a sound wave as a gift for being truthful.

What are you grateful for?

“I’m grateful for finishing this project,” admits maker Eunice Lee as she presses a button and speaks into the microphone that is part of the Waves project build. After a brief moment, her confession appears on receipt paper as a waveform, and she grins toward the camera, happy with the final piece.

Eunice testing Waves

Waves is a Raspberry Pi project centered around transforming the transience of the spoken word into something concrete and physical. In our setup, a user presses a button corresponding to an intimate question (e.g. what’s your motto?) and answers it into a microphone while pressing down on the button.

Sound wave machine

Alongside a Raspberry Pi 3, the Waves device comprises four tactile buttons, a standard USB microphone, and a thermal receipt printer. This type of printer has become easily available to the maker movement from suppliers such as Adafruit and Pimoroni.

Definitely more fun than a polygraph test

The trio designed four colour-coded cards that represent four questions, each of which has a matching button on the breadboard. Press the button that belongs to the question to be answered, and Python code directs the Pi to record audio via the microphone. Releasing the button stops the audio recording. “Once the recording has been saved, the script viz.py is launched,” explains Lee. “This script takes the audio file and, using Python matplotlib magic, turns it into a nice little waveform image.”
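
To make the press-and-hold flow concrete, here is a minimal sketch of how one button might drive the recording. It assumes the gpiozero library and ALSA’s arecord tool, neither of which is confirmed by the post; the pin number, sample format, and file name are illustrative guesses rather than the project’s actual choices.

# A minimal sketch of the press-to-record behaviour, assuming gpiozero
# and ALSA's arecord. The BCM pin, sample format, and file name are
# illustrative guesses, not taken from the project's real code.
import subprocess
from signal import pause
from gpiozero import Button

button = Button(17)  # one of the four question buttons (hypothetical pin)
recorder = None

def start_recording():
    global recorder
    # Capture 16-bit, 44.1 kHz audio from the default microphone until stopped
    recorder = subprocess.Popen(
        ["arecord", "-f", "S16_LE", "-r", "44100", "answer.wav"]
    )

def stop_recording():
    global recorder
    if recorder is not None:
        recorder.terminate()  # releasing the button ends the capture
        recorder.wait()
        recorder = None

button.when_pressed = start_recording
button.when_released = stop_recording
pause()  # wait for button events indefinitely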

From there, the Raspberry Pi instructs the thermal printer to produce a printout of the sound wave image along with the question.
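
As for the viz.py step itself, here’s a minimal sketch of how matplotlib can turn the saved recording into a printable waveform image. The mono 16-bit WAV assumption, figure dimensions, and file names are ours, not necessarily the project’s; the real code is on Eunice’s GitHub.

# A minimal sketch of what a script like viz.py might do, assuming the
# recording is a mono 16-bit WAV; figure size and file names are guesses.
import wave
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen; the Pi may have no display attached
import matplotlib.pyplot as plt

def wav_to_waveform(wav_path, png_path):
    # Read the raw 16-bit samples from the recording
    with wave.open(wav_path, "rb") as w:
        frames = w.readframes(w.getnframes())
    samples = np.frombuffer(frames, dtype=np.int16)

    # Plot amplitude against sample index, stripped of axes and margins
    fig, ax = plt.subplots(figsize=(4, 1.5), dpi=96)
    ax.plot(samples, color="black", linewidth=0.3)
    ax.axis("off")
    fig.savefig(png_path, bbox_inches="tight", pad_inches=0)
    plt.close(fig)

wav_to_waveform("answer.wav", "waveform.png")

From there, a thermal printer library such as Adafruit’s Python port can print the saved image alongside the question text.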

Making for fun

Eunice, Bomani, and Matt, students of design and computer science at Northwestern University in Illinois, built Waves as a side project. They wanted to make something at the intersection of art and technology and were motivated by the pure joy of creating.

Making makes people happy

They have noted improvements that can be made to increase the scope of their sound wave project. We hope to see many more interesting builds from these three, and in the meantime we invite you all to look up their code on Eunice’s GitHub to create your own Waves at home.

The post Making Waves: print out sound waves with the Raspberry Pi appeared first on Raspberry Pi.

Pirate Bay Facilitates Piracy and Can be Blocked, Top EU Court Rules

Post Syndicated from Ernesto original https://torrentfreak.com/pirate-bay-facilitates-piracy-and-can-be-blocked-top-eu-court-rules-170614/

In 2014, the Court of The Hague handed down its decision in a long-running case which had previously forced two Dutch ISPs, Ziggo and XS4ALL, to block The Pirate Bay.

The Court ruled against local anti-piracy outfit BREIN, concluding that the blockade was ineffective and restricted the ISPs’ entrepreneurial freedoms.

The Pirate Bay was unblocked by all local ISPs while BREIN took the matter to the Supreme Court, which subsequently referred the case to the EU Court of Justice, seeking further clarification.

After a careful review of the case, the Court of Justice today ruled that The Pirate Bay can indeed be blocked.

While the operators don’t share anything themselves, they knowingly provide users with a platform to share copyright-infringing links. This can be seen as “an act of communication” under the EU Copyright Directive, the Court concludes.

“Whilst it accepts that the works in question are placed online by the users, the Court highlights the fact that the operators of the platform play an essential role in making those works available,” the Court explains in a press release (pdf).

According to the ruling, The Pirate Bay indexes torrents in a way that makes it easy for users to find infringing content while the site makes a profit. The Pirate Bay is aware of the infringements, and although moderators sometimes remove “faulty” torrents, infringing links remain online.

“In addition, the same operators expressly display, on blogs and forums accessible on that platform, their intention of making protected works available to users, and encourage the latter to make copies of those works,” the Court writes.

The ruling means that there are no major obstacles for the Dutch Supreme Court to issue an ISP blockade, but a final decision in the underlying case will likely take a few more months.

A decision at the European level is important, as it may also affect court orders in other countries where The Pirate Bay and other torrent sites are already blocked, including Austria, Belgium, Finland, Italy, and its home turf Sweden.

Despite the negative outcome, the Pirate Bay team is not overly worried.

“Copyright holders will remain stubborn and fight to hold onto a dying model. Clueless and corrupt law makers will put corporate interests before the public’s. Their combined jackassery is what keeps TPB alive,” TPB’s plc365 tells TorrentFreak.

“The reality is that regardless of the ruling, nothing substantial will change. Maybe more ISPs will block TPB. More people will use one of the hundreds of existing proxies, and even more new ones will be created as a result.”

Pirate Bay moderator “Xe” notes that while blockades are an extra barrier to accessing the site, they eventually teach people how to work around censorship efforts, which are not restricted to TPB.

“They’re an issue for everyone in the sense that they’re an obstacle which has to be overcome. But learning how to work around them isn’t hard and knowing how to work around them is becoming a core skill for everyone who uses the Internet.

“Blockades are not a major issue for the site in the sense that they’re nothing new: we’ve long since adapted to them. We serve the needs of millions of people every day in spite of them,” Xe adds.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

UK Police Claim Success in Keeping Gambling Ads off Pirate Sites

Post Syndicated from Andy original https://torrentfreak.com/uk-police-claim-success-in-keeping-gambling-ads-off-pirate-sites-170614/

Over the past several years, there has been a major effort by entertainment industry groups to cut off revenue streams to ‘pirate’ sites. The theory is that if sites cannot generate funds, their operators will eventually lose interest.

Since advertising is a key money earner for any website, significant resources have been expended trying to keep ads off sites that directly or indirectly profit from infringement. It’s been a multi-pronged affair, with agencies being encouraged to do the right thing and brands warned that their ads appearing on pirate sites does nothing for their image.

One sector that has trailed behind most is the gambling industry. Until fairly recently, ads for some of the UK’s largest bookmakers were a regular feature on many large pirate sites, either embedded in pages or, more often than not, appearing via popup or pop-under spreads. Now, however, a significant change is being reported.

According to the City of London Police’s Intellectual Property Crime Unit (PIPCU), over the past 12 months there has been an 87% drop in adverts for licensed gambling operators being displayed on infringing websites.

The research was carried out by whiteBULLET, a brand safety and advertising solutions company which helps advertisers to assess whether placing an advert on a particular URL will cause it to appear on a pirate site.

PIPCU says that licensed gambling operators have an obligation to “keep crime out of gambling” due to their commitments under the Gambling Act 2005. However, the Gambling Commission, the UK’s gambling regulatory body, has recently been taking additional steps to tackle the problem.

In September 2015, the Commission consulted on amendments (pdf) to licensing conditions that would compel licensees to ensure that advertisements “placed by themselves and others” do not appear on websites providing unauthorized access to copyrighted content.

After the consultation was published in May 2016 (pdf), all respondents agreed in principle that gambling operators should not advertise on pirate sites. A month later, the Commission said it would ban the placement of gambling ads on such platforms.

When the new rules came into play last October, 40 gambling companies (including Bet365, Coral and Sky Bet, who had previously been called out for displaying ads on pirate sites) were making use of PIPCU’s ‘Infringing Website List‘, a database of sites that police claim are actively involved in piracy.

Speaking yesterday, acting Detective Superintendent Peter Ratcliffe, Head of the Police Intellectual Property Crime Unit (PIPCU), welcomed the ensuing reduction in ad placement on ‘pirate’ domains.

“The success of a strong relationship built between PIPCU and The Gambling Commission can be seen by these figures. This is a fantastic example of a joint working initiative between police and an industry regulator,” Ratcliffe said.

“We commend the 40 gambling companies who are already using the Infringing Website List and encourage others to sign up. We will continue to encourage all UK advertisers to become a member of the Infringing Website List to ensure they’re not inadvertently funding criminal websites.”

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.