Tag Archives: eu

From Idea to Launch: Getting Your First Customers

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/how-to-get-your-first-customers/

After deciding to build an unlimited backup service and developing our own storage platform, the next step was to get customers and feedback. Not all customers are created equal. Let’s talk about the types, and when and how to attract them.

How to Get Your First Customers

First Step – Don’t Launch Publicly
Launch when you’re ready for the judgments of people who don’t know you at all. Until then, don’t launch. Sign up users and customers who you know, who you can trust to cut you some slack (while providing you feedback), or, at minimum, for whom you can set expectations. For months the Backblaze website was a single page with no way to get the product and minimal info on what it would be. This is not to counter the Lean Startup ‘iterate quickly with customer feedback’ advice. Rather, it is an acknowledgement that different types of feedback are required at different development stages.

Sign Up Your Friends
We knew all of our first customers; they were friends, family, and previous co-workers. Many knew what we were up to and were excited to help us. No magic marketing or tech savviness was required to reach them – we just asked that they try the service. We asked them to provide us feedback on their experience and collected it through email and conversations. While the feedback wasn’t unbiased, it was nonetheless wide-ranging, real, and often insightful. These people were willing to spend time carefully thinking about their feedback and delving deeper into the conversations.

Broaden to Beta
Unless you’re famous or your service costs $1 million per customer, you’ll probably need to expand quickly beyond your friends to build a business – and to get broader feedback. Our next step was to broaden the customer base to beta users.

Opening up the service in beta provides three benefits:

  1. Air cover for the early warts. There are going to be issues, bugs, unnecessarily complicated user flows, and poorly worded text. Beta tells people, “We don’t consider the product ‘done’ and you should expect some of these issues. Please be patient with us.”
  2. A request for feedback. Some people always provide feedback, but beta communicates that you want it.
  3. An awareness opportunity. Opening up in beta provides an early (but not only) opportunity to have an announcement and build awareness.

Pitching Beta to Press
Not all press cares about, or is even willing to cover, beta products. Much of the mainstream press wants to write about services that are fully live, have scale, and are important in the marketplace. However, there are a number of sites that like to cover the leading edge – and that means covering betas. Techcrunch, Ars Technica, and SimpleHelp covered our initial private beta launch. I’ll go into the details of how to work with the press to cover your announcements in a post next month.

Private vs. Public Beta
Both private and public beta provide all three of the benefits above. The difference between the two is that private betas are much more controlled, whereas public ones bring in more users. But this isn’t an either/or – I recommend doing both.

Private Beta
For our original beta in 2008, we decided that we were comfortable with about 1,000 users subscribing to our service. That would provide us with a healthy amount of feedback and get some early adoption, while not overwhelming us or our server capacity, and equally important not causing cash flow issues from having to buy more equipment. So we decided to limit sign-ups to the first 1,000 people; after that, we would shut off sign-ups for a while.

But how do you even get 1,000 people to sign up for your service? In our case, get some major publications to write about our beta. (Note: In a future post I’ll explain exactly how to find and reach out to writers. Sign up to receive all of the entrepreneurial posts in this series.)

Public Beta
For our original service (computer backup), we did not have a public beta; but when we launched Backblaze B2, we had a private and then a public beta. The private beta allowed us to work out early kinks, while the public beta brought us a more varied set of use cases. In public beta, there is no cap on the number of users that may try the service.

While this is a first-class problem to have, if your service is flooded and stops working, it’s still a problem. Think through what you will do if that happens. In our early days, when our system could get overwhelmed by volume, we had a static web page hosted with a different registrar that wouldn’t let customers sign up but would tell them when our service would be open again. When we reached a critical volume level we would redirect to it in order to at least provide status for when we could accept more customers.

Collect Feedback
Since one of the goals of betas is to get feedback, we made sure that we had our email addresses clearly presented on the site so users could send us thoughts. We were most interested in broad qualitative feedback on users’ experience, so all emails went to an internal mailing list that would be read by everyone at Backblaze.

For our B2 public and private betas, we also added an optional short survey to the sign-up process. To be considered for the private beta you had to fill out the survey, though we found that 80% of users continued to fill it out even when it was not required. The survey had both closed-ended questions (“How much data do you have?”) and open-ended ones (“What do you want to use cloud storage for?”).

BTW, despite us getting a lot of feedback now via our support team, Twitter, and marketing surveys, we are always open to more – you can email me directly at gleb.budman {at} backblaze.com.

Don’t Throw Away Users
Initially our backup service was available only on Windows, but we had an email sign-up list for people who wanted it for their Mac. This provided us with a sense of market demand and a ready list of folks who could be beta users and early adopters when we had a Mac version. Have a service targeted at doctors but lawyers are expressing interest? Capture that.

Product Launch

When
The first question is “when” to launch. Presuming your service is in ‘public beta’, what is the advantage of moving out of beta and into a “version 1.0”, “gold”, or “public availability”? That depends on your service and customer base. Some services fly through public beta. Gmail, on the other hand, was (in)famous for being in beta for 5 years, despite having over 100 million users.

The term beta says to users, “give us some leeway, but feel free to use the service”. That’s fine for many consumer apps and will have near zero impact on them. However, services aimed at businesses and government will often not be adopted with a beta label as the enterprise customers want to know the company feels the service is ‘ready’. While Backblaze started out as a purely consumer service, because it was a data backup service, it was important for customers to trust that the service was ready.

No product is bug-free. But from a product readiness perspective, the nomenclature should also be a reflection of the quality of the product. You can launch a product with one feature that works well out of beta. But a product with fifty features on which half the users will bump into problems should likely stay in beta. The customer feedback, surveys, and your own internal testing should guide you in determining this quality during the beta. Be careful about “we’ve only seen that one time” or “I haven’t been able to reproduce that on my machine”; those issues are likely to scale with customers when you launch.

How
Launching out of beta can be as simple as removing the beta label from the website/product. However, this can be a great time to reach out to press, write a blog post, and send an email announcement to your customers.

Consider thanking your beta testers somehow; can they get some feature turned on for free, an extension of their trial, or premium support? If nothing else, remember to thank them for their feedback. Users who signed up during your beta are likely the ones who will propel your service. They had the need and interest to both be early adopters and deal with bugs. They are likely the key to getting 1,000 true fans.

The Beginning
The title of this post was “Getting your first customers”, because getting to launch may feel like the peak of your journey when you’re pre-launch, but it really is just the beginning. It’s a step along the journey of building your business. If your launch is wildly successful, enjoy it, work to build on the momentum, but don’t lose track of building your business. If your launch is a dud, go out for a coffee with your team, say “well that sucks”, and then get back to building your business. You can learn a tremendous amount from your early customers, and they can become your biggest fans, but the success of your business will depend on what you continue to do the months and years after your launch.

The post From Idea to Launch: Getting Your First Customers appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

How to Create an AMI Builder with AWS CodeBuild and HashiCorp Packer – Part 2

Post Syndicated from Heitor Lessa original https://aws.amazon.com/blogs/devops/how-to-create-an-ami-builder-with-aws-codebuild-and-hashicorp-packer-part-2/

Written by AWS Solutions Architects Jason Barto and Heitor Lessa

 
In Part 1 of this post, we described how AWS CodeBuild, AWS CodeCommit, and HashiCorp Packer can be used to build an Amazon Machine Image (AMI) from the latest version of Amazon Linux. In this post, we show how to use AWS CodePipeline, AWS CloudFormation, and Amazon CloudWatch Events to continuously ship new AMIs. We use Ansible by Red Hat to harden the OS on the AMIs through a well-known set of security controls outlined by the Center for Internet Security in its CIS Amazon Linux Benchmark.

You’ll find the source code for this post in our GitHub repo.

At the end of this post, we will have the following architecture:

Requirements

 
To follow along, you will need Git and a text editor. Make sure Git is configured to work with AWS CodeCommit, as described in Part 1.

Technologies

 
In addition to the services and products used in Part 1 of this post, we also use these AWS services and third-party software:

AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.

Amazon CloudWatch Events enables you to react selectively to events in the cloud and in your applications. Specifically, you can create CloudWatch Events rules that match event patterns, and take actions in response to those patterns.

AWS CodePipeline is a continuous integration and continuous delivery service for fast and reliable application and infrastructure updates. AWS CodePipeline builds, tests, and deploys your code every time there is a code change, based on release process models you define.

Amazon SNS is a fast, flexible, fully managed push notification service that lets you send individual messages or to fan out messages to large numbers of recipients. Amazon SNS makes it simple and cost-effective to send push notifications to mobile device users or email recipients. The service can even send messages to other distributed services.

Ansible is a simple IT automation system that handles configuration management, application deployment, cloud provisioning, ad-hoc task-execution, and multinode orchestration.

Getting Started

 
We use CloudFormation to bootstrap the following infrastructure:

  • AWS CodeCommit repository – Git repository where the AMI builder code is stored.
  • S3 bucket – Build artifact repository used by AWS CodePipeline and AWS CodeBuild.
  • AWS CodeBuild project – Executes the AWS CodeBuild instructions contained in the build specification file.
  • AWS CodePipeline pipeline – Orchestrates the AMI build process, triggered by new changes in the AWS CodeCommit repository.
  • SNS topic – Notifies subscribed email addresses when an AMI build is complete.
  • CloudWatch Events rule – Defines how the AMI builder should send a custom event to notify an SNS topic.

AMI Builder launch templates are provided for the following regions:

  • N. Virginia (us-east-1)
  • Ireland (eu-west-1)

After launching the CloudFormation template linked here, we will have a pipeline in the AWS CodePipeline console. (A Failed status at this stage simply means we don’t have any data in our newly created AWS CodeCommit Git repository yet.)

Next, we will clone the newly created AWS CodeCommit repository.

If this is your first time connecting to an AWS CodeCommit repository, please see the instructions in our documentation on Setup steps for HTTPS Connections to AWS CodeCommit Repositories.

To clone the AWS CodeCommit repository (console)

  1. From the AWS Management Console, open the AWS CloudFormation console.
  2. Choose the AMI-Builder-Blogpost stack, and then choose Outputs.
  3. Make a note of the Git repository URL.
  4. Use git to clone the repository.

For example: git clone https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/AMI-Builder_repo

To clone the AWS CodeCommit repository (CLI)

# Retrieve CodeCommit repo URL
git_repo=$(aws cloudformation describe-stacks --query 'Stacks[0].Outputs[?OutputKey==`GitRepository`].OutputValue' --output text --stack-name "AMI-Builder-Blogpost")

# Clone repository locally
git clone ${git_repo}

Bootstrap the Repo with the AMI Builder Structure

 
Now that our infrastructure is ready, download all the files and templates required to build the AMI.

Your local Git repo should have the following structure:

.
├── ami_builder_event.json
├── ansible
├── buildspec.yml
├── cloudformation
├── packer_cis.json

Next, push these changes to AWS CodeCommit, and then let AWS CodePipeline orchestrate the creation of the AMI:

git add .
git commit -m "My first AMI"
git push origin master

AWS CodeBuild Implementation Details

 
While we wait for the AMI to be created, let’s see what’s changed in our AWS CodeBuild buildspec.yml file:

...
phases:
  ...
  build:
    commands:
      ...
      - ./packer build -color=false packer_cis.json | tee build.log
  post_build:
    commands:
      - egrep "${AWS_REGION}\:\sami\-" build.log | cut -d' ' -f2 > ami_id.txt
      # Packer doesn't return non-zero status; we must do that if Packer build failed
      - test -s ami_id.txt || exit 1
      - sed -i.bak "s/<<AMI-ID>>/$(cat ami_id.txt)/g" ami_builder_event.json
      - aws events put-events --entries file://ami_builder_event.json
      ...
artifacts:
  files:
    - ami_builder_event.json
    - build.log
  discard-paths: yes

In the build phase, we capture Packer output into a file named build.log. In the post_build phase, we take the following actions:

  1. Look up the AMI ID created by Packer and save it to a temporary file (ami_id.txt).
  2. Force AWS CodeBuild to fail if the AMI ID (ami_id.txt) is not found. This is required because Packer doesn’t return a non-zero status if something goes wrong during the AMI creation process, so we have to tell AWS CodeBuild to stop by informing it that an error occurred.
  3. If an AMI ID is found, update the ami_builder_event.json file and then notify CloudWatch Events that the AMI creation process is complete.
  4. CloudWatch Events publishes a message to an SNS topic. Anyone subscribed to the topic is notified by email that an AMI has been created.

Lastly, the new artifacts phase instructs AWS CodeBuild to upload files built during the build process (ami_builder_event.json and build.log) to the S3 bucket specified in the Outputs section of the CloudFormation template. These artifacts can then be used as an input artifact in any later stage in AWS CodePipeline.

For information about customizing the artifacts sequence of the buildspec.yml, see the Build Specification Reference for AWS CodeBuild.

CloudWatch Events Implementation Details

 
CloudWatch Events allows you to extend the AMI builder so that it not only sends email after the AMI has been created, but also hooks up any of the supported targets to react to the AMI builder event. Publishing this event means you can decouple the actions you might take after AMI completion from Packer itself and plug in other actions as you see fit.

For more information about targets in CloudWatch Events, see the CloudWatch Events API Reference.

In this case, CloudWatch Events should receive the following event, match it with a rule we created through CloudFormation, and publish a message to SNS so that you can receive an email.

Example CloudWatch custom event

[
        {
            "Source": "com.ami.builder",
            "DetailType": "AmiBuilder",
            "Detail": "{ \"AmiStatus\": \"Created\"}",
            "Resources": [ "ami-12cd5guf" ]
        }
]

Cloudwatch Events rule

{
  "detail-type": [
    "AmiBuilder"
  ],
  "source": [
    "com.ami.builder"
  ],
  "detail": {
    "AmiStatus": [
      "Created"
    ]
  }
}

Example SNS message sent in email

{
    "version": "0",
    "id": "f8bdede0-b9d7...",
    "detail-type": "AmiBuilder",
    "source": "com.ami.builder",
    "account": "<<aws_account_number>>",
    "time": "2017-04-28T17:56:40Z",
    "region": "eu-west-1",
    "resources": ["ami-112cd5guf "],
    "detail": {
        "AmiStatus": "Created"
    }
}

Packer Implementation Details

 
In addition to the build specification file, there are differences between the current version of the HashiCorp Packer template (packer_cis.json) and the one used in Part 1.

Variables

  "variables": {
    "vpc": "{{env `BUILD_VPC_ID`}}",
    "subnet": "{{env `BUILD_SUBNET_ID`}}",
         “ami_name”: “Prod-CIS-Latest-AMZN-{{isotime \”02-Jan-06 03_04_05\”}}”
  },
  • ami_name: Prefixes a name used by Packer to tag resources during the Builders sequence.
  • vpc and subnet: Environment variables defined by the CloudFormation stack parameters.

We no longer assume a default VPC is present and instead use the VPC and subnet specified in the CloudFormation parameters. CloudFormation configures the AWS CodeBuild project to use these values as environment variables. They are made available throughout the build process.

That allows for more flexibility should you need to change which VPC and subnet will be used by Packer to launch temporary resources.

Builders

  "builders": [{
    ...
    "ami_name": “{{user `ami_name`| clean_ami_name}}”,
    "tags": {
      "Name": “{{user `ami_name`}}”,
    },
    "run_tags": {
      "Name": “{{user `ami_name`}}",
    },
    "run_volume_tags": {
      "Name": “{{user `ami_name`}}",
    },
    "snapshot_tags": {
      "Name": “{{user `ami_name`}}",
    },
    ...
    "vpc_id": "{{user `vpc` }}",
    "subnet_id": "{{user `subnet` }}"
  }],

We now have new tag properties (*_tags), a new function (clean_ami_name), and we launch temporary resources in the VPC and subnet specified by the environment variables. AMI names can only contain a certain set of ASCII characters. If the input deviates from the expected characters (for example, it includes whitespace or slashes), Packer’s clean_ami_name function will fix it.

For more information, see functions on the HashiCorp Packer website.

Provisioners

  "provisioners": [
    {
        "type": "shell",
        "inline": [
            "sudo pip install ansible"
        ]
    }, 
    {
        "type": "ansible-local",
        "playbook_file": "ansible/playbook.yaml",
        "role_paths": [
            "ansible/roles/common"
        ],
        "playbook_dir": "ansible",
        "galaxy_file": "ansible/requirements.yaml"
    },
    {
      "type": "shell",
      "inline": [
        "rm .ssh/authorized_keys ; sudo rm /root/.ssh/authorized_keys"
      ]
    }
  ]

We used the shell provisioner to apply OS patches in Part 1. Now, we use shell to install Ansible on the target machine and ansible-local to import, install, and execute Ansible roles to make our target machine conform to our standards.

Packer uses a final shell provisioner to remove the temporary keys before it creates an AMI from the target (temporary) EC2 instance.

Ansible Implementation Details

 
Ansible provides OS patching through a custom Common role that can be easily customized for other tasks.

The CIS Benchmark and CloudWatch Logs are implemented through two third-party Ansible roles that are defined in ansible/requirements.yaml, as seen in the Packer template.

The Ansible provisioner uses Ansible Galaxy to download these roles onto the target machine and execute them as instructed by ansible/playbook.yaml.

For information about how these components are organized, see the Playbook Roles and Include Statements in the Ansible documentation.

The following Ansible playbook (ansible/playbook.yaml) controls the execution order and custom properties:

---
- hosts: localhost
  connection: local
  gather_facts: true    # gather OS info that is made available for tasks/roles
  become: yes           # majority of CIS tasks require root
  vars:
    # CIS Controls whitepaper:  http://bit.ly/2mGAmUc
    # AWS CIS Whitepaper:       http://bit.ly/2m2Ovrh
    cis_level_1_exclusions:
    # 3.4.2 and 3.4.3 effectively blocks access to all ports to the machine
    ## This can break automation; ignoring it as there are stronger mechanisms than that
      - 3.4.2 
      - 3.4.3
    # CloudWatch Logs will be used instead of Rsyslog/Syslog-ng
    ## Same would be true if any other software doesn't support Rsyslog/Syslog-ng mechanisms
      - 4.2.1.4
      - 4.2.2.4
      - 4.2.2.5
    # Autofs is not installed in newer versions, let's ignore
      - 1.1.19
    # Cloudwatch Logs role configuration
    logs:
      - file: /var/log/messages
        group_name: "system_logs"
  roles:
    - common
    - anthcourtney.cis-amazon-linux
    - dharrisio.aws-cloudwatch-logs-agent

Both third-party Ansible roles can be easily configured through variables (vars). We use Ansible playbook variables to exclude CIS controls that don’t apply to our case and to instruct the CloudWatch Logs agent to stream the /var/log/messages log file to CloudWatch Logs.

If you need to add more OS or application logs, you can easily duplicate the playbook and make changes. The CloudWatch Logs agent will ship configured log messages to CloudWatch Logs.

For more information about parameters you can use to further customize third-party roles, download Ansible roles for the Cloudwatch Logs Agent and CIS Amazon Linux from the Galaxy website.

Committing Changes

 
Now that Ansible and CloudWatch Events are configured as a part of the build process, committing any changes to the AWS CodeCommit Git repository will trigger a new AMI build process that can be followed through the AWS CodePipeline console.

When the build is complete, an email will be sent to the email address you provided as a part of the CloudFormation stack deployment. The email serves as notification that an AMI has been built and is ready for use.

Summary

 
We used AWS CodeCommit, AWS CodePipeline, AWS CodeBuild, Packer, and Ansible to build a pipeline that continuously builds new, hardened CIS AMIs. We used Amazon SNS so that email addresses subscribed to an SNS topic are notified upon completion of the AMI build.

By treating our AMI creation process as code, we can iterate and track changes over time. In this way, it’s no different from a software development workflow. With that in mind, software patches, OS configuration, and logs that need to be shipped to a central location are only a git commit away.

Next Steps

 
Here are some ideas to extend this AMI builder:

  • Hook up a Lambda function in CloudWatch Events to update the EC2 Auto Scaling configuration upon completion of the AMI build.
  • Use AWS CodePipeline parallel steps to build multiple Packer images.
  • Add the commit ID as a tag on the AMI you created (a minimal sketch follows this list).
  • Create a scheduled Lambda function through CloudWatch Events to clean up old AMIs based on timestamp (name or additional tag).
  • Implement Windows support for the AMI builder.
  • Create a cross-account or cross-region AMI build.
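
As a rough illustration of the commit-ID idea in the list above, here is a minimal C# sketch (using the AWS SDK for .NET) that tags an existing AMI with the Git commit that produced it. The amiId and commitId values and the "CommitId" tag key are placeholders of mine, not part of the original pipeline, which is built around CodeBuild shell commands; treat this as one possible way to implement that step rather than the post’s own method.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.EC2;
using Amazon.EC2.Model;

public static class AmiTagger
{
    // Tags an existing AMI with the Git commit ID that produced it.
    // Both parameters are placeholders supplied by your own pipeline.
    public static async Task TagAmiWithCommitAsync(string amiId, string commitId)
    {
        using (var ec2 = new AmazonEC2Client())
        {
            var request = new CreateTagsRequest
            {
                Resources = new List<string> { amiId },
                Tags = new List<Tag> { new Tag("CommitId", commitId) }
            };

            await ec2.CreateTagsAsync(request);
            Console.WriteLine($"Tagged {amiId} with commit {commitId}");
        }
    }
}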

CloudWatch Events allows the AMI builder to decouple AMI configuration from AMI creation, so that you can easily add your own logic using targets (AWS Lambda, Amazon SQS, Amazon SNS) to act on events or recycle EC2 instances with the new AMI.

If you have questions or other feedback, feel free to leave it in the comments or contribute to the AMI Builder repo on GitHub.

DynamoDB Accelerator (DAX) Now Generally Available

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/dynamodb-accelerator-dax-now-generally-available/

Earlier this year I told you about Amazon DynamoDB Accelerator (DAX), a fully-managed caching service that sits in front of (logically speaking) your Amazon DynamoDB tables. DAX returns cached responses in microseconds, making it a great fit for eventually-consistent read-intensive workloads. DAX supports the DynamoDB API, and is seamless and easy to use. As a managed service, you simply create your DAX cluster and use it as the target for your existing reads and writes. You don’t have to worry about patching, cluster maintenance, replication, or fault management.

Now Generally Available
Today I am pleased to announce that DAX is now generally available. We have expanded DAX into additional AWS Regions and used the preview time to fine-tune performance and availability:

Now in Five Regions – DAX is now available in the US East (Northern Virginia), EU (Ireland), US West (Oregon), Asia Pacific (Tokyo), and US West (Northern California) Regions.

In Production – Our preview customers report that they are using DAX in production, that they loved how easy it was to add DAX to their applications, and that their apps are now running 10x faster.

Getting Started with DAX
As I outlined in my earlier post, it is easy to use DAX to accelerate your existing DynamoDB applications. You simply create a DAX cluster in the desired region, update your application to reference the DAX SDK for Java (the calls are the same; this is a drop-in replacement), and configure the SDK to use the endpoint of your cluster. As a read-through/write-through cache, DAX seamlessly handles all of the DynamoDB read/write APIs.

We are working on SDK support for other languages, and I will share additional information as it becomes available.

DAX Pricing
You pay for each node in the cluster (see the DynamoDB Pricing page for more information) on a per-hour basis, with prices starting at $0.269 per hour in the US East (Northern Virginia) and US West (Oregon) regions. With DAX, each of the nodes in your cluster serves as a read target and as a failover target for high availability. The DAX SDK is cluster aware and will issue round-robin requests to all nodes in the cluster so that you get to make full use of the cluster’s cache resources.

Because DAX can easily handle sudden spikes in read traffic, you may be able to reduce the amount of provisioned throughput for your tables, resulting in an overall cost savings while still returning results in microseconds.

Jeff;

 

MPAA & RIAA Demand Tough Copyright Standards in NAFTA Negotiations

Post Syndicated from Andy original https://torrentfreak.com/mpaa-riaa-demand-tough-copyright-standards-in-nafta-negotiations-170621/

The North American Free Trade Agreement (NAFTA) between the United States, Canada, and Mexico was negotiated more than 25 years ago. With a quarter of a century of developments to contend with, the United States wants to modernize.

“While our economy and U.S. businesses have changed considerably over that period, NAFTA has not,” the government says.

With this in mind, the US requested comments from interested parties seeking direction for negotiation points. With those comments now in, groups like the MPAA and RIAA have been making their positions known. It’s no surprise that intellectual property enforcement is high on the agenda.

“Copyright is the lifeblood of the U.S. motion picture and television industry. As such, MPAA places high priority on securing strong protection and enforcement disciplines in the intellectual property chapters of trade agreements,” the MPAA writes in its submission.

“Strong IPR protection and enforcement are critical trade priorities for the music industry. With IPR, we can create good jobs, make significant contributions to U.S. economic growth and security, invest in artists and their creativity, and drive technological innovation,” the RIAA notes.

While both groups have numerous demands, it’s clear that each seeks an environment where not only infringers can be held liable, but also Internet platforms and services.

For the RIAA, there is a big focus on the so-called ‘Value Gap’, a phenomenon found on user-uploaded content sites like YouTube that are able to offer infringing content while avoiding liability due to Section 512 of the DMCA.

“Today, user-uploaded content services, which have developed sophisticated on-demand music platforms, use this as a shield to avoid licensing music on fair terms like other digital services, claiming they are not legally responsible for the music they distribute on their site,” the RIAA writes.

“Services such as Apple Music, TIDAL, Amazon, and Spotify are forced to compete with services that claim they are not liable for the music they distribute.”

But if sites like YouTube are exercising their rights while acting legally under current US law, how can partners Canada and Mexico do any better? For the RIAA, that can be achieved by holding them to standards envisioned by the group when the DMCA was passed, not how things have panned out since.

Demanding that negotiators “protect the original intent” of safe harbor, the RIAA asks that a “high-level and high-standard service provider liability provision” is pursued. This, the music group says, should only be available to “passive intermediaries without requisite knowledge of the infringement on their platforms, and inapplicable to services actively engaged in communicating to the public.”

In other words, make sure that YouTube and similar sites won’t enjoy the same level of safe harbor protection as they do today.

The RIAA also requires any negotiated safe harbor provisions in NAFTA to be flexible in the event that the DMCA is tightened up in response to the ongoing safe harbor rules study.

In any event, NAFTA should not “support interpretations that no longer reflect today’s digital economy and threaten the future of legitimate and sustainable digital trade,” the RIAA states.

For the MPAA, Section 512 is also perceived as a problem. While noting that the original intent was to foster a system of shared responsibility between copyright owners and service providers, the MPAA says courts have subsequently let copyright holders down. Like the RIAA, the MPAA also suggests that Canada and Mexico can be held to higher standards.

“We recommend a new approach to this important trade policy provision by moving to high-level language that establishes intermediary liability and appropriate limitations on liability. This would be fully consistent with U.S. law and avoid the same misinterpretations by policymakers and courts overseas,” the MPAA writes.

“In so doing, a modernized NAFTA would be consistent with Trade Promotion Authority’s negotiating objective of ‘ensuring that standards of protection and enforcement keep pace with technological developments’.”

The MPAA also has some specific problems with Mexico, including unauthorized camcording. The Hollywood group says that 85 illicit audio and video recordings of films were linked to Mexican theaters in 2016. However, recording is not currently a criminal offense in Mexico.

Another issue for the MPAA is that criminal sanctions for commercial scale infringement are only available if the infringement is for profit.

“This has hampered enforcement against the above-discussed camcording problem but also against online infringement, such as peer-to-peer piracy, that may be on a scale that is immensely harmful to U.S. rightsholders but nonetheless occur without profit by the infringer,” the MPAA writes.

“The modernized NAFTA like other U.S. bilateral free trade agreements must provide for criminal sanctions against commercial scale infringements without proof of profit motive.”

Also of interest are the MPAA’s complaints against Mexico’s telecoms laws. Unlike in the US and many countries in Europe, Mexico’s ISPs are forbidden to hand out their customers’ personal details to rights holders looking to sue. This, the MPAA says, needs to change.

The submissions from the RIAA and MPAA can be found here and here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Building Loosely Coupled, Scalable, C# Applications with Amazon SQS and Amazon SNS

Post Syndicated from Tara Van Unen original https://aws.amazon.com/blogs/compute/building-loosely-coupled-scalable-c-applications-with-amazon-sqs-and-amazon-sns/

 
Stephen Liedig, Solutions Architect

 

One of the many challenges professional software architects and developers face is how to make cloud-native applications scalable, fault-tolerant, and highly available.

Fundamental to your project success is understanding the importance of making systems highly cohesive and loosely coupled. That means considering the multi-dimensional facets of system coupling to support the distributed nature of the applications that you are building for the cloud.

By that, I mean addressing not only the application-level coupling (managing incoming and outgoing dependencies), but also considering the impacts of platform, spatial, and temporal coupling of your systems. Platform coupling relates to the interoperability, or lack thereof, of heterogeneous system components. Spatial coupling deals with managing components at a network topology level or protocol level. Temporal, or runtime coupling, refers to the ability of a component within your system to do any kind of meaningful work while it is performing a synchronous, blocking operation.

The AWS messaging services, Amazon SQS and Amazon SNS, help you deal with these forms of coupling by providing mechanisms for:

  • Reliable, durable, and fault-tolerant delivery of messages between application components
  • Logical decomposition of systems and increased autonomy of components
  • Creating unidirectional, non-blocking operations, temporarily decoupling system components at runtime
  • Decreasing the dependencies that components have on each other through standard communication and network channels

Following on the recent topic, Building Scalable Applications and Microservices: Adding Messaging to Your Toolbox, in this post, I look at some of the ways you can introduce SQS and SNS into your architectures to decouple your components, and show how you can implement them using C#.

Walkthrough

To illustrate some of these concepts, consider a web application that processes customer orders. As good architects and developers, you have followed best practices and made your application scalable and highly available. Your solution included implementing load balancing, dynamic scaling across multiple Availability Zones, and persisting orders in a Multi-AZ Amazon RDS database instance, as in the following diagram.


In this example, the application is responsible for handling and persisting the order data, as well as dealing with increases in traffic for popular items.

One potential point of vulnerability in the order processing workflow is in saving the order in the database. The business expects that every order has been persisted into the database. However, any potential deadlock, race condition, or network issue could cause the persistence of the order to fail. Then, the order is lost with no recourse to restore the order.

With good logging capability, you may be able to identify when an error occurred and which customer’s order failed. This wouldn’t allow you to “restore” the transaction, and by that stage, your customer is no longer your customer.

As illustrated in the following diagram, introducing an SQS queue helps improve your ordering application. Using the queue isolates the processing logic into its own component and runs it in a separate process from the web application. This, in turn, allows the system to be more resilient to spikes in traffic, while allowing work to be performed only as fast as necessary in order to manage costs.


In addition, you now have a mechanism for persisting orders as messages (with the queue acting as a temporary database), and have moved the scope of your transaction with your database further down the stack. In the event of an application exception or transaction failure, this ensures that the order processing can be retried or redirected to the Amazon SQS Dead Letter Queue (DLQ) for re-processing at a later stage. (See the recent post, Using Amazon SQS Dead-Letter Queues to Control Message Failure, for more information on dead-letter queues.)

Scaling the order processing nodes

This change allows you now to scale the web application frontend independently from the processing nodes. The frontend application can continue to scale based on metrics such as CPU usage, or the number of requests hitting the load balancer. Processing nodes can scale based on the number of orders in the queue. Here is an example of scale-in and scale-out alarms that you would associate with the scaling policy.

Scale-out Alarm

aws cloudwatch put-metric-alarm --alarm-name AddCapacityToCustomerOrderQueue --metric-name ApproximateNumberOfMessagesVisible --namespace "AWS/SQS" \
--statistic Average --period 300 --threshold 3 --comparison-operator GreaterThanOrEqualToThreshold --dimensions Name=QueueName,Value=customer-orders \
--evaluation-periods 2 --alarm-actions <arn of the scale-out autoscaling policy>

Scale-in Alarm

aws cloudwatch put-metric-alarm --alarm-name RemoveCapacityFromCustomerOrderQueue --metric-name ApproximateNumberOfMessagesVisible --namespace "AWS/SQS" \
 --statistic Average --period 300 --threshold 1 --comparison-operator LessThanOrEqualToThreshold --dimensions Name=QueueName,Value=customer-orders \
 --evaluation-periods 2 --alarm-actions <arn of the scale-in autoscaling policy>

In the above example, use the ApproximateNumberOfMessagesVisible metric to discover the queue length and drive the scaling policy of the Auto Scaling group. Another useful metric is ApproximateAgeOfOldestMessage, when applications have time-sensitive messages and developers need to ensure that messages are processed within a specific time period.
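
If you would rather drive custom scaling or monitoring logic from application code instead of a CloudWatch alarm, the following is a minimal C# sketch (AWS SDK for .NET, matching the worker code later in this post) that reads the approximate queue depth directly from SQS; the ApproximateNumberOfMessages queue attribute is the counterpart of the ApproximateNumberOfMessagesVisible metric. The queueUrl parameter is a placeholder for your own configuration value.

using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.SQS;
using Amazon.SQS.Model;

public static class QueueDepthMonitor
{
    // Reads the approximate number of visible messages on the queue.
    public static async Task<int> GetQueueDepthAsync(string queueUrl)
    {
        using (var sqs = new AmazonSQSClient())
        {
            var request = new GetQueueAttributesRequest
            {
                QueueUrl = queueUrl,
                AttributeNames = new List<string> { "ApproximateNumberOfMessages" }
            };

            var response = await sqs.GetQueueAttributesAsync(request);
            return response.ApproximateNumberOfMessages;
        }
    }
}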

Scaling the order processing implementation

On top of scaling at an infrastructure level using Auto Scaling, make sure to take advantage of the processing power of your Amazon EC2 instances by using as many of the available threads as possible. There are several ways to implement this. In this post, we build a Windows service that uses the BackgroundWorker class to process the messages from the queue.

Here’s a closer look at the implementation. In the first section of the consuming application, use a loop to continually poll the queue for new messages, and construct a ReceiveMessageRequest variable.

public static void PollQueue()
{
    while (_running)
    {
        Task<ReceiveMessageResponse> receiveMessageResponse;

        // Pull messages off the queue
        using (var sqs = new AmazonSQSClient())
        {
            const int maxMessages = 10;  // 1-10

            //Receiving a message
            var receiveMessageRequest = new ReceiveMessageRequest
            {
                // Get URL from Configuration
                QueueUrl = _queueUrl, 
                // The maximum number of messages to return. 
                // Fewer messages might be returned. 
                MaxNumberOfMessages = maxMessages, 
                // A list of attributes that need to be returned with message.
                AttributeNames = new List<string> { "All" },
                // Enable long polling. 
                // Time to wait for message to arrive on queue.
                WaitTimeSeconds = 5 
            };

            receiveMessageResponse = sqs.ReceiveMessageAsync(receiveMessageRequest);
        }

The WaitTimeSeconds property of the ReceiveMessageRequest specifies the duration (in seconds) that the call waits for a message to arrive in the queue before returning a response to the calling application. There are a few benefits to using long polling:

  • It reduces the number of empty responses by allowing SQS to wait until a message is available in the queue before sending a response.
  • It eliminates false empty responses by querying all (rather than a limited number) of the servers.
  • It returns messages as soon as any message becomes available.

For more information, see Amazon SQS Long Polling.

After you have returned messages from the queue, you can start to process them by looping through each message in the response and invoking a new BackgroundWorker thread.

// Process messages
if (receiveMessageResponse.Result.Messages != null)
{
    foreach (var message in receiveMessageResponse.Result.Messages)
    {
        Console.WriteLine("Received SQS message, starting worker thread");

        // Create background worker to process message
        BackgroundWorker worker = new BackgroundWorker();
        worker.DoWork += (obj, e) => ProcessMessage(message);
        worker.RunWorkerAsync();
    }
}
else
{
    Console.WriteLine("No messages on queue");
}

The event handler, ProcessMessage, is where you implement business logic for processing orders. It is important to have a good understanding of how long a typical transaction takes so you can set a message VisibilityTimeout that is long enough to complete your operation. If order processing takes longer than the specified timeout period, the message becomes visible on the queue again. Other nodes may then pick it up and process the same order twice, leading to unintended consequences.
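
One related pattern, not part of the original service code: if a particular order turns out to need more time than expected, the worker can extend the message’s visibility timeout while it is still processing, instead of letting the message reappear on the queue. A minimal sketch with the same .NET SQS client follows; the 300-second value is an assumption you would tune to your real processing times.

using System.Threading.Tasks;
using Amazon.SQS;
using Amazon.SQS.Model;

public static class VisibilityExtender
{
    // Extends the visibility timeout of a message that is still being processed
    // so that other nodes do not receive it while work is in progress.
    public static async Task ExtendAsync(IAmazonSQS sqs, string queueUrl, Message message)
    {
        var request = new ChangeMessageVisibilityRequest
        {
            QueueUrl = queueUrl,
            ReceiptHandle = message.ReceiptHandle,
            VisibilityTimeout = 300 // seconds from now; illustrative value
        };

        await sqs.ChangeMessageVisibilityAsync(request);
    }
}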

Handling Duplicate Messages

In order to manage duplicate messages, seek to make your processing application idempotent. In mathematics, idempotent describes a function that produces the same result if it is applied to itself:

f(x) = f(f(x))

No matter how many times you process the same message, the end result is the same (definition from Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions, Hohpe and Wolf, 2004).

There are several strategies you could apply to achieve this:

  • Create messages that have inherent idempotent characteristics. That is, they are non-transactional in nature and are unique at a specified point in time. Rather than saying “place new order for Customer A,” which adds a duplicate order for the customer each time it is processed, use “place order <orderid> on <timestamp> for Customer A,” which creates a single order no matter how often it is persisted.
  • Deliver your messages via an Amazon SQS FIFO queue, which provides the benefits of message sequencing, but also mechanisms for content-based deduplication. You can deduplicate using the MessageDeduplicationId property on the SendMessage request or by enabling content-based deduplication on the queue, which generates a hash for MessageDeduplicationId, based on the content of the message, not the attributes.
var sendMessageRequest = new SendMessageRequest
{
    QueueUrl = _queueUrl,
    MessageBody = JsonConvert.SerializeObject(order),
    MessageGroupId = Guid.NewGuid().ToString("N"),
    MessageDeduplicationId = Guid.NewGuid().ToString("N")
};
  • If using SQS FIFO queues is not an option, keep a message log of the attributes of all messages processed for a specified period of time, as an alternative to message deduplication on the receiving end. Verifying the existence of the message in the log before processing it adds additional computational overhead; this can be minimized through low-latency persistence solutions such as Amazon DynamoDB (a minimal sketch follows this list). Bear in mind that this solution is dependent on the successful, distributed transaction of the message and the message log.
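
Here is a minimal sketch of that message-log approach, assuming a hypothetical DynamoDB table named ProcessedMessages with MessageId as its partition key (both names are illustrative, not from the original post). A conditional put succeeds only the first time a message ID is seen, so a ConditionalCheckFailedException tells the worker the message is a duplicate.

using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

public static class MessageLog
{
    // Records a message ID in the log table. Returns true if this is the first
    // time the ID has been seen (process the message), false if it is a duplicate.
    public static async Task<bool> TryRecordMessageAsync(IAmazonDynamoDB dynamoDb, string messageId)
    {
        var request = new PutItemRequest
        {
            TableName = "ProcessedMessages", // hypothetical table name
            Item = new Dictionary<string, AttributeValue>
            {
                ["MessageId"] = new AttributeValue { S = messageId }
            },
            // The put only succeeds if no item with this MessageId exists yet.
            ConditionExpression = "attribute_not_exists(MessageId)"
        };

        try
        {
            await dynamoDb.PutItemAsync(request);
            return true;
        }
        catch (ConditionalCheckFailedException)
        {
            return false; // already processed
        }
    }
}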

Handling exceptions

Because of the distributed nature of SQS queues, SQS does not automatically delete a message after it has been received. Therefore, you must explicitly delete the message from the queue after processing it, using the message ReceiptHandle property (see the following code example).

However, if at any stage you have an exception, avoid handling it as you normally would. The intention is to make sure that the message ends back on the queue, so that you can gracefully deal with intermittent failures. Instead, log the exception to capture diagnostic information, and swallow it.

By not explicitly deleting the message from the queue, you can take advantage of the VisibilityTimeout behavior described earlier. Gracefully handle the message processing failure and make the unprocessed message available to other nodes to process.

In the event that subsequent retries fail, SQS automatically moves the message to the configured DLQ after the configured number of receives has been reached. You can further investigate why the order process failed. Most importantly, the order has not been lost, and your customer is still your customer.

private static void ProcessMessage(Message message)
{
    using (var sqs = new AmazonSQSClient())
    {
        try
        {
            Console.WriteLine("Processing message id: {0}", message.MessageId);

            // Implement messaging processing here
            // Ensure no downstream resource contention (parallel processing)
            // <your order processing logic in here…>
            Console.WriteLine("{0} Thread {1}: {2}", DateTime.Now.ToString("s"), Thread.CurrentThread.ManagedThreadId, message.MessageId);
            
            // Delete the message off the queue. 
            // Receipt handle is the identifier you must provide 
            // when deleting the message.
            var deleteRequest = new DeleteMessageRequest(_queueUrl, message.ReceiptHandle);
            sqs.DeleteMessageAsync(deleteRequest);
            Console.WriteLine("Processed message id: {0}", message.MessageId);

        }
        catch (Exception ex)
        {
            // Do nothing.
            // Swallow exception, message will return to the queue when 
            // visibility timeout has been exceeded.
            Console.WriteLine("Could not process message due to error. Exception: {0}", ex.Message);
        }
    }
}

Using SQS to adapt to changing business requirements

One of the benefits of introducing a message queue is that you can accommodate new business requirements without dramatically affecting your application.

If, for example, the business decided that all orders placed over $5000 are to be handled as a priority, you could introduce a new “priority order” queue. The way the orders are processed does not change. The only significant change to the processing application is to ensure that messages from the “priority order” queue are processed before the “standard order” queue.

The following diagram shows how this logic could be isolated in an “order dispatcher,” whose only purpose is to route order messages to the appropriate queue based on whether the order exceeds $5000. Nothing on the web application or the processing nodes changes other than the target queue to which the order is sent. The rate at which orders are processed can be tuned by modifying the poll rates and scalability settings that I have already discussed.
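
A minimal C# sketch of such a dispatcher is shown below. The two queue URLs, the Order type, and the threshold value are placeholders standing in for your own order model and configuration; the routing decision is the only logic that matters here.

using System.Threading.Tasks;
using Amazon.SQS;
using Amazon.SQS.Model;
using Newtonsoft.Json;

public class Order
{
    public string OrderId { get; set; }
    public decimal Total { get; set; }
}

public static class OrderDispatcher
{
    private const decimal PriorityThreshold = 5000m;

    // Routes an order to the priority or standard queue based on its total.
    public static async Task DispatchAsync(IAmazonSQS sqs, Order order,
        string priorityQueueUrl, string standardQueueUrl)
    {
        var targetQueueUrl = order.Total > PriorityThreshold
            ? priorityQueueUrl
            : standardQueueUrl;

        var request = new SendMessageRequest
        {
            QueueUrl = targetQueueUrl,
            MessageBody = JsonConvert.SerializeObject(order)
        };

        await sqs.SendMessageAsync(request);
    }
}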

Extending the design pattern with Amazon SNS

Amazon SNS supports reliable publish-subscribe (pub-sub) scenarios and push notifications to known endpoints across a wide variety of protocols. It eliminates the need to periodically check or poll for new information and updates. SNS supports:

  • Reliable storage of messages for immediate or delayed processing
  • Publish / subscribe – direct, broadcast, targeted “push” messaging
  • Multiple subscriber protocols: Amazon SQS, HTTP, HTTPS, email, SMS, mobile push, AWS Lambda

With these capabilities, you can provide parallel asynchronous processing of orders in the system and extend it to support any number of different business use cases without affecting the production environment. This is commonly referred to as a “fanout” scenario.

Rather than your web application pushing orders to a queue for processing, send a notification via SNS. The SNS messages are sent to a topic and then replicated and pushed to multiple SQS queues and Lambda functions for processing.
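
As a rough sketch of that change, the web application publishes the order to an SNS topic instead of sending it to a single queue; SNS then fans the message out to every subscribed queue and function. The topic ARN below is a placeholder, and Order refers to the same illustrative type used in the dispatcher sketch above.

using System.Threading.Tasks;
using Amazon.SimpleNotificationService;
using Amazon.SimpleNotificationService.Model;
using Newtonsoft.Json;

public static class OrderPublisher
{
    // Publishes the order to an SNS topic; every subscribed SQS queue and
    // Lambda function receives its own copy of the message.
    public static async Task PublishOrderAsync(Order order)
    {
        using (var sns = new AmazonSimpleNotificationServiceClient())
        {
            var request = new PublishRequest
            {
                TopicArn = "arn:aws:sns:us-east-1:123456789012:customer-orders", // placeholder ARN
                Subject = "NewOrder",
                Message = JsonConvert.SerializeObject(order)
            };

            await sns.PublishAsync(request);
        }
    }
}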

As the diagram above shows, you have the development team consuming “live” data as they work on the next version of the processing application, or potentially using the messages to troubleshoot issues in production.

Marketing is consuming all order information, via a Lambda function that has subscribed to the SNS topic, inserting the records into an Amazon Redshift warehouse for analysis.

All of this, of course, is happening without affecting your order processing application.

Summary

While I haven’t dived deep into the specifics of each service, I have discussed how these services can be applied at an architectural level to build loosely coupled systems that facilitate multiple business use cases. I’ve also shown you how to use infrastructure and application-level scaling techniques, so you can get the most out of your EC2 instances.

One of the many benefits of using these managed services is how quickly and easily you can implement powerful messaging capabilities in your systems, and lower the capital and operational costs of managing your own messaging middleware.

Using Amazon SQS and Amazon SNS together can provide you with a powerful mechanism for decoupling application components. This should be part of design considerations as you architect for the cloud.

For more information, see the Amazon SQS Developer Guide and Amazon SNS Developer Guide. You’ll find tutorials on all the concepts covered in this post, and more. To get started using the AWS Management Console or the SDK of your choice, visit:

Happy messaging!

On the Expectations of Public Service Media

Post Syndicated from nellyo original https://nellyo.wordpress.com/2017/06/20/19203bbc/

 

The context in which public service media operate is changing rapidly. As a result of these changes, audience attitudes toward public service media are shifting as well. Research is therefore needed to understand the public’s priorities in this changing environment.

What do people expect from public service television and radio? This is a fundamental question, given that they are funded with public resources.

The regulator Ofcom has published on its website the results of a study of public expectations of the BBC in the United Kingdom, with an emphasis on the BBC’s distinctiveness. According to the report, what distinguishes the BBC is quality, professionalism, trustworthy information, and the wealth of talent involved in producing and presenting its programmes.

Opinions about the BBC’s role in society and its new public functions have been collected and summarized.

The report

The report matters – but so does the approach. For comparison, here is how it is done here: without a report. In the previous National Assembly, Polina Karastoyanova, chair of the parliamentary committee on culture and media, held a meeting called a public discussion (10 March 2016) of the public service media – just like that.

Filed under: EU Law, Media Law

All Systems Go! 2017 CfP Open

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/all-systems-go-2017-cfp-open.html

The All Systems Go! 2017 Call for Participation is Now Open!

We’d like to invite presentation proposals for All Systems Go! 2017!

All Systems Go! is an Open Source community conference focused on the projects and technologies at the foundation of modern Linux systems — specifically low-level user-space technologies. Its goal is to provide a friendly and collaborative gathering place for individuals and communities working to push these technologies forward.

All Systems Go! 2017 takes place in Berlin, Germany on October 21st+22nd.

All Systems Go! is a 2-day event with 2-3 talks happening in parallel. Full presentation slots are 30-45 minutes in length and lightning talk slots are 5-10 minutes.

We are now accepting submissions for presentation proposals. In particular, we are looking for sessions including, but not limited to, the following topics:

  • Low-level container executors and infrastructure
  • IoT and embedded OS infrastructure
  • OS, container, IoT image delivery and updating
  • Building Linux devices and applications
  • Low-level desktop technologies
  • Networking
  • System and service management
  • Tracing and performance measuring
  • IPC and RPC systems
  • Security and Sandboxing

While our focus is definitely more on the user-space side of things, talks about kernel projects are welcome too, as long as they have a clear and direct relevance for user-space.

Please submit your proposals by September 3rd. Notification of acceptance will be sent out 1-2 weeks later.

To submit your proposal now please visit our CFP submission web site.

For further information about All Systems Go! visit our conference web site.

systemd.conf will not take place this year, in favor of All Systems Go!. All Systems Go! welcomes all projects that contribute to Linux user space, which, of course, includes systemd. Thus, anything you think was appropriate for submission to systemd.conf is also fitting for All Systems Go!

Bitcoin, UASF… and Politics

Post Syndicated from Григор original http://www.gatchev.info/blog/?p=2064

Lately the Net has been abuzz about UASF and Bitcoin. Few people, however, have probably paid much attention to these acronyms. (Articles on the subject tend to be a salad of yet more acronyms, which doesn’t make them any easier to understand.) What on earth does it mean? And is it important?

Actually, it isn’t particularly important, except for people who are seriously involved with cryptocurrencies. Everyone else can safely ignore it.

At least at first glance. Because it also offers a serious insight into how effective some fundamental political concepts are. So I intend to devote some of my time to it here – and to take up some of yours.

1. Bitcoin’s Problems

An electronic currency controlled not by politicians and their hangers-on but by strict rules – a dream, isn’t it? No more fear that the next populist will fire up the money printer and turn your savings into colorful toilet paper… But there are no ideas without problems (to say nothing of their implementations). So it is with Bitcoin.

All Bitcoin transactions are recorded in blocks, which form a chain – the so-called blockchain. In this way every penny (pardon, satoshi 🙂 ) can be traced back to its very creation. The addresses between which money is exchanged are anonymous, but the exchanges themselves are public and visible. Anyone who has the necessary software (freely available) and maintains a “full node” – that is, anyone willing to set aside a hundred or so gigabytes of disk space – can trace them and verify their validity.

The problem is that a Bitcoin block has a fixed maximum size – up to 1 megabyte. It holds at most two to three thousand transactions. At 6 blocks per hour, that means about 15,000 transactions per hour, or about 360,000 per day. That sounds like a lot, but it is utterly insufficient – quite a few large banks process more transactions per second. So for some time now the demand for transactions has exceeded the blockchain’s capacity, which creates a problem for the currency’s users. Some of them are starting to abandon it and move to traditional currencies or to other cryptocurrencies. Its influence and role decline accordingly.

2. The State of the Solutions

Quite a few solutions to this problem have been proposed. The latest is called SegWit (segregated witness). All of them (and this one in particular), however, face serious resistance from key players in Bitcoin.

Relatively soon after Bitcoin was created, a rule was introduced that transactions must pay a fee. (Otherwise it was all too easy to generate an enormous number of transactions shuffling a minimal sum back and forth, and thus clog the blockchain.) Each transaction specifies how much it will pay to be included in a block. (That is what “legitimizes” it.)

Which of the waiting transactions get included in a block is decided by whoever creates the block. That is the “miner” who solved the puzzle from the previous block. He pockets the fees for the included transactions on top of the standard block “reward”. Miners therefore benefit from transactions being as expensive as possible – that is, from the blockchain’s capacity being insufficient.

In addition, quite a few miners exploit a “hack” in the system’s technology – the so-called ASICBOOST. One of SegWit’s advantages is that it blocks such hacks – and, with them, those “miners”. (You can find the details here.)

The result is that some miners resist the introduction of SegWit. And it is “mining power” that serves as the “democratic vote” in the Bitcoin system. An attempt to introduce SegWit has already been made, and it did not succeed. To ensure a stronger consensus, that attempt required SegWit to be adopted once 95% of the mining power supported it. It soon became clear that this was not going to happen.

3. UASF? WTF? (Демек, кво е тва UASF?)

Не зная колко точно е процентът на отхвърлящите SegWit копачи. Но към момента копаенето е централизирано до степен да се върши почти всичкото от малък брой мощни компании. Напълно е възможно отхвърлящите SegWit да са над 50% от копаещата мощност. Ако е така, въвеждането на SegWit чрез подкрепа от нея би било невъзможно. (Разбира се, това ще значи в близко бъдеще упадъка на Bitcoin и превръщането му от „царя на криптовалутите“ в евтин музеен експонат. В крайна сметка тези копачи ще са си изкопали гроба. Но ако има на света нещо, на което може да се разчита винаги и докрай, това е човешката глупост.)

За да се избегне такъв сценарий, девелоперите от Bitcoin Core Team предложиха т.нар. User-Activated Soft Fork, съкратено UASF. Същността му е, че от 1 август нататък възлите в мрежата на Bitcoin, които подкрепят SegWit, ще започнат да смятат блокове, които не потвърждават че го поддържат, за невалидни.

Отхвърлящите SegWit копачи могат да продължат да си копаят по старому. Поддържащите го ще продължат по новому. Съответно блокчейнът на Bitcoin от този момент нататък ще се раздели на два – клон без SegWit и клон с него.
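As a toy illustration of that rule (not the actual Bitcoin Core code, which works on version-bit signaling as specified in BIP148; the function below only sketches the idea):

```python
# Toy illustration of the UASF idea: after the activation date, a node that
# supports SegWit simply rejects blocks that do not signal SegWit support.
from datetime import datetime, timezone

UASF_ACTIVATION = datetime(2017, 8, 1, tzinfo=timezone.utc)

def block_is_acceptable(block_time: datetime, signals_segwit: bool) -> bool:
    """Return True if a UASF node would accept this block."""
    if block_time < UASF_ACTIVATION:
        return True            # before activation: accept blocks as usual
    return signals_segwit      # after activation: require SegWit signaling

# Example: a non-signaling block mined on August 2 would be rejected.
print(block_is_acceptable(datetime(2017, 8, 2, tzinfo=timezone.utc), False))  # False
```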

4. What Will the Result Be?

The majority of the mining power may end up on the first branch – which, by Satoshi Nakamoto's rules, would make it the main one. But if the network is split in two, each side will have its own main branch, so the two will not be technically unified. There will be two different currencies named Bitcoin, each claiming to be the real one.

How will that dispute be resolved? Bitcoin users want lower transaction fees, so the vast majority of them will quickly move to the SegWit chain. And Bitcoin's value and acceptance rest simply on the fact that people accept it and are willing to use it. The SegWit-enabled Bitcoin will therefore keep the role (and price) of the original Bitcoin, while the one without SegWit will fall in price and lose most of its relevance.

(In fact, a similar "split" has already happened to the No. 2 in the cryptocurrency world – Ethereum. That is why there are both Ethereum and Ethereum Classic. The latter lost the fight to be the heir of the original Ethereum, but it still exists, albeit with a much smaller role and price.)

The miners who rejected SegWit will soon find themselves mining something worth pennies. So they will probably switch, loudly or quietly, to supporting SegWit. I wouldn't be surprised if quite a few of them do so as early as August 1. (Though some will surely keep telling the world how bad the decision is and what losses it causes them. There may even be lawsuits… We will see the details.)

5. The Politics

If you have made it this far, read carefully – the heart of this post is in this part.

I recently spoke with a proud graduate of a Bulgarian economics university. I listened to an explanation of how economies of scale do not exist and how it is exactly the other way around. How small companies are more efficient than big ones, and so on…

It's no surprise that they are taught nonsense. Whoever pays, even behind the scenes, calls the tune. What puzzles me is that the students believe this nonsense when reality is right before their eyes – a reality in which big companies ruin and/or buy up small ones, not the other way around. It cannot be otherwise. Just as Newton's laws apply equally to laboratory weights and shipping containers, so dissipative laws apply equally to pots of water and to economic systems.

In the IT business the dynamics are well above average. Where things are not and cannot easily be regulated, where they are more laissez-faire – as in Bitcoin mining, for example – the dynamics are even faster. No wonder mining moved so quickly from millions of individual participants to a small number of tyrannosaurs that can easily form a cartel. Every system evolves internally in that direction… That is why a "perfect system" and "happiness forever after" cannot exist. That is why, if you like, freedom has to be kneaded and baked fresh every day.

"The prevailing mining power" – whether as the prevailing number of individuals within a species, as the bulk of the money, or as control of the memes most popular with voters – can easily end up concentrated in a narrow circle of hands. And the laws of systems' internal evolution, as a concrete expression of the dissipative laws, lead precisely there… At that point every vote starts to support the status quo. Democracy ceases to be an opportunity for change – the only such opportunity left is splitting the differing views into separate systems. Only then does the new get a real chance to compete with the old.

That is why every biological species around us once began as a tiny, divergent twig on the then-mighty trunk of another species – one that today only paleobiologists remember. And every powerful bank, manufacturer, or media company began – as a sum of money, production capacity, or intellectual property – as an ordinary loan booth, a little workshop, or a studio. In the shadow of the tyrannosaurs of their day, remembered now only by historians. They found a way to split off and somehow hide from them, until they gathered the strength to compete with them…

Those who get it, get it.

The Pirate Bay Isn’t Affected By Adverse Court Rulings – Everyone Else Is

Post Syndicated from Andy original https://torrentfreak.com/the-pirate-bay-isnt-affected-by-adverse-court-rulings-everyone-else-is-170618/

For more than a decade The Pirate Bay has been the world’s most controversial site. Delivering huge quantities of copyrighted content to the masses, the platform is revered and reviled across the copyright spectrum.

Its reputation is one of a defiant Internet swashbuckler, but due to changes in how the site has been run in more recent times, its current philosophy is more difficult to gauge. What has never been in doubt, however, is the site’s original intent to be as provocative as possible.

Through endless publicity stunts, some real, some just for the ‘lulz’, The Pirate Bay managed to attract a massive audience, all while incurring the wrath of every major copyright holder in the world.

Make no mistake, they all queued up to strike back, but every subsequent rightsholder action was met by a Pirate Bay middle finger, two fingers, or chin flick, depending on the mood of the day. This only served to further delight the masses, who happily spread the word while keeping their torrents flowing.

This vicious circle of being targeted by the entertainment industries, mocking them, and then reaping the traffic benefits, developed into the cheapest long-term marketing campaign the Internet had ever seen. But nothing is ever truly for free and there have been consequences.

After the site taunted Hollywood and the music industry with its refusal to capitulate, the endless legal action it would ordinarily have been forced to participate in largely took place without The Pirate Bay being present. It doesn’t take a law degree to work out what happened in each and every one of those cases, whatever complex route they took through the legal system. No defense, no win.

For example, the web-blocking phenomenon across the UK, Europe, Asia and Australia was driven by the site’s absolute resilience and although there would clearly have been other scapegoats had The Pirate Bay disappeared, the site was the ideal bogeyman the copyright lobby required to move forward.

Filing blocking lawsuits and bringing hosts, advertisers, and ISPs on board for anti-piracy initiatives was also made easier with the ‘evil’ Pirate Bay still online. Immune from every anti-piracy technique under the sun, the existence of the platform in the face of all onslaughts only strengthened the cases of those arguing for even more drastic measures.

Over a decade, this has meant a significant tightening of the sharing and streaming climate. Without any big legislative changes but plenty of case law against The Pirate Bay, web-blocking is now a walk in the park, ad hoc domain seizures are a fairly regular occurrence, and few companies want to host sharing sites. Advertisers and brands are also hesitant over where they place their ads. It’s a very different world to the one of 10 years ago.

While it would be wrong to attribute every tightening of the noose to the actions of The Pirate Bay, there’s little doubt that the site and its chaotic image played a huge role in where copyright enforcement is today. The platform set out to provoke and succeeded in every way possible, gaining supporters in their millions. It could also be argued it kicked a hole in a hornets’ nest, releasing the hell inside.

But perhaps the site’s most amazing achievement is the way it has managed to stay online, despite all the turmoil.

This week yet another ruling, this time from the powerful European Court of Justice, found that by offering links in the manner it does, The Pirate Bay and other sites are liable for communicating copyright works to the public. Of course, this prompted the usual swathe of articles claiming that this could be the final nail in the site’s coffin.

Wrong.

In common with every ruling, legal defeat, and legislative restriction put in place due to the site’s activities, this week’s decision from the ECJ will have zero effect on the Pirate Bay’s availability. For right or wrong, the site was breaking the law long before this ruling and will continue to do so until it decides otherwise.

What we have instead is a further tightened legal landscape that will have a lasting effect on everything BUT the site, including weaker torrent sites, Internet users, and user-uploaded content sites such as YouTube.

With The Pirate Bay carrying on regardless, that is nothing short of remarkable.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Digital painter rundown

Post Syndicated from Eevee original https://eev.ee/blog/2017/06/17/digital-painter-rundown/

Another patron post! IndustrialRobot asks:

You should totally write about drawing/image manipulation programs! (Inspired by https://eev.ee/blog/2015/05/31/text-editor-rundown/)

This is a little trickier than a text editor comparison — while most text editors are cross-platform, quite a few digital art programs are not. So I’m effectively unable to even try a decent chunk of the offerings. I’m also still a relatively new artist, and image editors are much harder to briefly compare than text editors…

Right, now that your expectations have been suitably lowered:

Krita

I do all of my digital art in Krita. It’s pretty alright.

Okay so Krita grew out of Calligra, which used to be KOffice, which was an office suite designed for KDE (a Linux desktop environment). I bring this up because KDE has a certain… reputation. With KDE, there are at least three completely different ways to do anything, each of those ways has ludicrous amounts of customization and settings, and somehow it still can’t do what you want.

Krita inherits this aesthetic by attempting to do literally everything. It has 17 different brush engines, more than 70 layer blending modes, seven color picker dockers, and an ungodly number of colorspaces. It’s clearly intended primarily for drawing, but it also supports animation and vector layers and a pretty decent spread of raster editing tools. I just right now discovered that it has Photoshop-like “layer styles” (e.g. drop shadow), after a year and a half of using it.

In fairness, Krita manages all of this stuff well enough, and (apparently!) it manages to stay out of your way if you’re not using it. In less fairness, they managed to break erasing with a Wacom tablet pen for three months?

I don’t want to rag on it too hard; it’s an impressive piece of work, and I enjoy using it! The emotion it evokes isn’t so much frustration as… mystified bewilderment.

I once filed a ticket suggesting the addition of a brush size palette — a panel showing a grid of fixed brush sizes that makes it easy to switch between known sizes with a tablet pen (and increases the chances that you’ll be able to get a brush back to the right size again). It’s a prominent feature of Paint Tool SAI and Clip Studio Paint, and while I’ve never used either of those myself, I’ve seen a good few artists swear by it.

The developer response was that I could emulate the behavior by creating brush presets. But that’s flat-out wrong: getting the same effect would require creating a ton of brush presets for every brush I have, plus giving them all distinct icons so the size is obvious at a glance. Even then, it would be much more tedious to use and would clutter my presets with junk.

And that sort of response is what’s so mysterious to me. I’ve never even been able to use this feature myself, but a year of amateur painting with Krita has convinced me that it would be pretty useful. But a developer didn’t see the use and suggested an incredibly tedious alternative that only half-solves the problem and creates new ones. Meanwhile, of the 28 existing dockable panels, a quarter of them are different ways to choose colors.

What is Krita trying to be, then? What does Krita think it is? Who precisely is the target audience? I have no idea.


Anyway, I enjoy drawing in Krita well enough. It ships with a respectable set of brushes, and there are plenty more floating around. It has canvas rotation, canvas mirroring, perspective guide tools, and other art goodies. It doesn’t colordrop on right click by default, which is arguably a grave sin (it shows a customizable radial menu instead), but that’s easy to rebind. It understands having a background color beneath a bottom transparent layer, which is very nice. You can also toggle any brush between painting and erasing with the press of a button, and that turns out to be very useful.

It doesn’t support infinite canvases, though it does offer a one-click button to extend the canvas in a given direction. I’ve never used it (and didn’t even know what it did until just now), but would totally use an infinite canvas.

I haven’t used the animation support too much, but it’s pretty nice to have. Granted, the only other animation software I’ve used is Aseprite, so I don’t have many points of reference here. It’s a relatively new addition, too, so I assume it’ll improve over time.

The one annoyance I remember with animation was really an interaction with a larger annoyance, which is: working with selections kind of sucks. You can’t drag a selection around with the selection tool; you have to switch to the move tool. That would be fine if you could at least drag the selection ring around with the selection tool, but you can’t do that either; dragging just creates a new selection.

If you want to copy a selection, you have to explicitly copy it to the clipboard and paste it, which creates a new layer. Ctrl-drag with the move tool doesn’t work. So then you have to merge that layer down, which I think is where the problem with animation comes in: a new layer is non-animated by default, meaning it effectively appears in every frame, so simply merging it down will merge it onto every single frame of the layer below. And you won’t even notice until you switch frames or play back the animation. Not ideal.

This is another thing that makes me wonder about Krita’s sense of identity. It has a lot of fancy general-purpose raster editing features that even GIMP is still struggling to implement, like high color depth support and non-destructive filters, yet something as basic as working with selections is clumsy. (In fairness, GIMP is a bit clumsy here too, but it has a consistent notion of “floating selection” that’s easy enough to work with.)

I don’t know how well Krita would work as a general-purpose raster editor; I’ve never tried to use it that way. I can’t think of anything obvious that’s missing. The only real gotcha is that some things you might expect to be tools, like smudge or clone, are just types of brush in Krita.

GIMP

Ah, GIMP — open source’s answer to Photoshop.

It’s very obviously intended for raster editing, and I’m pretty familiar with it after half a lifetime of only using Linux. I even wrote a little Scheme script for it ages ago to automate some simple edits to a couple hundred files, back before I was aware of ImageMagick. I don’t know what to say about it, specifically; it’s fairly powerful and does a wide variety of things.

In fact I’d say it’s almost frustratingly intended for raster editing. I used GIMP in my first attempts at digital painting, before I’d heard of Krita. It was okay, but so much of it felt clunky and awkward. Painting is split between a pencil tool, a paintbrush tool, and an airbrush tool; I don’t really know why. The default brushes are largely uninteresting. Instead of brush presets, there are tool presets that can be saved for any tool; it’s a neat idea, but doesn’t feel like a real substitute for brush presets.

Much of the same functionality as Krita is there, but it’s all somehow more clunky. I’m sure it’s possible to fiddle with the interface to get something friendlier for painting, but I never really figured out how.

And then there’s the surprising stuff that’s missing. There’s no canvas rotation, for example. There’s only one type of brush, and it just stamps the same pattern along a path. I don’t think it’s possible to smear or blend or pick up color while painting. The only way to change the brush size is via the very sensitive slider on the tool options panel, which I remember being a little annoying with a tablet pen. Also, you have to specifically enable tablet support? It’s not difficult or anything, but I have no idea why the default is to ignore tablet pressure and treat it like a regular mouse cursor.

As I mentioned above, there’s also no support for high color depth or non-destructive editing, which is honestly a little embarrassing. Those are the major things Serious Professionals™ have been asking for for ages, and GIMP has been trying to provide them, but it’s taking a very long time. The first signs of GEGL, a new library intended to provide these features, appeared in GIMP 2.6… in 2008. The last major release was in 2012. GIMP has been working on this new plumbing for almost as long as Krita’s entire development history. (To be fair, Krita has also raised almost €90,000 from three Kickstarters to fund its development; I don’t know that GIMP is funded at all.)

I don’t know what’s up with GIMP nowadays. It’s still under active development, but the exact status and roadmap are a little unclear. I still use it for some general-purpose editing, but I don’t see any reason to use it to draw.

I do know that canvas rotation will be in the next release, and there was some experimentation with embedding MyPaint’s brush engine (though when I tried it it was basically unusable), so maybe GIMP is interested in wooing artists? I guess we’ll see.

MyPaint

Ah, MyPaint. I gave it a try once. Once.

It’s a shame, really. It sounds pretty great: specifically built for drawing, has very powerful brushes, supports an infinite canvas, supports canvas rotation, has a simple UI that gets out of your way. Perfect.

Or so it seems. But in MyPaint’s eagerness to shed unnecessary raster editing tools, it forgot a few of the more useful ones. Like selections.

MyPaint has no notion of a selection, nor of copy/paste. If you want to move a head to align better to a body, for example, the sanctioned approach is to duplicate the layer, erase the head from the old layer, erase everything but the head from the new layer, then move the new layer.

I can’t find anything that resembles HSL adjustment, either. I guess the workaround for that is to create H/S/L layers and floodfill them with different colors until you get what you want.

I can’t work seriously without these basic editing tools. I could see myself doodling in MyPaint, but Krita works just as well for doodling as for serious painting, so I’ve never gone back to it.

Drawpile

Drawpile is the modern equivalent to OpenCanvas, I suppose? It lets multiple people draw on the same canvas simultaneously. (I would not recommend it as a general-purpose raster editor.)

It’s a little clunky in places — I sometimes have bugs where keyboard focus gets stuck in the chat, or my tablet cursor becomes invisible — but the collaborative part works surprisingly well. It’s not a brush powerhouse or anything, and I don’t think it allows textured brushes, but it supports tablet pressure and canvas rotation and locked alpha and selections and whatnot.

I’ve used it a couple times, and it’s worked well enough that… well, other people made pretty decent drawings with it? I’m not sure I’ve managed yet. And I wouldn’t use it single-player. Still, it’s fun.

Aseprite

Aseprite is for pixel art so it doesn’t really belong here at all. But it’s very good at that and I like it a lot.

That’s all

I can’t name any other serious contender that exists for Linux.

I’m dimly aware of a thing called “Photo Shop” that’s more intended for photos but functions as a passable painter. More artists seem to swear by Paint Tool SAI and Clip Studio Paint. Also there’s Paint.NET, but I have no idea how well it’s actually suited for painting.

And that’s it! That’s all I’ve got. Krita for drawing, GIMP for editing, Drawpile for collaborative doodling.

Pirate Bay Ruling is Bad News For Google & YouTube, Experts Say

Post Syndicated from Andy original https://torrentfreak.com/pirate-bay-ruling-is-bad-news-for-google-youtube-experts-says-170615/

After years of legal wrangling, yesterday the European Court of Justice handed down a decision in the case between Dutch anti-piracy outfit BREIN and ISPs Ziggo and XS4ALL.

BREIN had demanded that the ISPs block The Pirate Bay, but both providers dug in their heels, forcing the case through the Supreme Court and eventually the ECJ.

For BREIN, yesterday’s decision will have been worth the wait. Although The Pirate Bay does not provide the content that’s ultimately downloaded and shared by its users, the ECJ said that it plays an important role in how that content is presented.

“Whilst it accepts that the works in question are placed online by the users, the Court highlights the fact that the operators of the platform play an essential role in making those works available,” the Court said.

With that established the all-important matter is whether by providing such a platform, the operators of The Pirate Bay are effectively engaging in a “communication to the public” of copyrighted works. According to the ECJ, that’s indeed the case.

“The Court holds that the making available and management of an online sharing platform must be considered to be an act of communication for the purposes of the directive,” the ECJ said.

Add into the mix that The Pirate Bay generates profit from its activities and there’s a potent case for copyright liability.

While the case was about The Pirate Bay, ECJ rulings tend to have an effect far beyond individual cases. That’s certainly the opinion of Enzo Mazza, chief at Italian anti-piracy group FIMI.

“The ruling will have a major impact on the way that entities like Google operate, because it will expose them to a greater and more direct responsibility,” Mazza told La Repubblica.

“So far, Google has worked against piracy by eliminating illegal content after it gets reported. But that is not enough. It is a fairly ineffective intervention.”

Mazza says that platforms like Google, YouTube, and thousands of similar sites that help to organize and curate user-uploaded content are somewhat similar to The Pirate Bay. In any event, they are not neutral intermediaries, he insists.

The conclusion that the decision is bad for platforms like YouTube is shared by Fulvio Sarzana, a lawyer with Sarzana and Partners, a law firm specializing in Internet and copyright disputes.

“In the ruling, the Court has in fact attributed, for the first time, secondary liability to sharing platforms due to the violation of copyrights carried out by the users of a platform,” Sarzana informs TF.

“This will have consequences for video-sharing platforms and user-generated content sites like YouTube, but it excludes responsibility for platforms that play a purely passive role, without affecting users’ content. This is the case with cyberlockers, for example.”

Sarzana says that “unfortunate judgments” like this should be expected until a new European copyright law is approved. Enzo Mazza, on the other hand, feels that the copyright reform debate should take this ruling into account when formulating legislation to stop platforms like YouTube from exploiting copyright works without an appropriate license.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

BackMap, the haptic navigation system

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/backmap-haptic/

At this year’s TechCrunch Disrupt NY hackathon, one team presented BackMap, a haptic feedback system which helps visually impaired people to navigate cities and venues. It is assisted by a Raspberry Pi and integrated into a backpack.

Good vibrations with BackMap

The team, including Shashank Sharma, wrote an iOS phone app in Swift, Apple’s open-source programming language. To convert between addresses and geolocations, they used the Esri APIs offered by PubNub. So far, so standard. However, they then configured their BackMap setup so that the user can input their destination via the app, and then follow the route without having to look at a screen or listen to directions. Instead, vibrating motors have been integrated into the straps of a backpack and hooked up to a Raspberry Pi. Whenever the user needs to turn left or right, the Pi makes the respective motor vibrate.
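A minimal sketch of what that motor control might look like on the Pi, assuming the gpiozero library and made-up GPIO pin numbers (the team's own code isn't shown in this post):

```python
# Sketch of the haptic feedback idea: pulse the motor on the side of the
# backpack strap that matches the upcoming turn. Pin numbers are assumptions.
from gpiozero import OutputDevice
from time import sleep

LEFT_MOTOR = OutputDevice(17)    # hypothetical GPIO pin for the left strap
RIGHT_MOTOR = OutputDevice(27)   # hypothetical GPIO pin for the right strap

def buzz(motor: OutputDevice, seconds: float = 0.5) -> None:
    """Pulse one vibration motor for a short burst."""
    motor.on()
    sleep(seconds)
    motor.off()

def signal_turn(direction: str) -> None:
    """Vibrate the strap on the side the user should turn toward."""
    if direction == "left":
        buzz(LEFT_MOTOR)
    elif direction == "right":
        buzz(RIGHT_MOTOR)

# Example: the phone app reports an upcoming right turn.
signal_turn("right")
```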

Disrupt NY 2017 Hackathon | Part 1

Disrupt NY 2017 Hackathon presentations filmed live on May 15th, 2017. Preceding the Disrupt Conference is Hackathon weekend on May 13-14, where developers and engineers descend from all over the world to take part in a 24-hour hacking endurance test.

BackMap can also be adapted for indoor navigation by receiving signals from beacons. This could be used to direct users to toilet facilities or exhibition booths at conferences. The team hopes to upgrade the BackMap device to use a wristband format in the future.

Accessible Pi

Here at Pi Towers, we are always glad to see Pi builds for people with disabilities: we’ve seen Sanskriti and Aman’s Braille teacher Mudra, the audio e-reader Valdema by Finnish non-profit Kolibre, and Myrijam and Paul’s award-winning, eye-movement-controlled wheelchair, to name but a few.

Our mission is to bring the power of coding and digital making to everyone, and we are lucky to be part of a diverse community of makers and educators who have often worked proactively to make events and resources accessible to as many people as possible. There is, for example, the autism- and Tourette’s syndrome-friendly South London Raspberry Jam, organised by Femi Owolade-Coombes and his mum Grace. The Raspberry VI website is a portal to all things Pi for visually impaired and blind people. Deaf digital makers may find Jim Roberts’ video tutorials, which are signed in ASL, useful. And anyone can contribute subtitles in any language to our YouTube channel.

If you create or use accessible tutorials, or run a Jam, Code Club, or CoderDojo that is designed to be friendly to people who are neuroatypical or have a disability, let us know how to find your resource or event in the comments!

The post BackMap, the haptic navigation system appeared first on Raspberry Pi.

Court of Justice of the EU: on access to The Pirate Bay

Post Syndicated from nellyo original https://nellyo.wordpress.com/2017/06/15/the-pirate-bay-5/

Yesterday the Court of Justice of the EU published its judgment in Case C‑610/15, Stichting Brein v Ziggo BV and XS4ALL Internet BV.

The judgment concerns the operation of, and access to, The Pirate Bay.

The dispute

9 Ziggo and XS4ALL are internet access providers. A significant share of their subscribers use the online sharing platform TPB, an indexer of BitTorrent files. BitTorrent is a protocol through which users (called "peers") can share files. The essential characteristic of BitTorrent is that the files to be shared are divided into small segments, which removes the need for a central server to store them and lightens the load on individual servers during sharing. In order to share files, users must first download special software called a "BitTorrent client", which is not offered by the online sharing platform TPB. That BitTorrent client is software that allows torrent files to be created.

10      Users (called "seeders") who wish to make a file on their computer available to other users (called "leechers") must create a torrent file using their BitTorrent client. Torrent files point to a central server (called a "tracker") that identifies the users available to share a particular torrent file and the media file associated with it. Seeders upload these torrent files to an online sharing platform such as TPB, which then indexes them so that the platform's users can find them and so that the works to which the torrent files point can be downloaded onto those users' computers in separate segments, via their BitTorrent client.

11      Magnet links are often used instead of torrents. Those links identify the content of a torrent and point to it by means of a digital fingerprint.

12      The great majority of the torrent files offered on the online sharing platform TPB point to works protected by copyright, without the copyright holders having given the platform's operators or users any authorization to carry out the acts of sharing.

13      Stichting Brein's principal claim in the proceedings before the national court is for an order requiring Ziggo and XS4ALL to block the domain names and IP addresses of the online sharing platform TPB, so as to prevent the services of those internet access providers from being used to infringe the copyright and related rights of the rightholders whose interests Stichting Brein protects.

The questions

In those circumstances the Hoge Raad der Nederlanden (Supreme Court of the Netherlands) decided to stay the proceedings and refer the following questions to the Court for a preliminary ruling:

“1)      Is there a communication to the public within the meaning of Article 3(1) of Directive 2001/29 by the operator of a website, if no protected works are available on that website, but a system exists […] by which metadata on protected works present on users’ computers is indexed and categorized for users in such a way that the users can thereby trace, upload, and download the protected works?

2)      If the answer to the first question is in the negative:

Do Article 8(3) of Directive 2001/29 and Article 11 of Directive 2004/48 provide a basis for issuing an injunction against an intermediary within the meaning of those provisions which, in the manner described in Question 1, facilitates infringements by third parties?”

We already have the Opinion of Advocate General Szpunar, according to which

the fact that the operator of a website indexes files containing copyright-protected works offered for sharing on a peer-to-peer network, and provides a search engine that allows those files to be found, constitutes a communication to the public within the meaning of Article 3(1) of Directive 2001/29, where the operator knows that a work has been made available on the network without the consent of the copyright holders but does not take action to block access to that work.

The judgment

The concept of "communication to the public" combines two cumulative elements, namely an "act of communication" of a work and the communication of that work to a "public" (judgment of 26 April 2017, Stichting Brein, C‑527/15, EU:C:2017:300, paragraph 29 and the case-law cited). In order to assess whether a user performs an act of communication to the public within the meaning of Article 3(1) of Directive 2001/29, several complementary criteria must be taken into account; they are not autonomous and are interdependent:

  • the essential role played by the user and the deliberate nature of its intervention. The user performs an act of communication when it intervenes, in full knowledge of the consequences of its action, to give its customers access to a protected work, particularly where, in the absence of that intervention, those customers would not, in principle, be able to enjoy the distributed work (see, to that effect, judgment of 26 April 2017, Stichting Brein, C‑527/15, EU:C:2017:300, paragraph 31 and the case-law cited);
  • the concept of "public" refers to an indeterminate number of potential recipients and, moreover, implies a fairly large number of persons (judgment of 26 April 2017, Stichting Brein, C‑527/15, EU:C:2017:300, paragraph 32 and the case-law cited);
  • the protected work must be communicated using specific technical means, different from those previously used, or, failing that, to a "new public", that is, a public that was not already taken into account by the copyright holders when they authorized the initial communication of their work to the public (judgment of 26 April 2017, Stichting Brein, C‑527/15, EU:C:2017:300, paragraph 33 and the case-law cited);
  • whether the communication to the public is carried out for profit (judgment of 26 April 2017, Stichting Brein, C‑527/15, EU:C:2017:300, paragraph 34 and the case-law cited).

42 In the present case, it is apparent from the order for reference that a significant share of Ziggo's and XS4ALL's subscribers have downloaded media files via the online sharing platform TPB. As is also apparent from the observations submitted to the Court, that platform is used by a considerable number of persons, the TPB operators reporting tens of millions of "users" on their online sharing platform. In that respect, the communication at issue in the main proceedings covers, at the very least, all of the platform's users. Those users can access, at any time and simultaneously, the protected works shared by means of that platform. Thus, that communication is aimed at an indeterminate number of potential recipients and involves a large number of persons (see, to that effect, judgment of 26 April 2017, Stichting Brein, C‑527/15, EU:C:2017:300, paragraph 45 and the case-law cited).

43      It follows that, by a communication such as that at issue in the main proceedings, protected works are indeed communicated to a "public" within the meaning of Article 3(1) of Directive 2001/29.

44      Furthermore, as regards the question whether those works are communicated to a "new" public within the meaning of the case-law cited in paragraph 28 of the present judgment, it should be noted that, in its judgment of 13 February 2014, Svensson and Others (C‑466/12, EU:C:2014:76, paragraphs 24 and 31), and in its order of 21 October 2014, BestWater International (C‑348/13, EU:C:2014:2315, paragraph 14), the Court held that this is a public that the copyright holders did not have in mind when they authorized the initial communication.

45      In the present case, it is apparent from the observations submitted to the Court that, on the one hand, the operators of the online sharing platform TPB knew that the platform, which they make available to users and administer, provides access to works published without the rightholders' authorization, and, on the other hand, that the same operators expressly state, on the blogs and forums of that platform, their aim of making protected works available to users, and encourage the latter to make copies of those works. In any event, it is apparent from the order for reference that the operators of the TPB online platform could not have been unaware that the platform provides access to works published without the rightholders' authorization, given the fact, expressly emphasized by the referring court, that a large share of the torrent files on the online sharing platform TPB point to works published without the rightholders' authorization. In those circumstances, it must be held that there is communication to a "new public" (see, to that effect, judgment of 26 April 2017, Stichting Brein, C‑527/15, EU:C:2017:300, paragraph 50).

46      Moreover, it cannot be disputed that the making available and management of an online sharing platform such as that at issue in the main proceedings is carried out with the purpose of obtaining a profit, since that platform generates, as is apparent from the observations submitted to the Court, considerable advertising revenue.

47      Consequently, it must be held that the making available and management of an online sharing platform such as that at issue in the main proceedings constitutes a "communication to the public" within the meaning of Article 3(1) of Directive 2001/29.

48      In the light of all the foregoing considerations, the answer to the first question is that the concept of "communication to the public" must be interpreted as covering the making available and management, on the internet, of a sharing platform which, by indexing metadata relating to protected works and providing a search engine, allows the platform's users to locate those works and to share them within a peer-to-peer network.

The prevailing commentary is that the judgment strengthens the position of organizations seeking blocking orders.

Filed under: Digital, EU Law, Media Law Tagged: Court of the EU (съд на ес)

ISP Doesn’t Have to Expose Alleged BitTorrent Pirates, Finnish Court Rules

Post Syndicated from Ernesto original https://torrentfreak.com/isp-doesnt-have-to-expose-alleged-bittorrent-pirates-finnish-court-rules-170615/

Starting three years ago, copyright holders began sending out thousands of settlement letters to alleged pirates in Finland, a practice often described as copyright trolling.

This week, however, the local Market Court has put the brakes on these efforts, with a rather significant ruling.

In the case in question, filmmakers requested the personal information of hundreds of alleged BitTorrent users from Internet provider DNA. However, after a careful review by a panel of seven judges, the Court decided not to grant the request.

The rightsholders provided a detailed log from a BitTorrent monitoring tool as evidence. While the Court didn’t doubt that the pirated material had been shared, it questioned how significant the infringements were.

The provided list of IP addresses and timestamps doesn’t show how much data was shared, or for how long.

The evidence included an overview of the total number of users sharing the same file in a single BitTorrent swarm. However, the fact that thousands of people were sharing the same file says nothing about the significance of individual infringements.

“[T]he applicant has not claimed or provided any explanation that would indicate that the distribution of its work, by an IP address in the application, would have repeatedly occurred or for a longer period of time,” the Market Court writes.

The verdict, first reported by Iltalehti, refers to a recent case at the European Court of Justice, and stresses that the significance of an infringement must be weighed against the defendants’ privacy rights. In this case, the court decided that the evidence doesn’t warrant the exposure of the alleged pirates.

“Since the applicant has not provided sufficient proof of compliance with the conditions set out in Article 60a of the Copyright Act to adoption of an application, the application must be dismissed,” the Market Court writes.

The outcome is a clear victory for the accused BitTorrent users. Time will tell whether rightsholders will adapt their evidence to the ruling, or whether they will test their luck elsewhere. The current ruling can still be appealed.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Notes on open-sourcing abandoned code

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/06/notes-on-open-sourcing-abandoned-code.html

Some people want a law that compels companies to release their source code for “abandoned software”, in the name of cybersecurity, so that customers who bought it can continue to patch bugs long after the seller has stopped supporting the product. This is a bad policy, for a number of reasons.

Code is Speech

First of all, code is speech. That was the argument why Phil Zimmerman could print the source code to PGP in a book, ship it overseas, and then have somebody scan the code back into a computer. Compelled speech is a violation of free speech. That was one of the arguments in the Apple vs. FBI case, where the FBI demanded that Apple write code for them, compelling speech.

Compelling the opening of previously closed source is compelled speech.

There might still be legal arguments that get away with it. After all, the state already compels some speech, such as warning labels, where it serves a narrow, legitimate government interest. So the courts may allow it. Also, as with many free-speech issues (e.g. the legality of hate speech), people may legitimately disagree with the courts about what “is” legal and what “should” be legal.

But here’s the thing. What rights “should” be protected changes depending on what side you are on. Whether something deserves the protection of “free speech” depends upon whether the speaker is “us” or the speaker is “them”. If it’s “them”, then you’ll find all sorts of reasons why their speech is a special case and why it doesn’t deserve protection.

That’s what’s happening here. The legitimate government purpose of “product safety” looms large, while “code is speech” doesn’t, because they hate closed-source code, and hate Microsoft in particular. The open-source community has been strong on “code is speech” when it applies to them, but weak when it applies to closed source.

Define abandoned

What, precisely, does ‘abandoned’ mean? Consider Windows 3.1. Microsoft hasn’t sold it for decades. Yet, it’s not precisely abandoned either, because they still sell modern versions of Windows. Being forced to show even 30 year old source code would give competitors a significant advantage in creating Windows-compatible code like WINE.

When code is truly abandoned, such as when the vendor has gone out of business, chances are good they don’t have the original source code anyway. Thus, in order for this policy to have any effect, you’d have to force vendors to give a third-party escrow service a copy of their code whenever they release a new version of their product.

All the source code

And that is surprisingly hard and costly. Most companies do not precisely know what source code their products are based upon. Yes, technically, all the code is in that ZIP file they gave to the escrow service, but it doesn’t build. Essential build steps are missing, so that source code won’t compile. It’s like the dependency hell that many open-source products experience, such as downloading and installing two different versions of Python at different times during the build. Except, it’s a hundred times worse.

Often, building closed-source software itself requires an obscure version of a closed-source tool that has itself been abandoned by its original vendor. You often can’t even define what the source code is. For example, engine control units (ECUs) are Matlab code that compiles down to C, which is then integrated with other C code, all of which (using a special compiler) is translated to C. Unless you have all of these closed-source products, some of which are no longer sold, the source code to the ECU will not help you patch bugs.

For small startups moving fast, such as those launched off Kickstarter, forcing them to escrow code that actually builds would impose an undue burden, harming innovation.

Binary patch and reversing

Then there is the issue of why you need the source code in the first place. Here’s the deal with binary exploits like buffer overflows: if you know enough to exploit it, you know enough to patch it. Just append some binary code to the end of the program — a new routine that verifies the input — then replace the spot where the vulnerability happens with a jump instruction to that new code.

I know this is possible and fairly trivial because I’ve done it myself. Indeed, one of the reasons Microsoft has signed kernel components is specifically because they got tired of me patching the live kernel this way (and they almost sued me for reverse engineering their code in violation of their EULA).
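As a purely illustrative sketch of that idea: the file name, offsets, and placeholder “fix” below are made up, and a real patch would also have to handle file-versus-memory addresses, alignment, and checksums.

```python
# Toy binary patch: append a stub to the end of the file, then overwrite the
# vulnerable spot with an x86 'jmp rel32' (opcode 0xE9) to that stub.
import struct

PATCH_FILE_OFFSET = 0x1234      # hypothetical offset of the vulnerable code

def make_rel_jmp(from_offset: int, to_offset: int) -> bytes:
    """Encode 'jmp rel32'; the displacement is relative to the next instruction."""
    rel = to_offset - (from_offset + 5)
    return b"\xE9" + struct.pack("<i", rel)

with open("vulnerable.bin", "r+b") as f:      # hypothetical target binary
    f.seek(0, 2)
    trampoline_offset = f.tell()              # the fix will live at the old end of file
    f.write(b"\x90" * 16)                     # placeholder for the new input-check code
    f.seek(PATCH_FILE_OFFSET)
    f.write(make_rel_jmp(PATCH_FILE_OFFSET, trampoline_offset))
```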

Given the aforementioned difficulties in building software, this would be the easier option for third parties trying to fix bugs. The only reason closed-source companies don’t do this already is because they need to fix their products permanently anyway, which involves checking in the change into their source control systems and rebuilding.

Conclusion

So what we see here is that there is no compelling benefit to forcing vendors to release code for “abandoned” products, while at the same time, there are significant costs involved, not the least of which is a violation of the principle that “code is speech”.

It doesn’t exist as a serious proposal. It only exists as a way to support open-source advocacy and security advocacy. Both would gladly stomp on your rights and drive up costs in order to achieve their higher moral goal.


Bonus: so let’s say you decide that “Windows XP” has been abandoned, which is exactly the intent of proponents. You think what would happen is that we (the open-source community) would then be able to continue to support WinXP and patch bugs.

But what we’d see instead is a lot more copies of WinXP floating around, with vulnerabilities, as people decide to use it instead of paying hundreds of dollars for a new Windows 10 license.

Indeed, part of the reason for Microsoft abandoning WinXP is that it’s riddled with flaws that can’t practically be fixed, whereas the new features of Win10 fundamentally fix them. Getting rid of SMBv1 is just one of many examples.

Making Waves: print out sound waves with the Raspberry Pi

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/printed-sound-wave/

For fun, Eunice Lee, Matthew Zhang, and Bomani McClendon have worked together to create Waves, an audiovisual project that records people’s spoken responses to personal questions and prints them in the form of a sound wave as a gift for being truthful.

Waves

Waves is a Raspberry Pi project centered around transforming the transience of the spoken word into something concrete and physical. In our setup, a user presses a button corresponding to an intimate question (ex: what’s your motto?) and answers it into a microphone while pressing down on the button.

What are you grateful for?

“I’m grateful for finishing this project,” admits maker Eunice Lee as she presses a button and speaks into the microphone that is part of the Waves project build. After a brief moment, her confession appears on receipt paper as a waveform, and she grins toward the camera, happy with the final piece.

Eunice testing Waves

Sound wave machine

Alongside a Raspberry Pi 3, the Waves device comprises four tactile buttons, a standard USB microphone, and a thermal receipt printer. This type of printer has become readily available to the maker movement from suppliers such as Adafruit and Pimoroni.

Eunice Lee, Matthew Zhang, Bomani McClendon - Sound Wave Raspberry Pi

Definitely more fun than a polygraph test

The trio designed four colour-coded cards that represent four questions, each of which has a matching button on the breadboard. Press the button that belongs to the question to be answered, and Python code directs the Pi to record audio via the microphone. Releasing the button stops the audio recording. “Once the recording has been saved, the script viz.py is launched,” explains Lee. “This script takes the audio file and, using Python matplotlib magic, turns it into a nice little waveform image.”

From there, the Raspberry Pi instructs the thermal printer to produce a printout of the sound wave image along with the question.
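A minimal sketch of that waveform-rendering step, assuming a WAV recording and using scipy plus matplotlib (this is not the team’s actual viz.py; the file names are placeholders):

```python
# Turn a recorded answer into a printable waveform image.
import matplotlib
matplotlib.use("Agg")                        # render without a display, as on a headless Pi
import matplotlib.pyplot as plt
from scipy.io import wavfile

rate, samples = wavfile.read("answer.wav")   # hypothetical recording from the USB mic

fig, ax = plt.subplots(figsize=(4, 1.5))     # small, receipt-friendly aspect ratio
ax.plot(samples, linewidth=0.3, color="black")
ax.axis("off")                               # just the wave, no axes or ticks
fig.savefig("waveform.png", dpi=200, bbox_inches="tight")
# The PNG can then be sent to the thermal printer along with the question text.
```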

Making for fun

Eunice, Bomani, and Matt, students of design and computer science at Northwestern University in Illinois, built Waves as a side project. They wanted to make something at the intersection of art and technology and were motivated by the pure joy of creating.

Eunice Lee, Matthew Zhang, Bomani McClendon - Sound Wave Raspberry Pi

Making makes people happy

They have noted improvements that can be made to increase the scope of their sound wave project. We hope to see many more interesting builds from these three, and in the meantime we invite you all to look up their code on Eunice’s GitHub to create your own Waves at home.

The post Making Waves: print out sound waves with the Raspberry Pi appeared first on Raspberry Pi.

Pirate Bay Facilitates Piracy and Can be Blocked, Top EU Court Rules

Post Syndicated from Ernesto original https://torrentfreak.com/pirate-bay-facilitates-piracy-and-can-be-blocked-top-eu-court-rules-170614/

In 2014, The Court of The Hague handed down its decision in a long-running case which had previously forced two Dutch ISPs, Ziggo and XS4ALL, to block The Pirate Bay.

The Court ruled against local anti-piracy outfit BREIN, concluding that the blockade was ineffective and restricted the ISPs’ entrepreneurial freedoms.

The Pirate Bay was unblocked by all local ISPs while BREIN took the matter to the Supreme Court, which subsequently referred the case to the EU Court of Justice, seeking further clarification.

After a careful review of the case, the Court of Justice today ruled that The Pirate Bay can indeed be blocked.

While the operators don’t share anything themselves, they knowingly provide users with a platform to share copyright-infringing links. This can be seen as “an act of communication” under the EU Copyright Directive, the Court concludes.

“Whilst it accepts that the works in question are placed online by the users, the Court highlights the fact that the operators of the platform play an essential role in making those works available,” the Court explains in a press release (pdf).

According to the ruling, The Pirate Bay indexes torrents in a way that makes it easy for users to find infringing content while the site makes a profit. The Pirate Bay is aware of the infringements, and although moderators sometimes remove “faulty” torrents, infringing links remain online.

“In addition, the same operators expressly display, on blogs and forums accessible on that platform, their intention of making protected works available to users, and encourage the latter to make copies of those works,” the Court writes.

The ruling means that there are no major obstacles for the Dutch Supreme Court to issue an ISP blockade, but a final decision in the underlying case will likely take a few more months.

A decision at the European level is important, as it may also affect court orders in other countries where The Pirate Bay and other torrent sites are already blocked, including Austria, Belgium, Finland, Italy, and its home turf Sweden.

Despite the negative outcome, the Pirate Bay team is not overly worried.

“Copyright holders will remain stubborn and fight to hold onto a dying model. Clueless and corrupt law makers will put corporate interests before the public’s. Their combined jackassery is what keeps TPB alive,” TPB’s plc365 tells TorrentFreak.

“The reality is that regardless of the ruling, nothing substantial will change. Maybe more ISPs will block TPB. More people will use one of the hundreds of existing proxies, and even more new ones will be created as a result.”

Pirate Bay moderator “Xe” notes that while it’s an extra barrier to access the site, blockades will eventually help people to get around censorship efforts, which are not restricted to TPB.

“They’re an issue for everyone in the sense that they’re an obstacle which has to be overcome. But learning how to work around them isn’t hard and knowing how to work around them is becoming a core skill for everyone who uses the Internet.

“Blockades are not a major issue for the site in the sense that they’re nothing new: we’ve long since adapted to them. We serve the needs of millions of people every day in spite of them,” Xe adds.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.