Tag Archives: Aspect

Copyright Holders Want ISPs to Police Pirate Sites and Issue Warnings

Post Syndicated from Ernesto original https://torrentfreak.com/copyright-holders-want-isps-to-police-pirate-sites-and-issue-warnings-171124/

Online piracy is a worldwide phenomenon and increasingly it ends up on the desks of lawmakers everywhere.

Frustrated by the ever-evolving piracy landscape, copyright holders are calling on local authorities to help out.

This is also the case in South Africa at the moment, where the Government is finalizing a new Cybercrimes and Cybersecurity Bill.

Responding to a call for comments, anti-piracy group SAFACT, film producers, and local broadcaster M-Net seized the opportunity to weigh in with some suggestions. Writing to the Department of Justice and Constitutional Development, they ask for measures to make it easier to block pirate sites and warn copyright infringers.

“A balanced approach to address the massive copyright infringement on the Internet is necessary,” they say.

On the site-blocking front, the copyright holder representatives suggest an EU-style amendment that would allow for injunctions against ISPs to bar access to pirate sites.

“It is suggested that South Africa should consider adopting technology-neutral ‘no fault’ enforcement legislation that would enable intermediaries to take action against online infringements, in line with Article 8.3 of the EU Copyright Directive (2001/29/EC), which addresses copyright infringement through site blocking,” it reads.

Request and response (via Business Tech)

In addition, ISPs should also be obliged to take further measures to deter piracy. New legislation should require providers to “police” unauthorized file-sharing and streaming sites, and warn subscribers who are caught pirating.

“Obligations should be imposed on ISPs to co-operate with rights-holders and Government to police illegal filesharing or streaming websites and to issue warnings to end-users identified as engaging in illegal file-sharing and to block infringing content,” the rightsholders say.

The demands were recently made public by the Department, together with an official response from the Government. While the suggestions are not dismissed based on their content, the Department notes that they don’t fit the purpose of the legislation.

“The Bill does not deal with copyright infringements. These aspects must be dealt with in terms of copyright-related legislation,” the Department writes.

SAFACT, the filmmakers, and M-Net are not without options though. The Government points out that the new Copyright Amendment Bill, which was introduced recently, would be a better fit for these asks. So it’s likely that they will try again.

This doesn’t mean that any of the proposed language will be adopted, of course. However, now that the demands are on the table, South Africans are likely to hear more blocking and warning chatter in the near future.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

The AWS Cloud Goes Underground at re:Invent

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/the-aws-cloud-goes-underground-at-reinvent/

As you wander through the AWS re:Invent campus, take a minute to think about your expectations for all of the elements that need to come together…

Starting with the location, my colleagues have chosen the best venues, designed the sessions, picked the speakers, laid out the menu, selected the color schemes, programmed or printed all of the signs, and much more, all with the goal of creating an optimal learning environment for you and tens of thousands of other AWS customers.

However, as is often the case, the part that you can see is just part of the picture. Behind the scenes, people, processes, plans, and systems come together to put all of this infrastructure into place and to make it run so smoothly that you don’t usually notice it.

Today I would like to tell you about a mission-critical aspect of the re:Invent infrastructure that is actually underground. In addition to providing great Wi-Fi for your phones, tablets, cameras, laptops, and other devices, we need to make sure that a myriad of events, from the live-streamed keynotes to the WorkSpaces-powered hands-on labs, are well-connected to each other and to the Internet. With events running at hotels up and down the Las Vegas Strip, reliable, low-latency connectivity is essential!

Thank You CenturyLink / Level3
Over the years we have been working with the great folks at Level3 to make this happen. They recently became part of CenturyLink and are now the Official Network Sponsor of re:Invent, responsible for the network fiber, circuits, and services that tie the re:Invent campus together.

To make this happen, they set up two miles of dark fiber beneath the Strip, routed to multiple Availability Zones in two separate AWS Regions. The Sands Expo Center is equipped with redundant 10 gigabit connections and the other venues (Aria, MGM, Mirage, and Wynn) are each provisioned for 2 to 10 gigabits, meaning that over half of the Strip is enabled for Direct Connect. According to the IT manager at one of the facilities, this may be the largest temporary hybrid network ever configured in Las Vegas.

On the Wi-Fi side, showNets is plugged into the same network; your devices are talking directly to Direct Connect access points (how cool is that?).

Here’s a simplified illustration of how it all fits together:

The CenturyLink team will be onsite at re:Invent and will be tweeting live network stats throughout the week.

I hope you have enjoyed this quick look behind the scenes and beneath the street!

Jeff;

New White House Announcement on the Vulnerability Equities Process

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/11/new_white_house_1.html

The White House has released a new version of the Vulnerabilities Equities Process (VEP). This is the inter-agency process by which the US government decides whether to inform the software vendor of a vulnerability it finds, or keep it secret and use it to eavesdrop on or attack other systems. You can read the new policy or the fact sheet, but the best place to start is Cybersecurity Coordinator Rob Joyce’s blog post.

In considering a way forward, there are some key tenets on which we can build a better process.

Improved transparency is critical. The American people should have confidence in the integrity of the process that underpins decision making about discovered vulnerabilities. Since I took my post as Cybersecurity Coordinator, improving the VEP and ensuring its transparency have been key priorities, and we have spent the last few months reviewing our existing policy in order to improve the process and make key details about the VEP available to the public. Through these efforts, we have validated much of the existing process and ensured a rigorous standard that considers many potential equities.

The interests of all stakeholders must be fairly represented. At a high level we consider four major groups of equities: defensive equities; intelligence / law enforcement / operational equities; commercial equities; and international partnership equities. Additionally, ordinary people want to know the systems they use are resilient, safe, and sound. These core considerations, which have been incorporated into the VEP Charter, help to standardize the process by which decision makers weigh the benefit to national security and the national interest when deciding whether to disclose or restrict knowledge of a vulnerability.

Accountability of the process and those who operate it is important to establish confidence in those served by it. Our public release of the unclassified portions of the Charter will shed light on aspects of the VEP that were previously shielded from public review, including who participates in the VEP’s governing body, known as the Equities Review Board. We make it clear that departments and agencies with protective missions participate in VEP discussions, as well as other departments and agencies that have broader equities, like the Department of State and the Department of Commerce. We also clarify what categories of vulnerabilities are submitted to the process and ensure that any decision not to disclose a vulnerability will be reevaluated regularly. There are still important reasons to keep many of the specific vulnerabilities evaluated in the process classified, but we will release an annual report that provides metrics about the process to further inform the public about the VEP and its outcomes.

Our system of government depends on informed and vigorous dialogue to discover and make available the best ideas that our diverse society can generate. This publication of the VEP Charter will likely spark discussion and debate. This discourse is important. I also predict that articles will make breathless claims of “massive stockpiles” of exploits while describing the issue. That simply isn’t true. The annual reports and transparency of this effort will reinforce that fact.

Mozilla is pleased with the new charter. I am less so; it looks to me like the same old policy with some new transparency measures — which I’m not sure I trust. The devil is in the details, and we don’t know the details — and it has giant loopholes that pretty much anything can fall through:

The United States Government’s decision to disclose or restrict vulnerability information could be subject to restrictions by partner agreements and sensitive operations. Vulnerabilities that fall within these categories will be cataloged by the originating Department/Agency internally and reported directly to the Chair of the ERB. The details of these categories are outlined in Annex C, which is classified. Quantities of excepted vulnerabilities from each department and agency will be provided in ERB meetings to all members.

This is me from last June:

There’s a lot we don’t know about the VEP. The Washington Post says that the NSA used EternalBlue “for more than five years,” which implies that it was discovered after the 2010 process was put in place. It’s not clear if all vulnerabilities are given such consideration, or if bugs are periodically reviewed to determine if they should be disclosed. That said, any VEP that allows something as dangerous as EternalBlue — or the Cisco vulnerabilities that the Shadow Brokers leaked last August — to remain unpatched for years isn’t serving national security very well. As a former NSA employee said, the quality of intelligence that could be gathered was “unreal.” But so was the potential damage. The NSA must avoid hoarding vulnerabilities.

I stand by that, and am not sure the new policy changes anything.

More commentary.

Here’s more about the Windows vulnerabilities hoarded by the NSA and released by the Shadow Brokers.

EDITED TO ADD (11/18): More news.

EDITED TO ADD (11/22): Adam Shostack points out that the process does not cover design flaws or trade-offs, and that those need to be covered:

…we need the VEP to expand to cover those issues. I’m not going to claim that will be easy, that the current approach will translate, or that they should have waited to handle those before publishing. One obvious place it gets harder is the sources and methods tradeoff. But we need the internet to be a resilient and trustworthy infrastructure.

timeShift(GrafanaBuzz, 1w) Issue 22

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/11/17/timeshiftgrafanabuzz-1w-issue-22/

Welcome to TimeShift

We hope you liked our recent article with videos and slides from the events we’ve participated in recently. With Thanksgiving right around the corner, we’re getting a breather from work-related travel, but only a short one. We have some events in the coming weeks, and of course are busy filling in the details for GrafanaCon EU.

This week we have a lot of articles, videos and presentations to share, as well as some important plugin updates. Enjoy!


Latest Release

Grafana 4.6.2 is now available and includes some bug fixes:

  • Prometheus: Fixes bug with new Prometheus alerts in Grafana. Make sure to download this version if you’re using Prometheus for alerting. More details in the issue. #9777
  • Color picker: Bug after using textbox input field to change/paste color string #9769
  • Cloudwatch: build using golang 1.9.2 #9667, thanks @mtanda
  • Heatmap: Fixed tooltip for “time series buckets” mode #9332
  • InfluxDB: Fixed query editor issue when using > or < operators in WHERE clause #9871

Download Grafana 4.6.2 Now


From the Blogosphere

Cloud Tech 10 – 13th November 2017 – Grafana, Linux FUSE Adapter, Azure Stack and more!: Mark Whitby is a Cloud Solution Architect at Microsoft UK. Each week he produces a video reviewing new developments with Microsoft Azure. This week Mark covers the new Azure Monitoring Plugin we recently announced. He also shows you how to get up and running with Grafana quickly using the Azure Marketplace.

Using Prometheus and Grafana to Monitor WebLogic Server on Kubernetes: Oracle published an article on monitoring WebLogic server on Kubernetes. To do this, you’ll use the WebLogic Monitoring Exporter to scrape the server metrics and feed them to Prometheus, then visualize the data in Grafana. Marina goes into a lot of detail and provides sample files and configs to help you get going.

Getting Started with Prometheus: Will Robinson has started a new series on monitoring with Prometheus from someone who has never touched it before. Part 1 introduces a number of monitoring tools and concepts, and helps define a number of monitoring terms. Part 2 teaches you how to spin up Prometheus in a Docker container, and takes a look at writing queries. Looking forward to the third post, when he dives into the visualization aspect.

Monitoring with Prometheus: Alexander Schwartz has made the slides from his most recent presentation from the Continuous Lifecycle Conference in Germany available. In his talk, he discussed getting started with Prometheus, how it differs from other monitoring concepts, and provides examples of how to monitor and alert. We’ll link to the video of the talk when it’s available.

Using Grafana with SiriDB: Jeroen van der Heijden has written an in-depth tutorial to help you visualize data from SiriDB, an open source TSDB, in Grafana. This tutorial will get you familiar with setting up SiriDB and provides a sample dashboard to help you get started.

Real-Time Monitoring with Grafana, StatsD and InfluxDB – Artur Caliendo Prado: This is a video from a talk at The Conf, held in Brazil. Artur’s presentation focuses on the experiences they had building a monitoring stack at Youse, how their monitoring became more complex as they scaled, and the platform they built to make sense of their data.

Using Grafana & InfluxDB to view XIV Host Performance Metrics – Part 4 Array Stats: This is the fourth part in a series of posts about host performance metrics. This post dives into array stats to identify workloads and maintain balance across ports. Check out part 1, part 2 and part 3.


GrafanaCon Tickets are Going Fast

Tickets are going fast for GrafanaCon EU, but we still have a seat reserved for you. Join us March 1-2, 2018 in Amsterdam for 2 days of talks centered around Grafana and the surrounding monitoring ecosystem including Graphite, Prometheus, InfluxData, Elasticsearch, Kubernetes, and more.

Get Your Ticket Now


Grafana Plugins

Plugin authors are often adding new features and fixing bugs, which will make your plugin perform better – so it’s important to keep your plugins up to date. We’ve made updating easy; for on-prem Grafana, use the grafana-cli tool, or update with one click if you’re using Hosted Grafana.
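For on-prem installs, the command-line update is quick. A minimal sketch using the grafana-cli tool that ships with Grafana (replace <plugin-id> with the ID of the plugin you want to update, and restart the Grafana server afterwards so the new version is loaded):

# update every installed plugin, or a single plugin by its ID
> grafana-cli plugins update-all
> grafana-cli plugins update <plugin-id>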

UPDATED PLUGIN

Hawkular data source – There is an important change in this release – as this datasource is now able to fetch not only Hawkular Metrics but also Hawkular Alerts, the server URL in the datasource configuration must be updated: http://myserver:123/hawkular/metrics must be changed to http://myserver:123/hawkular

Some of the changes (see the release notes for more details):

  • Allow per-query tenant configuration
  • Annotations can now be configured out of Availability metrics and Hawkular Alerts events in addition to string metrics
  • Allow dot character in tag names

Update

UPDATED PLUGIN

Diagram Panel – This is the first release in a while for the popular Diagram Panel plugin.

The release also includes a number of bug fixes.

Update

UPDATED PLUGIN

Influx Admin Panel – received a number of improvements:

  • Fix issue always showing query results
  • When there is only one row, swap rows/cols (e.g., SHOW DIAGNOSTICS)
  • Improved auto-refresh behavior
  • Fix query time sorting
  • show ‘status’ field (killed, etc)

Update


Upcoming Events:

In between code pushes we like to speak at, sponsor and attend all kinds of conferences and meetups. We have some awesome talks and events coming soon. Hope to see you at one of these!

How to Use Open Source Projects for Performance Monitoring | Webinar
Nov. 29, 1pm EST: Check out how you can use popular open source projects to monitor your infrastructure, applications, and cloud faster, easier, and at scale. In this webinar, Daniel Lee from Grafana Labs and Chris Churilo from InfluxData will walk you step by step, from download and configuration to collecting metrics and building dashboards and alerts.

RSVP

KubeCon | Austin, TX – Dec. 6-8, 2017: We’re sponsoring KubeCon 2017! This is the must-attend conference for cloud native computing professionals. KubeCon + CloudNativeCon brings together leading contributors in:

  • Cloud native applications and computing
  • Containers
  • Microservices
  • Central orchestration processing
  • And more

Buy Tickets

FOSDEM | Brussels, Belgium – Feb 3-4, 2018: FOSDEM is a free developer conference where thousands of developers of free and open source software gather to share ideas and technology. Carl Bergquist is managing the Cloud and Monitoring Devroom, and the CFP is now open. There is no need to register; all are welcome. If you’re interested in speaking at FOSDEM, submit your talk now!


Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

We were glad to be a part of InfluxDays this year, and looking forward to seeing the InfluxData team in NYC in February.


Grafana Labs is Hiring!

We are passionate about open source software and thrive on tackling complex challenges to build the future. We ship code from every corner of the globe and love working with the community. If this sounds exciting, you’re in luck – WE’RE HIRING!

Check out our Open Positions


How are we doing?

I enjoy writing these weekly roundups, but am curious how I can improve them. Submit a comment on this article below, or post something at our community forum. Help us make these weekly roundups better!

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

Building a Multi-region Serverless Application with Amazon API Gateway and AWS Lambda

Post Syndicated from Stefano Buliani original https://aws.amazon.com/blogs/compute/building-a-multi-region-serverless-application-with-amazon-api-gateway-and-aws-lambda/

This post was written by Magnus Bjorkman – Solutions Architect

Many customers are looking to run their services at global scale, deploying their backend to multiple regions. In this post, we describe how to deploy a Serverless API into multiple regions and how to leverage Amazon Route 53 to route the traffic between regions. We use latency-based routing and health checks to achieve an active-active setup that can fail over between regions in case of an issue. We leverage the new regional API endpoint feature in Amazon API Gateway to make this a seamless process for the API client making the requests. This post does not cover the replication of your data, which is another aspect to consider when deploying applications across regions.

Solution overview

Currently, the default API endpoint type in API Gateway is the edge-optimized API endpoint, which enables clients to access an API through an Amazon CloudFront distribution. This typically improves connection time for geographically diverse clients. By default, a custom domain name is globally unique and the edge-optimized API endpoint would invoke a Lambda function in a single region in the case of Lambda integration. You can’t use this type of endpoint with a Route 53 active-active setup and fail-over.

The new regional API endpoint in API Gateway moves the API endpoint into the region and the custom domain name is unique per region. This makes it possible to run a full copy of an API in each region and then use Route 53 to use an active-active setup and failover. The following diagram shows how you do this:

Active/active multi region architecture

  • Deploy your Rest API stack, consisting of API Gateway and Lambda, in two regions, such as us-east-1 and us-west-2.
  • Choose the regional API endpoint type for your API.
  • Create a custom domain name and choose the regional API endpoint type for that one as well. In both regions, you are configuring the custom domain name to be the same, for example, helloworldapi.replacewithyourcompanyname.com
  • Use the host name of the custom domain names from each region, for example, xxxxxx.execute-api.us-east-1.amazonaws.com and xxxxxx.execute-api.us-west-2.amazonaws.com, to configure record sets in Route 53 for your client-facing domain name, for example, helloworldapi.replacewithyourcompanyname.com

The above solution provides an active-active setup for your API across the two regions, but you are not doing failover yet. For that to work, set up a health check in Route 53:

Route 53 Health Check

A Route 53 health check must have an endpoint to call to check the health of a service. You could do a simple ping of your actual Rest API methods, but a better approach is to provide a specific method on your Rest API that does a deep ping: a Lambda function that checks the status of all the dependencies.

In the case of the Hello World API, you don’t have any other dependencies. In a real-world scenario, you could check on dependencies such as databases, other APIs, and external services. Route 53 health checks cannot use your custom domain name endpoint’s DNS address, so you are going to directly call the API endpoints via their region-unique endpoint’s DNS address.
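As a quick sanity check once everything below is deployed, you can call the region-unique health endpoint directly with curl. A hedged example, assuming the health check method ends up at /prod/healthcheck as in the CloudFormation snippet later in this post (replace the host name with your own execute-api address):

> curl https://xxxxxx.execute-api.us-east-1.amazonaws.com/prod/healthcheck
{"status": "ok"}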

Walkthrough

The following sections describe how to set up this solution. You can find the complete solution at the blog-multi-region-serverless-service GitHub repo. Clone or download the repository locally to be able to do the setup as described.

Prerequisites

You need the following resources to set up the solution described in this post:

  • AWS CLI
  • An S3 bucket in each region in which to deploy the solution, which can be used by the AWS Serverless Application Model (SAM). You can use the following CloudFormation templates to create buckets in us-east-1 and us-west-2:
    • us-east-1:
    • us-west-2:
  • A hosted zone registered in Amazon Route 53. This is used for defining the domain name of your API endpoint, for example, helloworldapi.replacewithyourcompanyname.com. You can use a third-party domain name registrar and then configure the DNS in Amazon Route 53, or you can purchase a domain directly from Amazon Route 53.

Deploy API with health checks in two regions

Start by creating a small “Hello World” Lambda function that sends back a message in the region in which it has been deployed.


"""Return message."""
import logging

logging.basicConfig()
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    """Lambda handler for getting the hello world message."""

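    # The invoked function ARN looks like arn:aws:lambda:REGION:ACCOUNT:function:NAME, so index 3 is the region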
    region = context.invoked_function_arn.split(':')[3]

    logger.info("message: " + "Hello from " + region)
    
    return {
		"message": "Hello from " + region
    }

Also create a Lambda function for doing a health check that returns a value based on an environment variable (either “ok” or “fail”) to allow for ease of testing:


"""Return health."""
import logging
import os

logging.basicConfig()
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    """Lambda handler for getting the health."""

    logger.info("status: " + os.environ['STATUS'])
    
    return {
		"status": os.environ['STATUS']
    }

Deploy both of these using an AWS Serverless Application Model (SAM) template. SAM is a CloudFormation extension that is optimized for serverless, and provides a standard way to create a complete serverless application. You can find the full helloworld-sam.yaml template in the blog-multi-region-serverless-service GitHub repo.

A few things to highlight:

  • You are using inline Swagger to define your API so you can substitute the current region in the x-amazon-apigateway-integration section.
  • Most of the Swagger template covers CORS to allow you to test this from a browser.
  • You are also using substitution to populate the environment variable used by the “Hello World” method with the region into which it is being deployed.

The Swagger allows you to use the same SAM template in both regions.

You can only use SAM from the AWS CLI, so do the following from the command prompt. First, deploy the SAM template in us-east-1 with the following commands, replacing “<your bucket in us-east-1>” with a bucket in your account:


> cd helloworld-api
> aws cloudformation package --template-file helloworld-sam.yaml --output-template-file /tmp/cf-helloworld-sam.yaml --s3-bucket <your bucket in us-east-1> --region us-east-1
> aws cloudformation deploy --template-file /tmp/cf-helloworld-sam.yaml --stack-name multiregionhelloworld --capabilities CAPABILITY_IAM --region us-east-1

Second, do the same in us-west-2:


> aws cloudformation package --template-file helloworld-sam.yaml --output-template-file /tmp/cf-helloworld-sam.yaml --s3-bucket <your bucket in us-west-2> --region us-west-2
> aws cloudformation deploy --template-file /tmp/cf-helloworld-sam.yaml --stack-name multiregionhelloworld --capabilities CAPABILITY_IAM --region us-west-2

The API was created with the default endpoint type of Edge Optimized. Switch it to Regional. In the Amazon API Gateway console, select the API that you just created and choose the wheel-icon to edit it.

API Gateway edit API settings

In the edit screen, select the Regional endpoint type and save the API. Do the same in both regions.
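If you prefer to script this step instead of clicking through the console, the endpoint type can also be switched with the AWS CLI. A sketch with placeholder API IDs, based on the update-rest-api patch operation (verify the patch path against your CLI version before relying on it):

> aws apigateway update-rest-api --rest-api-id abc123 --patch-operations op=replace,path=/endpointConfiguration/types/EDGE,value=REGIONAL --region us-east-1
> aws apigateway update-rest-api --rest-api-id def456 --patch-operations op=replace,path=/endpointConfiguration/types/EDGE,value=REGIONAL --region us-west-2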

Grab the URL for the API in the console by navigating to the method in the prod stage.

API Gateway endpoint link

You can now test this with curl:


> curl https://2wkt1cxxxx.execute-api.us-west-2.amazonaws.com/prod/helloworld
{"message": "Hello from us-west-2"}

Write down the domain name for the URL in each region (for example, 2wkt1cxxxx.execute-api.us-west-2.amazonaws.com), as you need that later when you deploy the Route 53 setup.
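If you would rather look these host names up from the command line, remember that the region-unique host follows the pattern <api-id>.execute-api.<region>.amazonaws.com, so listing the API IDs is enough. A sketch (pick the API created by the stack from the output):

> aws apigateway get-rest-apis --region us-west-2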

Create the custom domain name

Next, create an Amazon API Gateway custom domain name endpoint. As part of using this feature, you must have a hosted zone and domain available to use in Route 53 as well as an SSL certificate that you use with your specific domain name.

You can create the SSL certificate by using AWS Certificate Manager. In the ACM console, choose Get started (if you have no existing certificates) or Request a certificate. Fill out the form with the domain name to use for the custom domain name endpoint, which is the same across the two regions:

Amazon Certificate Manager request new certificate

Go through the remaining steps and validate the certificate for each region before moving on.
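The certificate requests can also be made from the CLI, one per region, because a regional custom domain name needs a certificate in the same region as the API. A sketch, assuming DNS validation (swap in email validation if you prefer):

> aws acm request-certificate --domain-name helloworldapi.replacewithyourcompanyname.com --validation-method DNS --region us-east-1
> aws acm request-certificate --domain-name helloworldapi.replacewithyourcompanyname.com --validation-method DNS --region us-west-2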

You are now ready to create the endpoints. In the Amazon API Gateway console, choose Custom Domain Names, Create Custom Domain Name.

API Gateway create custom domain name

A few things to highlight:

  • The domain name is the same as what you requested earlier through ACM.
  • The endpoint configuration should be regional.
  • Select the ACM Certificate that you created earlier.
  • You need to create a base path mapping that connects back to your earlier API Gateway endpoint. Set the base path to v1 so you can version your API, and then select the API and the prod stage.

Choose Save. You should see your newly created custom domain name:

API Gateway custom domain setup

Note the value for Target Domain Name as you need that for the next step. Do this for both regions.
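The same custom domain name and base path mapping can be created from the CLI if you are automating the setup. A sketch with placeholder values (the certificate ARN and REST API ID are assumptions you need to substitute; the regionalDomainName field in the first command’s output is the Target Domain Name mentioned above):

> aws apigateway create-domain-name --domain-name helloworldapi.replacewithyourcompanyname.com --endpoint-configuration types=REGIONAL --regional-certificate-arn arn:aws:acm:us-east-1:111122223333:certificate/your-cert-id --region us-east-1
> aws apigateway create-base-path-mapping --domain-name helloworldapi.replacewithyourcompanyname.com --base-path v1 --rest-api-id abc123 --stage prod --region us-east-1

Repeat both commands in us-west-2 with that region’s certificate ARN and API ID.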

Deploy Route 53 setup

Use the global Route 53 service to provide DNS lookup for the Rest API, distributing the traffic in an active-active setup based on latency. You can find the full CloudFormation template in the blog-multi-region-serverless-service GitHub repo.

The template sets up health checks, for example, for us-east-1:


HealthcheckRegion1:
  Type: "AWS::Route53::HealthCheck"
  Properties:
    HealthCheckConfig:
      Port: "443"
      Type: "HTTPS_STR_MATCH"
      SearchString: "ok"
      ResourcePath: "/prod/healthcheck"
      FullyQualifiedDomainName: !Ref Region1HealthEndpoint
      RequestInterval: "30"
      FailureThreshold: "2"

Use the health check when you set up the record set and the latency routing, for example, for us-east-1:


Region1EndpointRecord:
  Type: AWS::Route53::RecordSet
  Properties:
    Region: us-east-1
    HealthCheckId: !Ref HealthcheckRegion1
    SetIdentifier: "endpoint-region1"
    HostedZoneId: !Ref HostedZoneId
    Name: !Ref MultiregionEndpoint
    Type: CNAME
    TTL: 60
    ResourceRecords:
      - !Ref Region1Endpoint

You can create the stack by using the following link, copying in the domain names from the previous section, your existing hosted zone name, and the main domain name that is created (for example, helloworldapi.replacewithyourcompanyname.com):

The following screenshot shows what the parameters might look like:
Serverless multi region Route 53 health check

Specifically, the domain names that you collected earlier map as follows (a scripted alternative is sketched after this list):

  • The domain names from the API Gateway “prod”-stage go into Region1HealthEndpoint and Region2HealthEndpoint.
  • The domain names from the custom domain name’s target domain name goes into Region1Endpoint and Region2Endpoint.
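If you would rather create the Route 53 stack from the CLI than from the console link, a sketch along these lines should work. The template file name and parameter keys below are taken from the snippets above and are assumptions; check them against the template in the repo before running:

> aws cloudformation create-stack --stack-name multiregion-route53 --template-body file://route53.yaml \
    --parameters ParameterKey=Region1HealthEndpoint,ParameterValue=xxxxxx.execute-api.us-east-1.amazonaws.com \
                 ParameterKey=Region2HealthEndpoint,ParameterValue=yyyyyy.execute-api.us-west-2.amazonaws.com \
                 ParameterKey=Region1Endpoint,ParameterValue=<target-domain-name-us-east-1> \
                 ParameterKey=Region2Endpoint,ParameterValue=<target-domain-name-us-west-2> \
                 ParameterKey=HostedZoneId,ParameterValue=<your-hosted-zone-id> \
                 ParameterKey=MultiregionEndpoint,ParameterValue=helloworldapi.replacewithyourcompanyname.com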

Using the Rest API from server-side applications

You are now ready to use your setup. First, demonstrate the use of the API from server-side clients. You can demonstrate this by using curl from the command line:


> curl https://helloworldapi.replacewithyourcompanyname.com/v1/helloworld/
{"message": "Hello from us-east-1"}

Testing failover of Rest API in browser

Here’s how you can use this from the browser and test the failover. Find all of the files for this test in the browser-client folder of the blog-multi-region-serverless-service GitHub repo.

Use this html file:


<!DOCTYPE HTML>
<html>
<head>
    <meta charset="utf-8"/>
    <meta http-equiv="X-UA-Compatible" content="IE=edge"/>
    <meta name="viewport" content="width=device-width, initial-scale=1"/>
    <title>Multi-Region Client</title>
</head>
<body>
<div>
   <h1>Test Client</h1>

    <p id="client_result">

    </p>

    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.3/jquery.min.js"></script>
    <script src="settings.js"></script>
    <script src="client.js"></script>
</body>
</html>

The html file uses this JavaScript file to repeatedly call the API and print the history of messages:


var messageHistory = "";

(function call_service() {

   $.ajax({
      url: helloworldMultiregionendpoint+'v1/helloworld/',
      dataType: "json",
      cache: false,
      success: function(data) {
         messageHistory+="<p>"+data['message']+"</p>";
         $('#client_result').html(messageHistory);
      },
      complete: function() {
         // Schedule the next request when the current one's complete
         setTimeout(call_service, 10000);
      },
      error: function(xhr, status, error) {
         $('#client_result').html('ERROR: '+status);
      }
   });

})();

Also, make sure to update the settings in settings.js to match with the API Gateway endpoints for the DNS-proxy and the multi-regional endpoint for the Hello World API: var helloworldMultiregionendpoint = "https://helloworldapi.replacewithyourcompanyname.com/";

You can now open the HTML file in the browser (you can do this directly from the file system) and you should see something like the following screenshot:

Serverless multi region browser test

You can test failover by changing the environment variable in your health check Lambda function. In the Lambda console, select your health check function and scroll down to the Environment variables section. For the STATUS key, modify the value to fail.

Lambda update environment variable
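The same toggle can be flipped from the CLI, which is handy if you want to script failover drills. A sketch, assuming a hypothetical function name of healthcheck-helloworld (substitute the real name of your health check function); note that update-function-configuration replaces the whole environment map, so include any other variables your function needs:

> aws lambda update-function-configuration --function-name healthcheck-helloworld --environment "Variables={STATUS=fail}" --region us-east-1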

You should see the region switch in the test client:

Serverless multi region broker test switchover

During an emulated failure like this, the browser might take some additional time to switch over due to connection keep-alive functionality. If you are using a browser like Chrome, you can kill all the connections to see a more immediate fail-over: chrome://net-internals/#sockets

Summary

You have implemented a simple way to do multi-regional serverless applications that fail over seamlessly between regions, either being accessed from the browser or from other applications/services. You achieved this by using the capabilities of Amazon Route 53 to do latency based routing and health checks for fail-over. You unlocked the use of these features in a serverless application by leveraging the new regional endpoint feature of Amazon API Gateway.

The setup was fully scripted using CloudFormation, the AWS Serverless Application Model (SAM), and the AWS CLI, and it can be integrated into deployment tools to push the code across the regions to make sure it is available in all the needed regions. For more information about cross-region deployments, see Building a Cross-Region/Cross-Account Code Deployment Solution on AWS on the AWS DevOps blog.

[$] The state of Linus

Post Syndicated from corbet original https://lwn.net/Articles/738230/rss

A traditional Kernel-Summit agenda item was a slot where Linus Torvalds had the opportunity to discuss the aspects of the development community that he was (or, more often, was not) happy with. In 2017, this discussion moved to the smaller Maintainers Summit. Torvalds is mostly content with the state of the community, it seems, but the group still found plenty of process-related things to talk about.

Fraud Detection in Pokémon Go

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/11/fraud_detection.html

I play Pokémon Go. (There, I’ve admitted it.) One of the interesting aspects of the game I’ve been watching is how the game’s publisher, Niantic, deals with cheaters.

There are three basic types of cheating in Pokémon Go. The first is botting, where a computer plays the game instead of a person. The second is spoofing, which is faking GPS to convince the game that you’re somewhere you’re not. These two cheats are often used together — and you see the results in the many high-level accounts for sale on the Internet. The third type of cheating is the use of third-party apps like trackers to get extra information about the game.

None of this would matter if everyone played independently. The only reason any player cares about whether other players are cheating is that there is a group aspect of the game: gym battling. Everyone’s enjoyment of that part of the game is affected by cheaters who can pretend to be where they’re not, especially if they have lots of powerful Pokémon that they collected effortlessly.

Niantic has been trying to deal with this problem since the game debuted, mostly by banning accounts when it detects cheating. Its initial strategy was basic — algorithmically detecting impossibly fast travel between physical locations or super-human amounts of playing, and then banning those accounts — with limited success. The limiting factor in all of this is false positives. While Niantic wants to stop cheating, it doesn’t want to block or limit any legitimate players. This makes it a very difficult problem, and contributes to the balance in the attacker/defender arms race.

Recently, Niantic implemented two new anti-cheating measures. The first is machine learning to detect cheaters. About this, we know little. The second is to limit the functionality of cheating accounts rather than ban them outright, making it harder for cheaters to know when they’ve been discovered.

“This may very well be the beginning of Niantic’s machine learning approach to active bot countering,” user Dronpes writes on The Silph Road subreddit. “If the parameters for a shadowban are constantly adjusted server-side, as they can now easily be, then Niantic’s machine learning engineers can train their detection (classification) algorithms in ever-improving, ever more aggressive ways, and botters will constantly be forced to re-evaluate what factors may be triggering the detection.”

One of the expected future features in the game is trading. Creating a market for rare or powerful Pokémon would add a huge additional financial incentive to cheat. Unless Niantic can effectively prevent botting and spoofing, it’s unlikely to implement that feature.

Cheating detection in virtual reality games is going to be a constant problem as these games become more popular, especially if there are ways to monetize the results of cheating. This means that cheater detection will continue to be a critical component of these games’ success. Anything Niantic learns in Pokémon Go will be useful in whatever games come next.

Mystic, level 39 — if you must know.

And, yes, I know the game works by tracking your location. I’m all right with that. As I repeatedly say, Internet privacy is all about trade-offs.

Kim Dotcom Wins Settlement Over Military-Style Police Raid

Post Syndicated from Andy original https://torrentfreak.com/kim-dotcom-wins-settlement-military-style-police-raid-171103/

It’s been spoken about thousands of times in the past half-decade but the 2012 raid on Kim Dotcom’s home in New Zealand was extraordinary by any standard.

At the behest of the US Government, 72 police officers – including some from the elite heavily armed Special Tactics Group (STG) – descended on Dotcom’s Coatesville mansion. Two helicopters were used during the raid, footage from which was later released to the public as the scale and nature of the operation became clear.

To be clear, no one in the Dotcom residence had any history of violence. Nevertheless, considerable force was used to attack rooms in the building, all of it aimed at detaining the founder of what was then the world’s most famous file-hosting site. The FBI, it seems, would stop at nothing in pursuit of the man they claimed was the planet’s most notorious copyright infringer.

As the dust settled, it became clear that the overwhelming use of force was not only unprecedented but also completely unnecessary, a point Dotcom himself became intent on pressing home.

The entrepreneur was particularly angry at the treatment received by former wife Mona, who was seven months pregnant with twins at the time. So, in response, the Megaupload founder and his wife sued the police, hoping to hold the authorities to account for their actions.

The case has dragged on for years but this morning came news of a breakthrough. According to information released by Kim Dotcom, the lawsuit has been resolved after a settlement was reached with the police.

“Today, Mona and I are glad to reach a confidential settlement of our case against the New Zealand Police. We have respect for the Police in this country. They work hard and have, with this one exception, treated me and my family with courtesy and respect,” Dotcom said.

“We were shocked at the uncharacteristic handling of my arrest for a non-violent Internet copyright infringement charge brought by the United States, which is not even a crime in New Zealand.”

Dotcom said police could have simply asked to be let in, at which point he could have been arrested. Instead, under pressure from US authorities and “special interests in Hollywood”, they turned the whole event into a massive publicity stunt aimed at pleasing the US.

“The New Zealand Police we know do not carry guns. They try to resolve matters in a non-violent manner, unlike what we see from the United States. We are sad that our officers, good people simply doing their job, were tainted by US priorities and arrogance,” Dotcom said.

“We sued the Police because we believed their military-style raid on a family with children in a non-violent case went far beyond what a civilised community should expect from its police force. New Zealanders deserve and should expect better.”

Kim Dotcom has developed a reputation for fighting back across all aspects of his long-running case, and this particular action was no different. He’d planned to take the case all the way to the High Court but in the end decided that doing so wouldn’t be in the best interests of his family.

Noting that New Zealand has a new government “for the better”, Dotcom said that raking up the past would only serve to further disrupt his family.

“Our children are now settled and integrated safely here into their community and they love it. We do not want to relive past events. We do not want to disrupt our children’s new lives. We do not want to revictimise them. We want them to grow up happy,” he said.

“That is why we chose New Zealand to be our family home in the first place. We are fortunate to live here. Under the totality of the circumstances, we thought settlement was best for our children.”

According to NZ Herald, the Dotcoms aren’t the only ones to have made peace with the police. Other people arrested in 2012, including Dotcom associates Bram van der Kolk and Mathias Ortmann, were paid six-figure sums to settle. The publication speculates that as the main target of the raid, Dotcom’s settlement amount would’ve been more.

But while this matter is now closed, others remain. It was previously determined that Kiwi spy agency the Government Communications Security Bureau (GCSB) unlawfully spied on the Dotcoms over an extended period. Ron Mansfield, New Zealand counsel for the Dotcoms, says that case will continue.

“The GCSB refuses to disclose what it did or the actual private communications it stole. The Dotcoms understandably believe that they are entitled to know this. That action is pending appeal in the Court of Appeal,” he says.

Also before the Court of Appeal is the case to extradite Dotcom and his associates to the United States. That hearing is set for February 2018 but whatever the outcome, a further appeal to the Supreme Court is likely, meaning that Dotcom will remain in New Zealand until 2020, at least.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

IPTV Piracy Generates More Internet Traffic Than Torrents

Post Syndicated from Ernesto original https://torrentfreak.com/iptv-piracy-generates-more-internet-traffic-than-torrents-171101/

Increasingly, people are trading in their expensive cable subscriptions, opting to use cheaper or free Internet TV instead.

This is made easy and convenient with help from a variety of easy-to-use set-top boxes, many of which are specifically configured to receive pirated content.

Following this trend, there has also been an uptick in the availability of unlicensed TV subscriptions, with dozens of vendors offering virtually any channel imaginable, either for free or in exchange for a small fee.

Until now the true scope of this piracy ecosystem was largely unknown, but a new report published by Canadian broadband management company Sandvine reveals that it’s massive.

The company monitored traffic across multiple fixed access tier-1 networks in North America and found that 6.5% of households are communicating with known TV piracy services. This translates to seven million subscribers and many more potential viewers.

One of the interesting aspects of IPTV piracy is that most services charge money, around $10 per month, which means that there’s a lot of money involved. If the seven million figure is indeed accurate, these IPTV vendors would generate roughly $800 million per year in North America alone (7 million subscribers × $10 × 12 months comes to about $840 million).

“TV piracy could quickly become almost a billion dollar a year industry for pirates,” Sandvine writes in its report, noting that the real rightsholders are being substantially harmed.

Pirate subscription TV ecosystem

According to Sandvine, roughly 95% of the IPTV subscriptions run off custom set-top boxes. Kodi-powered devices and Roku boxes follow at a respectable distance with 3% and 2% of the market, respectively.

With millions of viewers, there’s undoubtedly a large audience of pirate subscription TV viewers. This is also reflected in the bandwidth these services consume. During peak hours, 6.5% of all downstream traffic on fixed networks is generated by TV piracy services.

To put this into perspective; this is more than all BitTorrent traffic during the peak hours, which was “only” 1.73% last year, and dropping.

The pirate IPTV numbers are quite impressive, even when compared to Netflix and YouTube. While the two video giants still have a larger share of overall Internet traffic on fixed networks, pirate TV subscriptions are not that far behind.

Internet traffic share throughout the day

The graph above points out another issue. It highlights that many IPTV services continue to stream data even when they’re not actively used (tuned into a channel with the TV off). As a result, they have a larger share of the overall traffic during the night when most people don’t use Netflix or YouTube.

This wasted traffic is referred to as “phantom bandwidth” and can be as high as one terabyte per month for a single connection. Physically powering off the box is often the only way to prevent this.

Needless to say, “phantom bandwidth” increases IPTV traffic numbers, so it doesn’t necessarily mean that all this traffic is actively consumed.

Finally, Sandvine looked at the different types of content people are streaming with these pirate subscriptions. Live sporting events are popular, as we’ve seen with the megafight between Floyd Mayweather and Conor McGregor. The same is true for news channels and premium TV such as HBO and international broadcasts.

The most viewed of all in North America, with 4.6% of all pirated TV traffic, is the Indian Star Plus HD.

All in all, it is safe to conclude that IPTV piracy makes up a large part of the pirate ecosystem. This hasn’t gone unnoticed by copyright holders, of course. In recent months we have seen enforcement actions against several providers and if this trend continues, more are likely to follow.

Looking ahead, it would be interesting to see some numbers for the “on demand” piracy streaming websites and devices as well. IPTV subscriptions are substantial, but it would be no surprise if pirate streaming boxes and sites generate even more traffic.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

UK ‘Pirate’ Kodi Box Seller Handed a Suspended Prison Sentence

Post Syndicated from Andy original https://torrentfreak.com/uk-pirate-kodi-box-seller-handed-a-suspended-prison-sentence-171021/

After being raided by police and Trading Standards in 2015, Middlesbrough-based shopkeeper Brian ‘Tomo’ Thompson found himself in the spotlight.

Accused of selling “fully-loaded” Kodi boxes (those with ‘pirate’ addons installed), Thompson continued to protest his innocence.

“All I want to know is whether I am doing anything illegal. I know it’s a gray area but I want it in black and white,” he said last September.

Unlike other cases, where copyright holders took direct action, Thompson was prosecuted by his local council. At the time, he seemed prepared to martyr himself to test the limits of the law.

“This may have to go to the crown court and then it may go all the way to the European court, but I want to make a point with this and I want to make it easier for people to know what is legal and what isn’t,” he said. “I expect it go against me but at least I will know where I stand.”

In an opinion piece not long after this statement, we agreed with Thompson’s sentiment, noting that barring a miracle, the Middlesbrough man would indeed lose his case, probably in short order. But Thompson’s case turned out to be less than straightforward.

Thompson wasn’t charged with straightforward “making available” under the Copyright, Designs and Patents Act. If he had been, there would’ve been no question that he’d been breaking the law. This is due to a European Court of Justice decision in the BREIN v Filmspeler case earlier this year which determined that selling fully loaded boxes in the EU is illegal.

Instead, for reasons best known to the prosecution, ‘Tomo’ stood accused of two offenses under section 296ZB of the Copyright, Designs and Patents Act, which deals with devices and services designed to “circumvent technological measures”. It’s a different aspect of copyright law previously applied to cases where encryption has been broken on official products.

“A person commits an offense if he — in the course of a business — sells or lets for hire, any device, product or component which is primarily designed, produced, or adapted for the purpose of enabling or facilitating the circumvention of effective technological measures,” the law reads.

‘Tomo’ in his store

In January this year, Thompson entered his official ‘not guilty’ plea, setting up a potentially fascinating full trial in which we would’ve heard how ‘circumvention of technological measures’ could possibly relate to streaming illicit content from entirely unprotected far-flung sources.

Last month, however, Thompson suddenly had a change of heart, entering guilty pleas against one count of selling and one count of advertising devices for the purpose of enabling or facilitating the circumvention of effective technological measures.

That plea stomped on what could’ve been a really interesting trial, particularly since the Federation Against Copyright Theft’s own lawyer predicted it could be difficult and complex.

As a result, Thompson appeared at Teesside Crown Court on Friday for sentencing. Prosecutor Cameron Crowe said Thompson advertised and sold the ‘pirate’ devices for commercial gain, fully aware that they would be used to access infringing content and premium subscription services.

Crowe said that Thompson made around £40,000 from the devices while potentially costing Sky around £200,000 in lost subscription fees. When Thompson was raided in June 2015, a diary revealed he’d sold 159 devices in the previous four months, sales which generated £17,000 in revenue.

After his arrest, Thompson changed premises and continued to offer the devices for sale on social media.

Passing sentence, Judge Peter Armstrong told the 55-year-old businessman that he’d receive an 18-month prison term, suspended for two years.

“If anyone was under any illusion as to whether such devices as these, fully loaded Kodi boxes, were illegal or not, they can no longer be in any such doubt,” Judge Armstrong told the court, as reported by Gazette Live.

“I’ve come to the conclusion that in all the circumstances an immediate custodial sentence is not called for. But as a warning to others in future, they may not be so lucky.”

Also sentenced Friday was another local seller, Julian Allen, who sold devices to Thompson, among others. He was arrested following raids on his Geeky Kit businesses in 2015 and pleaded guilty this July to using or acquiring criminal property.

But despite making more than £135,000 from selling ‘pirate’ boxes, he too avoided jail, receiving a 21-month prison sentence suspended for two years instead.

While Thompson’s and Allen’s sentences are likely to be portrayed by copyright holders as a landmark moment, the earlier ruling from the European Court of Justice means that selling these kinds of devices for infringing purposes has always been illegal.

Perhaps the big surprise, given the dramatic lead up to both cases, is the relative leniency of their sentences. All that being said, however, a line has been drawn in the sand and other sellers should be aware.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

[$] A look at the 4.14 development cycle

Post Syndicated from corbet original https://lwn.net/Articles/736578/rss

The 4.14 kernel, due in the first half of November, is moving into the relatively slow part of the development cycle as of this writing. The time is thus ripe for a look at the changes that went into this kernel cycle and how they got there. While 4.14 is a fairly typical kernel development cycle, there are a couple of aspects that stand out this time around.

EU Piracy Report Suppression Raises Questions Over Transparency

Post Syndicated from Andy original https://torrentfreak.com/eu-piracy-report-suppression-raises-questions-transparency-170922/

Over the years, copyright holders have made hundreds of statements against piracy, mainly that it risks bringing industries to their knees through widespread and uncontrolled downloading from the Internet.

But while TV shows like Game of Thrones have been downloaded millions of times, the big question (one could argue the only really important question) is whether this activity actually affects sales. After all, if piracy has a massive negative effect on industry, something needs to be done. If it does not, why all the panic?

Quite clearly, the EU Commission wanted to find out the answer to this potential multi-billion dollar question when it made the decision to invest a staggering 360,000 euros in a dedicated study back in January 2014.

With a final title of ‘Estimating displacement rates of copyrighted content in the EU’, the completed study is an intimidating 307 pages deep. Shockingly, until this week, few people even knew it existed because, for reasons unknown, the EU Commission decided not to release it.

However, thanks to the sheer persistence of Member of the European Parliament Julia Reda, the public now has a copy and it contains quite a few interesting conclusions. But first, some background.

The study uses data from 2014 and covers four broad types of content: music, audio-visual material, books and videogames. Unlike other reports, the study also considered live attendances of music and cinema visits in the key regions of Germany, UK, Spain, France, Poland and Sweden.

On average, 51% of adults and 72% of minors in the EU were found to have illegally downloaded or streamed any form of creative content, with Poland and Spain coming out as the worst offenders. However, here’s the kicker.

“In general, the results do not show robust statistical evidence of displacement of sales by online copyright infringements,” the study notes.

“That does not necessarily mean that piracy has no effect but only that the statistical analysis does not prove with sufficient reliability that there is an effect.”

For a study commissioned by the EU with huge sums of public money, this is a potentially damaging conclusion, not least for the countless industry bodies that lobby day in, day out, for tougher copyright law based on the “fact” that piracy is damaging to sales.

That being said, the study did find that certain sectors can be affected by piracy, notably recent top movies.

“The results show a displacement rate of 40 per cent which means that for every ten recent top films watched illegally, four fewer films are consumed legally,” the study notes.

“People do not watch many recent top films a second time but if it happens, displacement is lower: two legal consumptions are displaced by every ten illegal second views. This suggests that the displacement rate for older films is lower than the 40 per cent for recent top films. All in all, the estimated loss for recent top films is 5 per cent of current sales volumes.”

But while there is some negative effect on the movie industry, others can benefit. The study found that piracy had a slightly positive effect on the videogames industry, suggesting that those who play pirate games eventually become buyers of official content.

On top of displacement rates, the study also looked at the public’s willingness to pay for content, to assess whether price influences pirate consumption. Interestingly, the industry that had the most displaced sales – the movie industry – had the greatest number of people unhappy with its pricing model.

“Overall, the analysis indicates that for films and TV-series current prices are higher than 80 per cent of the illegal downloaders and streamers are willing to pay,” the study notes.

For other industries, where sales were not found to have been displaced or were positively affected by piracy, consumer satisfaction with pricing was greatest.

“For books, music and games, prices are at a level broadly corresponding to the willingness to pay of illegal downloaders and streamers. This suggests that a decrease in the price level would not change piracy rates for books, music and games but that prices can have an effect on displacement rates for films and TV-series,” the study concludes.

So, it appears that products that are priced fairly do not suffer significant displacement from piracy. Those that are priced too high, on the other hand, can expect to lose some sales.

Now that it’s been released, the findings of the study should help to paint a more comprehensive picture of the infringement climate in the EU, while laying to rest some of the wild claims of the copyright lobby. That being said, it shouldn’t have taken the persistence of Julia Reda to bring them to light.

“This study may have remained buried in a drawer for several more years to come if it weren’t for an access to documents request I filed under the European Union’s Freedom of Information law on July 27, 2017, after having become aware of the public tender for this study dating back to 2013,” Reda explains.

“I would like to invite the Commission to become a provider of more solid and timely evidence to the copyright debate. Such data that is valuable both financially and in terms of its applicability should be available to everyone when it is financed by the European Union – it should not be gathering dust on a shelf until someone actively requests it.”

The full study can be downloaded here (pdf)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Surviving Your First Year

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/startup-stages-surviving-your-first-year/

Surviving Your First Year

This post by Backblaze’s CEO and co-founder Gleb Budman is the fifth in a series about entrepreneurship. You can choose posts in the series from the list below:

  1. How Backblaze got Started: The Problem, The Solution, and the Stuff In-Between
  2. Building a Competitive Moat: Turning Challenges Into Advantages
  3. From Idea to Launch: Getting Your First Customers
  4. How to Get Your First 1,000 Customers
  5. Surviving Your First Year


In my previous posts, I talked about coming up with an idea, determining the solution, and getting your first customers. But you’re building a company, not a product. Let’s talk about what the first year should look like.

The primary goals for that first year are to: 1) set up the company; 2) build, launch, and learn; and 3) survive.

Setting Up the Company

The company you’re building is more than the product itself, and you’re not going to do it alone. You don’t want to spend too much time on this since getting customers is key, but if you don’t set up the basics, there are all sorts of issues down the line.


Find Your Co-Founders & Determine Roles

You may already have the idea, but who do you need to execute it? At Backblaze, we needed people to build the web experience, the client backup application, and the server/storage side. We also needed someone to handle the business/marketing aspects, and we felt that the design and user experience were critical. As a result, we started with five co-founders: three engineers, a designer, and me for the business and marketing.

Of course not every role needs to be filled by a co-founder. You can hire employees for positions as well. But think through the strategic skills you’ll need to launch and consider co-founders with those skill sets.

Too many people think they can just “work together” on everything. Don’t. Determine roles as quickly as possible so that it’s clear who is responsible for what work and which decisions. We were lucky in that we had worked together and thus knew what each person would do, but even so we assigned titles early on to clarify roles.

Takeaway:   Fill critical roles and explicitly split roles and responsibilities.

Get Your Legal Basics In Place

When we’re excited about building a product, legal basics are often the last thing we want to deal with. You don’t need to go overboard, but it’s critical to get certain things done.

  1. Determine ownership split. What is the percentage breakdown of the company that each of the founders will own? It can be a tough discussion, but it only becomes more difficult later when there is more value and people have put more time into it. At Backblaze we split the equity equally five ways. This is uncommon. The benefit of this is that all the founders feel valued and “in it together.” The benefit of the more common split where someone has a dominant share is that that person is typically empowered to be the ultimate decision-maker. Slicing Pie provides some guidance on how to think about splitting equity. Regardless of which way you want to go, don’t put it off.
  2. Incorporate. Hard to be a company if you’re not. There are various formats, but if you plan to raise angel/venture funding, a Delaware-based C-corp is standard.
  3. Deal With Stock. At a minimum, issue stock to the founders, have each one buy their shares, and file an 83(b). Buying your shares at this stage might cost only $100. Filing the 83(b) election marks the date at which you purchased your shares, and shows that you bought them for what they were worth. This one piece of paper can make the difference between paying long-term capital gains rates (~20%) or income tax rates (~40%).
  4. Assign Intellectual Property. Ask everyone to sign a Proprietary Information and Inventions Assignment (“PIIA”). This document says that what they do at the company is owned by the company. Early on we had a friend who came by and brainstormed ideas. We thought of it as interesting banter. He later said he owned part of our storage design. While we worked it out together, a PIIA makes ownership clear.

The ownership split can be worked out by the founders directly. For the other items, I would involve lawyers. Some law firms will set up the basics and defer payment until you raise money or the business can pay for services out of operations. Gunderson Dettmer did that for us (ask for Bennett Yee). Cooley will do this on a case-by-case basis as well.

Takeaway:  Don’t let the excitement of building a company distract you from filing the basic legal documents required to protect and grow your company.

Get Health Insurance

This item may seem out of place, but not having health insurance can easily bankrupt you personally, and that certainly won’t bode well for your company. While you can buy individual health insurance, it will often be less expensive to buy it as a company. Also, it will make recruiting employees more difficult if you do not offer healthcare. When we contacted brokers they asked us to send the W-2 of each employee that wanted coverage, but the founders weren’t taking a salary at first. To work around this, make the founders ‘officers’ of the company, and the healthcare brokers can then insure them. (Of course, you need to be ok with your co-founders being officers, but hopefully, that is logical anyway.)

Takeaway:  Don’t take your co-founders’ physical and financial health for granted. Health insurance can serve as both individual protection and a recruiting tool for future employees.

Building, Launching & Learning

Getting the company set up gives you the foundation, but ultimately a company with no product and no customers isn’t very interesting.

Build

Ideally, you have one person on the team focusing on all of the items above and everyone else can be heads-down building product. There is a lot to say about building product, but for this post, I’ll just say that your goal is to get something out the door that is good enough to start collecting feedback. It doesn’t have to have every feature you dream of and doesn’t have to support 1 billion users on day one.

Launch

If you’re building a car or rocket, that may take some time. But with the availability of open-source software and cloud services, most startups should launch inside of a year.

Launching forces a scoping of the feature set to what’s critical, rallies the company around a goal, starts building awareness of your company and solution, and pushes forward the learning process. Backblaze launched in public beta on June 2, 2008, eight months after the founders all started working on it full-time.

Takeaway:  Focus on the most important features and launch.

Learn & Iterate

As much as we think we know about the customers and their needs, the launch process and beyond opens up all sorts of insights. This early period is critical to collect feedback and iterate, especially while both the product and company are still quite malleable. We initially planned on building peer-to-peer and local backup immediately on the heels of our online offering, but after launching found minimal demand for those features. On the other hand, there was tremendous demand from companies and resellers.

Takeaway:  Use the critical post-launch period to collect feedback and iterate.

Surviving

“Live to fight another day.” If the company doesn’t survive, it’s hard to change the world. Let’s talk about some of the survival components.

Consider What You As A Founding Team Want & How You Work

Are you doing this because you hope to get rich? See yourself on the cover of Fortune? Make your own decisions? Work from home all the time? Founder fighting is the number one reason companies fail; the founders need to be on the same page as much as possible.

At Backblaze we agreed very early on that we wanted three things:

  1. Build products we were proud of
  2. Have fun
  3. Make money

This has driven various decisions over the years and has evolved into being part of the culture. For example, while Backblaze is absolutely a company with a profit motive, we do not compromise the product to make more money. Other directions are not bad; they’re just different.

Do you want a lifestyle business? Or want to build a billion dollar business? Want to run it forever or build it for a couple years and do something else?

Pretend you’re getting married to each other. Do some introspection and talk about your vision of the future a lot. Do you expect everyone to work 20 or 100 hours every week? In the office or remote? How do you like to work? What pet peeves do you have?

When getting married each person brings the “life they’ve known,” often influenced by the life their parents lived. Together they need to decide which aspects of their previous lives they want to keep, toss, or change. As founders coming together, you have the same opportunity for your new company.

Takeaway:  In order for a company to survive, the founders must agree on what they want the company to be. Have the discussions early.

Determine How You Will Fund Your Business

Raising venture capital is often seen as the only path, and considered the most important thing to start doing on day one. However, there are a variety of options for funding your business, including using money from savings, part-time work, friends & family money, loans, angels, and customers. Consider the right option for you, your founding team, and your business.

Conserve Cash

Whichever option you choose for funding your business, chances are high that you will not be flush with cash on day one. In certain situations, you actually don’t want to conserve cash because you’ve raised $100m and now you want to run as fast as you can to capture a market — cash is plentiful and time is not. However, with the exception of founder struggles, running out of cash is the most common way companies go under. There are many ways to conserve cash — limit hiring of employees and consultants, use lawyers and accountants sparingly, don’t spend on advertising, work from a home office, etc. The most important way is to simply ensure that you and your team are cash conscious, challenging decisions that commit you to spending cash.

Backblaze spent a total of $94,122 to get to public beta launch. That included building the backup application, our own server infrastructure, the website with account/billing/restore functionality, the marketing involved in getting to launch, and all the steps above in setting up the company, paying for healthcare, etc. The five founders took no salary during this time (which, of course, would have cost dramatically more), so most of this money went to computers, servers, hard drives, and other infrastructure.

Takeaway:  Minimize cash burn — it extends your runway and gives you options.

Slowly Flesh Out Your Team

We started with five co-founders, and thus a fairly fleshed-out team. A year in, we only added one person, a Mac architect. Three months later we shipped a beta of our Mac version, which has resulted in more than 50% of our revenue.

Minimizing hiring is key to cash conservation, and hiring ahead of getting market feedback is risky since you may realize that the talent you need will change. However, once you start getting feedback, think about the key people that you need to move your company forward. But be rigorous in determining whether they’re critical. We didn’t hire our first customer support person until all five founders were spending 20% of their time on it.

Takeaway:  Don’t hire in anticipation of market growth; hire to fuel the growth.

Keep Your Spirits Up

Startups are roller coasters of emotion. There have been some serious articles about founders suffering from depression and worse. The idea phase is exhilarating, then there is the slog of building. The launch is a blast, but the week after there are crickets.

On June 2, 2008, we launched in public beta with great press and hordes of customers. But a few months later we were signing up only about 10 new customers per month. That’s $50 new monthly recurring revenue (MRR) after a year of work and no salary.

On August 25, 2008, we brought on our Mac architect. Two months later, on October 26, 2008, Apple launched Time Machine — completely free and built-in backup for all Macs.

There were plenty of times when our prospects looked bleak. In the rearview mirror it’s easy to say, “well sure, but now you have lots of customers,” or “yes, but Time Machine doesn’t do cloud backup.” But at the time neither of these was a given.

Takeaway:  Getting up each day and believing that as a team you’ll figure it out will let you get to the point where you can look in the rearview mirror and say, “It looked bleak back then.”

Succeeding in Your First Year

I titled the post “Surviving Your First Year,” but if you manage to, 1) set up the company; 2) build, launch, and learn; and 3) survive, you will have done more than survive: you’ll have truly succeeded in your first year.

The post Surviving Your First Year appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

[$] Notes from the LPC scheduler microconference

Post Syndicated from corbet original https://lwn.net/Articles/734039/rss

The scheduler workloads microconference at the 2017 Linux Plumbers Conference covered several aspects of the kernel’s CPU scheduler. While workloads were on the agenda, so were a rework of the realtime scheduler’s push/pull mechanism, a distinctly different approach to multi-core scheduling, and the use of tracing for workload simulation and analysis. As the following summary shows, CPU scheduling has not yet reached a point where all of the important questions have been answered.

New – Per-Second Billing for EC2 Instances and EBS Volumes

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-per-second-billing-for-ec2-instances-and-ebs-volumes/

Back in the old days, you needed to buy or lease a server if you needed access to compute power. When we launched EC2 back in 2006, the ability to use an instance for an hour, and to pay only for that hour, was big news. The pay-as-you-go model inspired our customers to think about new ways to develop, test, and run applications of all types.

Today, services like AWS Lambda prove that we can do a lot of useful work in a short time. Many of our customers are dreaming up applications for EC2 that can make good use of a large number of instances for shorter amounts of time, sometimes just a few minutes.

Per-Second Billing for EC2 and EBS
Effective October 2nd, usage of Linux instances that are launched in On-Demand, Reserved, and Spot form will be billed in one-second increments. Similarly, provisioned storage for EBS volumes will be billed in one-second increments.

Per-second billing also applies to Amazon EMR and AWS Batch:

Amazon EMR – Our customers add capacity to their EMR clusters in order to get their results more quickly. With per-second billing for the EC2 instances in the clusters, adding nodes is more cost-effective than ever.

AWS Batch – Many of the batch jobs that our customers run complete in less than an hour. AWS Batch already launches and terminates Spot Instances; with per-second billing batch processing will become even more economical.

Some of our more sophisticated customers have built systems to get the most value from EC2 by strategically choosing the most advantageous target instances when managing their gaming, ad tech, or 3D rendering fleets. Per-second billing obviates the need for this extra layer of instance management, and brings the cost savings to all customers and all workloads.

While this will result in a price reduction for many workloads (and you know we love price reductions), I don’t think that’s the most important aspect of this change. I believe that this change will inspire you to innovate and to think about your compute-bound problems in new ways. How can you use it to improve your support for continuous integration? Can it change the way that you provision transient environments for your dev and test workloads? What about your analytics, batch processing, and 3D rendering?

One of the many advantages of cloud computing is the elastic nature of provisioning or deprovisioning resources as you need them. By billing usage down to the second, we enable customers to level up their elasticity, save money, and position themselves to take advantage of continuing advances in computing.

Things to Know
This change takes effect in all AWS Regions on October 2, for all Linux instances that are newly launched or already running. Per-second billing is not currently applicable to instances running Microsoft Windows or Linux distributions that have a separate hourly charge. There is a one-minute minimum charge per instance.

List prices and Spot Market prices are still listed on a per-hour basis, but bills are calculated down to the second, as is Reserved Instance usage (you can launch, use, and terminate multiple instances within an hour and get the Reserved Instance benefit for all of them). Bills also show usage in decimal form rather than whole hours.
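To make the arithmetic concrete, here is a minimal Python sketch of how a single instance run could be costed under per-second billing with the one-minute minimum; the hourly rate and runtimes are hypothetical examples, not published prices, and this is not an official AWS calculator. Under the old hourly model, either run below would have been billed as a full hour.

    # Minimal sketch (not an official AWS tool) of per-second billing with a
    # one-minute minimum charge per instance. The hourly rate is hypothetical.
    def estimate_charge(runtime_seconds, hourly_rate_usd):
        billable_seconds = max(runtime_seconds, 60)  # one-minute minimum
        return hourly_rate_usd * billable_seconds / 3600

    # A 4 minute 20 second batch task on a hypothetical $0.10/hour Linux instance:
    print(round(estimate_charge(260, 0.10), 6))  # 0.007222 instead of a full hourly 0.10
    # A 30 second run is rounded up to the 60-second minimum:
    print(round(estimate_charge(30, 0.10), 6))   # 0.001667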

The Dedicated Per Region Fee, EBS Snapshots, and products in AWS Marketplace are still billed on an hourly basis.

Jeff;

 

Backblaze’s Upgrade Guide for macOS High Sierra

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/macos-high-sierra-upgrade-guide/


Apple introduced macOS 10.13 “High Sierra” at its 2017 Worldwide Developers Conference in June. On Tuesday, we learned we don’t have long to wait — the new OS will be available on September 25. It’s a free upgrade, and millions of Mac users around the world will rush to install it.

We understand. A new OS from Apple is exciting. But please, before you upgrade, we want to remind you to back up your Mac. You want your data to be safe from unexpected problems that could happen during the upgrade. We do, too. To make that easier, Backblaze offers this macOS High Sierra upgrade guide.

Why Upgrade to macOS 10.13 High Sierra?

High Sierra, as the name suggests, is a follow-on to the previous macOS, Sierra. Its major focus is improving the base OS, with significant changes to the file system, video, graphics, and virtual/augmented reality that will support new capabilities in the future.

But don’t despair; there also are outward improvements that will be readily apparent to everyone when they boot the OS for the first time. We’ll cover both the inner and outer improvements coming in this new OS.

Under the Hood of High Sierra

APFS (Apple File System)

Apple has been rolling out its new file system, APFS, for a while now. It’s already in iOS; now High Sierra brings APFS to the Mac. Apple touts APFS as a new file system optimized for Flash/SSD storage and featuring strong encryption, better and faster file handling, safer copying and moving of files, and other improved file system fundamentals.

We went into detail about the enhancements and improvements that APFS has over the previous file system, HFS+, in an earlier post. Many of these improvements, including enhanced performance, security and reliability of data, will provide immediate benefits to users, while others provide a foundation for future storage innovations and will require work by Apple and third parties to support in their products and services.

Most of us won’t notice these improvements, but we’ll benefit from better, faster, and safer file handling, which I think all of us can appreciate.

Video

High Sierra includes High Efficiency Video Encoding (HEVC, aka H.265), which preserves better detail and color while also introducing improved compression over H.264 (MPEG-4 AVC). Even existing Macs will benefit from the HEVC software encoding in High Sierra, but newer Mac models include HEVC hardware acceleration for even better performance.


Metal 2

macOS High Sierra introduces Metal 2, the next generation of Apple’s Metal graphics API, which launched three years ago. Apple claims that Metal 2 provides up to 10x better performance in key areas. It provides near-direct access to the graphics processor (GPU), enabling the GPU to take control over key aspects of the rendering pipeline. Metal 2 will enhance the Mac’s capability for machine learning, and is the technology driving the new virtual reality platform on Macs.


Virtual Reality

We’re about to see an explosion of virtual reality experiences on both the Mac and iOS thanks to High Sierra and iOS 11. Content creators will be able to use apps like Final Cut Pro X, Epic Unreal 4 Editor, and Unity Editor to create fully immersive worlds that will revolutionize entertainment and education and have many professional uses, as well.

Users will want the new iMac with Retina 5K display or the upcoming iMac Pro to enjoy them, or any supported Mac paired with the latest external GPU and VR headset.


Outward Improvements

Siri


Expect a more natural voice from Siri in High Sierra. She or he will be less robotic, with greater expression and use of intonation in speech. Siri will also learn more about your preferences in things like music, helping you choose music that fits your taste and putting together playlists expressly for you. Expect Siri to be able to answer your questions about music-related trivia, as well.

Siri:  what does “scaramouche” refer to in the song Bohemian Rhapsody?

Photos


Photos has been redesigned with a new layout and new tools. A redesigned Edit view includes new tools for fine-tuning color and contrast and making adjustments within a defined color range. Some fun elements for creating special effects and memories also have been added. Photos now works with external apps such as Photoshop and Pixelmator. Compatibility with third-party extensions adds printing and publishing services to help get your photos out into the world.

Safari


Apple claims that Safari in High Sierra is the world’s fastest desktop browser, outperforming Chrome and other browsers in a range of benchmark tests. They’ve also added autoplay blocking for those pesky videos that play without your permission and tracking blocking to help protect your privacy.

Can My Mac Run macOS High Sierra 10.13?

All Macs introduced in mid-2010 or later are compatible, as are MacBook and iMac computers introduced in late 2009. You’ll need OS X 10.7.5 “Lion” or later installed, along with at least 2 GB of RAM and 8.8 GB of available storage, to manage the upgrade. Some features of High Sierra require an internet connection or an Apple ID. You can check to see if your Mac is compatible with High Sierra on Apple’s website.
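If you want a quick, scriptable check of the OS-version requirement, here is a rough Python sketch; it only checks the installed version against the OS X 10.7.5 minimum and says nothing about your Mac’s model year, which you still need to verify against Apple’s list.

    import platform

    # platform.mac_ver() returns the macOS version string, e.g. "10.12.6".
    version_str = platform.mac_ver()[0] or "0"
    parts = tuple(int(x) for x in version_str.split("."))
    parts += (0,) * (3 - len(parts))          # pad "10.12" out to (10, 12, 0)
    print("Installed macOS version:", version_str)
    print("Meets the OS X 10.7.5 minimum:", parts >= (10, 7, 5))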

Conquering High Sierra — What Do I Do Before I Upgrade?

Back Up That Mac!

It’s always smart to back up before you upgrade the operating system or make any other crucial changes to your computer. Upgrading your OS is a major change to your computer, and if anything goes wrong…well, you don’t want that to happen.


We recommend the 3-2-1 Backup Strategy to make sure your data is safe. What does that mean? Have three copies of your data. There’s the “live” version on your Mac, a local backup (Time Machine, another copy on a local drive or other computer), and an offsite backup like Backblaze. No matter what happens to your computer, you’ll have a way to restore the files if anything goes wrong. Need help understanding how to back up your Mac? We have you covered with a handy Mac backup guide.

Check for App and Driver Updates

This is when it helps to do your homework. Check with app developers or device manufacturers to find out whether their apps and devices have updates to work with High Sierra. Visit their websites or use the Check for Updates feature built into most apps (often found in the File or Help menus).

If you’ve downloaded apps through the Mac App Store, make sure to open them and click on the Updates button to download the latest updates.

Updating can be hit or miss when you’ve installed apps that didn’t come from the Mac App Store. To make it easier, visit the MacUpdate website. MacUpdate tracks changes to thousands of Mac apps.


Will Backblaze work with macOS High Sierra?

Yes. We’ve taken care to ensure that Backblaze works with High Sierra. We’ve already enhanced our Macintosh client to report the space available on an APFS container, and we plan to add support for additional APFS capabilities in the future.
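As an illustration of the kind of information involved (a rough sketch, not Backblaze’s actual client code), Python’s standard library can report the space available on the boot volume, whether it is APFS or HFS+:

    import shutil

    # Total, used, and free bytes for the boot volume. On High Sierra this is
    # typically an APFS container; the same call works on HFS+ volumes too.
    usage = shutil.disk_usage("/")
    gb = 1024 ** 3
    print(f"Total: {usage.total / gb:.1f} GB")
    print(f"Used:  {usage.used / gb:.1f} GB")
    print(f"Free:  {usage.free / gb:.1f} GB")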

Of course, we’ll watch Apple’s release carefully for any last minute surprises. We’ll officially offer support for High Sierra once we’ve had a chance to thoroughly test the release version.


Set Aside Time for the Upgrade

Depending on the speed of your Internet connection and your computer, upgrading to High Sierra will take some time. You’ll be able to use your Mac straightaway after answering a few questions at the end of the upgrade process.

If you’re going to install High Sierra on multiple Macs, a time-and-bandwidth-saving tip came from a Backblaze customer who suggested copying the installer from your Mac’s Applications folder to a USB Flash drive (or an external drive) before you run it. The installer routinely deletes itself once the upgrade process is completed, but if you grab it before that happens you can use it on other computers.
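If you’d like to script that tip, here is a minimal Python sketch; the installer name and the destination volume are assumptions you should verify on your own system.

    import shutil
    from pathlib import Path

    # Assumed paths -- adjust for your system. "Install macOS High Sierra.app"
    # is the name the installer typically uses in /Applications.
    installer = Path("/Applications/Install macOS High Sierra.app")
    destination = Path("/Volumes/MyUSB") / installer.name

    if installer.exists():
        # The installer is an .app bundle (a directory), so copy the whole tree
        # before running the upgrade -- it deletes itself once the upgrade finishes.
        shutil.copytree(installer, destination)
        print("Copied installer to", destination)
    else:
        print("Installer not found -- download it from the Mac App Store first.")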

Where Do I Get High Sierra?

Apple says that High Sierra will be available on September 25. As with other Mac operating system releases, Apple offers macOS 10.13 High Sierra for download from the Mac App Store, which is included on every Mac. As long as your Mac is supported and running OS X 10.7.5 “Lion” (released in 2012) or later, you can download and run the installer. It’s free. Thank you, Apple.

Better to be Safe than Sorry

Back up your Mac before doing anything to it, and make Backblaze part of your 3-2-1 backup strategy. That way your data is secure. Even if you have to roll back after an upgrade, or if you run into other problems, your data will be safe and sound in your backup.

Tell us How it Went

Are you getting ready to install High Sierra? Still have questions? Let us know in the comments. Tell us how your update went and what you like about the new release of macOS.

And While You’re Waiting for High Sierra…

While you’re waiting for Apple to release High Sierra on September 25, you might want to check out these other posts about using your Mac and Backblaze.

The post Backblaze’s Upgrade Guide for macOS High Sierra appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

New UK IP Crime Report Reveals Continued Focus on ‘Pirate’ Kodi Boxes

Post Syndicated from Andy original https://torrentfreak.com/new-uk-ip-crime-report-reveals-continued-focus-on-pirate-kodi-boxes-170908/

The UK’s Intellectual Property Office has published its annual IP Crime Report, spanning the period 2016 to 2017.

It covers key events in the copyright and trademark arenas and is presented with input from the police and trading standards, plus private entities such as the BPI, Premier League, and Federation Against Copyright Theft, to name a few.

The report begins with an interesting statistic. Despite claims that many millions of UK citizens regularly engage in some kind of infringement, figures from the Ministry of Justice indicate that just 47 people were found guilty of offenses under the Copyright, Designs and Patents Act during 2016. That’s down on the 69 found guilty in the previous year.

Despite this low conviction rate, 15% of all internet users aged 12+ are reported to have consumed at least one item of illegal content between March and May 2017. Figures supplied by the Industry Trust for IP indicate that 19% of adults watch content via various IPTV devices – often referred to as set-top, streaming, Android, or Kodi boxes.

“At its cutting edge IP crime is innovative. It exploits technological loopholes before they become apparent. IP crime involves sophisticated hackers, criminal financial experts, international gangs and service delivery networks. Keeping pace with criminal innovation places a burden on IP crime prevention resources,” the report notes.

The report covers a broad range of IP crime, from counterfeit sportswear to foodstuffs, but our focus is obviously on Internet-based infringement. Various contributors address the aspects of online activity that affect them, including music industry group BPI.

“The main online piracy threats to the UK recorded music industry at present are from BitTorrent networks, linking/aggregator sites, stream-ripping sites, unauthorized streaming sites and cyberlockers,” the BPI notes.

The BPI’s website blocking efforts have been closely reported, with 63 infringing sites blocked to date via various court orders. However, the BPI reports that more than 700 related URLs, IP addresses, and proxy sites/proxy aggregators have also been rendered inaccessible as part of the same action.

“Site blocking has proven to be a successful strategy as the longer the blocks are in place, the more effective they are. We have seen traffic to these sites reduce by an average of 70% or more,” the BPI reports.

While prosecutions against music pirates are a fairly rare event in the UK, the Crown Prosecution Service (CPS) Specialist Fraud Division highlights that their most significant prosecution of the past 12 months involved a prolific music uploader.

As first revealed here on TF, Wayne Evans was an uploader not only on KickassTorrents and The Pirate Bay, but also some of his own sites. Known online as OldSkoolScouse, Evans reportedly cost the UK’s Performing Rights Society more than £1m in a single year. He was sentenced in December 2016 to 12 months in prison.

While Evans has been free for some time already, the CPS places particular emphasis on the importance of the case, “since it provided sentencing guidance for the Copyright, Designs and Patents Act 1988, where before there was no definitive guideline.”

The CPS says the case was useful on a number of fronts. Despite illegal distribution of content being difficult to investigate and piracy losses proving tricky to quantify, the court found that deterrent sentences are appropriate for the kinds of offenses Evans was accused of.

The CPS notes that various factors affect the severity of such sentences, not least the length of time the unlawful activity has persisted and particularly if it has done so after the service of a cease and desist notice. Other factors include the profit made by defendants and/or the loss caused to copyright holders “so far as it can accurately be calculated.”

Importantly, however, the CPS says that beyond issues of personal mitigation and timely guilty pleas, a jail sentence is probably going to be the outcome for others engaging in this kind of activity in future. That’s something for torrent and streaming site operators and their content uploaders to consider.

“[U]nless the unlawful activity of this kind is very amateur, minor or short-lived, or in the absence of particularly compelling mitigation or other exceptional circumstances, an immediate custodial sentence is likely to be appropriate in cases of illegal distribution of copyright infringing articles,” the CPS concludes.

But while a music-related trial provided the highlight of the year for the CPS, the online infringement world is still dominated by the rise of streaming sites and the now omnipresent “fully-loaded Kodi Box” – set-top devices configured to receive copyright-infringing live TV and VOD.

In the IP Crime Report, the Intellectual Property Office references a former US Secretary of Defense to describe the emergence of the threat.

“The echoes of Donald Rumsfeld’s famous aphorism concerning ‘known knowns’ and ‘known unknowns’ reverberate across our landscape perhaps more than any other. The certainty we all share is that we must be ready to confront both ‘known unknowns’ and ‘unknown unknowns’,” the IPO writes.

“Not long ago illegal streaming through Kodi Boxes was an ‘unknown’. Now, this technology updates copyright infringement by empowering TV viewers with the technology they need to subvert copyright law at the flick of a remote control.”

While the set-top box threat has grown in recent times, the report highlights the important legal clarifications that emerged from the BREIN v Filmspeler case, which found itself before the European Court of Justice.

As widely reported, the ECJ determined that the selling of piracy-configured devices amounts to a communication to the public, something which renders their sale illegal. However, in a submission by PIPCU, the Police Intellectual Property Crime Unit, box sellers are said to cast a keen eye on the legal situation.

“Organised criminals, especially those in the UK who distribute set-top boxes, are aware of recent developments in the law and routinely exploit loopholes in it,” PIPCU reports.

“Given recent judgments on the sale of pre-programmed set-top boxes, it is now unlikely criminals would advertise the devices in a way which is clearly infringing by offering them pre-loaded or ‘fully loaded’ with apps and addons specifically designed to access subscription services for free.”

With sellers beginning to clean up their advertising, it seems likely that detection will become more difficult than when selling was considered a gray area. While that will present its own issues, PIPCU still sees problems on two fronts – a lack of clear legislation and a perception of support for ‘pirate’ devices among the public.

“There is no specific legislation currently in place for the prosecution of end users or sellers of set-top boxes. Indeed, the general public do not see the usage of these devices as potentially breaking the law,” the unit reports.

“PIPCU are currently having to try and ‘shoehorn’ existing legislation to fit the type of criminality being observed, such as conspiracy to defraud (common law) to tackle this problem. Cases are yet to be charged and results will be known by late 2017.”

Whether these prosecutions will be effective remains to be seen, but PIPCU’s comments suggest an air of caution set to a backdrop of box-sellers’ tendency to adapt to legal challenges.

“Due to the complexity of these cases it is difficult to substantiate charges under the Fraud Act (2006). PIPCU have convicted one person under the Serious Crime Act (2015) (encouraging or assisting s11 of the Fraud Act). However, this would not be applicable unless the suspect had made obvious attempts to encourage users to use the boxes to watch subscription only content,” PIPCU notes, adding:

“The selling community is close knit and adapts constantly to allow itself to operate in the gray area where current legislation is unclear and where they feel they can continue to sell ‘under the radar’.”

More generally, pirate sites as a whole are still seen as a threat. As reported last month, the current anti-piracy narrative is that pirate sites represent a danger to their users. As a result, efforts are underway to paint torrent and streaming sites as risky places to visit, with users allegedly exposed to malware and other malicious content. The scare strategy is supported by PIPCU.

“Unlike the purchase of counterfeit physical goods, consumers who buy unlicensed content online are not taking a risk. Faulty copyright doesn’t explode, burn or break. For this reason the message as to why the public should avoid copyright fraud needs to be re-focused.

“A more concerted attempt to push out a message relating to malware on pirate websites, the clear criminality and the links to organized crime of those behind the sites are crucial if public opinion is to be changed,” the unit advises.

But while the changing of attitudes is desirable for pro-copyright entities, PIPCU says that winning over the public may not prove to be an easy battle. It was given a small taste of backlash itself, after taking action against the operator of a pirate site.

“The scale of the problem regarding public opinion of online copyright crime is evidenced by our own experience. After PIPCU executed a warrant against the owner of a streaming website, a tweet about the event (read by 200,000 people) produced a reaction heavily weighted against PIPCU’s legitimate enforcement action,” PIPCU concludes.

In summary, it seems likely that more effort will be expended during the next 12 months to target the set-top box threat, but there doesn’t appear to be an abundance of confidence in existing legislation to tackle all but the most egregious offenders. That being said, a line has now been drawn in the sand – if the public is prepared to respect it.

The full IP Crime Report 2016-2017 is available here (pdf)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.