Tag Archives: communications

E-Mailing Private HTTPS Keys

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/03/e-mailing_priva.html

I don’t know what to make of this story:

The email was sent on Tuesday by the CEO of Trustico, a UK-based reseller of TLS certificates issued by the browser-trusted certificate authorities Comodo and, until recently, Symantec. It was sent to Jeremy Rowley, an executive vice president at DigiCert, a certificate authority that acquired Symantec’s certificate issuance business after Symantec was caught flouting binding industry rules, prompting Google to distrust Symantec certificates in its Chrome browser. In communications earlier this month, Trustico notified DigiCert that 50,000 Symantec-issued certificates Trustico had resold should be mass revoked because of security concerns.

When Rowley asked for proof the certificates were compromised, the Trustico CEO emailed the private keys of 23,000 certificates, according to an account posted to a Mozilla security policy forum. The report produced a collective gasp among many security practitioners who said it demonstrated a shockingly cavalier treatment of the digital certificates that form one of the most basic foundations of website security.

Generally speaking, private keys for TLS certificates should never be archived by resellers, and, even in the rare cases where such storage is permissible, they should be tightly safeguarded. A CEO being able to attach the keys for 23,000 certificates to an email raises troubling concerns that those types of best practices weren’t followed.

I am croggled by the multiple layers of insecurity here.

BoingBoing post.

Judge Issues Mixed Order in RIAA’s Piracy Case Against ISP Grande

Post Syndicated from Ernesto original https://torrentfreak.com/judge-issues-mixed-order-in-riaas-piracy-case-against-isp-grande-180306/

Regular Internet providers are being put under increasing pressure for not doing enough to curb copyright infringement.

Last year several major record labels, represented by the RIAA, filed a lawsuit in a Texas District Court, accusing ISP Grande Communications of turning a blind eye to its pirating subscribers.

According to the RIAA, the Internet provider knew that some of its subscribers were frequently distributing copyrighted material, and accused the company of failing to take any meaningful action in response.

Grande disagreed with this assertion and filed a motion to dismiss the case. The ISP argued that it doesn’t encourage any of its customers to download copyrighted works, and that it has no control over the content subscribers access.

The Internet provider admitted that it received millions of takedown notices through the piracy tracking company Rightscorp. However, it believes that these notices are flawed and not worthy of acting upon. Nor, the company said, did it keep pirating subscribers on board out of a profit motive, as the RIAA suggested.

A few days ago US Magistrate Judge Andrew Austin issued his “report and recommendation” on the motions to dismiss, which brings some good and bad news for both sides.

First of all, Judge Austin recommends granting the motion to dismiss the piracy claims against Grande’s management company Patriot Media Consulting, which is also listed as a defendant.

According to the order, the RIAA failed to show that Patriot employees were involved in the decisions or actions that led to the infringements, only that they may have been involved in formulating Grande’s infringement related policies.

“This is a far cry from showing that Patriot as an entity was an active participant in the alleged secondary infringement,” Judge Austin writes.

Moving to Grande Communications itself, Judge Austin recommends dropping the vicarious infringement claim, as Grande requested. To show vicarious infringement, the RIAA would have to prove that the ISP has a direct financial interest in the infringing activity. That is not the case here.

The record labels argued that the availability of copyrighted music lures customers, but the Judge found this allegation too vague, as it would apply to all ISPs.

“There are no allegations that Grande’s actions in failing to adequately police their infringing subscribers is a draw to subscribers to purchase its services, so that they can then use those services to infringe on UMG’s (and others’) copyrights,” Judge Austin argues.

“Instead UMG only alleges that the existence of music and the BitTorrent protocol is the draw. But that would impose liability on every ISP, as the music at issue is available on the Internet generally, as is the BitTorrent protocol, and is not something exclusively available through Grande’s services.”

While the above is good news for the Internet provider, the report and recommendation opts to keep the contributory infringement claim alive. Contributory copyright infringement occurs when a defendant intentionally induces or encourages direct infringement.

Grande argued that Rightscorp’s notices were not sufficient to show that copyrighted material was ever downloaded, but Judge Austin disagrees. The RIAA has made a “plausible claim” that the ISP’s subscribers are infringing the labels’ copyrights.

“It would be inappropriate to dismiss the case based on factual allegations Grande makes about the Rightscorp notices and system, without any evidence to back those up,” Judge Austin’s recommendation reads.

In addition, Grande also argued that it’s protected from a secondary copyright infringement claim under the “staple article of commerce” doctrine, as “it is beyond dispute” that ISPs have numerous non-infringing uses.

Referring to the legal case between BMG and Cox Communications, Judge Austin says that this isn’t as clear as Grande suggests.

“The Court acknowledges that this is not yet a well-defined area of the law, and that there are good arguments on both sides of this issue,” the recommendation reads.

“However, at this point in the case, the Court is persuaded that UMG has pled a plausible claim of secondary infringement based on Grande’s alleged failure to act when presented with evidence of ongoing, pervasive infringement by its subscribers.”

The recommendation, therefore, is to deny the motion to dismiss the contributory infringement claim against Grande. If the U.S. District Court Judge adopts this position, it would mean that the case is heading to trial based on this claim.

Judge Austin’s full report and recommendations filing is available here (pdf).


Intimate Partner Threat

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/03/intimate_partne.html

Princeton’s Karen Levy has a good article on computer security and the intimate partner threat:

When you learn that your privacy has been compromised, the common advice is to prevent additional access — delete your insecure account, open a new one, change your password. This advice is such standard protocol for personal security that it’s almost a no-brainer. But in abusive romantic relationships, disconnection can be extremely fraught. For one, it can put the victim at risk of physical harm: If abusers expect digital access and that access is suddenly closed off, it can lead them to become more violent or intrusive in other ways. It may seem cathartic to delete abusive material, like alarming text messages — but if you don’t preserve that kind of evidence, it can make prosecution more difficult. And closing some kinds of accounts, like social networks, to hide from a determined abuser can cut off social support that survivors desperately need. In some cases, maintaining a digital connection to the abuser may even be legally required (for instance, if the abuser and survivor share joint custody of children).

Threats from intimate partners also change the nature of what it means to be authenticated online. In most contexts, access credentials­ — like passwords and security questions — are intended to insulate your accounts against access from an adversary. But those mechanisms are often completely ineffective for security in intimate contexts: The abuser can compel disclosure of your password through threats of violence and has access to your devices because you’re in the same physical space. In many cases, the abuser might even own your phone — or might have access to your communications data because you share a family plan. Things like security questions are unlikely to be effective tools for protecting your security, because the abuser knows or can guess at intimate details about your life — where you were born, what your first job was, the name of your pet.

Best Practices for Running Apache Kafka on AWS

Post Syndicated from Prasad Alle original https://aws.amazon.com/blogs/big-data/best-practices-for-running-apache-kafka-on-aws/

This post was written in partnership with Intuit to share learnings, best practices, and recommendations for running an Apache Kafka cluster on AWS. Thanks to Vaishak Suresh and his colleagues at Intuit for their contribution and support.

Intuit, in their own words: Intuit, a leading enterprise customer for AWS, is a creator of business and financial management solutions. For more information on how Intuit partners with AWS, see our previous blog post, Real-time Stream Processing Using Apache Spark Streaming and Apache Kafka on AWS.

Apache Kafka is an open-source, distributed streaming platform that enables you to build real-time streaming applications.

The best practices described in this post are based on our experience in running and operating large-scale Kafka clusters on AWS for more than two years. Our intent for this post is to help AWS customers who are currently running Kafka on AWS, and also customers who are considering migrating on-premises Kafka deployments to AWS.

AWS offers Amazon Kinesis Data Streams, a Kafka alternative that is fully managed.

Running your Kafka deployment on Amazon EC2 provides a high performance, scalable solution for ingesting streaming data. AWS offers many different instance types and storage option combinations for Kafka deployments. However, given the number of possible deployment topologies, it’s not always trivial to select the most appropriate strategy suitable for your use case.

In this blog post, we cover the following aspects of running Kafka clusters on AWS:

  • Deployment considerations and patterns
  • Storage options
  • Instance types
  • Networking
  • Upgrades
  • Performance tuning
  • Monitoring
  • Security
  • Backup and restore

Note: While implementing Kafka clusters in a production environment, make sure also to consider factors like your number of messages, message size, monitoring, failure handling, and any operational issues.

Deployment considerations and patterns

In this section, we discuss various deployment options available for Kafka on AWS, along with pros and cons of each option. A successful deployment starts with thoughtful consideration of these options. Considering availability, consistency, and operational overhead of the deployment helps when choosing the right option.

Single AWS Region, Three Availability Zones, All Active

One typical deployment pattern (all active) is in a single AWS Region with three Availability Zones (AZs). One Kafka cluster is deployed in each AZ along with Apache ZooKeeper and Kafka producer and consumer instances as shown in the illustration following.

In this pattern, the Kafka cluster deployment looks like this:

  • Kafka producers and Kafka cluster are deployed on each AZ.
  • Data is distributed evenly across three Kafka clusters by using Elastic Load Balancer.
  • Kafka consumers aggregate data from all three Kafka clusters.

Kafka cluster failover occurs this way:

  • Mark down all Kafka producers
  • Stop consumers
  • Debug and restack Kafka
  • Restart consumers
  • Restart Kafka producers

Following are the pros and cons of this pattern.

Pros:
  • Highly available
  • Can sustain the failure of two AZs
  • No message loss during failover
  • Simple deployment

Cons:
  • Very high operational overhead:
    • All changes need to be deployed three times, one for each Kafka cluster
    • Maintaining and monitoring three Kafka clusters
    • Maintaining and monitoring three consumer clusters

A restart is required for patching and upgrading brokers in a Kafka cluster. In this approach, a rolling upgrade is done separately for each cluster.

Single Region, Three Availability Zones, Active-Standby

Another typical deployment pattern (active-standby) is in a single AWS Region with a single Kafka cluster and Kafka brokers and Zookeepers distributed across three AZs. Another similar Kafka cluster acts as a standby as shown in the illustration following. You can use Kafka mirroring with MirrorMaker to replicate messages between any two clusters.

In this pattern, the Kafka cluster deployment looks like this:

  • Kafka producers are deployed on all three AZs.
  • Only one Kafka cluster is deployed across three AZs (active).
  • ZooKeeper instances are deployed on each AZ.
  • Brokers are spread evenly across all three AZs.
  • Kafka consumers can be deployed across all three AZs.
  • Standby Kafka producers and a Multi-AZ Kafka cluster are part of the deployment.

Kafka cluster failover occurs this way:

  • Switch traffic to standby Kafka producers cluster and Kafka cluster.
  • Restart consumers to consume from standby Kafka cluster.

Following are the pros and cons of this pattern.

Pros:
  • Less operational overhead when compared to the first option
  • Only one Kafka cluster to manage and consume data from
  • Can handle single AZ failures without activating a standby Kafka cluster

Cons:
  • Added latency due to cross-AZ data transfer among Kafka brokers
  • For Kafka versions before 0.10, replicas for topic partitions have to be assigned so they’re distributed to the brokers on different AZs (rack-awareness)
  • The cluster can become unavailable in case of a network glitch, where ZooKeeper does not see Kafka brokers
  • Possibility of in-transit message loss during failover

Intuit recommends using a single Kafka cluster in one AWS Region, with brokers distributed across three AZs (single region, three AZs). This approach offers stronger fault tolerance, because a failed AZ won’t cause Kafka downtime.

Storage options

There are two options for file storage in Amazon EC2: instance store (ephemeral storage) and Amazon EBS volumes.

Ephemeral storage is local to the Amazon EC2 instance. It can provide high IOPS based on the instance type. On the other hand, Amazon EBS volumes offer higher resiliency and you can configure IOPS based on your storage needs. EBS volumes also offer some distinct advantages in terms of recovery time. Your choice of storage is closely related to the type of workload supported by your Kafka cluster.

Kafka provides built-in fault tolerance by replicating data partitions across a configurable number of instances. If a broker fails, you can recover it by fetching all the data from other brokers in the cluster that host the other replicas. Depending on the size of the data transfer, this can affect the recovery process and network traffic, which in turn affect the cluster’s performance.

The following table contrasts the benefits of using an instance store versus using EBS for storage.

Instance store:
  • Instance storage is recommended for large- and medium-sized Kafka clusters. For a large cluster, read/write traffic is distributed across a high number of brokers, so the loss of a broker has less of an impact. However, for smaller clusters, a quick recovery of the failed node is important, and recovering a failed broker takes longer and requires more network traffic.
  • Storage-optimized instances like h1, i3, and d2 are an ideal choice for distributed applications like Kafka.

EBS:
  • The primary advantage of using EBS in a Kafka deployment is that it significantly reduces data-transfer traffic when a broker fails or must be replaced. The replacement broker joins the cluster much faster.
  • Data stored on EBS is persisted in case of an instance failure or termination. The broker’s data stored on an EBS volume remains intact, and you can mount the EBS volume to a new EC2 instance. Most of the replicated data for the replacement broker is already available in the EBS volume and need not be copied over the network from another broker. Only the changes made after the original broker failure need to be transferred across the network. That makes this process much faster.



Intuit chose EBS because of their frequent instance restacking requirements and also other benefits provided by EBS.

Generally, Kafka deployments use a replication factor of three. EBS offers replication within the service, so Intuit chose a replication factor of two instead of three.
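As an illustration of that choice, here is a minimal sketch of creating a topic with a replication factor of two using the kafka-python admin client; the client library, broker address, and topic name are assumptions for the example, not details from Intuit’s deployment.

    # Minimal sketch (assumes the kafka-python package and a reachable broker).
    from kafka.admin import KafkaAdminClient, NewTopic

    admin = KafkaAdminClient(bootstrap_servers="broker1.example.internal:9092")

    # On EBS-backed brokers, a replication factor of 2 can be acceptable because
    # EBS itself replicates the volume's data within its Availability Zone.
    topic = NewTopic(name="clickstream-events", num_partitions=12, replication_factor=2)
    admin.create_topics(new_topics=[topic])
    admin.close()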

Instance types

The choice of instance types is generally driven by the type of storage required for your streaming applications on a Kafka cluster. If your application requires ephemeral storage, h1, i3, and d2 instances are your best option.

Intuit used r3.xlarge instances for their brokers and r3.large for ZooKeeper, with ST1 (throughput optimized HDD) EBS for their Kafka cluster.

Here are sample benchmark numbers from Intuit tests.

Configuration: 12 brokers on r3.xlarge instances with ST1 EBS volumes, 12 partitions
Aggregate broker throughput: 346.9 MB/s

If you need EBS storage, then AWS has a newer-generation r4 instance. The r4 instance is superior to R3 in many ways:

  • It has a faster processor (Broadwell).
  • EBS is optimized by default.
  • It features networking based on Elastic Network Adapter (ENA), with up to 10 Gbps on smaller sizes.
  • It costs 20 percent less than R3.

Note: It’s always best practice to check for the latest changes in instance types.

Networking
The network plays a very important role in a distributed system like Kafka. A fast and reliable network ensures that nodes can communicate with each other easily. The available network throughput controls the maximum amount of traffic that Kafka can handle. Network throughput, combined with disk storage, is often the governing factor for cluster sizing.

If you expect your cluster to receive high read/write traffic, select an instance type that offers 10-Gb/s performance.

In addition, choose an option that keeps interbroker network traffic on the private subnet, because this approach allows clients to connect to the brokers. Communication between brokers and clients uses the same network interface and port. For more details, see the documentation about IP addressing for EC2 instances.

If you are deploying in more than one AWS Region, you can connect the two VPCs in the two AWS Regions using cross-region VPC peering. However, be aware of the networking costs associated with cross-AZ deployments.

Upgrades
Kafka has a history of not being backward compatible, but its support of backward compatibility is getting better. During a Kafka upgrade, you should keep your producer and consumer clients on a version equal to or lower than the version you are upgrading from. After the upgrade is finished, you can start using a new protocol version and any new features it supports. There are three upgrade approaches available, discussed following.

Rolling or in-place upgrade

In a rolling or in-place upgrade scenario, upgrade one Kafka broker at a time. Take into consideration the recommendations for doing rolling restarts to avoid downtime for end users.

Downtime upgrade

If you can afford the downtime, you can take your entire cluster down, upgrade each Kafka broker, and then restart the cluster.

Blue/green upgrade

Intuit followed the blue/green deployment model for their workloads, as described following.

If you can afford to create a separate Kafka cluster and upgrade it, we highly recommend the blue/green upgrade scenario. In this scenario, we recommend that you keep your clusters up-to-date with the latest Kafka version. For additional details on Kafka version upgrades, see the Kafka upgrade documentation.

The following illustration shows a blue/green upgrade.

In this scenario, the upgrade plan works like this:

  • Create a new Kafka cluster on AWS.
  • Create a new Kafka producers stack to point to the new Kafka cluster.
  • Create topics on the new Kafka cluster.
  • Test the green deployment end to end (sanity check).
  • Using Amazon Route 53, change the new Kafka producers stack on AWS to point to the new green Kafka environment that you have created.

The roll-back plan works like this:

  • Switch Amazon Route 53 to the old Kafka producers stack on AWS to point to the old Kafka environment.

For additional details on blue/green deployment architecture using Kafka, see the re:Invent presentation Leveraging the Cloud with a Blue-Green Deployment Architecture.
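As a rough sketch of the Route 53 traffic switch used in the upgrade and rollback plans above, the following Python snippet updates a producer CNAME with boto3; the hosted zone ID, record name, and endpoints are placeholders, not values from the original post.

    # Hypothetical blue/green DNS switch with boto3; zone, record name, and endpoints are placeholders.
    import boto3

    route53 = boto3.client("route53")

    def point_producers_at(endpoint):
        """Repoint the producer CNAME at the given Kafka environment (green or blue)."""
        route53.change_resource_record_sets(
            HostedZoneId="Z0EXAMPLE",
            ChangeBatch={
                "Comment": "Blue/green Kafka cutover",
                "Changes": [{
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": "kafka-producers.example.internal.",
                        "Type": "CNAME",
                        "TTL": 60,
                        "ResourceRecords": [{"Value": endpoint}],
                    },
                }],
            },
        )

    # Cutover to the new (green) cluster; rollback is the same call with the old endpoint.
    point_producers_at("kafka-green.example.internal")
    # point_producers_at("kafka-blue.example.internal")  # rollback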

Performance tuning

You can tune Kafka performance in multiple dimensions. Following are some best practices for performance tuning.

 These are some general performance tuning techniques:

  • If throughput is less than network capacity, try the following:
    • Add more threads
    • Increase batch size
    • Add more producer instances
    • Add more partitions
  • To improve latency when acks = -1, increase your num.replica.fetchers value.
  • For cross-AZ data transfer, tune your buffer settings for sockets and for OS TCP.
  • Make sure that num.io.threads is greater than the number of disks dedicated for Kafka.
  • Adjust num.network.threads based on the number of producers plus the number of consumers plus the replication factor.
  • Your message size affects your network bandwidth. To get higher performance from a Kafka cluster, select an instance type that offers 10 Gb/s performance.
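To make some of the producer-side settings above concrete, here is a hedged sketch using the kafka-python client; the broker address and the batch, linger, and compression values are illustrative starting points to benchmark, not recommendations from the original tests.

    # Illustrative producer tuning with kafka-python; values are starting points to benchmark, not prescriptions.
    from kafka import KafkaProducer

    producer = KafkaProducer(
        bootstrap_servers="broker1.example.internal:9092",
        acks="all",                      # strongest durability; pair with num.replica.fetchers tuning on the brokers
        batch_size=64 * 1024,            # larger batches generally improve throughput
        linger_ms=10,                    # wait briefly so batches can fill
        compression_type="gzip",         # smaller messages reduce the network bandwidth used per record
        buffer_memory=64 * 1024 * 1024,  # room for in-flight batches across partitions
    )

    for i in range(1000):
        producer.send("clickstream-events", value=("event-%d" % i).encode("utf-8"))
    producer.flush()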

For Java and JVM tuning, try the following:

  • Minimize GC pauses by using the Oracle JDK, which uses the new G1 garbage-first collector.
  • Try to keep the Kafka heap size below 4 GB.

Monitoring
Knowing whether a Kafka cluster is working correctly in a production environment is critical. Sometimes, just knowing that the cluster is up is enough, but Kafka applications have many moving parts to monitor. In fact, it can easily become confusing to understand what’s important to watch and what you can set aside. Items to monitor range from simple metrics about the overall rate of traffic, to producers, consumers, brokers, controller, ZooKeeper, topics, partitions, messages, and so on.

For monitoring, Intuit used several tools, including New Relic, Wavefront, Amazon CloudWatch, and AWS CloudTrail. Our recommended monitoring approach follows.

For system metrics, we recommend that you monitor:

  • CPU load
  • Network metrics
  • File handle usage
  • Disk space
  • Disk I/O performance
  • Garbage collection
  • ZooKeeper

For producers, we recommend that you monitor:

  • Batch-size-avg
  • Compression-rate-avg
  • Waiting-threads
  • Buffer-available-bytes
  • Record-queue-time-max
  • Record-send-rate
  • Records-per-request-avg

For consumers, we recommend that you monitor:

  • Records-lag-max
  • Fetch-rate
  • Fetch-latency-avg
  • Fetch-size-avg
  • Bytes-consumed-rate
  • Records-consumed-rate
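One possible way to surface client metrics such as these is to read them from the producer and publish them as custom Amazon CloudWatch metrics with boto3, as in the minimal sketch below; the namespace, dimension, and metric-group names are assumptions for the example.

    # Sketch: publish selected kafka-python producer metrics to CloudWatch; namespace and dimensions are illustrative.
    import boto3
    from kafka import KafkaProducer

    producer = KafkaProducer(bootstrap_servers="broker1.example.internal:9092")
    cloudwatch = boto3.client("cloudwatch")

    # kafka-python exposes client metrics as nested dicts: {metric-group: {metric-name: value}};
    # the group name below may vary by client version.
    producer_metrics = producer.metrics().get("producer-metrics", {})

    cloudwatch.put_metric_data(
        Namespace="Custom/Kafka",
        MetricData=[
            {
                "MetricName": name,
                "Value": float(producer_metrics.get(name, 0.0)),
                "Dimensions": [{"Name": "Cluster", "Value": "kafka-prod"}],
            }
            for name in ("batch-size-avg", "record-send-rate", "records-per-request-avg")
        ],
    )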

Security
Like most distributed systems, Kafka provides the mechanisms to transfer data with relatively high security across the components involved. Depending on your setup, security might involve different services such as encryption, Kerberos, Transport Layer Security (TLS) certificates, and advanced access control list (ACL) setup in brokers and ZooKeeper. The following tells you more about the Intuit approach. For details on Kafka security not covered in this section, see the Kafka documentation.

Encryption at rest

For EBS-backed EC2 instances, you can enable encryption at rest by using Amazon EBS volumes with encryption enabled. Amazon EBS uses AWS Key Management Service (AWS KMS) for encryption. For more details, see Amazon EBS Encryption in the EBS documentation. For instance store–backed EC2 instances, you can enable encryption at rest by using Amazon EC2 instance store encryption.

Encryption in transit

Kafka uses TLS for client and internode communications.

Authentication
Authentication of connections to brokers, whether from clients (producers and consumers), from other brokers, or from tools, uses either Secure Sockets Layer (SSL) or Simple Authentication and Security Layer (SASL).

Kafka supports Kerberos authentication. If you already have a Kerberos server, you can add Kafka to your current configuration.
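The following is a sketch of what an encrypted, authenticated client connection might look like with the kafka-python client, assuming the brokers are configured for SASL over TLS (SASL_SSL); the port, CA file path, and credentials are placeholders.

    # Sketch of a TLS-encrypted, SASL-authenticated producer; paths, port, and credentials are placeholders.
    from kafka import KafkaProducer

    producer = KafkaProducer(
        bootstrap_servers="broker1.example.internal:9094",
        security_protocol="SASL_SSL",             # TLS for encryption in transit
        ssl_cafile="/etc/kafka/ssl/ca-cert.pem",  # CA that signed the broker certificates
        sasl_mechanism="PLAIN",                   # GSSAPI could be used instead for Kerberos
        sasl_plain_username="app-producer",
        sasl_plain_password="example-password",
    )
    producer.send("clickstream-events", b"hello")
    producer.flush()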

Authorization
In Kafka, authorization is pluggable and integration with external authorization services is supported.

Backup and restore

The type of storage used in your deployment dictates your backup and restore strategy.

The best way to back up a Kafka cluster based on instance storage is to set up a second cluster and replicate messages using MirrorMaker. Kafka’s mirroring feature makes it possible to maintain a replica of an existing Kafka cluster. Depending on your setup and requirements, your backup cluster might be in the same AWS Region as your main cluster or in a different one.

For EBS-based deployments, you can enable automatic snapshots of EBS volumes to back up volumes. You can easily create new EBS volumes from these snapshots to restore. We recommend storing backup files in Amazon S3.

For more information on how to back up in Kafka, see the Kafka documentation.
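For the EBS-based approach, a minimal boto3 sketch of snapshotting a broker’s data volumes might look like the following; the tag used to find the volumes is an assumption for the example.

    # Sketch: snapshot every EBS data volume tagged as Kafka broker storage; the tag is illustrative.
    import boto3

    ec2 = boto3.client("ec2")

    volumes = ec2.describe_volumes(
        Filters=[{"Name": "tag:Role", "Values": ["kafka-broker-data"]}]
    )["Volumes"]

    for volume in volumes:
        ec2.create_snapshot(
            VolumeId=volume["VolumeId"],
            Description="Scheduled Kafka broker data backup",
        )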

Summary
In this post, we discussed several patterns for running Kafka in the AWS Cloud. AWS also provides an alternative managed solution, Amazon Kinesis Data Streams: there are no servers to manage or scaling cliffs to worry about, you can scale the size of your streaming pipeline in seconds without downtime, data replication across Availability Zones is automatic, and you benefit from security out of the box. Kinesis Data Streams is tightly integrated with a wide variety of AWS services, such as Lambda, Amazon Redshift, and Elasticsearch, and it supports open-source frameworks like Storm, Spark, and Flink. If you need to bridge the two systems, you can refer to the kafka-kinesis connector.

If you have questions or suggestions, please comment below.

Additional Reading

If you found this post useful, be sure to check out Implement Serverless Log Analytics Using Amazon Kinesis Analytics and Real-time Clickstream Anomaly Detection with Amazon Kinesis Analytics.

About the Author

Prasad Alle is a Senior Big Data Consultant with AWS Professional Services. He spends his time leading and building scalable, reliable Big data, Machine learning, Artificial Intelligence and IoT solutions for AWS Enterprise and Strategic customers. His interests extend to various technologies such as Advanced Edge Computing, Machine learning at Edge. In his spare time, he enjoys spending time with his family.



The Challenges of Opening a Data Center — Part 2

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/factors-for-choosing-data-center/

Rows of storage pods in a data center

This is part two of a series on the factors that an organization needs to consider when opening a data center and the challenges that must be met in the process.

In Part 1 of this series, we looked at the different types of data centers, the importance of location in planning a data center, data center certification, and the single most expensive factor in running a data center, power.

In Part 2, we continue to look at factors that need to be considered both by those interested in a dedicated data center and those seeking to colocate in an existing center.

Power (continued from Part 1)

In part 1, we began our discussion of the power requirements of data centers.

As we discussed, redundancy and failover are chief requirements for data center power. A redundantly designed power supply system is also a necessity for maintenance, as it enables repairs to be performed on one network, for example, without having to turn off servers, databases, or electrical equipment.

Power Path

The common critical components of a data center’s power flow are:

  • Utility Supply
  • Generators
  • Transfer Switches
  • Distribution Panels
  • Uninterruptible Power Supplies (UPS)
  • PDUs

Utility Supply is the power that comes from one or more utility grids. While most of us consider the grid to be our primary power supply (hats off to those of you who manage to live off the grid), politics, economics, and distribution make utility supply power susceptible to outages, which is why data centers must have autonomous power available to maintain availability.

Generators are used to supply power when the utility supply is unavailable. They convert mechanical energy, usually from diesel or natural gas engines, into electrical energy.

Transfer Switches are used to transfer electric load from one source or electrical device to another, such as from one utility line to another, from a generator to a utility, or between generators. The transfer could be manually activated or automatic to ensure continuous electrical power.

Distribution Panels get the power where it needs to go, taking a power feed and dividing it into separate circuits to supply multiple loads.

A UPS, as we touched on earlier, ensures that continuous power is available even when the main power source isn’t. It often consists of batteries that can come online almost instantaneously when the current power ceases. The power from a UPS does not have to last a long time as it is considered an emergency measure until the main power source can be restored. Another function of the UPS is to filter and stabilize the power from the main power supply.

Data center UPSs

PDU stands for the Power Distribution Unit and is the device that distributes power to the individual pieces of equipment.

Networking
After power, the networking connections to the data center are of prime importance. Can the data center obtain and maintain high-speed networking connections to the building? With networking, as with all aspects of a data center, availability is a primary consideration. Data center designers think of all possible ways service can be interrupted or lost, even briefly. Details such as the vulnerabilities in the route the network connections make from the core network (the backhaul) to the center, and where network connections enter and exit a building, must be taken into consideration in network and data center design.

Routers and switches are used to transport traffic between the servers in the data center and the core network. Just as with power, network redundancy is a prime factor in maintaining availability of data center services. Two or more upstream service providers are required to ensure that availability.

How fast a customer can transfer data to a data center is affected by: 1) the speed of the connections the data center has with the outside world, 2) the quality of the connections between the customer and the data center, and 3) the distance of the route from the customer to the data center. The longer the route and the greater the number of packets that must be transferred, the more significant a factor latency becomes in the data transfer. Latency is the delay before a transfer of data begins following an instruction for its transfer. Generally latency, not speed, will be the most significant factor in transferring data to and from a data center.

Packets transferred using the TCP/IP protocol suite, which is the conceptual model and set of communications protocols used on the internet and similar computer networks, must be acknowledged when received (ACK’d), which requires a communications round trip for each packet. If the data is sent in larger packets, the number of ACKs required is reduced, so latency becomes a smaller factor in the overall network communications speed.

Latency generally will be less significant for data storage transfers than for cloud computing. Optimizations such as multi-threading, which is used in Backblaze’s Cloud Backup service, will generally improve overall transfer throughput if sufficient bandwidth is available.
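As a back-of-the-envelope illustration of why latency dominates, the short sketch below estimates single-stream throughput as roughly the amount of data in flight per round trip divided by the round-trip time; the window sizes and round-trip times are arbitrary examples.

    # Rough single-stream estimate: bits in flight per round trip, divided by the round-trip time.
    def max_throughput_mbps(window_bytes, rtt_ms):
        return (window_bytes * 8) / (rtt_ms / 1000.0) / 1_000_000

    # Example: a 64 KB window over a 10 ms round trip vs. an 80 ms round trip.
    print(max_throughput_mbps(64 * 1024, 10))   # ~52 Mb/s
    print(max_throughput_mbps(64 * 1024, 80))   # ~6.5 Mb/s: same link, 8x the latency

    # Larger packets/windows (more data per ACK round trip) or multiple parallel
    # streams (multi-threading) raise the effective throughput over the same link.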

Those interested in testing the overall speed and latency of their connection to Backblaze’s data centers can use the Check Your Bandwidth tool on our website.

Data center telecommunications equipment

Data center under floor cable runs

Cooling
Computer, networking, and power generation equipment generates heat, and there are a number of solutions employed to rid a data center of that heat. The location and climate of the data center is of great importance to the data center designer because the climatic conditions dictate to a large degree what cooling technologies should be deployed that in turn affect the power used and the cost of using that power. The power required and cost needed to manage a data center in a warm, humid climate will vary greatly from managing one in a cool, dry climate. Innovation is strong in this area and many new approaches to efficient and cost-effective cooling are used in the latest data centers.

Switch’s uninterruptible, multi-system, HVAC Data Center Cooling Units

There are three primary ways data center cooling can be achieved:

Room Cooling cools the entire operating area of the data center. This method can be suitable for small data centers, but becomes more difficult and inefficient as IT equipment density and center size increase.

Row Cooling concentrates on cooling a data center on a row by row basis. In its simplest form, hot aisle/cold aisle data center design involves lining up server racks in alternating rows with cold air intakes facing one way and hot air exhausts facing the other. The rows composed of rack fronts are called cold aisles. Typically, cold aisles face air conditioner output ducts. The rows the heated exhausts pour into are called hot aisles. Typically, hot aisles face air conditioner return ducts.

Rack Cooling tackles cooling on a rack by rack basis. Air-conditioning units are dedicated to specific racks. This approach allows for maximum densities to be deployed per rack. This works best in data centers with fully loaded racks, otherwise there would be too much cooling capacity, and the air-conditioning losses alone could exceed the total IT load.

Security
Data centers are high-security facilities, as they house business, government, and other data that contains personal, financial, and other sensitive information about businesses and individuals.

This list contains the physical-security considerations when opening or co-locating in a data center:

Layered Security Zones. Systems and processes are deployed to allow only authorized personnel in certain areas of the data center. Examples include keycard access, alarm systems, mantraps, secure doors, and staffed checkpoints.

Physical Barriers. Physical barriers, fencing, and reinforced walls are used to protect facilities. In a colocation facility, one customer’s racks and servers are often inaccessible to other customers colocating in the same data center.

Backblaze racks secured in the data center

Monitoring Systems. Advanced surveillance technology monitors and records activity on approaching driveways, building entrances, exits, loading areas, and equipment areas. These systems also can be used to monitor and detect fire and water emergencies, providing early detection and notification before significant damage results.

Top-tier providers evaluate their data center security and facilities on an ongoing basis. Technology becomes outdated quickly, so providers must stay on top of new approaches and technologies in order to protect valuable IT assets.

To pass into high security areas of a data center requires passing through a security checkpoint where credentials are verified.

The gauntlet of cameras and steel bars one must pass before entering this data center

Facilities and Services

Data center colocation providers often differentiate themselves by offering value-added services. In addition to the required space, power, cooling, connectivity and security capabilities, the best solutions provide several on-site amenities. These accommodations include offices and workstations, conference rooms, and access to phones, copy machines, and office equipment.

Additional features may consist of kitchen facilities, break rooms and relaxation lounges, storage facilities for client equipment, and secure loading docks and freight elevators.

Moving into A Data Center

Moving into a data center is a major job for any organization. We wrote a post last year, Desert To Data in 7 Days — Our New Phoenix Data Center, about what it was like to move into our new data center in Phoenix, Arizona.


Visiting a Data Center

Our Director of Product Marketing Andy Klein wrote a popular post last year on what it’s like to visit a data center called A Day in the Life of a Data Center.


Would you Like to Know More about The Challenges of Opening and Running a Data Center?

That’s it for part 2 of this series. If readers are interested, we could write a post about some of the new technologies and trends affecting data center design and use. Please let us know in the comments.

Here’s a tip on finding all the posts tagged with data center on our blog: just follow https://www.backblaze.com/blog/tag/data-center/.

Don’t miss future posts on data centers and other topics, including hard drive stats, cloud storage, and tips and tricks for backing up to the cloud. Use the Join button above to receive notification of future posts on our blog.

The post The Challenges of Opening a Data Center — Part 2 appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Best Practices for Running Apache Cassandra on Amazon EC2

Post Syndicated from Prasad Alle original https://aws.amazon.com/blogs/big-data/best-practices-for-running-apache-cassandra-on-amazon-ec2/

Apache Cassandra is a commonly used, high performance NoSQL database. AWS customers that currently maintain Cassandra on-premises may want to take advantage of the scalability, reliability, security, and economic benefits of running Cassandra on Amazon EC2.

Amazon EC2 and Amazon Elastic Block Store (Amazon EBS) provide secure, resizable compute capacity and storage in the AWS Cloud. When combined, you can deploy Cassandra, allowing you to scale capacity according to your requirements. Given the number of possible deployment topologies, it’s not always trivial to select the most appropriate strategy suitable for your use case.

In this post, we outline three Cassandra deployment options, as well as provide guidance about determining the best practices for your use case in the following areas:

  • Cassandra resource overview
  • Deployment considerations
  • Storage options
  • Networking
  • High availability and resiliency
  • Maintenance
  • Security

Before we jump into best practices for running Cassandra on AWS, we should mention that we have many customers who decided to use DynamoDB instead of managing their own Cassandra cluster. DynamoDB is fully managed, serverless, and provides multi-master cross-region replication, encryption at rest, and managed backup and restore. Integration with AWS Identity and Access Management (IAM) enables DynamoDB customers to implement fine-grained access control for their data security needs.

Several customers who have been using large Cassandra clusters for many years have moved to DynamoDB to eliminate the complications of administering Cassandra clusters and maintaining high availability and durability themselves. Gumgum.com is one customer who migrated to DynamoDB and observed significant savings. For more information, see Moving to Amazon DynamoDB from Hosted Cassandra: A Leap Towards 60% Cost Saving per Year.

AWS provides options, so you’re covered whether you want to run your own NoSQL Cassandra database, or move to a fully managed, serverless DynamoDB database.

Cassandra resource overview

Here’s a short introduction to standard Cassandra resources and how they are implemented with AWS infrastructure. If you’re already familiar with Cassandra or AWS deployments, this can serve as a refresher.

Cluster
  • Cassandra: A single Cassandra deployment. This typically consists of multiple physical locations, keyspaces, and physical servers.
  • AWS: A logical deployment construct in AWS that maps to an AWS CloudFormation StackSet, which consists of one or many CloudFormation stacks to deploy Cassandra.

Datacenter
  • Cassandra: A group of nodes configured as a single replication group.
  • AWS: A logical deployment construct in AWS. A datacenter is deployed with a single CloudFormation stack consisting of Amazon EC2 instances, networking, storage, and security resources.

Rack
  • Cassandra: A collection of servers. A datacenter consists of at least one rack. Cassandra tries to place the replicas on different racks.
  • AWS: A single Availability Zone.

Server/node
  • Cassandra: A physical or virtual machine running Cassandra software.
  • AWS: An EC2 instance.

Token
  • Cassandra: Conceptually, the data managed by a cluster is represented as a ring. The ring is then divided into ranges equal to the number of nodes. Each node is responsible for one or more ranges of the data. Each node is assigned a token, which is essentially a random number from the range. The token value determines the node’s position in the ring and its range of data.
  • AWS: Managed within Cassandra.

Virtual node (vnode)
  • Cassandra: Responsible for storing a range of data. Each vnode receives one token in the ring. A cluster (by default) consists of 256 tokens, which are uniformly distributed across all servers in the Cassandra datacenter.
  • AWS: Managed within Cassandra.

Replication factor
  • Cassandra: The total number of replicas across the cluster.
  • AWS: Managed within Cassandra.

Deployment considerations

One of the many benefits of deploying Cassandra on Amazon EC2 is that you can automate many deployment tasks. In addition, AWS includes services, such as CloudFormation, that allow you to describe and provision all your infrastructure resources in your cloud environment.

We recommend orchestrating each Cassandra ring with one CloudFormation template. If you are deploying in multiple AWS Regions, you can use a CloudFormation StackSet to manage those stacks. All the maintenance actions (scaling, upgrading, and backing up) should be scripted with an AWS SDK. These may live as standalone AWS Lambda functions that can be invoked on demand during maintenance.
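As a hedged example of such a maintenance function, the following Lambda-style handler uses boto3 to update a node-count parameter on a ring’s CloudFormation stack; the stack name and parameter key are assumptions, not values from the Quick Start.

    # Sketch of an on-demand scaling Lambda; the stack name and NodeCount parameter key are hypothetical.
    import boto3

    cloudformation = boto3.client("cloudformation")

    def handler(event, context):
        """Scale the ring by updating the stack's node count (for example, doubling it)."""
        cloudformation.update_stack(
            StackName="cassandra-ring-us-east-1",
            UsePreviousTemplate=True,
            Parameters=[
                {"ParameterKey": "NodeCount", "ParameterValue": str(event["desired_nodes"])},
            ],
        )
        return {"status": "stack update started"}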

You can get started by following the Cassandra Quick Start deployment guide. Keep in mind that this guide does not address the requirements to operate a production deployment and should be used only for learning more about Cassandra.

Deployment patterns

In this section, we discuss various deployment options available for Cassandra in Amazon EC2. A successful deployment starts with thoughtful consideration of these options. Consider the amount of data, network environment, throughput, and availability.

  • Single AWS Region, 3 Availability Zones
  • Active-active, multi-Region
  • Active-standby, multi-Region

Single region, 3 Availability Zones

In this pattern, you deploy the Cassandra cluster in one AWS Region and three Availability Zones. There is only one ring in the cluster. By using EC2 instances in three zones, you ensure that the replicas are distributed uniformly in all zones.

To ensure the even distribution of data across all Availability Zones, we recommend that you distribute the EC2 instances evenly in all three Availability Zones. The number of EC2 instances in the cluster is a multiple of three (the replication factor).

This pattern is suitable in situations where the application is deployed in one Region or where deployments in different Regions should be constrained to the same Region because of data privacy or other legal requirements.

Pros:

●     Highly available, can sustain failure of one Availability Zone.

●     Simple deployment.

Cons:

●     Does not protect in a situation when many of the resources in a Region are experiencing intermittent failure.


Active-active, multi-Region

In this pattern, you deploy two rings in two different Regions and link them. The VPCs in the two Regions are peered so that data can be replicated between two rings.

We recommend that the two rings in the two Regions be identical in nature, having the same number of nodes, instance types, and storage configuration.

This pattern is most suitable when the applications using the Cassandra cluster are deployed in more than one Region.

Pros:

●     No data loss during failover.

●     Highly available, can sustain when many of the resources in a Region are experiencing intermittent failures.

●     Read/write traffic can be localized to the closest Region for the user for lower latency and higher performance.

Cons:

●     High operational overhead.

●     The second Region effectively doubles the cost.


Active-standby, multi-region

In this pattern, you deploy two rings in two different Regions and link them. The VPCs in the two Regions are peered so that data can be replicated between two rings.

However, the second Region does not receive traffic from the applications. It only functions as a secondary location for disaster recovery reasons. If the primary Region is not available, the second Region receives traffic.

We recommend that the two rings in the two Regions be identical in nature, having the same number of nodes, instance types, and storage configuration.

This pattern is most suitable when the applications using the Cassandra cluster require low recovery point objective (RPO) and recovery time objective (RTO).

Pros:

●     No data loss during failover.

●     Highly available, can sustain failure or partitioning of one whole Region.

Cons:

●     High operational overhead.

●     High latency for writes for eventual consistency.

●     The second Region effectively doubles the cost.

Storage options

In on-premises deployments, Cassandra uses local disks to store data. There are two storage options for EC2 instances: instance store (ephemeral storage) and Amazon EBS.

Your choice of storage is closely related to the type of workload supported by the Cassandra cluster. Instance store works best for most general purpose Cassandra deployments. However, in certain read-heavy clusters, Amazon EBS is a better choice.

The choice of instance type is generally driven by the type of storage:

  • If ephemeral storage is required for your application, a storage-optimized (I3) instance is the best option.
  • If your workload requires Amazon EBS, it is best to go with compute-optimized (C5) instances.
  • Burstable instance types (T2) don’t offer good performance for Cassandra deployments.

Instance store

Ephemeral storage is local to the EC2 instance. It may provide high input/output operations per second (IOPS) based on the instance type. An SSD-based instance store can support up to 3.3M IOPS in I3 instances. This high performance makes it an ideal choice for transactional or write-intensive applications such as Cassandra.

In general, instance storage is recommended for transactional, large, and medium-size Cassandra clusters. For a large cluster, read/write traffic is distributed across a higher number of nodes, so the loss of one node has less of an impact. However, for smaller clusters, a quick recovery for the failed node is important.

As an example, for a cluster with 100 nodes, the loss of 1 node means roughly a 3.33% loss (with a replication factor of 3). Similarly, for a cluster with 10 nodes, the loss of 1 node means roughly a 33% loss of capacity (with a replication factor of 3).

IOPS (translates to higher query performance)
  • Ephemeral storage: Up to 3.3M on I3.
  • Amazon EBS: Up to 32K per volume, and up to 80K per instance in a RAID configuration (see the Amazon EBS section following).
  • Comments: This results in a higher query performance on each host. However, Cassandra implicitly scales well in terms of horizontal scale. In general, we recommend scaling horizontally first. Then, scale vertically to mitigate specific issues. Note: 3.3M IOPS is observed with 100% random read with a 4-KB block size on Amazon Linux.

AWS instance types
  • Ephemeral storage: I3.
  • Amazon EBS: Compute optimized, C5.
  • Comments: Being able to choose between different instance types is an advantage in terms of CPU, memory, etc., for horizontal and vertical scaling.

Backup/recovery
  • Ephemeral storage: Custom.
  • Amazon EBS: Basic building blocks are available from AWS.
  • Comments: Amazon EBS offers a distinct advantage here. It is a small engineering effort to establish a backup/restore strategy. a) In case of an instance failure, the EBS volumes from the failing instance are attached to a new instance. b) In case of an EBS volume failure, the data is restored by creating a new EBS volume from the last snapshot.

Amazon EBS

EBS volumes offer higher resiliency, and IOPS can be configured based on your storage needs. EBS volumes also offer some distinct advantages in terms of recovery time. EBS volumes can support up to 32K IOPS per volume and up to 80K IOPS per instance in RAID configuration. They have an annualized failure rate (AFR) of 0.1–0.2%, which makes EBS volumes 20 times more reliable than typical commodity disk drives.

The primary advantage of using Amazon EBS in a Cassandra deployment is that it reduces data-transfer traffic significantly when a node fails or must be replaced. The replacement node joins the cluster much faster. However, Amazon EBS could be more expensive, depending on your data storage needs.

Cassandra has built-in fault tolerance by replicating data to partitions across a configurable number of nodes. It can not only withstand node failures but if a node fails, it can also recover by copying data from other replicas into a new node. Depending on your application, this could mean copying tens of gigabytes of data. This adds additional delay to the recovery process, increases network traffic, and could possibly impact the performance of the Cassandra cluster during recovery.

Data stored on Amazon EBS is persisted in case of an instance failure or termination. The node’s data stored on an EBS volume remains intact and the EBS volume can be mounted to a new EC2 instance. Most of the replicated data for the replacement node is already available in the EBS volume and won’t need to be copied over the network from another node. Only the changes made after the original node failed need to be transferred across the network. That makes this process much faster.

EBS volumes are snapshotted periodically. So, if a volume fails, a new volume can be created from the last known good snapshot and be attached to a new instance. This is faster than creating a new volume and copying all the data to it.
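A minimal boto3 sketch of that restore path (create a volume from the last good snapshot and attach it to a replacement node) might look like this; the snapshot, instance, and device identifiers are placeholders.

    # Sketch: restore a Cassandra data volume from a snapshot and attach it to a replacement node.
    import boto3

    ec2 = boto3.client("ec2")

    new_volume = ec2.create_volume(
        SnapshotId="snap-0example",
        AvailabilityZone="us-east-1a",  # must match the replacement instance's Availability Zone
        VolumeType="gp2",
    )

    ec2.get_waiter("volume_available").wait(VolumeIds=[new_volume["VolumeId"]])

    ec2.attach_volume(
        VolumeId=new_volume["VolumeId"],
        InstanceId="i-0replacement",
        Device="/dev/xvdf",
    )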

Most Cassandra deployments use a replication factor of three. However, Amazon EBS does its own replication under the covers for fault tolerance. In practice, EBS volumes are about 20 times more reliable than typical disk drives. So, it is possible to go with a replication factor of two. This not only saves cost, but also enables deployments in a region that has two Availability Zones.

EBS volumes are recommended in case of read-heavy, small clusters (fewer nodes) that require storage of a large amount of data. Keep in mind that the Amazon EBS provisioned IOPS could get expensive. General purpose EBS volumes work best when sized for required performance.

Networking
If your cluster is expected to receive high read/write traffic, select an instance type that offers 10–Gb/s performance. As an example, i3.8xlarge and c5.9xlarge both offer 10–Gb/s networking performance. A smaller instance type in the same family leads to a relatively lower networking throughput.

Cassandra generates a universally unique identifier (UUID) for each node based on the IP address of the instance. This UUID is used for distributing vnodes on the ring.

In an AWS deployment, IP addresses are assigned automatically when an EC2 instance is created, so a replacement instance comes up with a new IP address. With the new IP address, the data distribution changes and the whole ring has to be rebalanced. This is not desirable.

To preserve the assigned IP address, use a secondary elastic network interface with a fixed IP address. Before swapping an EC2 instance with a new one, detach the secondary network interface from the old instance and attach it to the new one. This way, the UUID remains same and there is no change in the way that data is distributed in the cluster.
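The interface swap can be scripted with boto3 roughly as follows; the interface and instance IDs are placeholders.

    # Sketch: move a node's secondary ENI (and its fixed IP) from the old instance to its replacement.
    import time
    import boto3

    ec2 = boto3.client("ec2")
    eni_id = "eni-0example"

    # Detach the secondary interface from the old instance.
    eni = ec2.describe_network_interfaces(NetworkInterfaceIds=[eni_id])["NetworkInterfaces"][0]
    ec2.detach_network_interface(AttachmentId=eni["Attachment"]["AttachmentId"])

    # Wait until the interface is free, then attach it to the new instance as device index 1,
    # so the node keeps the same IP address and token assignment in the ring.
    while ec2.describe_network_interfaces(NetworkInterfaceIds=[eni_id])["NetworkInterfaces"][0]["Status"] != "available":
        time.sleep(5)

    ec2.attach_network_interface(NetworkInterfaceId=eni_id, InstanceId="i-0newnode", DeviceIndex=1)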

If you are deploying in more than one region, you can connect the two VPCs in two regions using cross-region VPC peering.

High availability and resiliency

Cassandra is designed to be fault-tolerant and highly available during multiple node failures. In the patterns described earlier in this post, you deploy Cassandra to three Availability Zones with a replication factor of three. Even though it limits the AWS Region choices to the Regions with three or more Availability Zones, it offers protection for the cases of one-zone failure and network partitioning within a single Region. The multi-Region deployments described earlier in this post protect when many of the resources in a Region are experiencing intermittent failure.

Resiliency is ensured through infrastructure automation. The deployment patterns all require a quick replacement of the failing nodes. In the case of a regionwide failure, when you deploy with the multi-Region option, traffic can be directed to the other active Region while the infrastructure is recovering in the failing Region. In the case of unforeseen data corruption, the standby cluster can be restored with point-in-time backups stored in Amazon S3.

Maintenance
In this section, we look at ways to ensure that your Cassandra cluster is healthy:

  • Scaling
  • Upgrades
  • Backup and restore

Scaling
Cassandra is horizontally scaled by adding more instances to the ring. We recommend doubling the number of nodes in a cluster to scale up in one scale operation. This leaves the data homogeneously distributed across Availability Zones. Similarly, when scaling down, it’s best to halve the number of instances to keep the data homogeneously distributed.

Cassandra is vertically scaled by increasing the compute power of each node. Larger instance types have proportionally bigger memory. Use deployment automation to swap instances for bigger instances without downtime or data loss.

Upgrades
All three types of upgrades (Cassandra, operating system patching, and instance type changes) follow the same rolling upgrade pattern.

In this process, you start with a new EC2 instance and install software and patches on it. Thereafter, remove one node from the ring. For more information, see Cassandra cluster Rolling upgrade. Then, you detach the secondary network interface from one of the EC2 instances in the ring and attach it to the new EC2 instance. Restart the Cassandra service and wait for it to sync. Repeat this process for all nodes in the cluster.

Backup and restore

Your backup and restore strategy is dependent on the type of storage used in the deployment. Cassandra supports snapshots and incremental backups. When using instance store, a file-based backup tool works best. Customers use rsync or other third-party products to copy data backups from the instance to long-term storage. For more information, see Backing up and restoring data in the DataStax documentation. This process has to be repeated for all instances in the cluster for a complete backup. These backup files are copied back to new instances to restore. We recommend using S3 to durably store backup files for long-term storage.

For Amazon EBS based deployments, you can enable automated snapshots of EBS volumes to back up volumes. New EBS volumes can be easily created from these snapshots for restoration.

Security
We recommend that you think about security in all aspects of deployment. The first step is to ensure that the data is encrypted at rest and in transit. The second step is to restrict access to unauthorized users. For more information about security, see the Cassandra documentation.

Encryption at rest

Encryption at rest can be achieved by using EBS volumes with encryption enabled. Amazon EBS uses AWS KMS for encryption. For more information, see Amazon EBS Encryption.

Instance store–based deployments require using an encrypted file system or an AWS partner solution. If you are using DataStax Enterprise, it supports transparent data encryption.

Encryption in transit

Cassandra uses Transport Layer Security (TLS) for client and internode communications.

Authentication
The security mechanism is pluggable, which means that you can easily swap out one authentication method for another. You can also provide your own method of authenticating to Cassandra, such as a Kerberos ticket, or if you want to store passwords in a different location, such as an LDAP directory.


Authorization

The authorizer that’s plugged in by default is org.apache.cassandra.auth.AllowAllAuthorizer. Cassandra also provides a role-based access control (RBAC) capability, which allows you to create roles and assign permissions to them.
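As a brief illustration of RBAC, assuming the authorizer has been switched to CassandraAuthorizer and using placeholder credentials, contact point, and keyspace name, a superuser can create a read-only role and grant it access to a single keyspace:

```python
from cassandra.auth import PlainTextAuthProvider
from cassandra.cluster import Cluster

# Placeholder superuser credentials, contact point, and keyspace name.
cluster = Cluster(
    contact_points=["10.0.1.10"],
    auth_provider=PlainTextAuthProvider(username="cassandra", password="cassandra"),
)
session = cluster.connect()

# Create a login-enabled role and grant it read-only access to one keyspace.
session.execute(
    "CREATE ROLE IF NOT EXISTS analytics_ro "
    "WITH LOGIN = true AND PASSWORD = 'example-password'"
)
session.execute("GRANT SELECT ON KEYSPACE metrics TO analytics_ro")
```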


Conclusion

In this post, we discussed several patterns for running Cassandra in the AWS Cloud and described how you can manage Cassandra databases running on Amazon EC2. AWS also provides managed offerings for a number of databases. To learn more, see Purpose-built databases for all your application needs.

If you have questions or suggestions, please comment below.

Additional Reading

If you found this post useful, be sure to check out Analyze Your Data on Amazon DynamoDB with Apache Spark and Analysis of Top-N DynamoDB Objects using Amazon Athena and Amazon QuickSight.

About the Authors

Prasad Alle is a Senior Big Data Consultant with AWS Professional Services. He spends his time leading and building scalable, reliable big data, machine learning, artificial intelligence, and IoT solutions for AWS Enterprise and Strategic customers. His interests extend to various technologies such as advanced edge computing and machine learning at the edge. In his spare time, he enjoys spending time with his family.




Provanshu Dey is a Senior IoT Consultant with AWS Professional Services. He works on highly scalable and reliable IoT, data and machine learning solutions with our customers. In his spare time, he enjoys spending time with his family and tinkering with electronics & gadgets.




Hollywood Commissioned Tough Jail Sentences for Online Piracy, ISP Says

Post Syndicated from Andy original https://torrentfreak.com/hollywood-commissioned-tough-jail-sentences-for-online-piracy-isp-says-180227/

According to local prosecutors who have handled many copyright infringement cases over the past decade, Sweden is nowhere near tough enough on those who commit online infringement.

With this in mind, the government sought advice on how such crimes should be punished, not only more severely, but also in proportion to the damages alleged to have been caused by defendants’ activities.

The corresponding report was returned to Minister for Justice Heléne Fritzon earlier this month by Council of Justice member Dag Mattsson. The paper proposed a new tier of offenses that should receive special punishment when there are convictions for large-scale copyright infringement and “serious” trademark infringement.

Partitioning the offenses into two broad categories, the report envisions those found guilty of copyright infringement or trademark infringement “of a normal grade” may be sentenced to fines or imprisonment up to a maximum of two years. For those at the other end of the scale, engaged in “cases of gross crimes”, the penalty sought is a minimum of six months in prison and not more than six years.

The proposals have been criticized by those who feel that copyright infringement shouldn’t be put on a par with more serious and even potentially violent crimes. On the other hand, tools to deter larger instances of infringement have been welcomed by entertainment industry groups, who have long sought more robust sentencing options in order to protect their interests.

In the middle, however, are Internet service providers such as Bahnhof, who are often dragged into the online piracy debate due to the allegedly infringing actions of some of their customers. In a statement on the new proposals, the company is clear on why Sweden is preparing to take such a tough stance against infringement.

“It’s not a daring guess that media companies are asking for Sweden to tighten the penalty for illegal file sharing and streaming,” says Bahnhof lawyer Wilhelm Dahlborn.

“It would have been better if the need for legislative change had taken place at EU level and co-ordinated with other similar intellectual property legislation.”

Bahnhof chief Jon Karlung, who is never afraid to speak his mind on such matters, goes a step further. He believes the initiative amounts to a gift to the United States.

“It’s nothing but a commission from the American film industry,” Karlung says.

“I do not mind them going for their goals in court and trying to protect their interests, but it does not mean that the state, the police, and ultimately taxpayers should put mass resources on it.”

Bahnhof notes that the proposals for the toughest extended jail sentences aren’t directly aimed at petty file-sharers. However, the introduction of a new offense of “gross crime” means that the limitation period shifts from the current five years to ten.

It also means that due to the expansion of prison terms beyond two years, secret monitoring of communications (known as HÖK) could come into play.

“If the police have access to HÖK, it can be used to get information about which individuals are file sharing,” warns Bahnhof lawyer Wilhelm Dahlborn.

“One can also imagine a scenario where media companies increasingly report crime as gross in order to get the police to do the investigative work they have previously done. Harder punishments to tackle file-sharing also appear very old-fashioned and equally ineffective.”

As noted in our earlier report, the new proposals also include measures that would enable the state to confiscate all kinds of property, both physical items and more intangible assets such as domain names. Bahnhof also takes issue with this, noting that domains are not the problem here.

“In our opinion, it is not the domain name which is the problem, it is the content of the website that the domain name points to,” the company says.

“Moreover, confiscation of a domain name may conflict with constitutional rules on freedom of expression in a way that is very unfortunate. The issues of freedom of expression and why copyright infringement is to be treated differently haven’t been addressed much in the investigation.”

Under the new proposals, damage to rightsholders and monetary gain by the defendant would also be taken into account when assessing whether a crime is “gross” or not. This raises questions as to what extent someone could be held liable for piracy when a rightsholder maintains damage was caused yet no profit was generated.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

BMG Wants Appeals Court to Rehear Cox Piracy Liability Case

Post Syndicated from Ernesto original https://torrentfreak.com/bmg-wants-appeals-court-to-rehear-cox-piracy-liability-case-180226/

Earlier this month, the Court of Appeals for the Fourth Circuit overturned the $25 million piracy liability verdict against Internet provider Cox.

The panel of three judges concluded that the district court made an error in its jury instruction and ordered a new trial.

The erroneous instruction said that the ISP could be found liable for contributory infringement if it “knew or should have known of such infringing activity.” The Court of Appeals agrees that based on the law, the “should have known” standard is too low.

As a result of the ruling, music publisher BMG Rights Management and Cox would have to go head to head again in a new trial. However, according to BMG, the Court of Appeals itself made a mistake.

A few days ago the copyright holder petitioned the court for a rehearing en banc, asking for a do-over before all the judges of the court.

The music publisher argues that the appeals court judges mistakenly reached their decision based on a legal principle that applies to “inducement” of liability, while BMG was pursuing a claim of “material contribution.”

“The panel’s unprecedented application of a heightened knowledge standard creates a conflict with decisions and pattern jury instructions from other circuits as well as with the common-law rules underlying contributory infringement.

“All of those recognize that BMG’s material-contribution theory requires only constructive knowledge,” BMG’s brief adds.

Even if the appeals court persists with its assertion that the liability standard is “willful blindness” rather than “should have known,” a new trial would not be warranted, according to the music publisher.

The music publisher points out that plenty of evidence was presented showing that Cox was willfully blind to the copyright infringements, and describes the erroneous instruction as a “harmless error of the most benign kind.”

The music publisher’s request for a rehearing is supported by the RIAA, which filed an amicus curiae brief together with the National Music Publishers Association.

Both music industry groups back BMG’s arguments and ask the appeals court to consider a rehearing, stating that it would be in the best interests of artists, songwriters, and other rightsholders.

“The level of copyright infringement that takes place over the Internet is ‘staggering,’ and it is vital that copyright owners have effective mechanisms to address it. It is also critical that copyright owners can adequately address infringement that occurs in other contexts.

“If the panel’s decision is not corrected, it would threaten the very incentives of artists, songwriters, and others to create valuable works and distribute them to the public,” the RIAA and NMPA add.

For the RIAA the case is particularly important since it filed a similar lawsuit against Internet provider Grande Communications last year.

Given what’s at stake, we can assume that Cox will protest the request for a rehearing. And it wouldn’t be a big surprise if other telecommunications companies take the same position.

BMG’s petition is available here (pdf) and a copy of the RIAA/NMPA motion can be found here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Russia VPN Blocking Law Failing? No Provider Told To Block Any Site

Post Syndicated from Andy original https://torrentfreak.com/russia-vpn-blocking-law-failing-no-provider-told-to-block-any-site-180224/

Continuing Russia’s pressure on the restriction of banned websites for copyright infringement and other offenses, President Vladimir Putin signed a brand new bill into law in July 2017.

The legislation aimed to prevent citizens from circumventing ISP blockades with the use of services such as VPNs, proxies, Tor, and other anonymizing services. The theory was that if VPNs were found to be facilitating access to banned sites, they too would find themselves on Russia’s national Internet blacklist.

The list is maintained by local telecoms watchdog Rozcomnadzor and currently contains many tens of thousands of restricted domains. In respect of VPNs, the Federal Security Service (FSB) and the Ministry of Internal Affairs are tasked with monitoring ‘unblocking’ offenses, which they are then expected to refer to the telecoms watchdog for action.

The legislation caused significant uproar both locally and overseas and was widely predicted to signal a whole new level of censorship in Russia. However, things haven’t played out that way; far from it. Since being introduced on November 1, 2017, not a single VPN has been cautioned over its activities, much less advised to block or cease and desist.

The revelation comes via Russian news outlet RBC, which received an official confirmation from Rozcomnadzor itself that no VPN or anonymization service had been asked to take action to prevent access to blocked sites. Given the attention to detail when passing the law, the reasons seem extraordinary.

While Rozcomnadzor is empowered to put VPN providers on the blacklist, it must first be instructed to do so by the FSB, after that organization has carried out an investigation. Once the FSB gives the go-ahead, Rozcomnadzor can then order the provider to connect itself to the federal state information system, known locally as FGIS.

FGIS is the system that contains the details of nationally blocked sites, and if a VPN provider does not interface with it within 30 days of being ordered to do so, it too will be added to the blocklist by Rozcomnadzor. The trouble is that Rozcomnadzor hasn’t received any requests from higher up the chain to contact VPNs, so it can’t do anything.

“As of today, there have been no requests from the members of the RDD [operational and investigative activities] and state security regarding anonymizers and VPN services,” a Roskomnadzor spokesperson said.

However, the problems don’t end there. RBC quotes Karen Ghazaryan, an analyst at the Russian Electronic Communications Association (RAEC), who says that even if it had received instructions, Rozcomnadzor wouldn’t be able to block the VPN services in question for both technical and legal reasons.

“Roskomnadzor does not have leverage over most VPN services, and they can not block them for failing to comply with the law, because Roskomnadzor does not have ready technical solutions for this, and the law does not yet have relevant by-laws,” the expert said.

“Copying the Chinese model of fighting VPNs in Russia will not be possible because of its high cost and the radically different topology of the Russian segment of the Internet,” Ghazaryan adds.

This apparent inability to act is surprising, not least since millions of Russian Internet users are now using VPNs, anonymizers, and similar services on a regular basis. Ghazaryan puts the figure as high as 25% of all Russian Internet users.

However, there is also a third element to Russia’s VPN dilemma – how to differentiate between VPNs used by the public and those used in a commercial environment. China is trying to solve this problem by forcing VPN providers to register and align themselves with the state. Russia hasn’t tried that, yet.

“The [blocking] law says that it does not apply to corporate VPN networks, but there is no way to distinguish them from services used for personal needs,” concludes Sarkis Darbinian from the anti-censorship project, Roskomvoboda.

This week, Russia’s Ministry of Culture unveiled yet more new proposals for dealing with copyright infringement via a bill that would allow websites to be blocked without a court order. It’s envisioned that if pirate material is found on a site and its operator either fails to respond to a complaint or leaves the content online for more than 24 hours, ISPs will be told to block the entire site.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

The Challenges of Opening a Data Center — Part 1

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/choosing-data-center/

Backblaze storage pod in new data center

This is part one of a series. The second part will be posted later this week. Use the Join button above to receive notification of future posts in this series.

Though most of us have never set foot inside of a data center, as citizens of a data-driven world we nonetheless depend on the services that data centers provide almost as much as we depend on a reliable water supply, the electrical grid, and the highway system. Every time we send a tweet, post to Facebook, check our bank balance or credit score, watch a YouTube video, or back up a computer to the cloud we are interacting with a data center.

In this series, The Challenges of Opening a Data Center, we’ll talk in general terms about the factors that an organization needs to consider when opening a data center and the challenges that must be met in the process. Many of the factors to consider will be similar for opening a private data center or seeking space in a public data center, but we’ll assume for the sake of this discussion that our needs are more modest than requiring a data center dedicated solely to our own use (i.e. we’re not Google, Facebook, or China Telecom).

Data center technology and management are changing rapidly, with new approaches to design and operation appearing every year. This means we won’t be able to cover everything happening in the world of data centers in our series, however, we hope our brief overview proves useful.

What is a Data Center?

A data center is the structure that houses a large group of networked computer servers typically used by businesses, governments, and organizations for the remote storage, processing, or distribution of large amounts of data.

While many organizations will have computing services in the same location as their offices that support their day-to-day operations, a data center is a structure dedicated to 24/7 large-scale data processing and handling.

Depending on how you define the term, there are anywhere from a half million data centers in the world to many millions. While it’s possible to say that an organization’s on-site servers and data storage can be called a data center, in this discussion we are using the term data center to refer to facilities that are expressly dedicated to housing computer systems and associated components, such as telecommunications and storage systems. The facility might be a private center, which is owned or leased by one tenant only, or a shared data center that offers what are called “colocation services,” and rents space, services, and equipment to multiple tenants in the center.

A large, modern data center operates around the clock, placing a priority on providing secure and uninterrupted service, and generally includes redundant or backup power systems or supplies, redundant data communication connections, environmental controls, fire suppression systems, and numerous security devices. Such a center is an industrial-scale operation often using as much electricity as a small town.

Types of Data Centers

There are a number of ways to classify data centers: according to how they will be used; whether they are owned or used by one or multiple organizations; whether and how they fit into a topology of other data centers; which technologies and management approaches they use for computing, storage, cooling, power, and operations; and, increasingly visible these days, how green they are.

Data centers can be loosely classified into three types according to who owns them and who uses them.

Exclusive Data Centers are facilities wholly built, maintained, operated and managed by the business for the optimal operation of its IT equipment. Some of these centers are well-known companies such as Facebook, Google, or Microsoft, while others are less public-facing big telecoms, insurance companies, or other service providers.

Managed Hosting Providers are data centers managed by a third party on behalf of a business. The business does not own the data center or space within it. Rather, the business rents the IT equipment and infrastructure it needs instead of investing in the outright purchase of what it needs.

Colocation Data Centers are usually large facilities built to accommodate multiple businesses within the center. The business rents its own space within the data center and subsequently fills the space with its IT equipment, or possibly uses equipment provided by the data center operator.

Backblaze, for example, doesn’t own its own data centers but colocates in data centers owned by others. As Backblaze’s storage needs grow, Backblaze increases the space it uses within a given data center and/or expands to other data centers in the same or different geographic areas.

Availability is Key

When designing or selecting a data center, an organization needs to decide what level of availability is required for its services. The type of business or service it provides likely will dictate this. Any organization that provides real-time and/or critical data services will need the highest level of availability and redundancy, as well as the ability to rapidly failover (transfer operation to another center) when and if required. Some organizations require multiple data centers not just to handle the computer or storage capacity they use, but to provide alternate locations for operation if something should happen temporarily or permanently to one or more of their centers.

Organizations operating data centers that can’t afford any downtime at all will typically maintain a mirrored site that can take over if something happens to the first site, or they operate a second site in parallel to the first one. These data center topologies are called Active/Passive and Active/Active, respectively. Should disaster or an outage occur, disaster mode would dictate immediately moving all of the primary data center’s processing to the second data center.

While some data center topologies are spread throughout a single country or continent, others extend around the world. In practice, data transmission speeds put a cap on how far apart centers can be while still operating in parallel with the appearance of simultaneous operation. Linking two data centers that are located relatively close together (say, no more than 60 miles apart, to limit data latency issues) with dark fiber (leased fiber optic cable) could enable both data centers to be operated as if they were in the same location, reducing staffing requirements yet providing immediate failover to the secondary data center if needed.

This redundancy of facilities and ensured availability is of paramount importance to those needing uninterrupted data center services.

Active/Passive Data Centers

Active/Active Data Centers

LEED Certification

Leadership in Energy and Environmental Design (LEED) is a rating system devised by the United States Green Building Council (USGBC) for the design, construction, and operation of green buildings. Facilities can achieve ratings of certified, silver, gold, or platinum based on criteria within six categories: sustainable sites, water efficiency, energy and atmosphere, materials and resources, indoor environmental quality, and innovation and design.

Green certification has become increasingly important in data center design and operation as data centers require great amounts of electricity and often cooling water to operate. Green technologies can reduce costs for data center operation, as well as make the arrival of data centers more amenable to environmentally-conscious communities.

The ACT, Inc. data center in Iowa City, Iowa was the first data center in the U.S. to receive LEED-Platinum certification, the highest level available.

ACT Data Center exterior

ACT Data Center interior

Factors to Consider When Selecting a Data Center

There are numerous factors to consider when deciding to build or to occupy space in a data center. Aspects such as proximity to available power grids, telecommunications infrastructure, networking services, transportation lines, and emergency services can affect costs, risk, security and other factors that need to be taken into consideration.

The size of the data center will be dictated by the business requirements of the owner or tenant. A data center can occupy one room of a building, one or more floors, or an entire building. Most of the equipment is often in the form of servers mounted in 19 inch rack cabinets, which are usually placed in single rows forming corridors (so-called aisles) between them. This allows staff access to the front and rear of each cabinet. Servers differ greatly in size from 1U servers (i.e. one “U” or “RU” rack unit measuring 44.50 millimeters or 1.75 inches), to Backblaze’s Storage Pod design that fits a 4U chassis, to large freestanding storage silos that occupy many square feet of floor space.


Location

Location will be one of the biggest factors to consider when selecting a data center and encompasses many other factors that should be taken into account, such as geological risks, neighboring uses, and even local flight paths. Access to suitable available power at a suitable price point is often the most critical factor and the longest lead time item, followed by broadband service availability.

With more and more data centers available providing varied levels of service and cost, the choices increase each year. Data center brokers can be employed to find a data center, just as one might use a broker for home or other commercial real estate.

Websites listing available colocation space, such as upstack.io, or entire data centers for sale or lease, are widely used. A common practice is for a customer to publish its data center requirements, and the vendors compete to provide the most attractive bid in a reverse auction.

Business and Customer Proximity

The center’s closeness to a business or organization may or may not be a factor in the site selection. The organization might wish to be close enough to manage the center or supervise the on-site staff from a nearby business location. The location of customers might be a factor, especially if data transmission speeds and latency are important, or the business or customers have regulatory, political, tax, or other considerations that dictate areas suitable or not suitable for the storage and processing of data.


Climate

Local climate is a major factor in data center design because the climatic conditions dictate what cooling technologies should be deployed. In turn this impacts uptime and the costs associated with cooling, which can total as much as 50% or more of a center’s power costs. The topology and the cost of managing a data center in a warm, humid climate will vary greatly from managing one in a cool, dry climate. Nevertheless, data centers are located in both extremely cold regions and extremely hot ones, with innovative approaches used in both extremes to maintain desired temperatures within the center.

Geographic Stability and Extreme Weather Events

An obvious major factor in locating a data center is the stability of the actual site with regard to weather, seismic activity, and the likelihood of events such as hurricanes, fire, or flooding.

Backblaze’s Sacramento data center describes its location as one of the most stable geographic locations in California, outside fault zones and floodplains.

Sacramento Data Center

Sometimes the location of the center comes first and the facility is hardened to withstand anticipated threats, such as Equinix’s NAP of the Americas data center in Miami, one of the largest single-building data centers on the planet (six stories and 750,000 square feet), which is built 32 feet above sea level and designed to withstand category 5 hurricane winds.

Equinix “NAP of the Americas” Data Center in Miami

Most data centers don’t have the extreme protection or history of the Bahnhof data center, which is located inside the ultra-secure former nuclear bunker Pionen, in Stockholm, Sweden. It is buried 100 feet below ground inside the White Mountains and secured behind 15.7 in. thick metal doors. It prides itself on its self-described “Bond villain” ambiance.

Bahnhof Data Center under White Mountain in Stockholm

Usually, the data center owner or tenant will want to take into account the balance between cost and risk in the selection of a location. The Ideal quadrant below is obviously favored when making this compromise.

Cost vs Risk in selecting a data center

Cost = Construction/lease, power, bandwidth, cooling, labor, taxes
Risk = Environmental (seismic, weather, water, fire), political, economic

Risk mitigation also plays a strong role in pricing. The extent to which providers must implement special building techniques and operating technologies to protect the facility will affect price. When selecting a data center, organizations must make note of the data center’s certification level on the basis of regulatory requirements in the industry. These certifications can ensure that an organization is meeting necessary compliance requirements.


Power

Electrical power usually represents the largest cost in a data center. The cost a service provider pays for power will be affected by the source of the power, the regulatory environment, the facility size and the rate concessions, if any, offered by the utility. At higher level tiers, battery, generator, and redundant power grids are a required part of the picture.

Fault tolerance and power redundancy are absolutely necessary to maintain uninterrupted data center operation. Parallel redundancy is a safeguard to ensure that an uninterruptible power supply (UPS) system is in place to provide electrical power if necessary. The UPS system can be based on batteries, saved kinetic energy, or some type of generator using diesel or another fuel. The center will operate on one UPS system, with another UPS system or generator acting as backup. If a power outage occurs, the additional UPS system or power generator is available.

Many data centers require the use of independent power grids, with service provided by different utility companies or services, to prevent against loss of electrical service no matter what the cause. Some data centers have intentionally located themselves near national borders so that they can obtain redundant power from not just separate grids, but from separate geopolitical sources.

Higher redundancy levels required by a company will invariably lead to higher prices. A company that requires high availability backed by a service-level agreement (SLA) can expect to pay more than one with less demanding redundancy requirements.

Stay Tuned for Part 2 of The Challenges of Opening a Data Center

That’s it for part 1 of this post. In subsequent posts, we’ll take a look at some other factors to consider when moving into a data center such as network bandwidth, cooling, and security. We’ll take a look at what is involved in moving into a new data center (including stories from Backblaze’s experiences). We’ll also investigate what it takes to keep a data center running, and some of the new technologies and trends affecting data center design and use. You can discover all posts on our blog tagged with “Data Center” by following the link https://www.backblaze.com/blog/tag/data-center/.

The second part of this series on The Challenges of Opening a Data Center will be posted later this week. Use the Join button above to receive notification of future posts in this series.

The post The Challenges of Opening a Data Center — Part 1 appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Bell Asks Employees to Back Pirate Site Blocking Plan

Post Syndicated from Ernesto original https://torrentfreak.com/bell-asks-employees-to-back-pirate-site-blocking-plan-180222/

Last month, a coalition of Canadian companies called on the local telecom regulator CRTC to establish a local pirate site blocking program, which would be the first of its kind in North America.

The Canadian deal is supported by FairPlay Canada, a coalition of both copyright holders and major players in the telco industry, such as Bell and Rogers, which also have media companies of their own.

Thus far, there’s been a fair amount of opposition to the proposal. While the CRTC is reviewing FairPlay Canada’s plans, OpenMedia has launched a petition to stop the effort in its tracks, which has already been signed by tens of thousands of Canadians.

However, there are also people who are backing the blocking efforts. In some cases, with a gentle push from their employer.

Canadian law professor Michael Geist, who’s one of the most vocal opponents of the blocking plans, recently tweeted a note Bell sent to its employees. Through an internal message, the ISP asks its workers to “help stop online piracy and protect content creators.”

Bell’s internal message

The company clearly hopes that its employees will back the site-blocking agenda, but according to Geist, this may not be the best way to do it.

Geist points out that the internal message doesn’t encourage employees to disclose their affiliation with Bell. This raises eyebrows, in particular, because Bell agreed to a $1.25 million settlement in 2015 after it encouraged some employees to write positive reviews and ratings on Bell apps.

In this case, the message has nothing to do with app ratings, but it’s clear that the company is encouraging its employees to support a regulatory effort that serves Bell’s interests.

“All Canadians can provide their views on the website blocking proposal, but corporate encouragement to employees to participate in regulatory processes on the company’s behalf may raise the kinds of concerns regarding misleading impressions that sparked the Commissioner of Competition to intervene in 2015,” Geist writes in a blog post.

Even if Bell’s request is ‘fair play’ and within the boundaries of what’s allowed, it may do more harm than good.

Geist’s observation was picked up by local media with iPhoneinCanada describing Bell’s effort as “disingenuous,” which might lead to even more opposition in response.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

TVAddons Suffers Big Setback as Court Completely Overturns Earlier Ruling

Post Syndicated from Andy original https://torrentfreak.com/tvaddons-suffers-big-setback-as-court-completely-overturns-earlier-ruling-180221/

On June 2, 2017 a group of Canadian telecoms giants including Bell Canada, Bell ExpressVu, Bell Media, Videotron, Groupe TVA, Rogers Communications and Rogers Media, filed a complaint in Federal Court against Montreal resident, Adam Lackman.

Better known as the man behind Kodi addon repository TVAddons, Lackman was painted as a serial infringer in the complaint. The telecoms companies said that, without gaining permission from rightsholders, Lackman communicated copyrighted TV shows including Game of Thrones, Prison Break, The Big Bang Theory, America’s Got Talent, Keeping Up With The Kardashians and dozens more, by developing, hosting, distributing and promoting infringing Kodi add-ons.

To limit the harm allegedly caused by TVAddons, the complaint demanded interim, interlocutory, and permanent injunctions restraining Lackman from developing, promoting or distributing any of the allegedly infringing add-ons or software. On top, the plaintiffs requested punitive and exemplary damages, plus costs.

On June 9, 2017 the Federal Court handed down a time-limited interim injunction against Lackman ex parte, without Lackman being able to mount a defense. Bailiffs took control of TVAddons’ domains but the most controversial move was the granting of an Anton Piller order, a civil search warrant which granted the plaintiffs no-notice permission to enter Lackman’s premises to secure evidence before it could be tampered with.

The order was executed June 12, 2017, with Lackman’s home subjected to a lengthy search during which the Canadian was reportedly refused his right to remain silent. Non-cooperation with an Anton Piller order can amount to a contempt of court, he was told.

With the situation seemingly spinning out of Lackman’s control, unexpected support came from the Honourable B. Richard Bell during a subsequent June 29, 2017 Federal Court hearing to consider the execution of the Anton Piller order.

The Judge said that Lackman had been subjected to a search “without any of the protections normally afforded to litigants in such circumstances” and took exception to the fact that the plaintiffs had ordered Lackman to spill the beans on other individuals in the Kodi addon community. He described this as a hunt for further evidence, not the exercise in preserving evidence that it should have been.

Justice Bell concluded by ruling that while the prima facie case against Lackman may have appeared strong before the judge who heard the matter ex parte, the subsequent adversarial hearing undermined it, to the point that it no longer met the threshold.

As a result of these failings, Judge Bell vacated the Anton Piller order and dismissed the application for interlocutory injunction.

While this was an early victory for Lackman and TVAddons, the plaintiffs took the decision to appeal, and the appeal was heard on November 29, 2017. Determined by a three-judge panel and signed by Justice Yves de Montigny, the decision was handed down Tuesday, and it effectively turns the earlier ruling upside down.

The appeal had two matters to consider: whether Justice Bell made errors when he vacated the Anton Piller order, and whether he made errors when he dismissed the application for an interlocutory injunction. In short, the panel found that he did.

In a 27-page ruling, the first key issue concerns Justice Bell’s understanding of the nature of both Lackman and TVAddons.

The telecoms companies complained that the Judge got it wrong when he characterized Lackman as a software developer who came up with add-ons that permit users to access material “that is for the most part not infringing on the rights” of the telecoms companies.

The companies also challenged the Judge’s finding that the infringing add-ons offered by the site represented “just over 1%” of all the add-ons developed by Lackman.

“I agree with the [telecoms companies] that the Judge misapprehended the evidence and made palpable and overriding errors in his assessment of the strength of the appellants’ case,” Justice Yves de Montigny writes in the ruling.

“Nowhere did the appellants actually state that only a tiny proportion of the add-ons found on the respondent’s website are infringing add-ons.”

The confusion appears to have arisen from the fact that while TVAddons offered 1,500 add-ons in total, the heavily discussed ‘featured’ addon category on the site contained just 22 add-ons, 16 of which were considered to be infringing according to the original complaint. So, it was 16 add-ons out of 22 being discussed, not 16 add-ons out of a possible 1,500.

“[Justice Bell] therefore clearly misapprehended the evidence in this regard by concluding that just over 1% of the add-ons were purportedly infringing,” the appeals Judge adds.

After gaining traction with Justice Bell in the previous hearing, Lackman’s assertion that his add-ons were akin to a “mini Google” was fiercely contested by the telecoms companies. They also fell flat before the appeal hearing.

Justice de Montigny says that Justice Bell “had been swayed” when Lackman’s expert replicated the discovery of infringing content using Google but had failed to grasp the important differences between a general search engine and a dedicated Kodi add-on.

“While Google is an indiscriminate search engine that returns results based on relevance, as determined by an algorithm, infringing add-ons target predetermined infringing content in a manner that is user-friendly and reliable,” the Judge writes.

“The fact that a search result using an add-on can be replicated with Google is of little consequence. The content will always be found using Google or any other Internet search engine because they search the entire universe of all publicly available information. Using addons, however, takes one to the infringing content much more directly, effortlessly and safely.”

With this in mind, Justice de Montigny says there is a “strong prima facie case” that Lackman, by hosting and distributing infringing add-ons, made the telecoms companies’ content available to the public “at a time of their choosing”, thereby infringing paragraph 2.4(1.1) and section 27 of the Copyright Act.

On TVAddons itself, the Judge said that the platform is “clearly designed” to facilitate access to infringing material since it targets “those who want to circumvent the legal means of watching television programs and the related costs.”

Turning to Lackman, the Judge said he could not claim to have no knowledge of the infringing content delivered by the add-ons distributed on this site, since they were purposefully curated prior to distribution.

“The respondent cannot credibly assert that his participation is content neutral and that he was not negligent in failing to investigate, since at a minimum he selects and organizes the add-ons that find their way onto his website,” the Judge notes.

In a further setback, the Judge draws clear parallels with another case before the Canadian courts involving pre-loaded ‘pirate’ set-top boxes. Justice de Montigny says that TVAddons itself bears “many similarities” with those devices that are already subjected to an interlocutory injunction in Canada.

“The service offered by the respondent through the TVAddons website is no different from the service offered through the set-top boxes. The means through which access is provided to infringing content is different (one relied on hardware while the other relied on a website), but they both provided unauthorized access to copyrighted material without authorization of the copyright owners,” the Judge finds.

Continuing, the Judge makes some pointed remarks concerning the execution of the Anton Piller order. In short, he found little wrong with the way things went ahead and also contradicted some of the claims and beliefs circulated in the earlier hearing.

Citing the affidavit of an independent solicitor who monitored the order’s execution, the Judge said that the order was explained to Lackman in plain language and he was informed of his right to remain silent. He was also told that he could refuse to answer questions other than those specified in the order.

The Judge said that Lackman was allowed to have counsel present, “with whom he consulted throughout the execution of the order.” There was nothing, the Judge said, that amounted to the “interrogation” alluded to in the earlier hearing.

Justice de Montigny also criticized Justice Bell for failing to take into account that Lackman “attempted to conceal crucial evidence and lied to the independent supervising solicitor regarding the whereabouts of that evidence.”

Much was previously made of Lackman apparently being forced to hand over personal details of third-parties associated directly or indirectly with TVAddons. The Judge clarifies what happened in his ruling.

“A list of names was put to the respondent by the plaintiffs’ solicitors, but it was apparently done to expedite the questioning process. In any event, the respondent did not provide material information on the majority of the aliases put to him,” the Judge reveals.

But while not handing over evidence on third-parties will paint Lackman in a better light with concerned elements of the add-on community, the Judge was quick to bring up the Canadian’s history and criticized Justice Bell for not taking it into account when he vacated the Anton Piller order.

“[T]he respondent admitted that he was involved in piracy of satellite television signals when he was younger, and there is evidence that he was involved in the configuration and sale of ‘jailbroken’ Apple TV set-top boxes,” Justice de Montigny writes.

“When juxtaposed to the respondent’s attempt to conceal relevant evidence during the execution of the Anton Piller order, that contextual evidence adds credence to the appellants’ concern that the evidence could disappear without a comprehensive order.”

Dismissing Justice Bell’s findings as “fatally flawed”, Justice de Montigny allowed the appeal of the telecoms companies, set aside the order of June 29, 2017, declared the Anton Piller order and interim injunctions legal, and granted an interlocutory injunction to remain valid until the conclusion of the case in Federal Court. The telecoms companies were also awarded costs of CAD$50,000.

It’s worth noting that despite all the detail provided up to now, the case hasn’t yet got to the stage where the Court has tested any of the claims put forward by the telecoms companies. Everything reported to date is pre-trial and has been taken at face value.

TorrentFreak spoke with Adam Lackman but since he hadn’t yet had the opportunity to discuss the matter with his lawyers, he declined to comment further on the record. There is a statement on the TVAddons website which gives his position on the story so far.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Major US Sports Leagues Report Top Piracy Nations to Government

Post Syndicated from Ernesto original https://torrentfreak.com/major-us-sports-leagues-report-top-piracy-nations-to-government-180216/

While pirated Hollywood blockbusters often score the big headlines, there are several other industries that have been battling with piracy over the years. This includes sports organizations.

Many of the major US leagues, including the NBA, NFL, NHL, MLB, and the Tennis Association, have joined forces in the Sports Coalition to try to curb the availability of pirated streams and videos.

A few days ago the Sports Coalition put the piracy problem on the agenda of the United States Trade Representative (USTR).

“Sports organizations, including Sports Coalition members, are heavily affected by live sports telecast piracy, including the unauthorized live retransmission of sports telecasts over the Internet,” the Sports Coalition wrote.

“The Internet piracy of live sports telecasts is not only a persistent problem, but also a global one, often involving bad actors in more than one nation.”

The USTR asked the public for comments on which countries play a central role in copyright infringement issues. In its response, the Sports Coalition stresses that piracy is a global issue but singles out several nations as particularly problematic.

The coalition recommends that the USTR should put the Netherlands and Switzerland on the “Priority Watch List” of its 2018 Special 301 Report, followed by Russia, Saudi Arabia, Seychelles and Sweden, which get a regular “Watch List” recommendation.

The main problem with these countries is that hosting providers and content distribution networks don’t do enough to curb piracy.

In the Netherlands, sawlive.tv, strikezoneme, wizlnet, AltusHost, Host Palace, Quasi Networks and SNEL pirated or provided services contributing to sports piracy, the coalition writes. In Switzerland, mlbstreamme, robinwidgetorg, strikeoutmobi, BlackHOST, Private Layer and Solar Communications are doing the same.

According to the major sports leagues, the US Government should encourage these countries to step up their anti-piracy game. This is not only important for US copyright holders, but also for licensees in other countries.

“Clearly, there is common ground – both in terms of shared economic interests and legal obligations to protect and enforce intellectual property and related rights – for the United States and the nations with which it engages in international trade to work cooperatively to stop Internet piracy of sports programming.”

Whether any of these countries will make it into the USTR’s final list has yet to be seen. For Switzerland it wouldn’t be the first time but for the Netherlands it would be new, although it has been considered before.

A document we received through a FOIA request earlier this year revealed that the US Embassy reached out to the Dutch Government in the past, to discuss similar complaints from the Sports Coalition.

The same document also revealed that local anti-piracy group BREIN consistently urged the entertainment industries it represents not to advocate placing the Netherlands on the 301 Watch List but to solve the problems behind the scenes instead.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Australian Government Launches Pirate Site-Blocking Review

Post Syndicated from Andy original https://torrentfreak.com/australian-government-launches-pirate-site-blocking-review-180214/

Following intense pressure from entertainment industry groups, in 2014 Australia began developing legislation which would allow ‘pirate’ sites to be blocked at the ISP level.

In March 2015 the Copyright Amendment (Online Infringement) Bill 2015 (pdf) was introduced to parliament and after just three months of consideration, the Australian Senate passed the legislation into law.

Soon after, copyright holders began preparing their first cases and in December 2016, the Australian Federal Court ordered dozens of local Internet service providers to block The Pirate Bay, Torrentz, TorrentHound, IsoHunt, SolarMovie, plus many proxy and mirror services.

Since then, more processes have been launched establishing site-blocking as a permanent fixture on the Aussie anti-piracy agenda. But with yet more applications for injunction looming on the horizon, how is the mechanism performing and does anything else need to be done to improve or amend it?

Those are the questions now being asked by the responsible department of the Australian Government via a consultation titled Review of Copyright Online Infringement Amendment. The review should’ve been carried out 18 months after the law’s introduction in 2015 but the department says that it delayed the consultation to let more evidence emerge.

“The Department of Communications and the Arts is seeking views from stakeholders on the questions put forward in this paper. The Department welcomes single, consolidated submissions from organizations or parties, capturing all views on the Copyright Amendment (Online Infringement) Act 2015 (Online Infringement Amendment),” the consultation paper begins.

The three key questions for response are as follows:

– How effective and efficient is the mechanism introduced by the Online Infringement Amendment?

– Is the application process working well for parties and are injunctions operating well, once granted?

– Are any amendments required to improve the operation of the Online Infringement Amendment?

Given the tendency for copyright holders to continuously demand more bang for their buck, it will perhaps come as a surprise that at least for now there is a level of consensus that the system is working as planned.

“Case law and survey data suggests the Online Infringement Amendment has enabled copyright owners to work with [Internet service providers] to reduce large-scale online copyright infringement. So far, it appears that copyright owners and [ISPs] find the current arrangement acceptable, clear and effective,” the paper reads.

Thus far under the legislation there have been four applications for injunctions through the Federal Court, notably against leading torrent indexes and browser-based streaming sites, which were both granted.

The other two processes, which began separately but will be heard together, at least in part, involve the recent trend of set-top box based streaming.

Village Roadshow, Disney, Universal, Warner Bros, Twentieth Century Fox, and Paramount are currently presenting their case to the Federal Court. Along with Hong Kong-based broadcaster Television Broadcasts Limited (TVB), which has a separate application, the companies have been told to put together quality evidence for an April 2018 hearing.

With these applications already in the pipeline, yet more are on the horizon. The paper notes that more applications are expected to reach the Federal Court shortly, with the Department of Communications monitoring them to assess whether the current arrangements need to be refined as additional applications are filed.

Thus far, however, steady progress appears to have been made. The paper cites various precedents established as a result of the blocking process including the use of landing pages to inform Internet users why sites are blocked and who is paying.

“Either a copyright owner or [ISP] can establish a landing page. If an [ISP] wishes to avoid the cost of its own landing page, it can redirect customers to one that the copyright owner would provide. Another precedent allocates responsibility for compliance costs. Cases to date have required copyright owners to pay all or a significant proportion of compliance costs,” the paper notes.

But perhaps the issue of most importance is whether site-blocking as a whole has had any effect on the levels of copyright infringement in Australia.

The Government says that research carried out by Kantar shows that downloading “fell slightly from 2015 to 2017” with a 5-10% decrease in individuals consuming unlicensed content across movies, music and television. It’s worth noting, however, that Netflix didn’t arrive on Australian shores until May 2015, just a month before the new legislation was passed.

Research commissioned by the Department of Communications and published a year later in 2016 (pdf) found that improved availability of legal streaming alternatives was the main contributor to falling infringement rates. In a juicy twist, the report also revealed that Aussie pirates were the entertainment industries’ best customers.

“The Department is aware that other factors — such as the increasing availability of television, music and film streaming services and of subscription gaming services — may also contribute to falling levels of copyright infringement,” the paper notes.

Submissions to the consultation (pdf) are invited by 5.00 pm AEST on Friday 16 March 2018 via the government’s website.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

MPA Met With Russian Site-Blocking Body to Discuss Piracy

Post Syndicated from Andy original https://torrentfreak.com/mpa-met-with-russian-site-blocking-body-to-discuss-piracy-180209/

Given Russia’s historical reputation for having a weak approach to online piracy, the last few years stand in stark contrast to those that went before.

Overseen by telecoms watchdog Rozcomnadzor, Russia now has one of the toughest site-blocking regimes in the whole world. It’s possible to have entire sites blocked in a matter of days, potentially over a single piece of infringing content. For persistent offenders, permanent blocking is now a reality.

While that process requires the involvement of the courts, the subsequent blocking of mirror sites does not, with Russia blocking more than 500 since a new law was passed in October 2017.

With anti-piracy measures now a force to be reckoned with in Russia, it’s emerged that last week Stan McCoy, president of the Motion Picture Association’s EMEA division, met with telecoms watchdog Roskomnadzor in Moscow.

McCoy met with Rozcomnadzor chief Alexander Zharov last Friday, in a meeting that was also attended by Ekaterina Mironova, head of the anti-piracy committee of the Media Communication Union (ISS).

According to Rozcomnadzor, issues discussed included copyright-related legislation and regulation. Also on the agenda was the strengthening of international cooperation, including between public organizations representing the interests of rightholders.

“In particular, an agreement was reached to expand contacts between the MPAA and the ISS,” Rozcomnadzor notes.

The ISS (known locally as Media-Communication Union MKC) was founded by the largest Russian media companies and telecom operators in February 2014. It differentiates itself from other organizations with the claim that it’s the first group of its type to represent the interests of communications companies, rights holders, broadcasters and large distributors.

During the meeting, McCoy was given an update on Russia’s implementation of the various anti-piracy laws introduced and developed since May 2015.

“Since the introduction of the anti-piracy laws, Roskomnadzor has received more than 2,800 rulings from the Moscow City Court on the adoption of preliminary provisional [blocking] measures to protect copyright on the Internet, including 1,630 for movies,” the watchdog reveals.

“In connection with the deletion of pirated content, access to the territory of Russia was restricted for 1,547 Internet resources. Based on the decisions of the Moscow City Court, 752 pirated sites are now permanently blocked, and according to the decisions of the Ministry of Communications, more than 600 ‘mirrors’ of these resources are blocked too.”

While it’s normally the position of the US to criticize Russia for not doing enough to tackle piracy, it must’ve been interesting to participate in a meeting where for once the Russians had the upper hand. Even though the MPAA previously campaigned for one, there is no site-blocking mechanism in the United States.

“The fight against piracy stimulates the growth of the legal online video market in Russia. Attendance of legal online sites is constantly growing. Users are attracted to high-quality content for an affordable fee,” Rozcomnadzor concludes.

The meeting’s participants will join up again during the St. Petersburg International Economic Forum scheduled to take place May 24-26.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Rightscorp Has a Massive Database of ‘Repeat Infringers’ to Pursue

Post Syndicated from Ernesto original https://torrentfreak.com/rightscorp-has-a-massive-database-of-repeat-infringers-to-pursue-180208/

Last week the Fourth Circuit Court of Appeals ruled that ISPs are required to terminate ‘repeat infringers’ based on allegations from copyright holders alone, a topic that has been contested for years.

This means that copyright holders now have a bigger incentive to send takedown notices, as ISPs can’t easily ignore them. That’s music to the ears of the various piracy tracking companies, Rightscorp included.

The piracy monetization company always maintained that multiple complaints from copyright holders are enough to classify someone as a repeat infringer, without a court order, and the Fourth Circuit has now reached the same conclusion.

“After years of uncertainty on these issues, it is gratifying for the US Court of Appeals to proclaim the law on ISP liability for subscriber infringements to be essentially what Rightscorp has always said it is,” Rightscorp President Christopher Sabec says.

Rightscorp is pleased to see that the court shares its opinion since the verdict also provides new business opportunities. The company informs TorrentFreak that it’s ready to help copyright holders to hold ISPs responsible.

“Rightscorp has always stood with content holders who wish to protect their rights against ISPs that are not taking action against repeat infringers,” Sabec tells us.

“Now, with the law addressing ISP liability for subscriber infringements finally sharpened and clarified at the appellate level, we are ready to support all efforts by rights holders to compel ISPs to abide by their responsibilities under the DMCA.”

The piracy tracking company has a treasure trove of piracy data at its disposal to issue takedown requests or back lawsuits. Over the past five years, it amassed nearly a billion “records” of copyright infringements.

“Rightscorp’s data records include no less than 969,653,557 infringements over the last five years,” Sabec says.

This number obviously includes many repeat infringers; it’s made up of IP addresses downloading the same file on several occasions and/or multiple files over time.
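To make that concrete, here is a minimal, purely hypothetical sketch in Python of how such records might be grouped by IP address to flag repeat offenders. The record layout, threshold and addresses are assumptions for illustration only, not a description of Rightscorp’s actual system.

    from collections import defaultdict
    from datetime import datetime

    # Hypothetical records: (ip_address, file_identifier, timestamp)
    records = [
        ("203.0.113.7", "file-abc", datetime(2017, 3, 1)),
        ("203.0.113.7", "file-abc", datetime(2017, 3, 9)),   # same file, later date
        ("203.0.113.7", "file-def", datetime(2017, 6, 2)),   # a different file
        ("198.51.100.4", "file-abc", datetime(2017, 4, 5)),
    ]

    def repeat_infringers(records, threshold=2):
        """Group records by IP and flag any address seen `threshold` or more times."""
        hits = defaultdict(list)
        for ip, file_id, timestamp in records:
            hits[ip].append((file_id, timestamp))
        return {ip: events for ip, events in hits.items() if len(events) >= threshold}

    print(repeat_infringers(records))
    # Only 203.0.113.7 is flagged; 198.51.100.4 has a single record.

In practice, any such threshold, and how far back the records should reach, would be a policy decision rather than a technical one.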

While it’s unlikely that account holders will be disconnected based on infringements that happened years ago, this type of historical data can be used in court cases. Rightscorp’s infringement notices are the basis of the legal action against Cox, and are being used as evidence in a separate RIAA case against ISP Grande Communications as well.

Grande previously said that it refused to act on Rightscorp’s notices because it doubts their accuracy, but the tracking company contests this. That case is still ongoing and a final decision has yet to be reached.

For now, however, Rightscorp is marketing its hundreds of millions of recorded copyright infringements as an opportunity for rightsholders. And for a company that could use some extra cash, that’s good news.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

RIAA: Cox Ruling Shows that Grande Can Be Liable for Piracy Too

Post Syndicated from Ernesto original https://torrentfreak.com/riaa-cox-ruling-shows-that-grande-can-be-liable-for-piracy-too-180207/

Regular Internet providers are being put under increasing pressure for not doing enough to curb copyright infringement.

Last year several major record labels, represented by the RIAA, filed a lawsuit in a Texas District Court, accusing ISP Grande Communications of turning a blind eye on its pirating subscribers.

“Despite their knowledge of repeat infringements, Defendants have permitted repeat infringers to use the Grande service to continue to infringe Plaintiffs’ copyrights without consequence,” the RIAA’s complaint read.

Grande disagreed with this assertion and filed a motion to dismiss the case. The ISP argued that it doesn’t encourage any of its customers to download copyrighted works, and that it has no control over the content subscribers access.

The Internet provider didn’t deny that it received millions of takedown notices through the piracy tracking company Rightscorp. However, it believes these notices are flawed and not worth acting upon.

The case shows a lot of similarities with the legal battle between BMG and Cox Communications, in which the Fourth Circuit Court of Appeals issued an important verdict last week.

The appeals court overturned the $25 million piracy damages verdict against Cox due to an erroneous jury instruction but held that the ISP lost its safe harbor protection because it failed to implement a meaningful repeat infringer policy.

This week, the RIAA used the Fourth Circuit ruling as further evidence that Grande’s motion to dismiss should be denied.

The RIAA points out that both Cox and Grande used similar arguments in their defense, some of which were rejected by the appeals court. The Fourth Circuit held, for example, that an ISP’s substantial non-infringing uses do not immunize it from liability for contributory copyright infringement.

In addition, the appeals court also clarified that if an ISP wilfully blinds itself to copyright infringements, that is sufficient to satisfy the knowledge requirement for contributory copyright infringement.

According to the RIAA’s filing at a Texas District Court this week, Grande has already admitted that it willingly ‘ignored’ takedown notices that were submitted on behalf of third-party copyright holders.

“Grande has already admitted that it received notices from Rightscorp and, to use Grande’s own phrase, did not ‘meaningfully investigate’ them,” the RIAA writes.

“Thus, even if this Court were to apply the Fourth Circuit’s ‘willful blindness’ standard, the level of knowledge that Grande has effectively admitted exceeds the level of knowledge that the Fourth Circuit held was ‘powerful evidence’ sufficient to establish liability for contributory infringement.”

As such, the motion to dismiss the case should be denied, the RIAA argues.

What’s not mentioned in the RIAA’s filing, however, is why Grande chose not to act upon these takedown notices. In its defense, the ISP previously explained that Rightscorp’s notices lacked specificity and were incapable of detecting actual infringements.

Grande argued that if it acted on these notices without additional proof, its subscribers could lose their Internet access even though they are using it for legal purposes. The ISP may, therefore, counter that it wasn’t willfully blind, as it saw no solid proof of the alleged infringements to begin with.

“To merely treat these allegations as true without investigation would be a disservice to Grande’s subscribers, who would run the risk of having their Internet service permanently terminated despite using Grande’s services for completely legitimate purposes,” Grande previously wrote.

This brings up a tricky issue. The Fourth Circuit made it clear last week that ISPs require a meaningful policy against repeat infringers in response to takedown notices from copyright holders. But what are the requirements for a proper takedown notice? Do any and all notices count?

Grande clearly has no faith in the accuracy of Rightscorp’s technology, but if its case goes in the same direction as Cox’s, that might not make much of a difference.

A copy of the RIAA’s summary of supplemental authority is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Jailed Streaming Site Operator Hit With Fresh $3m Damages Lawsuit

Post Syndicated from Andy original https://torrentfreak.com/jailed-streaming-site-operator-hit-with-fresh-3m-damages-lawsuit-180207/

After being founded more than half a decade ago, Swefilmer grew to become Sweden’s most popular movie and TV show streaming site. It was only a question of time before authorities stepped in to bring the show to an end.

In 2015, a Swedish operator of the site in his early twenties was raided by local police. A second man, Turkish and in his late twenties, was later arrested in Germany.

The pair, who hadn’t met in person, appeared before the Varberg District Court in January 2017, accused of making more than $1.5m from their activities between November 2013 and June 2015.

The prosecutor described Swefilmer as “organized crime”, painting the then 26-year-old as the main brains behind the site and the 23-year-old as playing a much smaller role. The former was said to have led a luxury lifestyle after benefiting from $1.5m in advertising revenue.

The sentences eventually handed down matched the defendants’ alleged level of participation. While the younger man received probation and community service, the Turk was sentenced to serve three years in prison and ordered to forfeit $1.59m.

Very quickly it became clear there would be an appeal, with plaintiffs represented by anti-piracy outfit RightsAlliance complaining that their 10m krona ($1.25m) claim for damages over the unlawful distribution of local movie Johan Falk: Kodnamn: Lisa had been ruled out by the Court.

With the appeal hearing now just a couple of weeks away, Swedish outlet Breakit is reporting that media giant Bonnier Broadcasting has launched an action of its own against the now 27-year-old former operator of Swefilmer.

According to the publication, Bonnier’s pay-TV company C More, which distributes for Fox, MGM, Paramount, Universal, Sony and Warner, is set to demand around 24m krona ($3.01m) via anti-piracy outfit RightsAlliance.

“This is about organized crime and grossly criminal individuals who earned huge sums on our and others’ content. We want to take every opportunity to take advantage of our rights,” says Johan Gustafsson, Head of Corporate Communications at Bonnier Broadcasting.

C More reportedly filed its lawsuit at the Stockholm District Court on January 30, 2018. At its core are four local movies said to have been uploaded and made available via Swefilmer.

“C More would probably never even have granted a license to [the operator] to make or allow others to make the films available to the public in a similar way as [the operator] did, but if that had happened, the fee would not be less than 5,000,000 krona ($628,350) per film or a total of 20,000,000 krona ($2,513,400),” C More’s claim reads.

Speaking with Breakit, lawyer Ansgar Firsching said he couldn’t say much about C More’s claims against his client.

“I am very surprised that two weeks before the main hearing [C More] comes in with this requirement. If you open another front, we have two trials that are partly about the same thing,” he said.

Firsching said he couldn’t elaborate at this stage but expects his client to deny the claim for damages. C More sees things differently.

“Many people live under the illusion that sites like Swefilmer are driven by idealistic teens in their parents’ basements, which is completely wrong. This is about organized crime where our content is used to generate millions and millions in revenue,” the company notes.

The appeal in the main case is set to go ahead on February 20.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Russia Blocks 500 ‘Pirate’ Sites in Four Months, Without a Single Court Order

Post Syndicated from Andy original https://torrentfreak.com/russia-blocks-500-pirate-sites-in-four-months-without-a-single-court-order-180204/

Once the legal process for blocking pirate sites has been accepted in a region, it usually follows that dozens if not hundreds of other sites are given the same treatment. Rightsholders simply point to earlier decisions and apply for new blockades under established law.

Very quickly, however, it became clear that when a domain is blocked it’s relatively easy to produce a clone or ‘mirror’ of a site to achieve the same purpose, thus circumventing a court order. This mirror site whac-a-mole was addressed in Russia last year with new legislation.

Starting October 1, 2017, Russian authorities allowed rightsholders to add mirror sites to the country’s national blocklist without having to return to court. Perhaps unsurprisingly, given the relative convenience and cost-efficiency, they have been doing that en masse.

According to Alexei Volin, Russia’s Deputy Minister of Communications and Mass Media, hundreds of mirrors of pirate sites have been blocked since the introduction of the legislation in October, affecting an audience of millions of people.

“For the past few months, we have been able to block mirrors of pirate sites. As of today, we can already note that about 500 sites are blocked as mirrors,” said Volin at the CSTB 2018 television and telecommunications expo in Moscow.

While rightsholders were expected to quickly take advantage of the change in the law, the speed at which they have done so is unprecedented. According to Volin, more pirate platforms have been blocked in the four months since October 1, 2017, than under the previous two years’ worth of judicial decisions.

“Colleagues from the industry recently found a general audience of blocked sites, it’s about 200 million people,” Volin said, while describing the results as “encouraging.”

The process is indeed quite straightforward. Following a request from a rightsholder, the Ministry of Communications decides whether the reported site is actually a copy of a previously blocked pirate site. If it is, the site’s owner and telecoms regulator Roskomnadzor are informed, while local ISPs are ordered to begin blocking it.
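To picture the sequence, here is a purely illustrative sketch of that workflow in Python. Every name, domain and function in it is hypothetical; the real procedure is administrative rather than an automated API.

    from dataclasses import dataclass

    @dataclass
    class MirrorReport:
        reported_domain: str           # the suspected mirror
        original_blocked_domain: str   # the site already blocked by court order
        rightsholder: str

    def notify(recipient: str, message: str) -> None:
        print(f"-> {recipient}: {message}")

    def is_copy_of_blocked_site(report: MirrorReport) -> bool:
        # Stand-in for the Ministry of Communications' determination that the
        # reported domain is a copy of a previously blocked site.
        return report.original_blocked_domain != ""

    def process_report(report: MirrorReport) -> None:
        if not is_copy_of_blocked_site(report):
            return
        # The site owner and the telecoms regulator are informed...
        notify(report.reported_domain, "classified as a mirror of a blocked site")
        notify("Roskomnadzor", f"mirror confirmed: {report.reported_domain}")
        # ...and ISPs are ordered to begin blocking the domain.
        for isp in ("isp-a.example", "isp-b.example"):
            notify(isp, f"block {report.reported_domain}")

    process_report(MirrorReport("mirror.example", "blocked.example", "studio"))

The point of the sketch is simply that once the original blocking order exists, no court is involved at any later step.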

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons