In the performance-conscious world of high-speed networking, anything that can be done to avoid copying packet data is welcome. The MSG_ZEROCOPY feature added in 4.14 enables zero-copy transmission of data, but does not address the receive side of the equation. It now appears that the 4.18 kernel will include a zero-copy receive mechanism by Eric Dumazet to close that gap, at least for some relatively specialized applications.
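For context, the transmit-side opt-in that MSG_ZEROCOPY introduced can be sketched as below. This is an illustration, not code from the article: the numeric constants are the Linux x86-64 values, hand-defined because Python's socket module does not always expose them, and on kernels that predate 4.14 the sketch falls back to an ordinary copying send.

```python
import socket

# Linux-specific constants, hand-defined because Python's socket module
# does not always expose them (values from the x86-64 kernel headers).
SO_ZEROCOPY = 60          # setsockopt: opt in to zero-copy transmission
MSG_ZEROCOPY = 0x4000000  # send() flag: request zero-copy for this call

def zerocopy_send(payload: bytes) -> bytes:
    """Send `payload` over a loopback TCP connection, requesting
    zero-copy transmission where the kernel (4.14+) supports it,
    and return what the peer received."""
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("127.0.0.1", 0))
    listener.listen(1)
    sender = socket.create_connection(listener.getsockname())
    receiver, _ = listener.accept()

    flags = 0
    try:
        sender.setsockopt(socket.SOL_SOCKET, SO_ZEROCOPY, 1)
        flags = MSG_ZEROCOPY
    except OSError:
        pass  # older kernel: fall back to an ordinary copying send

    view = memoryview(payload)
    while view:
        try:
            n = sender.send(view, flags)
        except OSError:
            if not flags:
                raise
            flags = 0  # kernel refused MSG_ZEROCOPY; retry with a copy
            continue
        view = view[n:]

    # With a true zero-copy send, the buffer must stay untouched until a
    # completion notification arrives on the socket's error queue; that
    # bookkeeping is omitted from this sketch.
    received = b""
    while len(received) < len(payload):
        chunk = receiver.recv(len(payload) - len(received))
        if not chunk:
            break
        received += chunk
    for s in (sender, receiver, listener):
        s.close()
    return received
```

Note that on loopback the kernel quietly copies anyway (zero-copy pays off on real NICs with large sends), which is why the sketch treats the flag as a hint rather than a guarantee.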
After first targeting torrent and regular streaming platforms with blocking injunctions, last year Village Roadshow and studios including Disney, Universal, Warner Bros, Twentieth Century Fox, and Paramount began looking at a new threat.
The action targeted HDSubs+, a reasonably popular IPTV service that provides hundreds of otherwise premium live channels, movies, and sports for a relatively small monthly fee. The application was filed during October 2017 and targeted Australia’s largest ISPs.
In parallel, Hong Kong-based broadcaster Television Broadcasts Limited (TVB) launched a similar action, demanding that the same ISPs (including Telstra, Optus, TPG, and Vocus, plus subsidiaries) block several ‘pirate’ IPTV services, named in court as A1, BlueTV, EVPAD, FunTV, MoonBox, Unblock, and hTV5.
Due to the similarity of the cases, both applications were heard in Federal Court in Sydney on Friday. Neither case is as straightforward as blocking a torrent or basic streaming portal, so both applicants are having to deal with additional complexities.
The TVB case is of particular interest. The services are maintained by up to a couple of dozen URLs, which are used to provide the content, an EPG (electronic program guide), updates, and sundry other features. While most of these appear to fit the description of an “online location” designed to assist copyright infringement, where the Android-based software for the IPTV services is hosted presents an interesting dilemma.
ComputerWorld reports that the apps – which offer live broadcasts, video-on-demand, and catch-up TV – are hosted on as-yet-unnamed sites which are functionally similar to Google Play or Apple’s App Store. They’re repositories of applications that also carry non-infringing apps, such as those for Netflix and YouTube.
Nevertheless, despite clear knowledge of this dual use, TVB wants to have these app marketplaces blocked by Australian ISPs, which would not only render the illicit apps inaccessible to the public but all of the non-infringing ones too. Part of its argument that this action would be reasonable appears to be that legal apps – such as Netflix’s for example – can also be freely accessed elsewhere.
It will be up to Justice Nicholas to decide whether the “primary purpose” of these marketplaces is to infringe or facilitate the infringement of TVB’s copyrights. However, TVB also appears to have another problem which is directly connected to the copyright status in Australia of its China-focused live programming.
Justice Nicholas questioned whether watching a stream in Australia of TVB’s live Chinese broadcasts would amount to copyright infringement because no copy of that content is being made.
“If most of what is occurring here is a reproduction of broadcasts that are not protected by copyright, then the primary purpose is not to facilitate copyright infringement,” Justice Nicholas said.
One of the problems appears to be that China is not a party to the 1961 Rome Convention for the Protection of Performers, Producers of Phonograms and Broadcasting Organisations. However, TVB is arguing that it should still receive protection because it airs pre-recorded content and the live broadcasts are also archived for re-transmission via catch-up services.
The question over whether unchoreographed live broadcasts receive protection has been raised in other regions but in most cases, a workaround has been found. The presence of broadcaster logos on screen (which receive copyright protection) is a factor and it’s been reported that broadcasters are able to record the ‘live’ action and transmit a copy just a couple of seconds later, thereby broadcasting an already-copyrighted work.
While TVB attempts to overcome its issues, Village Roadshow is facing some of its own in its efforts to take down HDSubs+.
It appears that at least partly in response to the Roadshow legal action, the service has undergone some modifications, including a change of brand to ‘Press Play Extra’. As reported by ZDNet, there have been structural changes too, which means that Roadshow can no longer “see under the hood”.
According to Justice Nicholas, there is no evidence that the latest version of the app infringes copyright, but according to counsel for Village Roadshow, the new app is merely transitional, preparing for a possible future change.
“We submit the difference to be drawn is reactive to my clients serving on the operators a notice,” counsel for Roadshow argued, with an expert describing the new app as “almost like a placeholder.”
In short, Roadshow still wants all of the target domains in its original application blocked because the company believes there’s a good chance they’ll be reactivated in the future.
None of the ISPs involved in either case turned up to the hearings on Friday, which removes one layer of complexity in what appear thus far to be less-than-straightforward cases.
In 2016, the International Federation of the Phonographic Industry published research which claimed that half of 16 to 24-year-olds use stream-ripping tools to copy music from sites like YouTube.
While this might not have surprised those who regularly participate in the activity, IFPI said that volumes had become so vast that stream-ripping had overtaken pirate site music downloads. That was a big statement.
Probably not coincidentally, just two weeks later IFPI, RIAA, and BPI announced legal action against the world’s largest YouTube ripping site, YouTube-MP3.
“YTMP3 rapidly and seamlessly removes the audio tracks contained in videos streamed from YouTube that YTMP3’s users access, converts those audio tracks to an MP3 format, copies and stores them on YTMP3’s servers, and then distributes copies of the MP3 audio files from its servers to its users in the United States, enabling its users to download those MP3 files to their computers, tablets, or smartphones,” the complaint read.
The labels sued YouTube-MP3 for direct infringement, contributory infringement, vicarious infringement, inducing others to infringe, plus circumvention of technological measures on top. The case was big and one that would’ve been intriguing to watch play out in court, but that never happened.
A year later, in September 2017, YouTube-MP3 settled out of court. No details were made public, but YouTube-MP3 apparently took all the blame and the court was asked to rule in favor of the labels on all counts.
This certainly gave the impression that what YouTube-MP3 did was illegal and a strong message was sent out to other companies thinking of offering a similar service. However, other onlookers clearly saw the labels’ lawsuit as something to be studied and learned from.
One of those was the operator of NotMP3downloader.com, a site that offers Free MP3 Recorder for YouTube, a tool offering similar functionality to YouTube-MP3 while supposedly avoiding the same legal pitfalls.
Part of that involves audio being processed on the user’s machine – not by stream-ripping as such – but by stream-recording. A subtle difference perhaps, but the site’s operator thinks it’s important.
“After examining the claims made by the copyright holders against youtube-mp3.org, we identified that the charges were based on the three main points. [None] of them are applicable to our product,” he told TF this week.
The first point involves YouTube-MP3’s acts of conversion, storage and distribution of content it had previously culled from YouTube. Copies of unlicensed tracks were clearly held on its own servers, a potent direct infringement risk.
“We don’t have any servers to download, convert or store a copyrighted or any other content from YouTube. Therefore, we do not violate any law or prohibition implied in this part,” NotMP3downloader’s operator explains.
Then there’s the act of “stream-ripping” itself. While YouTube-MP3 downloaded digital content from YouTube using its own software, NotMP3downloader claims to do things differently.
“Our software doesn’t download any streaming content directly, but only launches a web browser with the video specified by a user. The capturing happens from a local machine’s sound card and doesn’t deal with any content streamed through a network,” its operator notes.
This part also seems quite important. YouTube-MP3 was accused of unlawfully circumventing technological measures implemented by YouTube to prevent people downloading or copying content. By opening up YouTube’s own website and viewing content in the way the site demands, NotMP3downloader says it does not “violate the website’s integrity nor performs direct download of audio or video files.”
Like the Betamax video recorder before it that enabled recording from analog TV, NotMP3downloader enables a user to record a YouTube stream on their local machine. This, its makers claim, means the software is completely legal and defeats all the claims made by the labels in the YouTube-MP3 lawsuit.
“What YouTube does is broadcasting content through the Internet. Thus, there is nothing wrong if users are allowed to watch such content later as they may want,” the NotMP3downloader team explain.
“It is worth noting that in Sony Corp. of America v. Universal City Studios, Inc. (464 U.S. 417) the United States Supreme Court held that such practice, also known as time-shifting, was lawful, representing fair use under the US Copyright Act and causing no substantial harm to the copyright holder.”
While software that can record video and sound locally is nothing new, the developments in the YouTube-MP3 case and this response from NotMP3downloader raise interesting questions.
We put some of them to none other than former RIAA Executive Vice President, Neil Turkewitz, who now works as President of Turkewitz Consulting Group.
Turkewitz stressed that he doesn’t speak for the industry as a whole or indeed the RIAA but it’s clear that his passion for protecting creators persists. He told us that in this instance, reliance on the Betamax decision is “misplaced”.
“The content is different, the activity is different, and the function is different,” Turkewitz told TF.
“The Sony decision must be understood in its context — the time shifting of audiovisual programming being broadcast from point to multipoint. The making available of content by a point-to-point interactive service like YouTube isn’t broadcasting — or at a minimum, is not a form of broadcasting akin to that considered by the Supreme Court in Sony.
“More fundamentally, broadcasting (right of communication to the public) is one of only several rights implicated by the service. And of course, issues of liability will be informed by considerations of purpose, effect and perceived harm. A court’s judgment will also be affected by whether it views the ‘innovation’ as an attempt to circumvent the requirements of law. The decision of the Supreme Court in ABC v. Aereo is certainly instructive in that regard.”
And there are other issues too. While YouTube itself is yet to take any legal action to deter users from downloading rather than merely streaming content, its terms of service are quite specific and seem to cover all eventualities.
“[Y]ou agree not to access Content for any reason other than your personal, non-commercial use solely as intended through and permitted by the normal functionality of the Service, and solely for Streaming,” YouTube’s ToS reads.
“‘Streaming’ means a contemporaneous digital transmission of the material by YouTube via the Internet to a user operated Internet enabled device in such a manner that the data is intended for real-time viewing and not intended to be downloaded (either permanently or temporarily), copied, stored, or redistributed by the user.
“You shall not copy, reproduce, distribute, transmit, broadcast, display, sell, license, or otherwise exploit any Content for any other purposes without the prior written consent of YouTube or the respective licensors of the Content.”
In this respect, it seems that a user doing anything but real-time streaming of YouTube content is breaching YouTube’s terms of service. The big question then, of course, is whether providing a tool specifically for that purpose represents an infringement of copyright.
The people behind Free MP3 Recorder believe that the “scope of application depends entirely on the end users’ intentions” which seems like a fair argument at first view. But, as usual, copyright law is incredibly complex and there are plenty of opposing views.
We asked the BPI, which took action against YouTube-MP3, for its take on this type of tool. The official response was “No comment,” which doesn’t really clarify the position, at least for now.
Needless to say, the Betamax decision – relevant or not – doesn’t apply in the UK. But that only adds more parameters into the mix – and perhaps more opportunities for lawyers to make money arguing for and against tools like this in the future.
Security updates have been issued by CentOS (389-ds-base, dhcp, kernel, libreoffice, php, quagga, and ruby), Debian (ming, util-linux, vips, and zsh), Fedora (community-mysql, php, ruby, and transmission), Gentoo (newsbeuter), Mageia (libraw and mbedtls), openSUSE (php7 and python-Django), Red Hat (MRG Realtime 2.5), and SUSE (kernel).
Normally, when an application sends data over the network, it wants that data to be transmitted as quickly as possible; the kernel’s network stack tries to oblige. But there are applications that need their packets to be transmitted within specific time windows. This behavior can be approximated in user space now, but a better solution is in the works in the form of the time-based packet transmission patch set.
This is part one of a series. The second part will be posted later this week.
Though most of us have never set foot inside of a data center, as citizens of a data-driven world we nonetheless depend on the services that data centers provide almost as much as we depend on a reliable water supply, the electrical grid, and the highway system. Every time we send a tweet, post to Facebook, check our bank balance or credit score, watch a YouTube video, or back up a computer to the cloud we are interacting with a data center.
In this series, The Challenges of Opening a Data Center, we’ll talk in general terms about the factors that an organization needs to consider when opening a data center and the challenges that must be met in the process. Many of the factors to consider will be similar for opening a private data center or seeking space in a public data center, but we’ll assume for the sake of this discussion that our needs are more modest than requiring a data center dedicated solely to our own use (i.e. we’re not Google, Facebook, or China Telecom).
Data center technology and management are changing rapidly, with new approaches to design and operation appearing every year. This means we won’t be able to cover everything happening in the world of data centers in our series; however, we hope our brief overview proves useful.
What is a Data Center?
A data center is the structure that houses a large group of networked computer servers typically used by businesses, governments, and organizations for the remote storage, processing, or distribution of large amounts of data.
While many organizations will have computing services in the same location as their offices that support their day-to-day operations, a data center is a structure dedicated to 24/7 large-scale data processing and handling.
Depending on how you define the term, there are anywhere from a half million data centers in the world to many millions. While it’s possible to say that an organization’s on-site servers and data storage can be called a data center, in this discussion we are using the term data center to refer to facilities that are expressly dedicated to housing computer systems and associated components, such as telecommunications and storage systems. The facility might be a private center, which is owned or leased by one tenant only, or a shared data center that offers what are called “colocation services,” and rents space, services, and equipment to multiple tenants in the center.
A large, modern data center operates around the clock, placing a priority on providing secure and uninterrupted service, and generally includes redundant or backup power systems or supplies, redundant data communication connections, environmental controls, fire suppression systems, and numerous security devices. Such a center is an industrial-scale operation often using as much electricity as a small town.
Types of Data Centers
There are a number of ways to classify data centers: by how they will be used; by whether they are owned or used by one or multiple organizations; by whether and how they fit into a topology of other data centers; by which technologies and management approaches they use for computing, storage, cooling, power, and operations; and, increasingly visible these days, by how green they are.
Data centers can be loosely classified into three types according to who owns them and who uses them.
Exclusive Data Centers are facilities wholly built, maintained, operated and managed by the business for the optimal operation of its IT equipment. Some of these centers are well-known companies such as Facebook, Google, or Microsoft, while others are less public-facing big telecoms, insurance companies, or other service providers.
Managed Hosting Providers are data centers managed by a third party on behalf of a business. The business does not own the data center or space within it. Rather, the business rents the IT equipment and infrastructure it needs instead of purchasing it outright.
Colocation Data Centers are usually large facilities built to accommodate multiple businesses within the center. The business rents its own space within the data center and subsequently fills the space with its IT equipment, or possibly uses equipment provided by the data center operator.
Backblaze, for example, doesn’t own its own data centers but colocates in data centers owned by others. As Backblaze’s storage needs grow, Backblaze increases the space it uses within a given data center and/or expands to other data centers in the same or different geographic areas.
Availability is Key
When designing or selecting a data center, an organization needs to decide what level of availability is required for its services. The type of business or service it provides likely will dictate this. Any organization that provides real-time and/or critical data services will need the highest level of availability and redundancy, as well as the ability to rapidly failover (transfer operation to another center) when and if required. Some organizations require multiple data centers not just to handle the computer or storage capacity they use, but to provide alternate locations for operation if something should happen temporarily or permanently to one or more of their centers.
Organizations that can’t afford any downtime at all will typically operate data centers with a mirrored site that can take over if something happens to the first site, or they operate a second site in parallel with the first one. These data center topologies are called Active/Passive and Active/Active, respectively. Should disaster or an outage occur, disaster mode would dictate immediately moving all of the primary data center’s processing to the second data center.
While some data center topologies are spread throughout a single country or continent, others extend around the world. In practice, data transmission speeds put a cap on how far apart centers can be while still operating in parallel with the appearance of simultaneous operation. Linking two data centers located near each other — say, no more than 60 miles apart to limit data latency issues — with dark fiber (leased fiber optic cable) could enable both data centers to be operated as if they were in the same location, reducing staffing requirements yet providing immediate failover to the secondary data center if needed.
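The 60-mile figure above can be sanity-checked with a back-of-the-envelope calculation. This sketch assumes a typical fiber velocity factor of about two-thirds the speed of light in vacuum; real links add switching and serialization delay on top.

```python
# Back-of-the-envelope check of the "no more than 60 miles" figure.
C_VACUUM_KM_PER_S = 299_792      # speed of light in vacuum, km/s
FIBER_VELOCITY_FACTOR = 2 / 3    # typical velocity factor of optical fiber
MILES_TO_KM = 1.609344

def one_way_delay_ms(miles: float) -> float:
    """Propagation delay, in milliseconds, for one trip down the fiber."""
    km = miles * MILES_TO_KM
    return km / (C_VACUUM_KM_PER_S * FIBER_VELOCITY_FACTOR) * 1000

# 60 miles works out to roughly half a millisecond each way, about a
# millisecond round trip: small enough for synchronous operation.
delay_ms = one_way_delay_ms(60)
```

At that distance the added round-trip latency is on the order of a millisecond, which is why such pairs of centers can plausibly behave as one.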
This redundancy of facilities and ensured availability is of paramount importance to those needing uninterrupted data center services.
Leadership in Energy and Environmental Design (LEED) is a rating system devised by the United States Green Building Council (USGBC) for the design, construction, and operation of green buildings. Facilities can achieve ratings of certified, silver, gold, or platinum based on criteria within six categories: sustainable sites, water efficiency, energy and atmosphere, materials and resources, indoor environmental quality, and innovation and design.
Green certification has become increasingly important in data center design and operation as data centers require great amounts of electricity and often cooling water to operate. Green technologies can reduce costs for data center operation, as well as make the arrival of data centers more amenable to environmentally-conscious communities.
The ACT, Inc. data center in Iowa City, Iowa was the first data center in the U.S. to receive LEED-Platinum certification, the highest level available.
ACT Data Center exterior
ACT Data Center interior
Factors to Consider When Selecting a Data Center
There are numerous factors to consider when deciding to build or to occupy space in a data center. Aspects such as proximity to available power grids, telecommunications infrastructure, networking services, transportation lines, and emergency services can affect costs, risk, security and other factors that need to be taken into consideration.
The size of the data center will be dictated by the business requirements of the owner or tenant. A data center can occupy one room of a building, one or more floors, or an entire building. Most of the equipment is often in the form of servers mounted in 19-inch rack cabinets, which are usually placed in single rows forming corridors (so-called aisles) between them. This allows staff access to the front and rear of each cabinet. Servers differ greatly in size, from 1U servers (i.e. one “U” or “RU” rack unit measuring 44.45 millimeters or 1.75 inches), to Backblaze’s Storage Pod design that fits a 4U chassis, to large freestanding storage silos that occupy many square feet of floor space.
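The rack-unit sizes mentioned above reduce to simple arithmetic. A sketch, assuming the standard 1.75-inch rack unit; the 42U full-height cabinet in the helper is a typical industry figure, not one from this article.

```python
# Rack-unit arithmetic for the sizes mentioned above: one rack unit
# ("U" or "RU") is 1.75 inches, so a 1U server is 44.45 mm tall and a
# 4U chassis like Backblaze's Storage Pod occupies four times that.
RU_INCHES = 1.75
MM_PER_INCH = 25.4

def rack_units_to_mm(units: int) -> float:
    """Height in millimeters of a chassis occupying `units` rack units."""
    return units * RU_INCHES * MM_PER_INCH

def fits_in_rack(units_needed: int, rack_units: int = 42) -> bool:
    # 42U is a common full-height cabinet size (an assumption here,
    # not a figure from the article).
    return units_needed <= rack_units
```

For example, ten 4U Storage Pods consume 40U, leaving a couple of units free in a typical 42U cabinet for networking gear.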
Location will be one of the biggest factors to consider when selecting a data center and encompasses many other factors that should be taken into account, such as geological risks, neighboring uses, and even local flight paths. Access to suitable available power at a suitable price point is often the most critical factor and the longest lead time item, followed by broadband service availability.
With more and more data centers available providing varied levels of service and cost, the choices increase each year. Data center brokers can be employed to find a data center, just as one might use a broker for home or other commercial real estate.
Websites listing available colocation space, such as upstack.io, or entire data centers for sale or lease, are widely used. A common practice is for a customer to publish its data center requirements, and the vendors compete to provide the most attractive bid in a reverse auction.
Business and Customer Proximity
The center’s closeness to a business or organization may or may not be a factor in the site selection. The organization might wish to be close enough to manage the center or supervise the on-site staff from a nearby business location. The location of customers might be a factor, especially if data transmission speeds and latency are important, or the business or customers have regulatory, political, tax, or other considerations that dictate areas suitable or not suitable for the storage and processing of data.
Local climate is a major factor in data center design because the climatic conditions dictate what cooling technologies should be deployed. In turn this impacts uptime and the costs associated with cooling, which can total as much as 50% or more of a center’s power costs. The topology and the cost of managing a data center in a warm, humid climate will vary greatly from managing one in a cool, dry climate. Nevertheless, data centers are located in both extremely cold regions and extremely hot ones, with innovative approaches used in both extremes to maintain desired temperatures within the center.
Geographic Stability and Extreme Weather Events
A major obvious factor in locating a data center is the stability of the actual site as regards weather, seismic activity, and the likelihood of weather events such as hurricanes, as well as fire or flooding.
Backblaze’s Sacramento data center describes its location as one of the most stable geographic locations in California, outside fault zones and floodplains.
Sometimes the location of the center comes first and the facility is hardened to withstand anticipated threats, such as Equinix’s NAP of the Americas data center in Miami, one of the largest single-building data centers on the planet (six stories and 750,000 square feet), which is built 32 feet above sea level and designed to withstand category 5 hurricane winds.
Equinix “NAP of the Americas” Data Center in Miami
Most data centers don’t have the extreme protection or history of the Bahnhof data center, which is located inside the ultra-secure former nuclear bunker Pionen, in Stockholm, Sweden. It is buried 100 feet below ground inside the White Mountains and secured behind 15.7 in. thick metal doors. It prides itself on its self-described “Bond villain” ambiance.
Bahnhof Data Center under White Mountain in Stockholm
Usually, the data center owner or tenant will want to take into account the balance between cost and risk in the selection of a location, with the low-cost, low-risk quadrant obviously favored when making this compromise.
Risk mitigation also plays a strong role in pricing. The extent to which providers must implement special building techniques and operating technologies to protect the facility will affect price. When selecting a data center, organizations must make note of the data center’s certification level on the basis of regulatory requirements in the industry. These certifications can ensure that an organization is meeting necessary compliance requirements.
Electrical power usually represents the largest cost in a data center. The cost a service provider pays for power will be affected by the source of the power, the regulatory environment, the facility size and the rate concessions, if any, offered by the utility. At higher level tiers, battery, generator, and redundant power grids are a required part of the picture.
Fault tolerance and power redundancy are absolutely necessary to maintain uninterrupted data center operation. Parallel redundancy is a safeguard ensuring that an uninterruptible power supply (UPS) system is in place to provide electrical power when needed. A UPS system can be based on batteries, stored kinetic energy, or some type of generator using diesel or another fuel. If a power outage occurs, the center runs on one UPS system while another stands by as a backup.
Many data centers require the use of independent power grids, with service provided by different utility companies or services, to protect against loss of electrical service no matter what the cause. Some data centers have intentionally located themselves near national borders so that they can obtain redundant power not just from separate grids, but from separate geopolitical sources.
Higher redundancy levels required by a company will invariably lead to higher prices. If one requires high availability backed by a service-level agreement (SLA), one can expect to pay more than another company with less demanding redundancy requirements.
Stay Tuned for Part 2 of The Challenges of Opening a Data Center
That’s it for part 1 of this post. In subsequent posts, we’ll take a look at some other factors to consider when moving into a data center such as network bandwidth, cooling, and security. We’ll take a look at what is involved in moving into a new data center (including stories from Backblaze’s experiences). We’ll also investigate what it takes to keep a data center running, and some of the new technologies and trends affecting data center design and use. You can discover all posts on our blog tagged with “Data Center” by following the link https://www.backblaze.com/blog/tag/data-center/.
Some things don’t get the credit they deserve. For one of our engineers, Billy, the Locate My Computer feature is near and dear to his heart. It took him a while to build it, and it requires some regular updates, even after all these years. Billy loves the Locate My Computer feature, but really loves knowing how it’s helped customers over the years. One recent story made us decide to write a bit of a greatest hits post as an ode to one of our favorite features — Locate My Computer.
What is it?
Locate My Computer, as you’ll read in the stories below, came about because some of our users had their computers stolen and were trying to find a way to retrieve their devices. They realized that while some of their programs and services like Find My Mac were wiped, in some cases, Backblaze was still running in the background. That created the ability to use our software to figure out where the computer was contacting us from. After manually helping some of the individuals that wrote in, we decided to build it in as a feature. Little did we know the incredible stories it would lead to. We’ll get into that, but first, a little background on why the whole thing came about.
Identifying the Customer Need
“My friend’s laptop was stolen. He tracked the thief via @Backblaze for weeks & finally identified him on Facebook & Twitter. Digital 007.”
In December 2010, we saw a tweet from @DigitalRoyalty which read: “My friend’s laptop was stolen. He tracked the thief via @Backblaze for weeks & finally identified him on Facebook & Twitter. Digital 007.” Our CEO was manning Twitter at the time and reached out for the whole story. It turns out that Mat Miller had his laptop stolen, and while he was creating some restores a few days later, he noticed a new user was created on his computer and was backing up data. He restored some of those files, saw some information that could help identify the thief, and filed a police report. Read the whole story: Digital 007 — Outwitting The Thief.
Following Mat Miller’s story, we heard from Mark Bao, an 18-year-old entrepreneur and student at Bentley University who had his laptop stolen. The laptop was stolen out of Mark’s dorm room, and the thief started using it in a variety of ways, including audition practice for Dancing with the Stars. Once Mark logged in to Backblaze and saw that new files were being uploaded, including a dance practice video, he was able to reach out to campus police and got his laptop back. You can read more about the story at: 18 Year Old Catches Thief Using Backblaze.
After Mat and Mark’s stories, we thought we were onto something. In addition to those stories that had garnered some media attention, we would occasionally get requests from users that said something along the lines of, “Hey, my laptop was stolen, but I had Backblaze installed. Could you please let me know if it’s still running, and if so, what the IP address is so that I can go to the authorities?” We would help them where we could, but knew that there was probably a much more efficient method of helping individuals and businesses keep track of their computers.
Some of the Greatest Hits, and the Mafia Story
In May of 2011, we launched “Locate My Computer.” This was our way of adding a feature to our already-popular backup client that would allow users to see a rough representation of where their computer was located, and the IP address associated with its last known transmission. After speaking to law enforcement, we learned that those two things were usually enough for the authorities to subpoena an ISP and get the physical address of the last known place the computer phoned home from. From there, they could investigate and, if the device was still there, return it to its rightful owner.
Once the feature went live the stories got even more interesting. Almost immediately after we launched Locate My Computer, we were contacted by Bridgette, who told us of a break-in at her house. Luckily no one was home at the time, but the thief was able to get away with her iMac, DSLR, and a few other prized possessions. As soon as she reported the robbery to the police, they were able to use the Locate My Computer feature to find the thief’s location and recover her missing items. We even made a case study out of Bridgette’s experience. You can read it at: Backblaze And The Stolen iMac.
The crazy recovery stories didn’t end there. Shortly after Bridgette’s story, we received an email from a user (“Joe” — to protect the innocent) who was traveling to Argentina from the United States and had his laptop stolen. After he contacted the police department in Buenos Aires, and explained to them that he was using Backblaze (which the authorities thought was a computer tracking service, and in this case, we were), they were able to get the location of the computer from an ISP in Argentina. When they went to investigate, they realized that the perpetrators were foreign nationals connected to the mafia, and that in addition to a handful of stolen laptops, they were also in the possession of over $1,000,000 in counterfeit currency! Read the whole story about “Joe” and how: Backblaze Found $1 Million in Counterfeit Cash!
The Maker
After “Joe,” we thought that our part in high-profile busts was over, but we were wrong. About a year later we received word from a “maker” who told us that he was able to act as an “internet super-sleuth” and worked hard to find his stolen computer. After a Maker Faire in Detroit, the maker’s car was broken into while they were getting BBQ following a successful show. While some of the computers were locked and encrypted, others were in hibernation mode and wide open to prying eyes. After the police report was filed, the maker went to Backblaze to retrieve his lost files and remembered seeing the little Locate My Computer button. That’s when the story gets really interesting. The victim used a combination of ingenuity, Craigslist, Backblaze, and the local police department to get his computer back, and make a drug bust along the way. Head over to Makezine.com to read about how: How Tracking Down My Stolen Computer Triggered a Drug Bust.
While we kept hearing praise and thanks from our customers who were able to recover their data and find their computers, a little while passed before we would hear a story that was as incredible as the ones above. In July of 2016, we received an email from Una who told us one of the most amazing stories of perseverance that we’d ever heard. With the help of Backblaze and a sympathetic constable in Australia, Una tracked her stolen computer’s journey across 6 countries. She got her computer back and we wrote up the whole story: How Una Found Her Stolen Laptop.
And the Hits Keep on Coming
The most recent story came from “J,” and we’ll share the whole thing with you because it has a really nice conclusion:
Back in September of 2017, I brought my laptop to work to finish up some administrative work before I took off for a vacation. I work in a mall where traffic [is] plenty and more specifically I work at a kiosk in the middle of the mall. This allows for a high amount of traffic passing by every few seconds. I turned my back for about a minute to put away some paperwork. At the time I didn’t notice my laptop missing. About an hour later when I was gathering my belongings for the day I noticed it was gone. I was devastated. This was a high end MacBook Pro that I just purchased. So we are not talking about a little bit of money here. This was a major investment.
Time [went] on. When I got back from my vacation I reached out to my LP (Loss Prevention) team to get images from our security to submit to the police with some thread of hope that they would find whomever stole it. December approached and I did not hear anything. I gave up hope and assumed that the laptop was scrapped. I put an iCloud lock on it and my Find My Mac feature was saying that laptop was “offline.” I just assumed that they opened it, saw it was locked, and tried to scrap it for parts.
Towards the end of January I got an email from Backblaze saying that the computer was successfully backed up. This came as a shock to me as I thought it was wiped. But I guess however they wiped it didn’t remove Backblaze from the SSD. None the less, I was very happy. I sifted through the backup and found the person’s name via the search history. Then, using the Locate my Computer feature I saw where it came online. I reached out on social media to the person in question and updated the police. I finally got ahold of the person who stated she bought it online a few weeks backs. We made arrangements and I’m happy to say that I am typing this email on my computer right now.
J finished by writing: “Not only did I want to share this story with you but also wanted to say thanks! Apple’s find my computer system failed. The police failed to find it. But Backblaze saved the day. This has been the best $5 a month I have ever spent. Not only that but I got all my stuff back. Which made the deal even better! It was like it was never gone.”
Have a Story of Your Own?
We’re more than thrilled to have helped all of these people restore their lost data using Backblaze. Recovering the actual machine using Locate My Computer though, that’s the icing on the cake. We’re proud of what we’ve been able to build here at Backblaze, and we really enjoy hearing stories from people who have used our service to successfully get back up and running, whether that meant restoring their data or recovering their actual computer.
If you have any interesting data recovery or computer recovery stories that you’d like to share with us, please email email@example.com and we’ll share it with Billy and the rest of the Backblaze team. We love hearing them!
With tens of millions of active users a day, uTorrent has long been the most widely used torrent client.
The software has been around for well over a decade and is still used to shift petabytes of data day after day. While there haven’t been many feature updates lately, parent company BitTorrent Inc. was recently alerted to a serious security vulnerability.
The security flaw in question was reported by Google vulnerability researcher Tavis Ormandy, who first reached out to BitTorrent in November last year. Google’s Project Zero allows developers a 90-day window to address security flaws but with this deadline creeping up, BitTorrent had remained quiet.
Late last month Ormandy again reached out to BitTorrent Inc’s Bram Cohen, fearing that the company might not fix the vulnerability in time.
“I don’t think bittorrent are going to make a 90 day disclosure deadline, do you have any direct contacts who could help? I’m not convinced they understand the severity or urgency,” Ormandy wrote on Twitter.
While Google’s security researcher might have expected a swifter response, the issue wasn’t ignored.
BitTorrent Inc has yet to fix the problem in the stable release, but a patch was deployed in the Beta version last week. BitTorrent’s Vice President of Engineering David Rees informed us that this will be promoted to the regular release this week, if all goes well.
While no specific details about the vulnerability have been released yet, it is likely to be a remote code execution flaw. Ormandy previously exposed a similar vulnerability in Transmission, which he said was the “first of a few remote code execution flaws in various popular torrent clients.”
BitTorrent Inc. told us that they have shared their patch with Ormandy, who confirmed that this fixes the security issues.
uTorrent Beta release notes
“We have also sent the build to Tavis and he has confirmed that it addresses all the security issues he reported,” Rees told us. “Since we have not promoted this build to stable, I will reserve reporting on the details of the security issue and its fix for now.”
BitTorrent Inc. plans to release more details about the issue when all clients are patched. Then it will also recommend users to upgrade their clients, so they are no longer at risk, and further information will also be available on Google’s Project Zero site.
Of course, people who are concerned about the issue can upgrade to the latest uTorrent Beta release right away. Or, assuming that it’s related to the client’s remote control functionality, disable that for now.
Note: uTorrent’s Beta changelog states that the fixes were applied on January 15, but we believe that this should read February 15 instead.
While pirated Hollywood blockbusters often score the big headlines, there are several other industries that have been battling with piracy over the years. This includes sports organizations.
Many of the major US leagues, including the NBA, NFL, NHL, MLB and the Tennis Association, are joining forces in the Sports Coalition to try and curb the availability of pirated streams and videos.
A few days ago the Sports Coalition put the piracy problem on the agenda of the United States Trade Representative (USTR).
“Sports organizations, including Sports Coalition members, are heavily affected by live sports telecast piracy, including the unauthorized live retransmission of sports telecasts over the Internet,” the Sports Coalition wrote.
“The Internet piracy of live sports telecasts is not only a persistent problem, but also a global one, often involving bad actors in more than one nation.”
The USTR asked the public for comments on which countries play a central role in copyright infringement issues. In its response, the Sports Coalition stresses that piracy is a global issue but singles out several nations as particularly problematic.
The coalition recommends that the USTR should put the Netherlands and Switzerland on the “Priority Watch List” of its 2018 Special 301 Report, followed by Russia, Saudi Arabia, Seychelles and Sweden, which get a regular “Watch List” recommendation.
The main problem with these countries is that hosting providers and content distribution networks don’t do enough to curb piracy.
In the Netherlands, sawlive.tv, strikezoneme, wizlnet, AltusHost, Host Palace, Quasi Networks and SNEL either pirated content or provided services contributing to sports piracy, the coalition writes. In Switzerland, mlbstreamme, robinwidgetorg, strikeoutmobi, BlackHOST, Private Layer and Solar Communications are doing the same.
According to the major sports leagues, the US Government should encourage these countries to step up their anti-piracy game. This is not only important for US copyright holders, but also for licensees in other countries.
“Clearly, there is common ground – both in terms of shared economic interests and legal obligations to protect and enforce intellectual property and related rights – for the United States and the nations with which it engages in international trade to work cooperatively to stop Internet piracy of sports programming.”
Whether any of these countries will make it into the USTR’s final list remains to be seen. For Switzerland it wouldn’t be the first time, but for the Netherlands it would be new, although it has been considered before.
A document we received through a FOIA request earlier this year revealed that the US Embassy reached out to the Dutch Government in the past, to discuss similar complaints from the Sports Coalition.
The same document also revealed that local anti-piracy group BREIN consistently urged the entertainment industries it represents not to advocate placing the Netherlands on the 301 Watch List but to solve the problems behind the scenes instead.
Security updates have been issued by Arch Linux (dnsmasq, libmupdf, mupdf, mupdf-gl, mupdf-tools, and zathura-pdf-mupdf), CentOS (kernel), Debian (smarty3, thunderbird, and unbound), Fedora (bind, bind-dyndb-ldap, coreutils, curl, dnsmasq, dnsperf, gcab, java-1.8.0-openjdk, libxml2, mongodb, poco, rubygem-rack-protection, transmission, unbound, and wireshark), Red Hat (collectd, erlang, and openstack-nova), SUSE (bind), and Ubuntu (clamav and webkit2gtk).
Security updates have been issued by Arch Linux (bind, irssi, nrpe, perl-xml-libxml, and transmission-cli), CentOS (java-1.8.0-openjdk), Debian (awstats, libgd2, mysql-5.5, rsync, smarty3, and transmission), Fedora (keycloak-httpd-client-install and rootsh), and Red Hat (java-1.7.0-oracle and java-1.8.0-oracle).
Security updates have been issued by CentOS (linux-firmware and microcode_ctl), Fedora (icecat and transmission), Oracle (java-1.8.0-openjdk and microcode_ctl), Red Hat (java-1.8.0-openjdk), Scientific Linux (java-1.8.0-openjdk), Slackware (bind), SUSE (kernel), and Ubuntu (eglibc).
Security updates have been issued by Debian (bind9, wordpress, and xbmc), Fedora (awstats, docker, gifsicle, irssi, microcode_ctl, mupdf, nasm, osc, osc-source_validator, and php), Gentoo (newsbeuter, poppler, and rsync), Mageia (gifsicle), Red Hat (linux-firmware and microcode_ctl), Scientific Linux (linux-firmware and microcode_ctl), SUSE (kernel and openssl), and Ubuntu (bind9, eglibc, glibc, and transmission).
With millions of active users, Transmission is one of the most used BitTorrent clients around, particularly for Mac users.
The application has been around for more than a decade and has a great reputation. However, as with any other type of software, it is not immune to vulnerabilities.
One rather concerning flaw was made public by Google vulnerability researcher Tavis Ormandy a few days ago. The flaw allows outsiders to gain access to Transmission via DNS rebinding. This ultimately allows attackers to control the BitTorrent client and execute custom code.
Ormandy has published a patch, which was also shared with the private Transmission security list at the end of November. Transmission, however, has yet to address the issue in an update.
The relatively slow response was the reason why Ormandy decided to make it public before Project Zero’s usual 90-day window expired, Ars highlights. This allows other projects to address the vulnerability right away.
“I’m finding it frustrating that the transmission developers are not responding on their private security list,” Google’s vulnerability researcher writes. “I’ve never had an opensource project take this long to fix a vulnerability before, so I usually don’t even mention the 90 day limit if the vulnerability is in an open source project.”
A member of the Transmission developer team informed Ars that they will address this ASAP, noting that the issue only affects users who have remote control enabled with the default password. This means that people who disable it or change their password can easily ‘patch’ it until the official update comes out.
Interestingly, this isn’t the last BitTorrent related vulnerability Ormandy plans to expose. According to one of his tweets on the matter, this is just the “first of a few remote code execution flaws in various popular torrent clients.”
Judging from a message the researcher sent late November, uTorrent is on the list as well. Apparently, the company’s security email address wasn’t set up correctly at the time, so BitTorrent inventor Bram Cohen has been acting as a forwarding service.
While MQTT 5 is a major update to the existing protocol specification, the new version of the lightweight IoT protocol is more of an evolution than a revolution, and it retains all the characteristics that contributed to its success: its light weight, push communication, unique feature set, ease of use, extreme scalability, suitability for mobile networks, and decoupling of communication participants.
Although some foundational mechanics were added or changed slightly, the new version still feels like MQTT and sticks to the principles that made it the most popular Internet of Things protocol to date. This blog post covers everything you need to know about the foundational changes in version 5 of the MQTT specification, before we dig deep into the details of the new features over the next weeks.
MQTT is still MQTT. Mostly.
The good news: If you’re familiar with MQTT 3.1.1 (if you aren’t, start here and read the MQTT Essentials first!), then all the principles and features you know about MQTT are still valid for MQTT 5. Some details of existing features like Last Will and Testament changed a bit, some features were extended, and popular features implemented by HiveMQ, such as TTL and Shared Subscriptions, were added to the new specification.
The protocol also changed slightly on the wire, and an additional control packet (AUTH) was added. But all in all, MQTT version 5 is still clearly recognizable as MQTT.
Properties in the MQTT Header & Reason Codes
One of the most exciting and most flexible new MQTT 5 features is the possibility to add custom key-value properties in the MQTT header. This feature deserves its own blog post as it is a game changer for many deployments. Similar to protocols like HTTP, MQTT clients and brokers can add an arbitrary number of custom (or pre-defined) headers to carry metadata. This metadata can be used for application specific data. Pre-defined headers are used for the implementation of most of the new MQTT features.
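To make the wire layout concrete, here is a minimal sketch in plain Python (no MQTT library) of how a single User Property is encoded. Per the MQTT 5 specification, it is the property identifier 0x26 followed by a UTF-8 String Pair, i.e. two length-prefixed UTF-8 strings; the function names are our own.

```python
import struct

def encode_utf8_string(s: str) -> bytes:
    """MQTT UTF-8 Encoded String: 2-byte big-endian length prefix + bytes."""
    data = s.encode("utf-8")
    return struct.pack(">H", len(data)) + data

def encode_user_property(key: str, value: str) -> bytes:
    """User Property: identifier 0x26 followed by a UTF-8 String Pair."""
    return b"\x26" + encode_utf8_string(key) + encode_utf8_string(value)

# A client could attach e.g. a trace id as metadata to a PUBLISH packet:
prop = encode_user_property("trace-id", "abc-123")
```

In practice an MQTT 5 client library handles this encoding for you; the sketch only shows why the feature is cheap on the wire.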
Many MQTT packets now also include Reason Codes. A Reason Code indicates that a pre-defined protocol error occurred. These reason codes are typically carried in acknowledgement packets and allow the client and broker to interpret error conditions (and potentially work around them). The Reason Codes are sometimes called Negative Acknowledgements. The following MQTT packets can carry Reason Codes: CONNACK, PUBACK, PUBREC, PUBREL, PUBCOMP, SUBACK, UNSUBACK, DISCONNECT, and AUTH.
Reason Codes for negative acknowledgements range from “Quota Exceeded” to “Protocol Error”. Clients and brokers are responsible for interpreting these new Reason Codes.
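A few Reason Code values are fixed by the MQTT 5 specification, and one convenient property is that all codes of 0x80 and above indicate an error. A small illustrative sketch (the dict and helper are our own, only the numeric values come from the spec):

```python
# A few Reason Codes taken from the MQTT 5 specification.
REASON_CODES = {
    0x00: "Success",
    0x80: "Unspecified error",
    0x82: "Protocol Error",
    0x87: "Not authorized",
    0x97: "Quota exceeded",
}

def is_error(reason_code: int) -> bool:
    # Per the spec, codes of 0x80 and above signal an error condition.
    return reason_code >= 0x80
```

This is why Reason Codes work as "negative acknowledgements": a single comparison tells the receiver whether the operation succeeded.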
CONNACK Return Codes for unsupported features
With the popularity of MQTT, many MQTT implementations have been created and offered by companies. Not all of these implementations are fully compatible with the MQTT specification, since features like QoS 2, retained messages, or persistent sessions are sometimes not implemented. On a side note, HiveMQ is of course fully conformant to the MQTT specification and supports all features.
MQTT 5 provides a way for incomplete MQTT implementations (as often found in SaaS offerings) to indicate that the broker does not support specific features. It’s the client’s job to make sure that none of the unsupported features are used. The broker implementation uses pre-defined headers in the CONNACK packet (which is sent by the broker after the client sent a CONNECT packet) to indicate that specific features are not supported. These headers can of course also be used to send a notice to the client that it has no permission to use specific features.
The following pre-defined headers for indicating unimplemented features (or features the client is not permitted to use) are available in MQTT 5:
Whether retained messages are available
The maximum QoS the client is allowed to use for publishing messages or subscribing to topics
Whether wildcards can be used for topic subscriptions
Whether Subscription Identifiers are available for the MQTT client
These return codes are a major step forward for communicating the permissions of individual MQTT clients in heterogeneous environments. The downside of this new functionality is that MQTT clients need to implement the interpretation of these codes themselves and need to make sure that application programmers don’t use features that are not supported by the broker (or the client has no permission for).
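To sketch what this client-side responsibility might look like: below is a purely illustrative guard (the dict keys and function are hypothetical, not from any real library) that refuses publishes using features the broker announced as unsupported in its CONNACK.

```python
# Hypothetical parsed CONNACK properties. The property names follow the
# MQTT 5 specification; the dict layout is purely illustrative.
connack_props = {
    "retain_available": 0,        # broker does not support retained messages
    "maximum_qos": 1,             # broker caps publishes at QoS 1
    "wildcard_subscription_available": 1,
    "subscription_identifiers_available": 1,
}

def check_publish(qos: int, retain: bool, props: dict) -> None:
    """Client-side guard: reject features the broker declared unsupported.
    Absent properties default to 'supported', as the spec prescribes."""
    if retain and not props.get("retain_available", 1):
        raise ValueError("broker does not support retained messages")
    if qos > props.get("maximum_qos", 2):
        raise ValueError("broker only supports QoS <= %d" % props["maximum_qos"])

check_publish(qos=1, retain=False, props=connack_props)  # accepted
```

A real client library would run such checks internally before sending the packet, so the application programmer gets an immediate error instead of a protocol violation.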
HiveMQ is going to support all MQTT 5 features 100%, so these custom headers are only expected to be used if the administrator chooses to use them for permissions in deployments.
Clean Session is now Clean Start
A popular MQTT 3.1.1 functionality is the use of clean sessions by MQTT clients that have temporary connections or don’t subscribe to messages at all. When connecting to the broker, the client chose to send a CONNECT packet with the cleanSession flag enabled or disabled. With a clean session, a MQTT client indicates that the broker should discard any data for the client as soon as the underlying TCP connection breaks or the client decides to disconnect from the broker. Also, if there was a previous session associated with the client identifier on the broker, a cleanSession CONNECT packet forced the broker to delete the previous data.
With MQTT 5, a client can choose to use a Clean Start (indicated by the Clean Start flag in the CONNECT message). When using this flag, the broker discards any previous session data and the client starts with a fresh session. The session won’t be cleaned automatically after the TCP connection was closed between client and server. To trigger the deletion of the session after the client disconnected, a new header field called “Session Expiry Interval” must be set to the value 0.
The new Clean Start streamlines and simplifies the session handling of MQTT, as it allows more flexibility and is easier to implement than the cleanSession/persistent-session concept. With MQTT 5 all sessions are persistent unless the “Session Expiry Interval” is 0. Deletion of a session occurs after the timeout or when the client reconnects with Clean Start.
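As a concrete illustration of the new header field: per the MQTT 5 specification, the Session Expiry Interval is property identifier 0x11 followed by a Four Byte Integer of seconds. A minimal plain-Python sketch (function name is our own):

```python
import struct

SESSION_EXPIRY_INTERVAL = 0x11  # property identifier from the MQTT 5 spec

def encode_session_expiry(seconds: int) -> bytes:
    """Session Expiry Interval: identifier byte + Four Byte Integer."""
    return struct.pack(">BI", SESSION_EXPIRY_INTERVAL, seconds)

# 0 means "delete the session when the network connection closes",
# i.e. the old MQTT 3.1.1 clean-session behaviour:
ephemeral = encode_session_expiry(0)

# Keep session state for one hour after disconnect:
durable = encode_session_expiry(3600)
```

The expiry value, rather than a single flag, is what gives MQTT 5 its finer-grained session lifecycle.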
Additional MQTT Packet
MQTT 5 introduces a new MQTT Packet: The AUTH packet. This new packet is extremely useful for implementing non-trivial authentication mechanisms and we expect that this packet will be used a lot in production environments. The exact semantics are covered in a future blog post.
For the moment, it’s important to realize that this new packet can be sent by brokers and clients after connection establishment to use complex challenge/response authentication methods (like SCRAM or Kerberos, as defined in the SASL framework), but it can also be used for state-of-the-art authentication methods for IoT like OAuth. This packet also allows re-authentication of MQTT clients without closing the connection. Stay tuned for a deep dive into the packet very soon!
New Data Type: UTF-8 String pairs
The advent of custom headers also required a new data type to be introduced: UTF-8 String Pairs. This string pair is essentially a key-value structure with both key and value encoded as UTF-8 strings. This data type is currently only used for custom headers.
With this new data type, MQTT uses 7 different data types on the wire:
Byte
Two Byte Integer
Four Byte Integer
UTF-8 Encoded String
Variable Byte Integer
Binary Data
UTF-8 String Pair
Most application developers typically use Binary Data and UTF-8 Encoded Strings in the APIs of their MQTT library. With MQTT 5, UTF-8 String Pairs may also be used frequently. All other data types are hidden from the user; they are used on the wire by MQTT client libraries and brokers to craft valid MQTT packets.
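Of these, the Variable Byte Integer is the least ordinary: per the MQTT specification it packs 7 data bits per byte, least significant group first, with the high bit set on every byte except the last (at most 4 bytes). A small sketch in plain Python:

```python
def encode_variable_byte_integer(value: int) -> bytes:
    """MQTT Variable Byte Integer: 7 data bits per byte, least significant
    group first; bit 7 marks 'more bytes follow' (maximum of 4 bytes)."""
    if not 0 <= value <= 268_435_455:  # spec maximum: 0xFFFFFFF
        raise ValueError("out of range for a Variable Byte Integer")
    out = bytearray()
    while True:
        byte, value = value % 128, value // 128
        if value > 0:
            out.append(byte | 0x80)  # continuation bit set
        else:
            out.append(byte)
            return bytes(out)
```

Small values (the common case, e.g. packet lengths) therefore cost only one byte on the wire.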
Bi-directional DISCONNECT packets
With MQTT 3.1.1, the client could indicate that it wanted to disconnect gracefully by sending a DISCONNECT packet prior to closing the underlying TCP connection. There was no way for the MQTT broker to notify a MQTT client that something bad happened and that the broker is going to close the TCP connection. This changed with the new protocol version.
The broker is now allowed to send a MQTT DISCONNECT packet prior to closing the socket, and the client is able to interpret the reason why it was disconnected and take appropriate action. A broker is not required to indicate the exact reason (e.g. for security reasons), but at least during application development this helps a lot in figuring out why a connection was closed by the broker.
Of course DISCONNECT packets can carry Reason Codes, so it’s easy to indicate the reason for the disconnection (e.g. in case of invalid permissions).
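A client can use the broker’s DISCONNECT Reason Code to decide what to do next. The following handler is purely illustrative (the function is our own; only the numeric codes come from the MQTT 5 specification):

```python
# A few DISCONNECT Reason Codes from the MQTT 5 specification.
DISCONNECT_REASONS = {
    0x00: "Normal disconnection",
    0x8B: "Server shutting down",
    0x8E: "Session taken over",
    0x93: "Receive Maximum exceeded",
}

def on_broker_disconnect(reason_code: int) -> str:
    """Illustrative client-side handler: log the broker's DISCONNECT
    reason and choose a follow-up action."""
    reason = DISCONNECT_REASONS.get(reason_code, "Unknown (0x%02x)" % reason_code)
    if reason_code == 0x8E:
        # Another client connected with our client id; reconnecting would
        # just make the two clients fight over the session.
        return "disconnected: %s; not reconnecting" % reason
    return "disconnected: %s; scheduling reconnect" % reason
```

With MQTT 3.1.1, a client in the same situation saw only a dropped socket and had to guess.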
No retry for QoS 1 and 2 messages
MQTT clients use standing TCP connections (or similar protocols with the same guarantees) as the underlying transport. A healthy TCP connection provides bi-directional connectivity with exactly-once and in-order delivery guarantees, so all MQTT packets sent by clients or brokers arrive on the other end. If the TCP connection breaks while a message is in flight, QoS 1 and 2 provide message delivery guarantees across multiple TCP connections.
MQTT 3.1.1 allowed the re-delivery of MQTT messages while the TCP connection was healthy. In practice, this is a very bad idea, since struggling MQTT clients may get overloaded even more. Imagine a MQTT client that receives a message from a MQTT broker and needs 11 seconds to process it (acknowledging the packet only after processing). Now imagine the broker retransmits the message after a 10-second timeout. There is no advantage to this approach; it just wastes precious bandwidth and overloads the MQTT client even more.
With MQTT 5, brokers and clients are not allowed to retransmit MQTT messages over a healthy TCP connection. They must, however, re-send unacknowledged packets after the TCP connection was closed. So the QoS 1 and 2 guarantees are as strong as with MQTT 3.1.1.
In case you rely on retransmission packets in your use case (e.g. because your implementation does not acknowledge packets in certain cases), we suggest reconsidering this decision before upgrading to MQTT 5.
Using passwords without usernames
MQTT 3.1.1 required the MQTT client to send a username whenever it used a password in the CONNECT packet. For certain use cases this was very inconvenient when there was no username. A good example is the use of OAuth, where a JSON web token is the only authentication and authorization information. When using such a token with MQTT 3.1.1, static usernames were frequently used, as the only relevant information was in the password field.
While there are more elegant ways in MQTT 5 to carry tokens (e.g. via the AUTH packet), it is still possible to utilize the password field of the CONNECT packet. Now users can just use the password field and don’t need to fill out the username anymore.
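On the wire, this boils down to the CONNECT flags byte: the User Name Flag (bit 7) and the Password Flag (bit 6) are now independent, so a password can travel without a username. A plain-Python sketch of assembling that byte (the function is our own; the bit positions come from the MQTT spec):

```python
def connect_flags(username: bool, password: bool, clean_start: bool) -> int:
    """Assemble the CONNECT flags byte (bit positions per the MQTT spec)."""
    flags = 0
    if username:
        flags |= 0x80  # User Name Flag, bit 7
    if password:
        flags |= 0x40  # Password Flag, bit 6
    if clean_start:
        flags |= 0x02  # Clean Start (Clean Session in 3.1.1), bit 1
    return flags

# MQTT 5 allows a password (e.g. a JSON web token) with no username at all:
token_only = connect_flags(username=False, password=True, clean_start=True)
```

Under MQTT 3.1.1 a server could reject a packet with the Password Flag set but the User Name Flag clear; MQTT 5 makes that combination legal.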
Although the MQTT protocol stays basically the same, there are some small changes under the hood which are enablers for many of the new features in version 5 of the popular IoT protocol. As a user of MQTT libraries, most of these changes are good to know but do not affect how MQTT is used. Developers of MQTT libraries and brokers need to take care of the changes, especially the protocol nitty-gritty. There were also some small changes in the specified behavior (e.g. retransmission of messages), so it’s good to revisit deployment design decisions when upgrading to MQTT 5.
What do you like most about the foundational changes in MQTT 5? Let us know in the comments!
We’re pleased to start the year off the right way, with an update to Backblaze Cloud Backup, version 5.2! This is a smaller release, but does increase backup speeds, optimizes the backup client, and addresses a few minor bugs that we’re excited to lay to rest.
Increased transmission speed of files between 30MB and 400MB+.
Optimized indexing to decrease system resource usage and lower the performance impact on computers that are backing up to Backblaze.
Adjusted external hard drive monitoring and increased the speed of indexing.
In broad terms, there are two types of unauthorized online streaming of live TV. The first is via open-access websites where users can view for free. The second features premium services to which viewers are required to subscribe.
Usually available for a few dollars, euros, or pounds per month, the latter are gaining traction all around the world. Service levels are relatively high and the majority of illicit packages offer a dazzling array of programming, often putting official providers in the shade.
For this reason, commercial IPTV providers are considered a huge threat to broadcasters’ business models, since they offer a broadly comparable and accessible service at a much cheaper price. This is driving companies such as US giant Dish Network to court, seeking relief.
Following on from a lawsuit filed last year against Kodi add-on ZemTV and TVAddons.ag, Dish has just filed two more lawsuits targeting a pair of unauthorized pirate IPTV services.
Filed in Maryland and Texas respectively, the actions are broadly similar, with the former targeting a provider known as Spider-TV.
The suit, filed against Dima Furniture Inc. and Mohammad Yusif (individually and collectively doing business as Spider-TV), claims that the defendants are “capturing broadcasts of television channels exclusively licensed to DISH and are unlawfully retransmitting these channels over the Internet to their customers throughout the United States, 24 hours per day, 7 days per week.”
Dish claims that the defendants profit from the scheme by selling set-top boxes along with subscriptions, charging around $199 per device loaded with 13 months of service.
Dima Furniture is a Maryland corporation, registered at Takoma Park, Maryland 20912, an address that is listed on the Spider-TV website. The connection between the defendants is further supported by FCC references which identify Spider devices in the market. Mohammad Yusif is claimed to be the president, executive director, general manager, and sole shareholder of Dima Furniture.
Dish describes itself as the fourth largest pay-television provider in the United States, delivering copyrighted programming to millions of subscribers nationwide by means of satellite delivery and over-the-top services. Dish has acquired the rights to do this, the defendants have not, the broadcaster states.
“Defendants capture live broadcast signals of the Protected Channels, transcode these signals into a format useful for streaming over the Internet, transfer the transcoded content to one or more servers provided, controlled, and maintained by Defendants, and then transmit the Protected Channels to users of the Service through OTT delivery, including users in the United States,” the lawsuit reads.
It’s claimed that in July 2015, Yusif registered Spider-TV as a trade name of Dima Furniture with the Department of Assessments and Taxation Charter Division, describing the business as “Television Channel Installation”. Since then, the defendants have been illegally retransmitting Dish channels to customers in the United States.
The overall offer from Spider-TV appears to be considerable, with a claimed 1,300 channels from major regions including the US, Canada, UK, Europe, Middle East, and Africa.
Importantly, Dish states that the defendants know their activities are illegal, since the provider has sent at least 32 infringement notices since January 20, 2017 demanding an end to the unauthorized retransmission of its channels. It went on to send even more to the defendants’ ISPs.
“DISH and Networks sent at least thirty-three additional notices requesting the removal of infringing content to Internet service providers associated with the Service from February 16, 2017 to the filing of this Complaint. Upon information and belief, at least some of these notices were forwarded to Defendants,” the lawsuit reads.
But while Dish says that the takedowns responded to by the ISPs were initially successful, the defendants took evasive action by transmitting the targeted channels from other locations.
Describing the defendants’ actions as “willful, malicious, intentional [and] purposeful”, Dish is suing for Direct Copyright Infringement, demanding a permanent injunction preventing the promotion and provision of the service plus statutory damages of $150,000 per registered work. The final amount isn’t specified but the numbers are potentially enormous. In addition, Dish demands attorneys’ fees, costs, and the seizure of all infringing articles.
The second lawsuit, filed in Texas, is broadly similar. It targets Mo’ Ayad Al
Zayed Trading Est., and Mo’ Ayad Fawzi Al Zayed (individually and collectively doing business as Tiger International Company), and Shenzhen Tiger Star Electronical Co., Ltd, otherwise known as Shenzhen Tiger Star.
Dish claims that these defendants also illegally capture and retransmit channels to customers in the United States. Delivery is via IPTV boxes costing up to $179, a price that includes one year’s service.
In common with the Maryland case, Dish says it sent almost two dozen takedown notices to ISPs utilized by the defendants. These were also countered by the unauthorized service retransmitting Dish channels from other servers.
The biggest difference between the Maryland and Texas cases is that while Yusif/Spider/Dima Furniture are said to be in the US, Zayed is said to reside in Amman, Jordan, and Tiger Star is registered in Shenzhen, China. However, since the unauthorized service is targeted at customers in Texas, Dish states that the Texas court has jurisdiction.
Again, Dish is suing for Direct Infringement, demanding damages, costs, and a permanent injunction.
Over the past couple of decades, piracy of live TV has broadly taken two forms: the first relies on breaking broadcaster encryption (such as card sharing and hacked set-top boxes), while the second covers the more recent developments of P2P and IPTV-style transmission.
With the former under pressure and P2P systems such as Sopcast and AceTorrent moving along in the background, streaming from servers is now the next big thing, whether that’s for free via third-party Kodi plugins or for a small fee from premium IPTV providers.
Of course, copyright holders don’t like any of this usage, but commercial IPTV providers, with their for-profit model, have a particularly big target on their backs. More evidence of this was revealed recently when UK-based IPTV service ACE TV announced it was taking action to avoid problems in the country.
In a message to prospective and existing customers, ACE TV said that potential legal issues were behind its decision to accept no new customers while locking down its service.
“It saddens me to announce this, but due to pressure from the authorities in the UK, we are no longer selling new subscriptions. This obviously includes trials,” the announcement reads.
Noting that it would take new orders for just 24 hours more, ACE TV insisted that it wasn’t shutting down but would lock down the service while closing its Facebook presence.
TF sources and unconfirmed rumors online suggest that the Federation Against Copyright Theft and its partner the Premier League are involved. However, ACE TV didn’t respond to TorrentFreak’s request for comment, so we’re unable to confirm or deny the allegations.
That being said, even if the threats came directly from the police, it’s likely that the approach would’ve been initially prompted by companies connected to FACT, since the anti-piracy outfit often puts forward names of services for investigation on behalf of its partners.
Perhaps surprisingly, ACE TV is legally incorporated in the UK as Ace Hosting Limited, a fact it makes clear on its website. While easy to find, the company’s registered address is shared by dozens of other companies, indicating a mail forwarding operation rather than a location where servers or staff can be found.
This proxy location may well be the reason the company feels emboldened to carry on some level of service rather than shutting down completely, but its legal basis for doing so is interesting at best, precarious at worst.
“This website, any content contained herein and any contract brought into being as a result of usage of this website are governed by and construed in accordance with English Law,” ACE TV’s website reads.
“The parties to any such contract agree to submit to the exclusive jurisdiction of the courts of England and Wales. All contracts are concluded in English.”
It seems likely that ACE TV has been threatened under UK law, since that’s where it’s incorporated. That would explain why it’s concerned about UK authorities and their potential effect on the business. On the other hand, however, the service claims to operate entirely legally, but under the laws of the United States. It even has a repeat infringer policy.
“Ace Hosting operates as an intermediary to cache and deliver content hosted by others at the instruction of our subscribers. We cannot remove content hosted by others,” the company says.
“As an intermediary, we are entitled to rely upon (among other things) the DMCA safe harbor available to system caching service providers and we maintain policies and procedures to terminate subscribers that would be considered repeat infringers under the DMCA.”
Whether the notices on the site were drafted on the advice of a legal professional or merely exist to present an air of authenticity is unclear, but it’s precarious for a service of this nature to rely solely on conduit status to avoid liability.
Marketing, prior conduct, and overall intent play a major role in such cases and when all of that is aired in the cold light of day, the situation can look very different to a judge, particularly in the UK, where no similar cases have been successfully defended to date.
Backblaze works quietly and continuously in the background to keep you backed up, but you can ask Backblaze to immediately check whether anything needs backing up by holding down the Alt key and clicking on the Restore Options button in the Backblaze client.
Manage and Restore Your Backed Up Files
You Can Share Files You’ve Backed Up
You can share files with anyone directly from your Backblaze account.
You have a choice of how to receive your data from Backblaze. You can download individual files, download a ZIP of the files you choose, or request that your data be shipped to you anywhere in the world via FedEx.
Put Your Account on Hold for Six Months
As long as your account is current, all the data you’ve backed up is maintained for up to six months if you’re traveling or not using your computer and don’t connect to our servers. (For active accounts, data is maintained up to 30 days.)
Groups Make Managing Business or Family Members Easy
For businesses, families, or organizations, our Groups feature makes it easy to manage billing, group membership, and individual user access to files and accounts — all at no incremental charge.
You Can Browse and Restore Previous Versions of a File
Visit the View/Restore Files page to go back in time to earlier or deleted versions of your files.
You can (and should) protect your Backblaze account with two-factor verification. You can use backup codes and SMS verification in case you lose access to your smartphone and the authentication app. Sign in to your account to set that up.
Add Additional Security to Your Data
All transmissions of your data between your system and our servers are encrypted. For extra account security, you can add an optional private encryption key (PEK) to the data on our servers. Just be sure to remember your encryption key, because it’s required to restore your data.
Our auto-threading feature adjusts Backblaze’s CPU usage to give you the best upload speeds, but for those of you who like to tinker, the Backblaze client on Windows and Macintosh lets you fine-tune the number of threads our client is using to upload your files to our data centers.
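To see why adding upload threads can shorten backup times, here is a minimal, generic sketch in Python. It is not Backblaze’s implementation; the `upload` function, chunk sizes, and 4-thread count are all illustrative assumptions, with network latency simulated by a short sleep. The point is that when each transfer spends most of its time waiting on the network, overlapping several transfers cuts wall-clock time.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def upload(chunk):
    # Hypothetical upload: simulate per-chunk network latency.
    # A real client would transmit the bytes to a server here.
    time.sleep(0.01)
    return len(chunk)

chunks = [b"x" * 1024] * 20  # 20 chunks of 1 KiB each

# Single-threaded: chunks are sent one after another.
start = time.monotonic()
total_serial = sum(upload(c) for c in chunks)
serial = time.monotonic() - start

# Multi-threaded: the waits overlap, so total wall-clock time drops.
start = time.monotonic()
with ThreadPoolExecutor(max_workers=4) as pool:
    total_parallel = sum(pool.map(upload, chunks))
parallel = time.monotonic() - start

# Both approaches move the same amount of data.
assert total_serial == total_parallel == 20 * 1024
```

Past a certain thread count the network link, not latency, becomes the bottleneck, which is why manual tuning is offered for those who want to experiment.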
Use the Backblaze Downloader To Get Your Restores Faster
If you are downloading a large ZIP restore, we recommend that you use the Backblaze Downloader application for Macintosh or Windows for maximum speed.