The key point is that fans everywhere are going nuts in anticipation, so much so that various local governments in Mexico have agreed to hold public screenings for free, including in football stadiums and public squares.
“Fans of the series are crazy to see the new episode of Dragon Ball Super and have already organized events around the country as if it were a boxing match,” local media reports.
For example, Remberto Estrada, the municipal president of Benito Juárez, Quintana Roo, confirmed that the episode will be aired at the Cultural Center of the Arts in Cancun. The mayor of Ciudad Juarez says that a viewing will go ahead at the Plaza de la Mexicanidad with giant screens and cosplay contests on the sidelines.
Many local government Twitter accounts sent out official invitations, like the one shown below.
But despite all the preparations, there is a big problem. According to reports, no group or organization has the rights to show Dragon Ball Super in public in Mexico, a fact confirmed by Toei Animation, the company behind the show.
“To the viewers and fans of Dragon Ball. We have become aware of the plans to exhibit episode #130 of our Dragon Ball Super series in stadiums, plazas, and public places throughout Latin America,” the company said in an official announcement.
“Toei Animation has not authorized these public shows and does not support or sponsor any of these events nor do we or any of our titles endorse any institution exhibiting the unauthorized episode.
“In an effort to support copyright laws, to protect the work of thousands of persons and many labor sectors, we request that you please enjoy our titles at the official platforms and broadcasters and not support illegal screenings that incite piracy.”
Armando Cabada, mayor of Ciudad Juarez, Chihuahua, was one of the first municipal officials to offer support to the episode 130 movement. He believes that since the events are non-profit, they can go ahead, but others have indicated their screenings will only proceed if they can obtain the necessary permission.
Crunchyroll, the US video-streaming company that holds some Dragon Ball Super rights, is reportedly trying to communicate with the establishments and organizations planning to host the events to ensure that everything remains legal and above board. At this stage, however, there’s no indication that any agreements have been reached or whether they’re simply getting in touch to deliver a warning.
One region that has already confirmed its event won’t go ahead is Mexico City. The head of the local government there told disappointed fans that since they can’t get permission from Toei, the whole thing has been canceled.
What will happen in the other locations on Saturday night if licenses haven’t been obtained is anyone’s guess, but the prospect of thousands of disappointed fans in multiple locations raises the potential for the kind of battle the Mexican authorities could well do without, even if Dragon Ball Super thrives on them.
BitTorrent users today have several basic ways to download content. The most popular is via a dedicated torrent client installed on a Windows, Linux, Android or similar operating system at home.
While this kind of activity is necessarily ‘local’, power users over the years have turned to systems that enable them to download and share potentially huge quantities of data.
Essentially computer servers running torrent client software in remote locations, these so-called ‘seedboxes’ became a must-have for anyone looking to stand out in the torrent world as a sharing sensation.
While seedboxes are widespread, the companies selling access to them haven’t really generated much noise publicly over the years. However, this week an announcement from one of the longer-standing companies caught our attention. Eight years after its founding, popular provider SeedStuff.ca has decided to exit the seedbox business.
“We originally opened in 2010, however we have seen an ever changing climate in the industry and as new technologies emerge and people shift to more conventional means of file sharing our services have seen a steady decline over the past few years,” the company said in a statement published on its website.
“At this time, it simply is no longer viable to continue offering the services we do.”
Considering BitTorrent itself made its mark as a disruptive technology, it’s interesting that a company like SeedStuff would have its business disrupted by other file-sharing methods. So, we asked the provider a little more about its history and its ultimate decision to close down.
“We started from the backroom IRC channels on 56k connections, so torrents have always been a blessing,” a spokesperson said.
“Between 2005 and 2010, I think the rise of ‘Private’ trackers really started to make the scene shine. You were able to find and connect with the content you wanted as well as the communities of people who shared interests as well.
“The private trackers gamified seeding and rewarded their best members, this is what really paved the way for seedboxes. The users felt a need to compete and often did not have access to the means to do so, but could contract these machines out to help them succeed. The demand for seedboxes started in about 2010, which I think you will see coincided with a huge spike of private tracker activity.”
SeedStuff says its initial aim was to improve user experience by not following the decision by many existing providers to “stuff as many users as possible” into each server. Restricting each unit to a maximum of four users and accepting just a small profit on each, the service grew while gaining support from customers.
“At our peak, we serviced over 4000 customers per month. Our total email database was well over 10,000 customer accounts. We did not monitor bandwidth or user activities as we felt this to be intrusive. We only dealt with server providers who offered unlimited bandwidth so that we were able to allow for the best user experience without limits,” the company explains.
But after several years of growth, SeedStuff noticed a change. In addition to suffering a painful database crash caused by a host and a failed backup regime, in 2015 the company observed a shift in user patterns.
“We noticed around this time that streaming services had started to become mainstream in almost every home and people were simply not using our services anymore. The main cancellation reason for the last three years has been ‘Not needed anymore’,” SeedStuff notes.
“I think torrenting developed for many reasons including ease of use, availability and cost to access media. Many of these issues have been improved by current systems so there is no need for consumers to use torrents for half their content, but we aren’t there yet and the industry seems to be dialing it back again.”
SeedStuff believes that while there will be a steady decline in torrent usage, the protocol will remain relevant for a long time to come. It could even enjoy a resurgence if distribution companies restrict availability or require multiple accounts to access all content.
“If a customer needs dedicated Netflix, HBO, CBS and Hulu accounts to access the shows they want, they might see these costs as too much compared to a decent torrenting connection,” the company says.
Of course, market changes can always have an effect on a company’s direction, but SeedStuff says that in addition to tackling a myriad of technical issues, in the end there were also problems with team members migrating to other areas.
“Some of our team also moved on to new projects and started new companies which are now more exciting to them. Everything compounded and eventually led us to split and go our separate ways. We just wanted to thank everyone who remained a customer through the years and are sorry we had to shut down,” the company concludes.
While there are plenty of other seedbox providers around, it seems fairly clear that things aren’t what they used to be, with streaming and other technologies all helping to disrupt the market. SeedStuff points towards IPFS as yet another potential torrent disrupter of the future. Time will tell.
This is part two of a series on the factors that an organization needs to consider when opening a data center and the challenges that must be met in the process.
In Part 1 of this series, we looked at the different types of data centers, the importance of location in planning a data center, data center certification, and the single most expensive factor in running a data center, power.
In Part 2, we continue to look at factors that need to be considered both by those interested in a dedicated data center and those seeking to colocate in an existing center.
In Part 1, we began our discussion of the power requirements of data centers.
As we discussed, redundancy and failover are chief requirements for data center power. A redundantly designed power supply system is also a necessity for maintenance, as it enables repairs to be performed on one network, for example, without having to turn off servers, databases, or electrical equipment.
The common critical components of a data center’s power flow are utility supply, generators, transfer switches, distribution panels, uninterruptible power supplies (UPS), and power distribution units (PDU). Each is described below.
Utility Supply is the power that comes from one or more utility grids. While most of us consider the grid to be our primary power supply (hats off to those of you who manage to live off the grid), politics, economics, and distribution make utility supply power susceptible to outages, which is why data centers must have autonomous power available to maintain availability.
Generators are used to supply power when the utility supply is unavailable. They convert mechanical energy, usually from diesel- or gas-powered engines, into electrical energy.
Transfer Switches are used to transfer electric load from one source or electrical device to another, such as from one utility line to another, from a generator to a utility, or between generators. The transfer could be manually activated or automatic to ensure continuous electrical power.
Distribution Panels get the power where it needs to go, taking a power feed and dividing it into separate circuits to supply multiple loads.
A UPS, as we touched on earlier, ensures that continuous power is available even when the main power source isn’t. It often consists of batteries that can come online almost instantaneously when the current power ceases. The power from a UPS does not have to last a long time as it is considered an emergency measure until the main power source can be restored. Another function of the UPS is to filter and stabilize the power from the main power supply.
Data center UPSs
PDU stands for Power Distribution Unit, the device that distributes power to the individual pieces of equipment.
After power, the networking connections to the data center are of prime importance. Can the data center obtain and maintain high-speed networking connections to the building? With networking, as with all aspects of a data center, availability is a primary consideration. Data center designers think of all possible ways service can be interrupted or lost, even briefly. Details such as the vulnerabilities in the route the network connections make from the core network (the backhaul) to the center, and where network connections enter and exit a building, must be taken into consideration in network and data center design.
Routers and switches are used to transport traffic between the servers in the data center and the core network. Just as with power, network redundancy is a prime factor in maintaining availability of data center services. Two or more upstream service providers are required to ensure that availability.
How fast a customer can transfer data to a data center is affected by: 1) the speed of the connections the data center has with the outside world, 2) the quality of the connections between the customer and the data center, and 3) the length of the route from the customer to the data center. The longer the route and the greater the number of packets that must be transferred, the more significant a role latency will play in the transfer. Latency is the delay before a transfer of data begins following an instruction for its transfer. Generally latency, not speed, will be the most significant factor in transferring data to and from a data center. Packets transferred using the TCP/IP protocol suite, the set of communications protocols used on the internet and similar computer networks, must be acknowledged when received (ACK’d), and each acknowledgment requires a communications round trip. If the data is sent in larger packets, fewer ACKs are required, so latency becomes a smaller factor in the overall network communications speed.
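To see why latency rather than raw link speed often dominates, here is a back-of-the-envelope sketch in Python. It assumes a classic 64 KB TCP window with no window scaling and ignores slow start and packet loss; under those assumptions the sender can have at most one window of data in flight per round trip, so throughput is capped at window size divided by round-trip time.

```python
# Back-of-the-envelope throughput cap for a window-based (TCP-like)
# transfer: at most one window can be "in flight" per round trip,
# so throughput <= window_size / RTT, whatever the link speed.
def max_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    bits_in_flight = window_bytes * 8
    return bits_in_flight / (rtt_ms / 1000) / 1_000_000

WINDOW = 64 * 1024  # assumed 64 KB window, no window scaling
for rtt_ms in (5, 30, 100):  # roughly: same metro, cross-country, intercontinental
    print(f"RTT {rtt_ms:3d} ms -> cap of {max_throughput_mbps(WINDOW, rtt_ms):7.1f} Mbps")
```

With a 100 ms round trip, the cap works out to roughly 5 Mbps no matter how fast the underlying links are, which is why shorter routes and larger packets (and windows) matter so much.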
Those interested in testing the overall speed and latency of their connection to Backblaze’s data centers can use the Check Your Bandwidth tool on our website.
Data center telecommunications equipment
Data center under floor cable runs
Computer, networking, and power generation equipment generates heat, and there are a number of solutions employed to rid a data center of that heat. The location and climate of the data center are of great importance to the data center designer because climatic conditions dictate to a large degree which cooling technologies should be deployed, which in turn affects the power used and the cost of that power. The power required and the cost of managing a data center in a warm, humid climate will vary greatly from those of one in a cool, dry climate. Innovation is strong in this area, and many new approaches to efficient and cost-effective cooling are used in the latest data centers.
Switch’s uninterruptible, multi-system, HVAC Data Center Cooling Units
There are three primary ways data center cooling can be achieved:
Room Cooling cools the entire operating area of the data center. This method can be suitable for small data centers, but becomes more difficult and inefficient as IT equipment density and center size increase.
Row Cooling concentrates on cooling a data center on a row by row basis. In its simplest form, hot aisle/cold aisle data center design involves lining up server racks in alternating rows with cold air intakes facing one way and hot air exhausts facing the other. The rows composed of rack fronts are called cold aisles. Typically, cold aisles face air conditioner output ducts. The rows the heated exhausts pour into are called hot aisles. Typically, hot aisles face air conditioner return ducts.
Rack Cooling tackles cooling on a rack by rack basis. Air-conditioning units are dedicated to specific racks. This approach allows for maximum densities to be deployed per rack. This works best in data centers with fully loaded racks, otherwise there would be too much cooling capacity, and the air-conditioning losses alone could exceed the total IT load.
Data centers are high-security facilities, as they house business, government, and other data containing personal, financial, and other sensitive information about businesses and individuals.
The following are the physical security considerations when opening or colocating in a data center:
Layered Security Zones. Systems and processes are deployed to allow only authorized personnel in certain areas of the data center. Examples include keycard access, alarm systems, mantraps, secure doors, and staffed checkpoints.
Physical Barriers. Fencing, reinforced walls, and other physical barriers are used to protect facilities. In a colocation facility, one customer’s racks and servers are often inaccessible to other customers colocating in the same data center.
Backblaze racks secured in the data center
Monitoring Systems. Advanced surveillance technology monitors and records activity on approaching driveways, building entrances, exits, loading areas, and equipment areas. These systems also can be used to monitor and detect fire and water emergencies, providing early detection and notification before significant damage results.
Top-tier providers evaluate their data center security and facilities on an ongoing basis. Technology becomes outdated quickly, so providers must stay on top of new approaches and technologies in order to protect valuable IT assets.
To pass into high security areas of a data center requires passing through a security checkpoint where credentials are verified.
The gauntlet of cameras and steel bars one must pass before entering this data center
Facilities and Services
Data center colocation providers often differentiate themselves by offering value-added services. In addition to the required space, power, cooling, connectivity and security capabilities, the best solutions provide several on-site amenities. These accommodations include offices and workstations, conference rooms, and access to phones, copy machines, and office equipment.
Additional features may consist of kitchen facilities, break rooms and relaxation lounges, storage facilities for client equipment, and secure loading docks and freight elevators.
Would You Like to Know More About the Challenges of Opening and Running a Data Center?
That’s it for part 2 of this series. If readers are interested, we could write a post about some of the new technologies and trends affecting data center design and use. Please let us know in the comments.
Don’t miss future posts on data centers and other topics, including hard drive stats, cloud storage, and tips and tricks for backing up to the cloud. Use the Join button above to receive notification of future posts on our blog.
This is part one of a series. The second part will be posted later this week. Use the Join button above to receive notification of future posts in this series.
Though most of us have never set foot inside of a data center, as citizens of a data-driven world we nonetheless depend on the services that data centers provide almost as much as we depend on a reliable water supply, the electrical grid, and the highway system. Every time we send a tweet, post to Facebook, check our bank balance or credit score, watch a YouTube video, or back up a computer to the cloud we are interacting with a data center.
In this series, The Challenges of Opening a Data Center, we’ll talk in general terms about the factors that an organization needs to consider when opening a data center and the challenges that must be met in the process. Many of the factors to consider will be similar for opening a private data center or seeking space in a public data center, but we’ll assume for the sake of this discussion that our needs are more modest than requiring a data center dedicated solely to our own use (i.e. we’re not Google, Facebook, or China Telecom).
Data center technology and management are changing rapidly, with new approaches to design and operation appearing every year. This means we won’t be able to cover everything happening in the world of data centers in our series; however, we hope our brief overview proves useful.
What is a Data Center?
A data center is the structure that houses a large group of networked computer servers typically used by businesses, governments, and organizations for the remote storage, processing, or distribution of large amounts of data.
While many organizations will have computing services in the same location as their offices that support their day-to-day operations, a data center is a structure dedicated to 24/7 large-scale data processing and handling.
Depending on how you define the term, there are anywhere from a half million data centers in the world to many millions. While it’s possible to say that an organization’s on-site servers and data storage can be called a data center, in this discussion we are using the term data center to refer to facilities that are expressly dedicated to housing computer systems and associated components, such as telecommunications and storage systems. The facility might be a private center, which is owned or leased by one tenant only, or a shared data center that offers what are called “colocation services,” and rents space, services, and equipment to multiple tenants in the center.
A large, modern data center operates around the clock, placing a priority on providing secure and uninterrupted service, and generally includes redundant or backup power systems or supplies, redundant data communication connections, environmental controls, fire suppression systems, and numerous security devices. Such a center is an industrial-scale operation often using as much electricity as a small town.
Types of Data Centers
There are a number of ways to classify data centers: according to how they will be used; whether they are owned or used by one or multiple organizations; whether and how they fit into a topology of other data centers; which technologies and management approaches they use for computing, storage, cooling, power, and operations; and, increasingly visible these days, how green they are.
Data centers can be loosely classified into three types according to who owns them and who uses them.
Exclusive Data Centers are facilities wholly built, maintained, operated, and managed by the business for the optimal operation of its IT equipment. Some of these centers belong to well-known companies such as Facebook, Google, or Microsoft, while others belong to less public-facing big telecoms, insurance companies, or other service providers.
Managed Hosting Providers are data centers managed by a third party on behalf of a business. The business does not own the data center or space within it. Rather, it rents the IT equipment and infrastructure it needs instead of purchasing it outright.
Colocation Data Centers are usually large facilities built to accommodate multiple businesses within the center. The business rents its own space within the data center and subsequently fills the space with its IT equipment, or possibly uses equipment provided by the data center operator.
Backblaze, for example, doesn’t own its own data centers but colocates in data centers owned by others. As Backblaze’s storage needs grow, Backblaze increases the space it uses within a given data center and/or expands to other data centers in the same or different geographic areas.
Availability is Key
When designing or selecting a data center, an organization needs to decide what level of availability is required for its services. The type of business or service it provides likely will dictate this. Any organization that provides real-time and/or critical data services will need the highest level of availability and redundancy, as well as the ability to rapidly failover (transfer operation to another center) when and if required. Some organizations require multiple data centers not just to handle the computer or storage capacity they use, but to provide alternate locations for operation if something should happen temporarily or permanently to one or more of their centers.
Organizations that can’t afford any downtime at all will typically operate a mirrored site that can take over if something happens to the first site, or run a second site in parallel with the first. These data center topologies are called Active/Passive and Active/Active, respectively. Should disaster or an outage occur, disaster mode dictates immediately moving all of the primary data center’s processing to the second data center.
While some data center topologies are spread throughout a single country or continent, others extend around the world. In practice, data transmission speeds put a cap on how far apart centers can be while still operating in parallel with the appearance of simultaneous operation. Linking two data centers that are no more than about 60 miles apart (to limit data latency issues) with dark fiber (leased fiber optic cable) could enable both data centers to be operated as if they were in the same location, reducing staffing requirements yet providing immediate failover to the secondary data center if needed.
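The 60-mile figure is easy to sanity-check with rough arithmetic. Light in fiber travels at roughly c divided by the glass’s refractive index (assumed here to be about 1.47, giving ~204,000 km/s), and the sketch below counts propagation delay only, ignoring switching and routing overhead:

```python
# Rough round-trip propagation delay over dark fiber. Light in glass
# travels at about c / 1.47 (~204,000 km/s); real links add switching,
# amplification, and routing overhead on top of this physical floor.
C_KM_PER_S = 299_792
FIBER_KM_PER_S = C_KM_PER_S / 1.47  # assumed refractive index of ~1.47
KM_PER_MILE = 1.609

def fiber_rtt_ms(distance_km: float) -> float:
    return 2 * distance_km / FIBER_KM_PER_S * 1000

for miles in (60, 300, 1000):
    print(f"{miles:5d} mi -> ~{fiber_rtt_ms(miles * KM_PER_MILE):6.2f} ms round trip")
```

At 60 miles the round trip stays under a millisecond, close enough for two sites to behave as one; at 1,000 miles it climbs to roughly 16 ms, too much for the appearance of simultaneous operation.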
This redundancy of facilities and ensured availability is of paramount importance to those needing uninterrupted data center services.
Leadership in Energy and Environmental Design (LEED) is a rating system devised by the United States Green Building Council (USGBC) for the design, construction, and operation of green buildings. Facilities can achieve ratings of certified, silver, gold, or platinum based on criteria within six categories: sustainable sites, water efficiency, energy and atmosphere, materials and resources, indoor environmental quality, and innovation and design.
Green certification has become increasingly important in data center design and operation as data centers require great amounts of electricity and often cooling water to operate. Green technologies can reduce costs for data center operation, as well as make the arrival of data centers more palatable to environmentally conscious communities.
The ACT, Inc. data center in Iowa City, Iowa was the first data center in the U.S. to receive LEED-Platinum certification, the highest level available.
ACT Data Center exterior
ACT Data Center interior
Factors to Consider When Selecting a Data Center
There are numerous factors to consider when deciding to build or to occupy space in a data center. Aspects such as proximity to available power grids, telecommunications infrastructure, networking services, transportation lines, and emergency services can affect costs, risk, security and other factors that need to be taken into consideration.
The size of the data center will be dictated by the business requirements of the owner or tenant. A data center can occupy one room of a building, one or more floors, or an entire building. Most of the equipment is often in the form of servers mounted in 19 inch rack cabinets, which are usually placed in single rows forming corridors (so-called aisles) between them. This allows staff access to the front and rear of each cabinet. Servers differ greatly in size from 1U servers (i.e. one “U” or “RU” rack unit, measuring 44.45 millimeters or 1.75 inches), to Backblaze’s Storage Pod design that fits a 4U chassis, to large freestanding storage silos that occupy many square feet of floor space.
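As a quick, hypothetical illustration of the rack-unit arithmetic (assuming a common 42U full-height rack; rack heights vary):

```python
# Rack-unit arithmetic: 1U = 1.75 in (44.45 mm). Assuming a 42U rack,
# a 4U chassis (e.g., a Storage Pod) fits ten to a rack with 2U spare
# for switches or patch panels.
RU_INCHES = 1.75

rack_u = 42      # assumed rack height in rack units
chassis_u = 4    # a 4U chassis such as a Storage Pod

per_rack = rack_u // chassis_u
spare_u = rack_u - per_rack * chassis_u
print(f"{rack_u}U rack = {rack_u * RU_INCHES:.2f} in of mounting space: "
      f"{per_rack} x {chassis_u}U chassis with {spare_u}U spare")
```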
Location will be one of the biggest factors to consider when selecting a data center and encompasses many other factors that should be taken into account, such as geological risks, neighboring uses, and even local flight paths. Access to suitable available power at a suitable price point is often the most critical factor and the longest lead time item, followed by broadband service availability.
With more and more data centers available providing varied levels of service and cost, the choices increase each year. Data center brokers can be employed to find a data center, just as one might use a broker for home or other commercial real estate.
Websites listing available colocation space, such as upstack.io, or entire data centers for sale or lease, are widely used. A common practice is for a customer to publish its data center requirements, and the vendors compete to provide the most attractive bid in a reverse auction.
Business and Customer Proximity
The center’s closeness to a business or organization may or may not be a factor in the site selection. The organization might wish to be close enough to manage the center or supervise the on-site staff from a nearby business location. The location of customers might be a factor, especially if data transmission speeds and latency are important, or the business or customers have regulatory, political, tax, or other considerations that dictate areas suitable or not suitable for the storage and processing of data.
Local climate is a major factor in data center design because climatic conditions dictate which cooling technologies should be deployed. In turn this impacts uptime and the costs associated with cooling, which can account for 50% or more of a center’s power costs. The topology and the cost of managing a data center in a warm, humid climate will vary greatly from managing one in a cool, dry climate. Nevertheless, data centers are located in both extremely cold regions and extremely hot ones, with innovative approaches used in both extremes to maintain desired temperatures within the center.
Geographic Stability and Extreme Weather Events
A major and obvious factor in locating a data center is the stability of the actual site with regard to seismic activity, the likelihood of extreme weather events such as hurricanes, and the risk of fire or flooding.
Backblaze’s Sacramento data center describes its location as one of the most stable geographic locations in California, outside fault zones and floodplains.
Sometimes the location of the center comes first and the facility is hardened to withstand anticipated threats, such as Equinix’s NAP of the Americas data center in Miami, one of the largest single-building data centers on the planet (six stories and 750,000 square feet), which is built 32 feet above sea level and designed to withstand category 5 hurricane winds.
Equinix “NAP of the Americas” Data Center in Miami
Most data centers don’t have the extreme protection or history of the Bahnhof data center, which is located inside the ultra-secure former nuclear bunker Pionen, in Stockholm, Sweden. It is buried 100 feet below ground inside the White Mountains and secured behind 15.7 in. thick metal doors. It prides itself on its self-described “Bond villain” ambiance.
Bahnhof Data Center under White Mountain in Stockholm
Usually, the data center owner or tenant will want to take into account the balance between cost and risk in the selection of a location. The Ideal quadrant below is obviously favored when making this compromise.
Risk mitigation also plays a strong role in pricing. The extent to which providers must implement special building techniques and operating technologies to protect the facility will affect price. When selecting a data center, organizations must make note of the data center’s certification level on the basis of regulatory requirements in the industry. These certifications can ensure that an organization is meeting necessary compliance requirements.
Electrical power usually represents the largest cost in a data center. The cost a service provider pays for power will be affected by the source of the power, the regulatory environment, the facility size, and the rate concessions, if any, offered by the utility. At higher tiers, batteries, generators, and redundant power grids are a required part of the picture.
Fault tolerance and power redundancy are absolutely necessary to maintain uninterrupted data center operation. Parallel redundancy is a safeguard ensuring that an uninterruptible power supply (UPS) system is in place to provide electrical power when needed. The UPS system can be based on batteries, stored kinetic energy, or some type of generator using diesel or another fuel. The center operates on one UPS system, with another UPS system acting as a backup; if a power outage occurs, the backup system is available to take over.
Many data centers require the use of independent power grids, with service provided by different utility companies or services, to protect against loss of electrical service no matter what the cause. Some data centers have intentionally located themselves near national borders so that they can obtain redundant power not just from separate grids, but from separate geopolitical sources.
Higher redundancy levels required by a company will invariably lead to higher prices. If one requires high availability backed by a service-level agreement (SLA), one can expect to pay more than another company with less demanding redundancy requirements.
Stay Tuned for Part 2 of The Challenges of Opening a Data Center
That’s it for part 1 of this post. In subsequent posts, we’ll take a look at some other factors to consider when moving into a data center such as network bandwidth, cooling, and security. We’ll take a look at what is involved in moving into a new data center (including stories from Backblaze’s experiences). We’ll also investigate what it takes to keep a data center running, and some of the new technologies and trends affecting data center design and use. You can discover all posts on our blog tagged with “Data Center” by following the link https://www.backblaze.com/blog/tag/data-center/.
The second part of this series on The Challenges of Opening a Data Center will be posted later this week. Use the Join button above to receive notification of future posts in this series.
Big things are afoot in the world of HackSpace magazine! This month we’re running our first special issue, with wearables projects throughout the magazine. Moreover, we’re giving away our first subscription gift free to all 12-month print subscribers. Lastly, and most importantly, we’ve made the cover EXTRA SHINY!
Prepare your eyeballs — it’s HackSpace magazine issue 4!
In this issue, we’re taking an in-depth look at wearable tech. Not Fitbits or Apple Watches — we’re talking stuff you can make yourself, from projects that take a couple of hours to put together, to the huge, inspiring builds that are bringing technology to the runway. If you like wearing clothes and you like using your brain to make things better, then you’ll love this feature.
We’re continuing our obsession with Nixie tubes, with the brilliant Time-To-Go-Clock – Trump edition. This ingenious bit of kit uses obsolete Russian electronics to count down the time until the end of the 45th president’s term in office. However, you can also program it to tell the time left to any predictable event, such as the deadline for your tax return or essay submission, or the date England gets knocked out of the World Cup.
We’re also talking to Dr Lucy Rogers — NASA alumna, Robot Wars judge, and fellow of the Institution of Mechanical Engineers — about the difference between making as a hobby and as a job, and about why we need the Guild of Makers. Plus, issue 4 has a teeny boat, the most beautiful Raspberry Pi cases you’ve ever seen, and it explores the results of what happens when you put a bunch of hardware hackers together in a French chateau — sacré bleu!
As always, we’ve got more how-tos than you can shake a soldering iron at. Fittingly for the current climate here in the UK, there’s a hot water monitor, which shows you how long you have before your morning shower turns cold, and an Internet of Tea project to summon a cuppa from your kettle via the web. Perhaps not so fittingly, there’s also an ESP8266 project for monitoring a solar power station online. Readers in the southern hemisphere, we’ll leave that one for you — we haven’t seen the sun here for months!
And there’s more!
We’re super happy to say that all our 12-month print subscribers have been sent an Adafruit Circuit Playground Express with this new issue:
This gadget was developed primarily with wearables in mind and comes with all sorts of in-built functionality, so subscribers can get cracking with their latest wearable project today! If you’re not a 12-month print subscriber, you’ll miss out, so subscribe here to get your magazine and your device, and let us know what you’ll make.
This post summarizes the responses we received to our November 28 post asking our readers how they handle the challenge of digital asset management (DAM). In that post, we asked readers two questions:
How are you currently backing up your digital photos, video files, and/or file libraries/catalogs? Do you have a backup system that uses attached drives, a local network, the cloud, or offline storage media? Does it work well for you?
Imagine your ideal digital asset backup setup. What would it look like? Don’t be constrained by current products, technologies, brands, or solutions. Invent a technology or product if you wish. Describe an ideal system that would work the way you want it to.
We were thrilled to receive a large number of responses from readers. What was clear from the responses is that there is no consensus on solutions for either amateurs or professionals, and that users had many ideas for how digital media management could be improved to meet their needs.
We asked our readers to contribute to this dialog for a number of reasons. As a cloud backup and cloud storage service provider, we want to understand how our users are working with digital media so we know how to improve our services. We also want to participate in the digital media community, and hope that sharing the challenges our readers are facing and the solutions they are using will be a useful contribution.
The State of Managing Digital Media
While a few readers told us they had settled on a system that worked for them, most said that they were still looking for a better solution. Many expressed frustration with the growing amount of data for digital photos and videos, which only increases with the rising resolution of still and video cameras. Amateurs are making do with a number of consumer services, while professionals employ a wide range of commercial, open source, or jury-rigged solutions for managing data and maintaining its integrity.
I’ve summarized the responses we received in three sections: 1) what readers are doing today, 2) common wishes they have for improvements, and 3) concerns expressed by a number of respondents.
The Digital Media Workflow
Protecting Media From Camera to Cloud
We heard from a wide range of smartphone users, DSLR and other format photographers, and digital video creators. Speed of operation, the ability to share files with collaborators and clients, and product feature sets were frequently cited as reasons for selecting their particular solution. Also of great importance was protecting the integrity of media through the entire capture, transfer, editing, and backup workflow.
Avid Media Composer
Many readers said they backed up their camera memory cards as soon as possible to a computer or external drive and erased cards only when they had more than one backup of the media. Some said that they used dual memory cards that are written to simultaneously by the camera for peace of mind.
While some cameras now come equipped with Wi-Fi, no one other than smartphone users said they were using Wi-Fi as part of their workflow. Also, we didn’t receive feedback from any photographers who regularly shoot tethered.
Some readers said they still use CDs and DVDs for storing media. One user admitted to previously using VHS tape.
NAS (Network Attached Storage) is in wide use. Synology, Drobo, FreeNAS, and other RAID and non-RAID storage devices were frequently mentioned.
A number were backing up their NAS to the cloud for archiving. Others said they had duplicate external drives that were stored onsite or offsite, including in a physical safe, other business locations, a bank lock box, and even “mom’s house.”
Many said they had regular backup practices, including nightly, weekly, and other regularly scheduled backups, often run during non-work hours.
One reader said that a monthly data scrub was performed on the NAS to ensure data integrity (a minimal sketch of what such a scrub can look like appears after this list).
Hardware used for backups included Synology, QNAP, Drobo, and FreeNAS systems.
Services used by our readers for backing up included Backblaze Backup, Backblaze B2 Cloud Storage, CrashPlan, SmugMug, Amazon Glacier, Google Photos, Amazon Prime Photos, Adobe Creative Cloud, Apple Photos, Lima, Dropbox, and Tarsnap. Some readers made a distinction between how they used sync (such as Dropbox), backup (such as Backblaze Backup), and storage (such as Backblaze B2), but others did not. (See Sync vs. Backup vs. Storage on our blog for an explanation of the differences.)
Software used for backups and maintaining file integrity included Arq, Carbon Copy Cloner, ChronoSync, SoftRAID, FreeNAS, corz checksum, rclone, rsync, Apple Time Machine, Capture One, Btrfs, BorgBackup, SuperDuper, restic, Acronis True Image, custom Python scripts, and smartphone apps PhotoTransfer and PhotoSync.
Cloud torrent services mentioned were Offcloud, Bitport, and Seedr.
A common practice mentioned is to use SSDs (solid state drives) in the working computer, in attached drives, or both, to improve speed and reliability. Protection from magnetic fields was another reason given for using SSDs.
Many users copy their media to multiple attached or network drives for redundancy.
Users of Lightroom reported keeping their Lightroom catalog on a local drive and their photo files on an attached drive. They frequently had different backup schemes for the catalog and the media. Many readers are careful to have multiple backups of their Lightroom catalog. Some expressed the desire to back up both their original raw files and their edited (working) raw files, but limitations in bandwidth and backup media caused some to give priority to good backups of their raw files, since the edited files could be recreated if necessary.
A number of smartphone users reported using Apple or Google Photos to store their photos and share them.
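For readers curious what a homegrown data scrub can look like, here is a minimal sketch in Python. The archive path and manifest name are hypothetical; the script simply hashes every file under the archive root and compares the results against the manifest saved by the previous run. Real NAS scrubs (on ZFS or Btrfs, for example) verify block-level checksums and can repair from parity, which this toy version cannot.

```python
# Minimal "monthly scrub" sketch: hash every file in an archive and
# flag any file whose checksum changed since the last run (possible
# bit rot), then save the new manifest for next month's comparison.
import hashlib
import json
import pathlib

ARCHIVE_ROOT = pathlib.Path("/mnt/nas/photos")   # hypothetical mount point
MANIFEST = pathlib.Path("scrub-manifest.json")   # hypothetical manifest file

def sha256_of(path: pathlib.Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MB chunks
            digest.update(chunk)
    return digest.hexdigest()

current = {str(p): sha256_of(p) for p in ARCHIVE_ROOT.rglob("*") if p.is_file()}

if MANIFEST.exists():
    previous = json.loads(MANIFEST.read_text())
    for name, old_digest in previous.items():
        if name in current and current[name] != old_digest:
            print(f"CHECKSUM CHANGED (possible corruption): {name}")

MANIFEST.write_text(json.dumps(current, indent=2))
```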
Digital Editing and Enhancement
Adobe still rules for many users for photo editing. Some expressed interest in alternatives from Phase One, Skylum (formerly Macphun), ON1, and DxO.
While Adobe Lightroom (and Adobe Photoshop for some) are the foundation of many users’ photo media workflow, others are still looking for something that might better suit their needs. A number of comments were made regarding Adobe’s switch to a subscription model.
Software used for image and video editing and enhancement included Adobe Lightroom, Adobe Photoshop, Luminar, Affinity Photo, Phase One, DxO, ON1, GoPro Quik, Apple Aperture (discontinued), Avid Media Composer, Adobe Premiere, and Apple Final Cut Studio (discontinued) or Final Cut Pro.
Luminar 2018 DAM preview
Managing, Archiving, Adding Metadata, Searching for Media Files
While some of our respondents are casual or serious amateur digital media users, others make a living from digital photography and videography. A number of our readers report having hundreds of thousands of files and many terabytes of data — even approaching one petabyte of data for one professional who responded. Whether amateur or professional, all shared the desire to preserve their digital media assets for the future. Consequently, they want to be able to attach metadata quickly and easily, and search for and retrieve files from wherever they are stored when necessary.
It’s not surprising that metadata was of great interest to our readers. Tagging, categorizing, and maintaining searchable records is important to anyone dealing with digital media.
While Lightroom was frequently used to manage catalogs, metadata, and files, others used spreadsheets to record archive locations and grep to search their records.
Some liked the idea of Adobe’s Creative Cloud but weren’t excited about its cost and lack of choice in cloud providers.
Others reported using Photo Mechanic, DxO, digiKam, Google Photos, Daminion, Photo Supreme, Phraseanet, Phase One Media Pro, Google Picasa (discontinued), Adobe Bridge, Synology Photo Station, FotoStation, PhotoShelter, Flickr, and SmugMug.
Photo Mechanic 5
Common Wishes For Managing Digital Media in the Future
Our readers came through with numerous suggestions for how digital media management could be improved. There were a number of common themes centered around bigger and better storage, faster broadband or other ways to get data into the cloud, managing metadata, and ensuring integrity of their data.
Many wished for faster internet speeds that would make transferring and backing up files more efficient, a desire expressed multiple times. Many said that the sheer volume of digital data they worked with made cloud services and storage impractical.
A number of readers would like the option to ship files on a physical device to a cloud provider so that the initial large transfer would not take as long. Some wished to be able to send monthly physical transfers with incremental transfers sent over the internet. (Note that Backblaze supports adding data via a hardware drive to B2 Cloud Storage with our Fireball service.)
Reasonable service cost, not surprisingly, was a desire expressed by just about everyone.
Many wished for not just backup, but long-term archiving of data. One suggestion was to be able to specify the length-of-term for archiving and pay by that metric for specific sets of files.
An easy-to-use Windows, Macintosh, or Linux client was a feature that many appreciated. Some were comfortable with using third-party apps for cloud storage and others wanted a vendor-supplied client.
A number of users like the combination of NAS and cloud. Many backed up their NAS devices to the cloud. Some suggested that the NAS should be the local gateway to unlimited virtual storage in the cloud. (They should read our recent blog post on Morro Data’s CloudNAS solution.)
Some just wanted the storage problem solved. They would like the computer system to manage storage intelligently so they don’t have to. One reader said that storage should be managed and optimized by the system, as RAM is, and not by the user.
Common Concerns Expressed by our Readers
Over and over again our readers expressed similar concerns about the state of digital asset management.
Dealing with large volumes of data was a common challenge. As digital media files increase in size, readers struggle to manage the amount of data they have to deal with. As one reader wrote, “Why don’t I have an online backup of my entire library? Because it’s too much damn data!”
Many said they would back up more often, or back up even more files if they had the bandwidth or storage media to do so.
The cloud is attractive to many, but some said that they didn’t have the bandwidth to get their data into the cloud in an efficient manner, the cloud is too expensive, or they have other concerns about trusting the cloud with their data.
Most of our respondents are using Apple computer systems, some Windows, and a few Linux. A lot of the Mac users are using Time Machine. Some liked the concept of Time Machine but said they had experienced corrupted data when using it.
Visibility into the backup process was mentioned many times. Users want to know what’s happening to their data. A number said they wanted automatic integrity checks of their data and reports sent to them if anything changes.
A number of readers said they didn’t want to be locked into one vendor’s proprietary solution. They prefer open standards to prevent loss if a vendor leaves the market, changes the product, or makes a turn in strategy that they don’t wish to follow.
A number of users talked about how their practices differed depending on whether they were working in the field or working in a studio or at home. Access to the internet and data transfer speed was an issue for many.
It’s clear that people working in high resolution photography and videography are pushing the envelope for moving data between storage devices and the cloud.
Some readers expressed concern about the integrity of their stored data. They were concerned that over time, files would degrade. Some asked for tools to verify data integrity manually, or that data integrity should be monitored and reported by the storage vendor on a regular basis. The OpenZFS and Btrfs file systems were mentioned by some.
A few readers mentioned that they preferred redundant data centers for cloud storage.
Metadata is an important element for many, and making sure that metadata is easily and permanently associated with their files is essential.
The ability to share working files with collaborators or finished media with clients, friends, and family also is a common requirement.
Thank You for Your Comments and Suggestions
As a cloud backup and storage provider, we found your contributions to be of great interest. A number of readers made suggestions for how we can improve or augment our services to increase the options for digital media management. We listened and are considering your comments; they will be included in our discussions and planning for possible future services and offerings from Backblaze. We thank everyone for your contributions.
Digital media management
Let’s Keep the Conversation Going!
Were you surprised by any of the responses? Do you have something further to contribute? This is by no means the end of our exploration of how to better serve media professionals, so let’s keep the lines of communication open.
Earlier this month, the Office of the US Trade Representative (USTR) released an updated version of its “Out-of-Cycle Review of Notorious Markets,” ostensibly identifying some of the worst IP-offenders worldwide.
The annual list overview helps to guide the U.S. Government’s position towards foreign countries when it comes to copyright enforcement.
The most recent version featured traditional pirate sites such as The Pirate Bay, Rapidgator, and Gostream, but also the Russian social network VK and China-based marketplaces Alibaba and Taobao.com.
Since the list only identifies foreign sites, American services are never included. However, this restriction doesn’t apply in Europe, where the European Commission announced this week that it’s working on its own piracy watch list.
“The European Commission – on the basis of input from the stakeholders – after thorough verification of the received information – intends to publish a so called ‘Counterfeit and Piracy Watch-List’ in 2018, which will be updated regularly,” the EU’s call for submissions reads.
The EU watch list will operate in a similar fashion to the US equivalent and will be used to encourage site operators and foreign governments to take action.
“The list will identify and describe the most problematic marketplaces – with special focus on online marketplaces – in order to encourage their operators and owners as well as the responsible local authorities and governments to take the necessary actions and measures to reduce the availability of IPR infringing goods or services.”
In recent years various copyright holder groups have repeatedly complained about a lack of anti-piracy initiatives from companies such as Google and Cloudflare, so it will be interesting to see if these will be mentioned.
The same is true for online marketplaces. Responding to the US list last week, Alibaba also highlighted that several American companies suffer the same piracy and counterfeiting problems as they do, without being reprimanded.
“What about Amazon, eBay and others? USTR has no basis for comparison, because it does not ask for similar data from U.S. companies,” Alibaba noted in a rebuttal.
The EU watch list is clearly inspired by the US counterpart. It shows striking similarities with the US version of the watch list and some of the language appears to be copied (or pirated) word for word.
The EU writes, for example, that their list “will not mean to reflect findings of legal violations, nor will it reflect the European Union’s analysis of the general intellectual property rights protection and enforcement climate in the country or countries concerned.”
Just a few days earlier the USTR noted that its list “does not make findings of legal violations. Nor does it reflect the U.S. Government’s analysis of the general IP protection and enforcement climate in the countries connected with the listed markets.”
The above means that, despite branding foreign services as notorious offenders, these are mere allegations. No hard proof is to be expected in the report, nor will the EU research the matter on its own.
If the US example is followed, the watch list will be mostly an overview of copyright holder complaints, signed by the authorities. The latter is not without controversy, as China says it doubts the objectivity of USTR’s report for this very reason.
Copyright holders and other interested parties are invited to submit their contributions and comments by 31 March 2018, and the final list is expected to be released later in the year.
Following the 2012 raid on Megaupload and Kim Dotcom, U.S. and New Zealand authorities seized millions of dollars in cash and other property, located around the world.
Claiming the assets were obtained through copyright and money laundering crimes, the U.S. government launched separate civil cases in which it asked the court to forfeit bank accounts, servers, domain names, and other seized possessions of the Megaupload defendants.
One of these cases was lost after the U.S. branded Dotcom and his colleagues as “fugitives”. The defense team appealed the ruling, but lost again, and a subsequent petition at the Supreme Court was denied.
Following this lost battle, the U.S. also moved to conclude a separate civil forfeiture case, which was still pending at a federal court in Virginia.
The assets listed in this case include several bank accounts, among them several at PayPal, as well as 60 servers Megaupload bought at Leaseweb. Most symbolic, however, are the seized domain names, including Megaupload.com, Megaporn.com, and Megavideo.com.
This week a U.S. federal court decided that all claims of Kim Dotcom, his former colleague Mathias Ortman, and several Megaupload-related companies should be stricken. A default was entered against them on Tuesday.
The same fugitive disentitlement argument was used in this case. This essentially means that someone who’s considered to be a fugitive from justice is not allowed to get relief from the judicial system he or she evades.
“Claimants Kim Dotcom and Mathias Ortmann have deliberately avoided prosecution by declining to enter or reenter the United States,” Judge Liam O’Grady writes in his order to strike the claims.
“Because Claimant Kim Dotcom, who is himself a fugitive under Section 2466, is the Corporate Claimants’ controlling shareholder and, in particular, because he signed the claims on behalf of the corporations, a presumption of disentitlement applies to the corporations as well.”
As a result, the domain names, which once served 50 million users per day, are now lost to the US Government. The court records list 18 domains in total, which were registered through Godaddy, DotRegistrar, and Fabulous.
Given the legal history, the domains and other assets are likely lost for good. However, Megaupload defense lawyer Ira Rothken is not giving up yet.
“We are still evaluating the legal options in a climate where Kim Dotcom is being labeled a fugitive in a US criminal copyright case even though he has never been to the US, is merely asserting his US-NZ extradition treaty rights, and the NZ High Court has ruled that he and his co-defendants did not commit criminal copyright infringement under NZ law,” Rothken tells TorrentFreak.
There might be a possibility that assets located outside the US could be saved. Foreign courts are more open to defense arguments, it seems, as a Hong Kong court previously ordered the US to return several assets belonging to Kim Dotcom.
The Hong Kong case also brought some good news this week. At least, something that was supposed to be positive. On Twitter, Dotcom writes that two containers with seized assets were returned, but in a “rotten and destroyed” state.
“A shipment of 2 large containers just arrived in New Zealand. This is how all my stuff looks now. Rotten & destroyed. Photo: My favorite gaming chair,” Dotcom wrote.
According to Dotcom, the US Government asked him to pay for ‘climate controlled’ storage for more than half a decade to protect the seized goods. However, judging from the look of the chair and the state of some other belongings, something clearly went wrong.
For more than a year the British public has been warned about the supposed dangers of Kodi piracy.
Dozens of headlines have claimed consequences ranging from system-destroying malware to prison sentences. Fortunately, most of them can be filed under “tabloid nonsense.”
That being said, there is an extremely important issue that deserves much closer attention, particularly given a shift in the UK legal climate during 2017. We’re talking about live streaming copyrighted content on Facebook, which is both incredibly easy and frighteningly risky.
This week it was revealed that 34-year-old Craig Foster from the UK had been given an ultimatum from Sky to pay a £5,000 settlement fee. The media giant discovered that he’d live-streamed the Anthony Joshua v Wladimir Klitschko fight on Facebook and wanted compensation to make a potential court case disappear.
While it may seem initially odd to use the word, Foster was lucky.
Under last year’s Digital Economy Act, he could’ve been jailed for up to ten years for distributing copyright-infringing content to the public, if he had “reason to believe that communicating the work to the public [would] cause loss to the owner of the copyright, or [would] expose the owner of the copyright to a risk of loss.”
Clearly, as a purchaser of the £19.95 pay-per-view himself, he would’ve appreciated that the event costs money. With that in mind, a court would likely find that he was aware Sky would be exposed to a “risk of loss”. Sky claim that 4,250 people watched the stream but, as the law is written, no specific level of loss is required for a breach.
But it’s not just the threat of a jail sentence that’s the problem. People streaming live sports on Facebook are sitting ducks.
In Foster’s case, the fight he streamed was watermarked, which means that Sky put a tracking code into it which identified him personally as the buyer of the event. When he (or his friend, as Foster claims) streamed it on Facebook, it was trivial for Sky to capture the watermark and track it back to his Sky account.
Equally, it would be simplicity itself to see that the name on the Sky account had exactly the same name and details as Foster’s Facebook account. So, to most observers, it would appear that not only had Foster purchased the event, but he was also streaming it to Facebook illegally.
It’s important to keep something else in mind. No cooperation between Sky and Facebook would’ve been necessary to obtain Foster’s details. Take the amount of information most people share on Facebook, combine that with the information Sky already had, and the company’s anti-piracy team would have had a very easy job.
Now compare this situation with an upload of the same stream to a torrent site.
While the video capture would still contain Foster’s watermark, which would indicate the source, to prove he also distributed the video Sky would’ve needed to get inside a torrent swarm. From there they would need to capture the IP address of the initial seeder and take the case to court, to force an ISP to hand over that person’s details.
Presuming they were the same person, Sky would have a case, with a broadly similar level of evidence to that presented in the current matter. However, it would’ve taken them months to get their man and cost large sums of money to get there. It’s very unlikely that £5,000 would cover the costs, meaning a much, much bigger bill for the culprit.
Or, confident that Foster was behind the leak based on the watermark alone, Sky could’ve gone straight to the police. That never ends well.
The bottom line is that while live-streaming on Facebook is simplicity itself, people who do it casually from their own account (especially with watermarked content) are asking for trouble.
Nailing Foster was the piracy equivalent of shooting fish in a barrel but the worrying part is that he probably never gave his (or his friend’s…) alleged infringement a second thought. With a click or two, the fight was live and he was staring down the barrel of a potential jail sentence, had Sky not gone the civil route.
It’s scary stuff and not enough is being done to warn people of the consequences. Forget the scare stories attempting to deter people from watching fights or movies on Kodi, thoughtlessly streaming them to the public on social media is the real danger.
The Pirate Bay is arguably the most widely blocked website on the Internet.
ISPs from all over the world have been ordered by courts to prevent users from accessing the torrent site, and this week the list has grown a bit longer.
A Dutch court has ruled that local Internet providers KPN, Tele2, T-Mobile, Zeelandnet and CAIW must block the site within ten days. The verdict follows a similar decision from September last year, where Ziggo and XS4All were ordered to do the same.
The blockade applies to several IP addresses and more than 150 domain names that are used by the notorious torrent site. Several of the ISPs had warned the court about the dangers of overblocking, but these concerns were rejected.
While most Dutch customers will soon be unable to access The Pirate Bay directly, the decision is not yet final; that will come when the Supreme Court issues its pending ruling, the climax of a legal battle that started eight years ago.
A Dutch court first issued an order to block The Pirate Bay in 2012, but this decision was overturned two years later. Anti-piracy group BREIN then took the matter to the Supreme Court, which subsequently referred the case to the EU Court of Justice, seeking further clarification.
After a careful review of the case, the EU Court of Justice decided last year that The Pirate Bay can indeed be blocked.
The top EU court ruled that although The Pirate Bay’s operators don’t share anything themselves, they knowingly provide users with a platform to share copyright-infringing links. This can be seen as “an act of communication” under the EU Copyright Directive.
This put the case back to the Dutch Supreme court, which has yet to decide on the matter.
BREIN, however, wanted a blocking decision more quickly and requested preliminary injunctions, like the one issued this week. These injunctions will only be valid until the final verdict is handed down.
A copy of the most recent court order is available here (pdf).
Live TV is in massive demand but accessing all content in a particular region can be a hugely expensive proposition, with traditional broadcasting monopolies demanding large subscription fees.
For millions around the world, this ‘problem’ can be easily circumvented. Pirate IPTV operations, which supply thousands of otherwise subscription channels via the Internet, are on the increase. They’re accessible for just a few dollars, euros, or pounds per month, slashing bills versus official providers on a grand scale.
This week, however, police forces around Europe coordinated to target what they claim is one of the world’s largest illicit IPTV operations. The investigation was launched last February by Europol and on Tuesday coordinated actions were carried out in Cyprus, Bulgaria, Greece, and the Netherlands.
Three suspects were arrested in Cyprus – two in Limassol (aged 43 and 44) and one in Larnaca (aged 53). All are alleged to be part of an international operation to illegally broadcast around 1,200 channels of pirated content worldwide. Some of the channels offered were illegally sourced from Sky UK, Bein Sports, Sky Italia, and Sky DE.
If initial reports are to be believed, the reach of the IPTV service was huge. Such figures usually need to be taken with a pinch of salt, but information suggests the service had more than 500,000 subscribers, each paying around 10 euros per month. (Note: 500,000 subscribers at 10 euros each would imply around five million euros per month, so how that squares with the alleged five million euros per year in revenue is yet to be made clear.)
Police action was spread across the continent, with at least nine separate raids, including in the Netherlands where servers were uncovered. However, it was determined that these were in place to hide the true location of the operation’s main servers. Similar ‘front’ servers were also deployed in other regions.
The main servers behind the IPTV operation were located in Petrich, a small town in Blagoevgrad Province, southwestern Bulgaria. No details have been provided by the authorities but TF is informed that the website of a local ISP, Megabyte-Internet, from where pirate IPTV has been broadcast for at least the past several months, disappeared on Tuesday. It remains offline this morning.
The company did not respond to our request for comment and there’s no suggestion that it’s directly involved in any illegal activity. However, its Autonomous System (AS) number reveals linked IPTV services, none of which appear to be operational today. The ISP is also listed on sites where ‘pirate’ IPTV channel playlists are compiled by users.
According to sources in Cyprus, police requested permission from the Larnaca District Court to detain the arrested individuals for eight days. However, local news outlet Philenews said that any decision would be postponed until this morning, since one of the three suspects, an English Cypriot, required an interpreter which caused a delay.
In addition to prosecutors and defense lawyers, two Dutch investigators from Europol were present in court yesterday. The hearing lasted for six hours and was said to be so intensive that the court stenographer had to be replaced due to overwork.
Ten years ago the Internet was an entirely different place. Piracy was rampant, as it is today, but the people behind the largest torrent sites were more vocal then.
There was a battle going on for the right to freely share content online. This was very much a necessity at the time, as legal options were scarce, but for many it was also an idealistic battle.
As the spokesperson of The Pirate Bay, Peter Sunde was one of the leading voices at the time. He believed, and still does, that people should be able to share anything without restrictions. Period.
For Peter and three others associated with The Pirate Bay, this eventually resulted in jail sentences. They were not the only ones to feel the consequences. Over the past decade, dozens of torrent sites were shut down under legal pressure, forcing those operators that remain to go into hiding.
Today, ten years after we spoke to Peter about the future of torrent sites and file-sharing, we reach out to him again. A lot has changed, but how does The Pirate Bay’s co-founder look at things now?
“On the personal side, all is great, and I’m working on a TV-series about activism that will air next year. On top of that of course working on Njalla, Ipredator and other known projects,” Peter says.
“In general, I think that projects for me are still about the same thing as a decade ago, but just trying different approaches!”
While Peter stays true to his activist roots, fighting for privacy and freedom on the Internet, his outlook is not as positive as it once was.
He is proud that The Pirate Bay never caved and that they fought their cases to the end. The moral struggle was won, but he also realizes that the greater battle was lost.
“I’m proud and happy to be able to look myself in the mirror every morning with a feeling of doing right. A lot of corrupt people involved in our cases probably feel quite shitty. Well, if they have feelings,” Peter says.
The Pirate Bay’s former spokesperson doesn’t really have any regrets. The one thing that comes to mind, when we ask what he would have done differently, is that he would have told fellow Pirate Bay founder Anakata to encrypt his hard drive.
Brokep (Peter) and Anakata (Gottfrid)
Looking at the current media climate, Peter doesn’t think we are better off. On the contrary. While it might be easier in some countries to access content legally online, this also means that control is now firmly in the hands of a few major companies.
The Pirate Bay and others always encouraged free sharing for creators and consumers. This certainly hasn’t improved. Instead, media today is contained in large centralized silos.
“I’m surprised that people are so short-sighted. The ‘solution’ to file sharing was never centralizing content control back to a few entities – that was the struggle we were fighting for.
“Netflix, Spotify etc are not a solution but a loss. And it surprises me that the pirate movement is not trying to talk more about that,” he adds.
The Netflixes and Spotifies of this world are often portrayed as a solution to piracy. However, Peter sees things differently. He believes that these services put more control in the hands of powerful companies.
“The same companies we fought own these platforms. Either they own the shares in the companies, or they have deals with them which makes it impossible for these companies to not follow their rules.
“Artists can’t choose to be or not to be on Spotify in reality, because there’s nothing else in the end. If Spotify doesn’t follow the rules from these companies, they are fucked as well. The dependence is higher than ever.”
The first wave of mass Internet piracy well over a decade ago was a wake-up call to the entertainment industry. The immense popularity of torrent sites showed that people demanded something they weren’t offering.
In a way, these early pirate sites are the reason why Netflix and Spotify were able to do what they do. Literally, in the case of Spotify, which used pirated music to get the service going.
Peter doesn’t see them as the answer though. The only solution in his book is to redefine and legalize piracy.
“The solution to piracy is to re-define piracy. Make things available to everyone, without that being a crime,” Peter says.
In this regard, not much has changed in ten years. However, having witnessed this battle closer than anyone else, he also realizes that the winners are likely on the other end.
Piracy will decrease over time, but not the way Peter hopes it will.
“I think we’ll have less piracy because of the problems we see today. With net neutrality being infringed upon and more laws against individual liberties and access to culture, instead of actually benefiting people.
“The media industry will be happy to know that their lobbying efforts and bribes are paying off,” he concludes.
This is the second and final post in our torrent pioneers series. The first interview with isoHunt founder Gary Fung is available here.
The first time I came across SCADA, I thought about how much potential the platform had and how terribly clunky its implementation was. For a long time it was, to me, the very model of a conservative and egocentric system. Too closed, too expensive, and with complicated licensing, it was the complete opposite of what it was trying to be: a universal industrial platform for control and management.
Time, however, changes many things. These days there are implementations that are increasingly open, support an ever-growing number of protocols and integration standards, offer web-based user interfaces and clear, simple (per-server) licensing, store their data in databases that are easy to share with other platforms, and ship with ever more capable and varied development tools. Most importantly, they are now within reach of small and medium-sized enterprises too.
Our speakers will share their first-hand experience, both with using and integrating the platform in their own products and with deployments in their customers’ factories.
This time we are experimenting with a new format for the event: a small, limited number of guests will be able to watch the presentation live, take part in the discussion, put their questions to our speakers, and then stay on for an informal chat and networking. These will be the first people to buy a VIP pass from our website, before the seats run out.
Those who cannot attend, or who don’t manage to register in time, will be able to watch the presentation (and only the presentation) via a live stream on our new YouTube channel at https://trakia.tech/live, or later as a recording in the same place. This will of course be free, but without the chance to join the discussion and the networking session afterwards.
Everything takes place on December 11 (Monday), from 4 PM, in Plovdiv, with our kind hosts Limacon. See you there!
Since we launched the Oracle Weather Station project, we’ve collected more than six million records from our network of stations at schools and colleges around the world. Each one of these records contains data from ten separate sensors — that’s over 60 million individual weather measurements!
Weather station measurements in Oracle database
Weather data collection
Having lots of data covering a long period of time is great for spotting trends, but to do so, you need some way of visualising your measurements. We’ve always had great resources like Graphing the weather to help anyone analyse their weather data.
And from now on, it’s going to be even easier for our Oracle Weather Station owners to display and share their measurements. I’m pleased to announce a new partnership with our friends at Initial State: they are generously providing a white-label platform to which all Oracle Weather Station recipients can stream their data.
Using Initial State
Initial State makes it easy to create vibrant dashboards that show off local climate data. The service is perfect for having your Oracle Weather Station data on permanent display, for example in the school reception area or on the school’s website.
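If you’d rather script the streaming side yourself, Initial State also publishes a Python module, ISStreamer, which reduces the job to a few lines. The following is a minimal sketch, with placeholder credentials and hard-coded readings standing in for real sensor data:

from ISStreamer.Streamer import Streamer

# Placeholder credentials: use the bucket key and access key
# from your own Initial State account.
streamer = Streamer(
    bucket_name="Weather Station",
    bucket_key="YOUR_BUCKET_KEY",
    access_key="YOUR_ACCESS_KEY",
)

# Log a few example measurements; a real station would read
# these values from its sensors on a schedule.
streamer.log("temperature_c", 18.3)
streamer.log("humidity_pct", 74.0)
streamer.log("wind_direction_deg", 90)

# Flush any buffered events and close the stream.
streamer.close()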
But that’s not all: the Initial State toolkit includes a whole range of easy-to-use analysis tools for extracting trends from your data. Distribution plots and statistics are just a few clicks away!
Looks like Auntie Beryl is right — it has been a damp old year! (Humidity value distribution May–Nov 2017)
The wind direction data from my Weather Station supports my excuse as to why I’ve not managed a high-altitude balloon launch this year: to use my launch site, I need winds coming from the east, and those have been in short supply.
Chart showing wind direction over time
Initial State credentials
Every Raspberry Pi Oracle Weather Station school will shortly be receiving the credentials needed to start streaming their data to Initial State. If you’re super keen though, please email [email protected] with a photo of your Oracle Weather Station, and I’ll let you jump the queue!
The Initial State folks are big fans of Raspberry Pi and have a ton of Pi-related projects on their website. They even included shout-outs to us in the music video they made to celebrate the publication of their 50th tutorial. Can you spot their weather station?
Your home-brew weather station
If you’ve built your own Raspberry Pi–powered weather station and would like to dabble with the Initial State dashboards, you’re in luck! The team at Initial State is offering 14-day trials for everyone. For more information on Initial State, and to sign up for the trial, check out their website.
When James Puderer moved to Lima, Peru, his roadside runs left a rather nasty taste in his mouth. Hit by the pollution from old diesel cars in the area, he decided to monitor the air quality in his new city using Raspberry Pis, with the city’s abundant taxis as his tech carriers.
With the onboard tech, the device collects data on longitude, latitude, humidity, temperature, pressure, and airborne particle count, feeding it back to an Android Things datalogger. This data is then pushed to Google IoT Core, where it can be remotely accessed.
Next, the data is processed by Google Dataflow and turned into a BigQuery table. Users can then visualize the collected measurements. And while James uses Google Maps to analyse his data, there are many tools online that will allow you to organise and study your figures depending on what final result you’re hoping to achieve.
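As an illustration of the kind of question the resulting table can answer, the sketch below uses the google-cloud-bigquery Python client to average particle counts per rounded coordinate; the project, dataset, table, and column names are assumptions for the example, not taken from James’s build:

from google.cloud import bigquery

# Assumes GOOGLE_APPLICATION_CREDENTIALS points at a service
# account with read access to the dataset.
client = bigquery.Client()

# Hypothetical table and columns: average particle count per
# roughly 1 km grid cell, obtained by rounding the coordinates.
query = """
    SELECT
      ROUND(latitude, 2) AS lat_bin,
      ROUND(longitude, 2) AS lon_bin,
      AVG(particle_count) AS avg_particles
    FROM `my-project.airquality.measurements`
    GROUP BY lat_bin, lon_bin
    ORDER BY avg_particles DESC
"""

for row in client.query(query).result():
    print(row.lat_bin, row.lon_bin, row.avg_particles)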
James hopped in a taxi and took his monitor on the road, collecting results throughout the journey
James has provided the complete build process, including all tech ingredients and code, on his Hackster.io project page, and urges makers to create their own air quality monitor for their local area. He also plans on building upon the existing design by adding a 12V power hookup for connecting to the taxi, functioning lights within the sign, and companion apps for drivers.
Sensing the world around you
We’ve seen a wide variety of Raspberry Pi projects using sensors to track the world around us, such as Kasia Molga’s Human Sensor costume series, which reacts to air pollution by lighting up, and Clodagh O’Mahony’s Social Interaction Dress, which she created to judge how conversation and physical human interaction can be scored and studied.
Kasia Molga’s Human Sensor — a collection of hi-tech costumes that react to air pollution within the wearer’s environment.
Many people also build their own Pi-powered weather stations, or use the Raspberry Pi Oracle Weather Station, to measure and record conditions in their towns and cities from the roofs of schools, offices, and homes.
Have you incorporated sensors into your Raspberry Pi projects? Share your builds in the comments below or via social media by tagging us.
Google regularly removes infringing websites from its search results, but the company is also wary of abuse.
When the Canadian company Equustek Solutions asked Google to remove websites that offered unlawful and competing products, the search giant refused to do so globally.
This resulted in a legal battle that came to a climax in June, when the Supreme Court of Canada ordered Google to remove a company’s websites from its search results. Not just in Canada, but all over the world.
With options to appeal exhausted in Canada, Google took the case to a federal court in the US. The search engine requested an injunction to disarm the Canadian order, arguing that a worldwide blocking order violates the First Amendment.
Surprisingly, Equustek decided not to defend itself and, without opposition, a California District Court sided with Google yesterday.
During a hearing, Google attorney Margaret Caruso stressed that it should not be possible for foreign countries to implement measures that run contrary to core values of the United States.
The search engine argued that the Canadian order violated Section 230 of the Communications Decency Act, which immunizes Internet services from liability for content created by third parties. With this law, Congress specifically chose not to deter harmful online speech by imposing liability on Internet services.
In an order, signed shortly after the hearing, District Judge Edward Davila concludes that Google qualifies for Section 230 immunity in this case. As such, he rules that the Canadian Supreme Court’s global blocking order goes too far.
“Google is harmed because the Canadian order restricts activity that Section 230 protects. In addition, the balance of equities favors Google because the injunction would deprive it of the benefits of U.S. federal law,” Davila writes.
Rendering the order unenforceable is not just in the interest of Google, the District Court writes. It’s also best for the general public as free speech is clearly at stake here.
“Congress recognized that free speech on the internet would be severely restricted if websites were to face tort liability for hosting user-generated content. It responded by enacting Section 230, which grants broad immunity to online intermediaries,” Judge Davila writes.
“The Canadian order would eliminate Section 230 immunity for service providers that link to third-party websites. By forcing intermediaries to remove links to third-party material, the Canadian order undermines the policy goals of Section 230 and threatens free speech on the global internet.”
The preliminary injunction
The Court signed a preliminary injunction which prevents Equustek from enforcing the Canadian order in the United States, which is exactly what Google was after. Since the Canadian company chose not to represent itself in the US case, this will likely stand.
The ruling is important in the broader scheme. If foreign courts are allowed to grant worldwide blockades, free speech could be severely hampered. Today it’s a relatively unknown Canadian company, but what if the Chinese Government asked Google to block the websites of VPN providers?
Amazon Cognito user pools are full-fledged identity providers (IdPs) that you can use to maintain a user directory. The directory can scale to hundreds of millions of users, and user pools also add sign-up and sign-in support to your mobile or web applications.
In this scenario, your web app hosted on Amazon S3 integrates with Amazon Cognito User Pools to authenticate users. It uses Amazon Cognito Federated Identities to authorize access to Amazon QuickSight on behalf of the authenticated user, with temporary AWS credentials and appropriate permissions. The app then uses an ID token generated by Amazon Cognito to call API Gateway and Lambda to obtain a sign-in token for Amazon QuickSight from AWS Sign-In Federation. With this token, the app redirects access to Amazon QuickSight.
The Amazon Cognito hosted UI provided by the app integration domain performs all sign-in, sign-up, verification, and authentication logic for the web app. This allows you to register and authenticate users.
After a user is authenticated with a valid user name and password, an OpenID Connect token (ID token) is sent to Amazon Cognito Federated Identities. The token is used to retrieve temporary AWS credentials based on an IAM role with “quickSight:CreateUser” permissions. These credentials are used to build a session string that is encoded into the URL https://signin.aws.amazon.com/federation?Action=getSigninToken.
The ID token, along with the encoded URL, is sent to API Gateway, which in turn verifies the token with a user pool authorizer to authorize the API call.
The URL is passed on to a Lambda function that calls the AWS SSO federation endpoint to retrieve a sign-in token.
AWS SSO processes the federation request, authenticates the user, and forwards the authentication token to Amazon QuickSight, which then uses the authentication token and authorizes user access.
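To make the token exchange concrete, here is a minimal Python sketch of what such a Lambda function could do: package the temporary Cognito credentials as a federation session, fetch a sign-in token, and build the QuickSight redirect URL. The parameter names follow the AWS Sign-In Federation endpoint; the Issuer value is a placeholder and error handling is omitted:

import json
import urllib.parse
import urllib.request

FEDERATION_ENDPOINT = "https://signin.aws.amazon.com/federation"

def build_quicksight_url(access_key, secret_key, session_token):
    # Package the temporary credentials issued by Amazon Cognito
    # Federated Identities as a federation session document.
    session = json.dumps({
        "sessionId": access_key,
        "sessionKey": secret_key,
        "sessionToken": session_token,
    })

    # Exchange the session for a short-lived sign-in token.
    token_url = FEDERATION_ENDPOINT + "?" + urllib.parse.urlencode({
        "Action": "getSigninToken",
        "Session": session,
    })
    with urllib.request.urlopen(token_url) as resp:
        signin_token = json.loads(resp.read())["SigninToken"]

    # Build the login URL that drops the user into QuickSight.
    return FEDERATION_ENDPOINT + "?" + urllib.parse.urlencode({
        "Action": "login",
        "Issuer": "https://example.com",  # placeholder issuer
        "Destination": "https://quicksight.aws.amazon.com/",
        "SigninToken": signin_token,
    })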
How can you use, configure and test this serverless solution in your own AWS account? I created a simple SAM (Serverless Application Model) template that can be used to spin up all the resources needed for the solution.
Using the AWS CLI, create an S3 bucket in the same region in which to deploy all resources:
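aws s3 mb s3://your-deployment-bucket --region us-east-1

(The bucket name and region above are placeholders.) You can then package and deploy the template; the template file name below is an assumption, while the stack name matches the CognitoQuickSight stack referenced later:

aws cloudformation package --template-file template.yaml --s3-bucket your-deployment-bucket --output-template-file packaged.yaml
aws cloudformation deploy --template-file packaged.yaml --stack-name CognitoQuickSight --capabilities CAPABILITY_IAM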
AWS CloudFormation automatically creates and configures the following resources in your account:
Amazon CloudFront distribution
S3 static website
Amazon Cognito user pool
Amazon Cognito identity pool
IAM role for authenticated users
API Gateway API
You can follow the progress of the stack creation from the CloudFormation console. View the Outputs tab for the completed stack to get the identifiers of all created resources. You could also execute the following command with the AWS CLI:
aws cloudformation describe-stacks --query 'Stacks[].Outputs[].[OutputKey,OutputValue]' --output text --stack-name CognitoQuickSight
Use the information from the console or CLI command to replace the related resource identifiers in the file “auth.js”.
In the Amazon Cognito User Pools console, select the pool named QuickSightUsers generated by CloudFormation.
Under App integration, choose Domain name and create a domain. Domain names must be unique to the region. Add the domain to the “auth.js” file accordingly.
Choose App integration, App client settings and then select the option Cognito User Pool. Add the CloudFront distribution address (with https://, as SSL is a requirement for the callback/sign out URLs) and make sure that the address matches the related settings in the “auth.js” file exactly. For Allowed OAuth Flows, select implicit grant. For Allowed OAuth Scopes, select openid.
The app integration configuration is now done. Your “auth.js” file should now contain the resource identifiers from the stack outputs, together with the domain and CloudFront address you configured above.
(The example resources themselves no longer exist; the CloudFormation stack that generated them was deleted. I recommend that you delete your stack after testing, for cleanup purposes. Deleting the stack also deletes all the resources.)
Next, upload the four JS and HTML files to the S3 bucket named “cognitoquicksight-s3website-xxxxxxxxx”. Make sure that all files are publicly readable.
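For example, with the AWS CLI, where the --acl public-read flag makes each uploaded object publicly readable:

aws s3 cp . s3://cognitoquicksight-s3website-xxxxxxxxx/ --recursive --exclude "*" --include "*.js" --include "*.html" --acl public-read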
Congratulations, the configuration part is now finished!
It’s time to create your first user. Access your CloudFront distribution address in a browser and choose SIGN IN / SIGN UP.
On the Amazon Cognito hosted UI, choose SIGN UP and provide a user name, password and a valid email.
You receive a verification code in email to confirm the user.
In a production system, you might not want to allow open access to your dashboards. As you now have a confirmed user, you can disable the sign-up functionality altogether to avoid letting other users sign themselves up.
In the Amazon Cognito User Pools console, choose General settings, Policies and select Only allow administrators to create users.
In the web app, you can now sign in as the Amazon Cognito user to access the Amazon QuickSight console. Because this is the first time this user is accessing Amazon QuickSight with an IAM role, provide your email address and sign up as an Amazon QuickSight user.
Enjoy your federated access to Amazon QuickSight!
After you’re done testing, go to the CloudFormation console and delete the CognitoQuickSight stack to remove all the resources.
Extending and customizing the solution
Additionally, you could configure SAML federation for your user pool with a couple of clicks, following the instructions in the Amazon Cognito User Pools supports federation with SAML post. If you add more than one SAML IdP, Amazon Cognito can identify the provider to which to redirect the authentication request, based on the user’s corporate email address.
It’s important to understand that while Amazon Cognito User Pools is authenticating (AuthN) the user, the IAM role created for the identity pool is authorizing (AuthZ) the user to perform actions on specific resources. As it is configured, the role only allows “quickSight:CreateUser” permissions. For additional permissions, modify the role accordingly, as in Setting Your IAM Policy. If your users create datasets, remember to add access to data sources such as Amazon S3.
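For reference, a minimal sketch of the authenticated role’s policy, before any extra permissions are added, might look like the following; scope the resources to your own requirements:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "quickSight:CreateUser",
      "Resource": "*"
    }
  ]
}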
You can customize this solution further by adding multiple groups to your user pool and associating each group with different IAM roles. For more information, see Amazon Cognito Groups and Fine-Grained Role-Based Access Control. For instance, with role-based access control, it’s possible to have a group for Amazon QuickSight administrators and one for users.
You can also modify the sign-in URL (Step 6) to redirect your user to any other service console, provided that the IAM role has appropriate permissions. After receiving a valid sign-in token from the SSO federation endpoint, the user is redirected to https://quicksight.aws.amazon.com, but you can change the redirection to the main AWS console at https://console.aws.amazon.com, or to specific services such as Amazon Redshift, Amazon EMR, Amazon Kinesis, or AWS Lambda. For Amazon Elasticsearch Service and Kibana, the application instead needs to return an AWS Signature Version 4 (SigV4) signed URL based on the temporary credentials received from Cognito, rather than calling the AWS Sign-In Federation endpoint. You could even build a frontend portal with separate links to multiple services and resources, which the user can only reach if the assumed IAM role has access to them.
With the power, flexibility, security, scalability, and the new federation and application integration features of Amazon Cognito user pools, there’s no need to worry about the undifferentiated heavy lifting of maintaining your own identity servers. This allows you to focus on application logic and secure access to other great AWS services, such as Amazon QuickSight.
If you have questions or suggestions, please comment below.
About the Author
Ed Lima is a Solutions Architect who helps AWS customers with their journey in the cloud. He has provided thought leadership to define and drive strategic direction for the adoption of Amazon platforms and technologies, skillfully blending business requirements with technical aspects to implement well-architected solutions. In his spare time, he enjoys snowboarding.
For a country with a soaring crime rate, where car-jackings and other violent crime are reportedly commonplace, Internet piracy isn’t something that’s been high on the agenda in Peru.
Nevertheless, under pressure from rightsholders, local authorities have now taken decisive action against the country’s most popular ‘pirate’ sites.
On the orders of prosecutor Miguel Ángel Puicón, a specialized police unit carried out searches earlier this month looking for the people behind Pelis24 (Movies24) and Series24, sites that are extremely popular across all of South America, not just Peru.
Local media reports that an initial search took place in the Los Olivos district of the Lima Province where two people were arrested in connection with the sites. On the same day, a second search was executed in the town of Rimac where a third person was detained.
The case was launched following a rightsholder complaint to the Special Prosecutor’s Office for Customs Crimes and Intellectual Property in Lima. It stated that three domains – pelis24.com, pelis24.tv and series24.tv were offering unlicensed movies and TV shows to the public.
“In view of the abundant evidence, the office requested rights-restricting measures from the criminal judge. Searches of the properties were carried out and the preliminary 48-hour detention of the people under investigation was requested,” authorities said in a statement.
The warrant not only covered seizure of physical items but also the domain names associated with the platforms. As shown in the image below, they now display the following seizure banner (translated from Spanish).
Pelis24/Series24 Seizure Banner
Authorities say that a detailed preliminary investigation took place in order to corroborate the information provided by the complainant. Once the measures were approved by a judge, the Prosecutor’s Office acted in coordination with the Investigations Division of the High Technology Crimes unit to carry out the operation.
According to Puicón, this is the first action against the operators of a pirate site in Peru.
“The purpose was to have the detainees close the sites voluntarily after providing us with the login codes,” he said. “We do not have a technology department, so the specialized high-tech police and complainants were present to preserve evidence.”
Local sources indicate that sentences for piracy can be as long as six years in serious cases. Until now, however, Peru has tackled only the counterfeiting of physical discs, allowing online piracy to run rampant.
“The Office of the Prosecutor has the competency to deal with crimes against intellectual property but has been working exclusively in cases of physical piracy,” Puicón says.
“Online piracy has another connotation, we must use other procedures, another form of investigation and another strategy. Therefore, the authorities that are aware of these crimes must be trained on technological issues.”
It’s believed that at least a million Peruvians download infringing content from the Internet each week, a problem that will need to be tackled moving forward, when the authorities can gather the expertise to do so.