
Porn Producer Says He’ll Prove That AMC TV Exec is a BitTorrent Pirate

Post Syndicated from Andy original https://torrentfreak.com/porn-producer-says-hell-prove-that-amc-tv-exec-is-a-bittorrent-pirate-170818/

When people are found sharing copyrighted pornographic content online in the United States, there’s always a chance that an angry studio will attempt to track down the perpetrator in pursuit of a cash settlement.

That’s what adult studio Flava Works did recently, after finding its content being shared without permission on a number of gay-focused torrent sites. It’s now clear that their target was Marc Juris, President & General Manager of AMC-owned WE tv. Until this week, however, that information was secret.

As detailed in our report yesterday, Flava Works contacted Juris with an offer of around $97,000 to settle the case before trial. And, crucially, before Juris was publicly named in a lawsuit. If Juris decided not to pay, that amount would increase significantly, Flava Works CEO Phillip Bleicher told him at the time.

Not only did Juris not pay, he actually went on the offensive, filing a ‘John Doe’ complaint in a California district court which accused Flava Works of extortion and blackmail. It’s possible that Juris felt that this would cause Flava Works to back off but in fact, it had quite the opposite effect.

In a complaint filed this week in an Illinois district court, Flava Works named Juris and accused him of a broad range of copyright infringement offenses.

The complaint alleges that Juris was a signed-up member of Flava Works’ network of websites, from where he downloaded pornographic content as his subscription allowed. However, it’s claimed that Juris then uploaded this material elsewhere, in breach of copyright law.

“Defendant downloaded copyrighted videos of Flava Works as part of his paid memberships and, in violation of the terms and conditions of the paid sites, posted and distributed the aforesaid videos on other websites, including websites with peer to peer sharing and torrents technology,” the complaint reads.

“As a result of Defendant’s conduct, third parties were able to download the copyrighted videos, without permission of Flava Works.”

In addition to demanding injunctions against Juris, Flava Works asks the court for a judgment in its favor amounting to a cool $1.2m, more than twelve times the amount it was initially prepared to settle for. It’s a huge amount, but according to CEO Phillip Bleicher, it’s what his company is owed, despite Juris being a former customer.

“Juris was a member of various Flava Works websites at various times dating back to 2006. He is no longer a member and his login info has been blocked by us to prevent him from re-joining,” Bleicher informs TF.

“We allow full downloads, although each download a person performs, it tags the video with a hidden code that identifies who the user was that downloaded it and their IP info and date / time.”

We asked Bleicher how he can be sure that the content downloaded from Flava Works and re-uploaded elsewhere was actually uploaded by Juris. Fine details weren’t provided but he’s insistent that the company’s evidence holds up.

“We identified him directly, this was done by cross referencing all his IP logins with Flava Works, his email addresses he used and his usernames. We can confirm that he is/was a member of Gay-Torrents.org and Gayheaven.org. We also believe (we will find out in discovery) that he is a member of a Russian file sharing site called GayTorrent.Ru,” he says.

While the technicalities of who downloaded and shared what will be something for the court to decide, there are still Juris’ allegations that Bleicher used extortion-like practices to get him to settle, and used his relative fame against him. Bleicher says that’s not how things played out.

“[Juris] hired an attorney and they agreed to settle out of court. But then we saw him still accessing the file sharing sites (one site shows a user’s last login) and we were waiting on the settlement agreement to be drafted up by his attorney,” he explains.

“When he kept pushing the date of when we would see an agreement back we gave him a final deadline and said that after this date we would sue [him] and with all lawsuits – we make a press release.”

Bleicher says at this point Juris replaced his legal team and hired lawyer Mark Geragos, who Bleicher says tried to “bully” him, warning him of potential criminal offenses.

“Your threats in the last couple months to ‘expose’ Mr. Juris knowing he is a high profile individual, i.e., today you threatened to issue a press release, to induce him into wiring you close to $100,000 is outright extortion and subject to criminal prosecution,” Geragos wrote.

“I suggest you direct your attention to various statutes which specifically criminalize your conduct in the various jurisdictions where you have threatened suit.”

Interestingly, Geragos then went on to suggest that the lawsuit may ultimately backfire, since going public might affect Flava Works’ reputation in the gay market.

“With respect to Mr. Juris, your actions have been nothing but extortion and we reject your attempts and will vigorously pursue all available remedies against you,” Geragos’ email reads.

“We intend to use the platform you have provided to raise awareness in the LGBTQ community of this new form of digital extortion that you promote.”

But Bleicher, it seems, is up for a fight.

“Marc knows what he did and enjoyed downloading our videos and sharing them and those of videos of other studios, but now he has been caught,” he told the lawyer.

“This is the kind of case I would like to take all the way to trial, win or lose. It shows people that want to steal our copyrighted videos that we aggressively protect our intellectual property.”

But to the tune of $1.2m? Apparently so.

“We could get up to $150,000 per infringement – we have solid proof of eight full videos – not to mention we have caught [Juris] downloading many other studios’ videos too – I think – but not sure – the number was over 75,” Bleicher told TF.

It’s quite rare for this kind of dispute to play out in public, especially considering Juris’ profile and occupation. Only time will tell whether it will ultimately end in a settlement, but at this stage Bleicher and Juris seem determined to stand their ground and fight it out in court.

Complaint (pdf)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

“Public Figure” Threatened With Exposure Over Gay Piracy ‘Fine’

Post Syndicated from Andy original https://torrentfreak.com/public-figure-threatened-with-exposure-over-gay-piracy-fine-170817/

Flava Works is an Illinois-based company specializing in adult material featuring black and Latino men. It operates an aggressive anti-piracy strategy which has resulted in some large damages claims in the past.

Now, however, the company has found itself targeted by a lawsuit filed by one of its alleged victims. Filed in a California district court by an unnamed individual, it accuses Flava Works of shocking behavior relating to a claim of alleged piracy.

According to the lawsuit, ‘John Doe’ received a letter in early June from Flava Works CEO Phillip Bleicher, accusing him of Internet piracy. Titled “Settlement Demand and Cease and Desist”, the letter got straight to the point.

“Flava Works is aware that you have been ‘pirating’ the content from its website(s) for your own personal financial benefit,” the letter read.

[Update: ‘John Doe’ has now been identified as Marc Juris, President & General Manager of AMC-owned WE tv. All references to John Doe below refer to Juris. See note at footer]

As is often the case with such claims, Flava Works offered to settle with John Doe for a cash fee. However, instead of the few hundred or thousand dollars usually seen in such cases, the initial settlement amount was an astronomical $97,000. But that wasn’t all.

According to John Doe, Bleicher warned that unless the money was paid in ten days, Flava Works “would initiate litigation against [John Doe], publically accusing him of being a consumer and pirate of copyrighted gay adult entertainment.”

Amping up the pressure, Bleicher then warned that after the ten-day deadline had passed, the settlement amount of $97,000 would be withdrawn and replaced with a new amount – $525,000.

The lawsuit alleges that Bleicher followed up with more emails in which he indicated that there was still time to settle the matter “one on one” since the case hadn’t been assigned to an attorney. However, he warned John Doe that time was running out and that public exposure via a lawsuit would be the next step.

While these kinds of tactics are nothing new in copyright infringement cases, the amounts of money involved are huge, indicating something special at play. Indeed, it transpires that John Doe is a public figure in the entertainment industry and the suggestion is that Flava Works’ assessment of his “wealth and profile” means he can pay these large sums.

According to the suit, on July 6, 2017, Bleicher sent another email to John Doe which “alluded to [his] high-profile status and to the potential publicity that a lawsuit would bring.” The email went as far as threatening an imminent Flava Works press release, announcing that a public figure, who would be named, was being sued for pirating gay adult content.

Flava Works alleges that John Doe uploaded its videos to various BitTorrent sites and forums, but John Doe vigorously denies the accusations, noting that the ‘evidence’ presented by Flava Works fails to back up its claims.

“The materials do not reveal or expose infringement of any sort. [Flava Works’] real purpose in sending this ‘proof’ was to demonstrate just how humiliating it would be to defend against Flava Works’ scurrilous charges,” John Doe’s lawsuit notes.

“[Flava Works’] materials consist largely of screen shots of extremely graphic images of pornography, which [Flava Works] implies that [John Doe] has viewed — but which are completely irrelevant given that they are not Flava Works content. Nevertheless, Bleicher assured [John Doe] that these materials would all be included in a publicly filed lawsuit if he refused to accede to [Flava Works’] payment demands.”

From his lawsuit (pdf) it’s clear that John Doe is in no mood to pay Flava Works large sums of cash and he’s aggressively on the attack, describing the company’s demands as “criminal extortion.”

He concludes with a request for a declaration that he has not infringed Flava Works’ copyrights, while demanding attorneys’ fees and further relief to be determined by the court.

The big question now is whether Flava Works will follow through on its threats to expose the entertainer, or whether it will drift back into the shadows to fight another day. Definitely one to watch.

Update: Flava Works has now followed through on its threat to sue Juris. A complaint filed in an Illinois court accuses the TV executive of uploading Flava Works titles to several gay-focused torrent sites in breach of copyright. It demands $1.2m in damages.


Spinrilla Refuses to Share Its Source Code With the RIAA

Post Syndicated from Ernesto original https://torrentfreak.com/spinrilla-refuses-to-share-its-source-code-with-the-riaa-170815/

Earlier this year, a group of well-known labels targeted Spinrilla, a popular hip-hop mixtape site and accompanying app with millions of users.

The coalition of record labels including Sony Music, Warner Bros. Records, and Universal Music Group, filed a lawsuit accusing the service of alleged copyright infringements.

Both sides have started the discovery process and recently asked the court to rule on several unresolved matters. The parties begin with their statements of facts, clearly from opposite angles.

The RIAA remains confident that the mixtape site is ripping off music creators and wants its operators to be held accountable.

“Since Spinrilla launched, Defendants have facilitated millions of unauthorized downloads and streams of thousands of Plaintiffs’ sound recordings without Plaintiffs’ permission,” RIAA writes, complaining about “rampant” infringement on the site.

However, Spinrilla itself believes that the claims are overblown. The company points out that the RIAA’s complaint only lists a tiny fraction of all the songs uploaded by its users. These somehow slipped through its Audible Magic anti-piracy filter.

Where the RIAA paints a picture of rampant copyright infringement, the mixtape site stresses that the record labels are complaining about less than 0.001% of all the tracks they ever published.

“From 2013 to the present, Spinrilla users have uploaded about 1 million songs to Spinrilla’s servers and Spinrilla published about 850,000 of those. Plaintiffs are complaining that 210 of those songs are owned by them and published on Spinrilla without permission,” Spinrilla’s lawyers write.

“That means that Plaintiffs make no claim to 99.9998% of the songs on Spinrilla. Plaintiffs’ shouting of ‘rampant infringement on Spinrilla’, an accusation that Spinrilla was designed to allow easy and open access to infringing material, and assertion that ‘Defendants have facilitated millions of unauthorized downloads’ of those 210 songs is untrue – it is nothing more than a wish and a dream.”

The company reiterates that it’s a platform for independent musicians and that it doesn’t want to feature the Eminems and Biebers of this world, especially not without permission.

As for the discovery process, there are still several outstanding issues they need the Court’s advice on. Spinrilla has thus far produced 12,000 pages of documents and answered all RIAA interrogatories, but refuses to hand over certain information, including its source code.

According to Spinrilla, there is no reason for the RIAA to have access to its “crown jewel.”

“The source code is the crown jewel of any software based business, including Spinrilla. Even worse, Plaintiffs want an ‘executable’ version of Spinrilla’s source code, which would literally enable them to replicate Spinrilla’s entire website. Any Plaintiff could, in hours, delete all references to ‘Spinrilla,’ add its own brand and launch Spinrilla’s exact website.

“If we sued YouTube for hosting 210 infringing videos, would I be entitled to the source code for YouTube? There is simply no justification for Spinrilla sharing its source code with Plaintiffs,” Spinrilla adds.

The RIAA, on the other hand, argues that the source code will provide insight into several critical issues, including Spinrilla’s knowledge about infringing activity and its ability to terminate repeat copyright infringers.

In addition to the source code, the RIAA has also requested detailed information about the site’s users, including their download and streaming history. This request is too broad, the mixtape site argues, and has offered to provide information on the uploaders of the 210 infringing tracks instead.

It’s clear that the RIAA and Spinrilla disagree on various fronts and it will be up to the court to decide what information must be handed over. So far, however, the language used clearly shows that both parties are far from reaching some kind of compromise.

The first joint discovery statement is available in full here (pdf).


Controlling Millions of Potential Internet Pirates Won’t Be Easy

Post Syndicated from Andy original https://torrentfreak.com/controlling-millions-of-potential-internet-pirates-wont-be-easy-170813/

For several decades the basic shape of the piracy market hasn’t changed much. At the top of the chain there has always been a relatively small number of suppliers. At the bottom, the sprawling masses keen to consume whatever content these suppliers make available, while sharing it with everyone else.

This model held in the days of tapes and CDs and transferred nicely to the P2P file-sharing era. For nearly two decades people have been waiting for those with the latest content to dump it onto file-sharing networks. After grabbing it for themselves, people share that content with others.

For many years, the majority of the latest music, movies, and TV shows appeared online having been obtained by, and then leaked from, ‘The Scene’. However, with the rise of BitTorrent and an increase in computer skills demonstrated by the public, so-called ‘P2P release groups’ began flexing their muscles, in some cases slicing the top of the piracy pyramid.

With lower barriers to entry, P2P releasers can be almost anyone who happens to stumble across some new content. That being said, people still need the skill to package up that content and make it visible online, on torrent sites for example, without getting caught.

For most people that’s prohibitively complex, so it’s no surprise that Average Joe, perhaps comforted by the air of legitimacy, has taken to uploading music and movies to sites like YouTube instead. These days that’s nothing out of the ordinary and perhaps a little boring by piracy standards, but people still have the capacity to surprise.

This week a man from the United States, without a care in the world, obtained a login for a STARZ press portal, accessed the final three episodes of ‘Power’, and then streamed them on Facebook using nothing but a phone and an Internet connection.

From the beginning, the whole thing was ridiculous, comical even. The man in question, whose name and personal details TF obtained in a matter of minutes, revealed how he got the logins and even recorded his own face during one of the uploaded videos.

He really, really couldn’t have cared any less but he definitely should have. After news broke of the leaks, STARZ went public confirming the breach and promising to do something about it.

“The final three episodes of Power’s fourth season were leaked online due to a breach of the press screening room,” Starz said in a statement. “Starz has begun forensic investigations and will take legal action against the responsible parties.”

At this point, we should consider the magnitude of what this guy did. While we all laugh at his useless camera skills, the fact remains that he unlawfully distributed copyright works online, in advance of their commercial release. In the United States, that is a criminal offense, one that can result in a prison sentence of several years.

It would be really sad if the guy in question was made an example of since his videos suggest he hadn’t considered the consequences. After all, this wasn’t some hi-tech piracy group, just a regular guy with a login and a phone, and intent always counts for something. Nevertheless, the situation this week nicely highlights how new technology affects piracy.

In the past, the process of putting an unreleased movie or TV show online could only be tackled by people with expertise in several areas. These days a similar effect is possible with almost no skill and no effort. Joe Public, pre-release TV/movie/sports pirate, using nothing but a phone, a Facebook account, and an urge?

That’s the reality today and we won’t have to wait too long for a large scale demonstration of what can happen when millions of people with access to these ubiquitous tools have an urge to share.

In a little over two weeks’ time, boxing legend Floyd Mayweather Jr fights UFC lightweight champion, Conor McGregor. It’s set to be the richest combat sports event in history, not to mention one of the most expensive for PPV buyers. That means it’s going to be pirated to hell and back, in every way possible. It’s going to be massive.

Of course, there will be high-quality paid IPTV productions available, more grainy ‘Kodi’ streams, hundreds of web portals, and even some streaming torrents, for those that way inclined. But there will also be Average Joes in their hundreds, who will point their phones at Showtime’s PPV with the intent of live streaming the biggest show on earth to their friends, family, and the Internet. For free.

Quite how this will be combatted remains to be seen but it’s fair to say that this is a problem that’s only going to get bigger. In ten years’ time – in five years’ time – many millions of people will have the ability to become pirate releasers on a whim, despite knowing nothing about the occupation.

Like ‘Power’ guy, the majority won’t be very good at it. Equally, some will turn it into an art form. But whatever happens, tackling millions of potential pirates definitely won’t be easy for copyright holders. Twenty years in, it seems the battle for control has only just begun.


Ms. Haughs’ tote-ally awesome Raspberry Pi bag

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/pi-tote-bag/

While planning her trips to upcoming educational events, Raspberry Pi Certified Educator Amanda Haughs decided to incorporate the Pi Zero W into a rather nifty accessory.

Final Pi Tote bag

Uploaded by Amanda Haughs on 2017-07-08.

The idea

Commenting on the convenient size of the Raspberry Pi Zero W, Amanda explains on her blog: “I decided that I wanted to make something that would fully take advantage of the compact size of the Pi Zero, that was somewhat useful, and that I could take with me and share with my maker friends during my summer tech travels.”

Amanda Haughs Raspberry Pi Tote Bag

Awesome grandmothers and wearable tech are an instant recipe for success!

With access to her grandmother’s “high-tech embroidery machine”, Amanda was able to incorporate various maker skills into her project.

The Tech

Amanda used five clear white LEDs and the Raspberry Pi Zero for the project. Taking inspiration from the LED-adorned Babbage Bear her team created at Picademy, she decided to connect the LEDs using female-to-female jumper wires.

Amanda Haughs Pi Tote Bag

Poor Babbage really does suffer at Picademy events

It’s worth noting that she could also have used conductive thread, though we wonder how this slightly less flexible thread would work in a sewing machine, so don’t try this at home. Or do, but don’t blame me if it goes wonky.

Having set the LEDs in place, Amanda worked on the code. Unsure about how she wanted the LEDs to blink, she finally settled on a random pulsing of the lights, and used the GPIO Zero library to achieve the effect.

Raspberry Pi Tote Bag

Check out the GPIO Zero library for some great LED effects

The GPIO Zero pulse effect allows users to easily fade an LED in and out without the need for long strings of code. Very handy.
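
Amanda’s script isn’t reproduced in the post, but the effect she describes can be sketched in Python. The BCM pin numbers below are hypothetical, and on the Pi itself each pin would be wrapped in GPIO Zero’s PWMLED, whose pulse() method handles the fade in a background thread; the helper here only models the random timing.

```python
import random

# Hypothetical BCM pin numbers for the five LEDs. On a Raspberry Pi,
# each would become a gpiozero.PWMLED and be pulsed like so:
#
#     from gpiozero import PWMLED
#     leds = {pin: PWMLED(pin) for pin in LED_PINS}
#     leds[pin].pulse(fade_in_time=secs / 2, fade_out_time=secs / 2, n=1)
#
LED_PINS = [17, 18, 22, 23, 24]

def next_pulse(pins, rng=random):
    """Pick which LED pulses next and how long its fade should last."""
    pin = rng.choice(pins)
    secs = rng.uniform(0.5, 2.0)  # random fade duration, in seconds
    return pin, secs
```

Calling next_pulse(LED_PINS) in a loop, and handing each result to pulse(), gives the random twinkle without any long strings of timing code.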

The Bag

Inspiration for the bag’s final design came thanks to a YouTube video, and Amanda and her grandmother were able to recreate the make using their fabric of choice.

DIY Tote Bag – Beginner’s Sewing Tutorial

Learn how to make this cute tote bag. A great project for beginning seamstresses!

A small pocket was added on the outside of the bag to allow for the Raspberry Pi Zero to be snugly secured, and the pattern was stitched into the front, allowing spaces for the LEDs to pop through.

Raspberry Pi Tote Bag

Amanda shows off her bag to Philip at ISTE 2017

You can find more information on the project, including Amanda’s initial experimentation with the Sense HAT, on her blog. If you’re a maker, an educator, or (and here’s a word I’m pretty sure I’ve made up) an edumaker, be sure to keep her blog bookmarked!

Make your own wearable tech

Whether you use jumper leads, conductive thread, or paint, we’d love to see your wearable tech projects.

Getting started with wearables

To help you get started, we’ve created this Getting started with wearables free resource that allows you to get making with the Adafruit FLORA and NeoPixel. Check it out!

The post Ms. Haughs’ tote-ally awesome Raspberry Pi bag appeared first on Raspberry Pi.

Getting Your Data into the Cloud is Just the Beginning

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/cost-data-of-transfer-cloud-storage/

Total Cloud Storage Cost

Organizations should consider not just the cost of getting their data into the cloud, but also long-term costs for storage and retrieval when deciding which cloud storage solution meets their needs.

As cloud storage has become ubiquitous, organizations large and small are joining in. For larger organizations the lure of reducing capital expenses and their associated operational costs is enticing. For smaller organizations, cloud storage often replaces an unmanageable closet full of external hard drives, thumb drives, SD cards, and other devices. With terabytes or even petabytes of data, the common challenge facing organizations, large and small, is how to get their data up to the cloud.

Transferring Data to the Cloud

The obvious solution for getting your data to the cloud is to upload your data from your internal network through the internet to the cloud storage vendor you’ve selected. Cloud storage vendors don’t charge you for uploading your data to their cloud, but you, of course, have to pay your network provider and that’s where things start to get interesting. Here are a few things to consider.

  • The initial upload: Unless you are just starting out, you will have a large amount of data you want to upload to the cloud. This could be data you wish to archive, or data you have archived previously – for example, data stored on LTO tapes or on external hard drives.
  • Pipe size: This is the upload bandwidth of your network connection, measured in Mbps (megabits per second). Remember, your data is stored in MB (megabytes), so an upload connection of 80 Mbps will transfer no more than 10 MB of data per second, and most likely a lot less.
  • Cost and caps: In some places, organizations pay a flat monthly rate for a specified level of service (speed) for internet access. In other locations, internet access is metered, or pay as you go. In either case, there can be internet service caps that limit or completely stop data transfer once you reach your contracted threshold.
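
The bits-versus-bytes distinction in the “pipe size” bullet trips up a lot of capacity planning; a one-line conversion makes the best-case ceiling explicit:

```python
def mbps_to_mb_per_sec(mbps):
    """Best-case throughput in megabytes/s for a link speed in megabits/s."""
    return mbps / 8  # 8 bits per byte; protocol overhead lowers the real rate

print(mbps_to_mb_per_sec(80))  # at most 10.0 MB/s on an 80 Mbps link
```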

One or more of these challenges has the potential to make the initial upload of your data expensive and potentially impossible. You could wait until cloud storage companies start buying up internet providers and make data upload cheap (or free with Amazon Prime!), but there is another option.

Data Transfer Devices

Given the potential challenges of using your network for the initial upload of your data to the cloud, a handful of cloud storage companies have introduced data transfer or data ingest services. Backblaze has the B2 Fireball, Amazon has Snowball (and other similar devices), and Google recently introduced their Transfer Appliance.

KLRU-TV Austin PBS uploaded their Austin City Limits musical anthology series to Backblaze using a B2 Fireball.

These services work as follows:

  • The provider sends you a portable (or somewhat portable) storage device.
  • You connect the device to your network and load some amount of data on the device over your internal network connection.
  • You return the device, loaded with your data, to the provider, who uploads your data to your cloud storage account from inside their own data center.

Data Transfer Devices Save Time

Assuming your Internet connection is a flat rate service that has no caps or limits and your organizational operations can withstand the traffic, you still may want to opt to use a data transfer service to move your data to the cloud. Why? Time. For example, if your initial data upload is 100 TB here’s how long it would take using different network upload connection speeds:

Network Speed | Upload Time
10 Mbps | 3 years
100 Mbps | 124 days
500 Mbps | 25 days
1 Gbps | 12 days

This assumes you are using most of your upload connection to upload your data, which is probably not realistic if you want to stay in business. You could potentially rent a better connection or upgrade your connection permanently, both of which add to the cost of running your business.
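
The figures in the table are consistent with using roughly 75% of the line rate for the transfer; that efficiency factor is our assumption (the post doesn’t state one), but it reproduces the table within rounding:

```python
SECONDS_PER_DAY = 86_400

def upload_days(terabytes, mbps, efficiency=0.75):
    """Days needed to push `terabytes` up an `mbps` link at a fraction of line rate."""
    bits = terabytes * 1e12 * 8                  # TB -> bits (decimal units)
    seconds = bits / (mbps * 1e6 * efficiency)   # effective throughput
    return seconds / SECONDS_PER_DAY

print(round(upload_days(100, 100)))   # ~123 days at 100 Mbps
print(round(upload_days(100, 1000)))  # ~12 days at 1 Gbps
```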

Speaking of cost, there is of course a charge for the data transfer service that can be summarized as follows:

  • Backblaze B2 Fireball — up to 40 TB of data per trip for $550.00 for 30 days in use at your site.
  • Amazon Snowball — up to 50 TB of data per trip for $200.00 for 10 days use at your site, plus $15/day each day in use at your site thereafter.
  • Google Transfer Appliance — up to 100 TB of data per trip for $300.00 for 10 days use at your site, plus $10/day each day in use at your site thereafter.

These prices do not include shipping, which can range from $100 to $900 depending on shipping method, location, etc.

Both Amazon and Google have transfer devices that are larger and cost more. For comparison purposes below we’ll use the three device versions listed above.

The Real Cost of Uploading Your Data

If we stopped our review at the previous paragraph and we were prepared to load up our transfer device in 10 days or less, the clear winner would be Google. But this leaves out two very important components of any cloud storage project: the cost of storing your data and the cost of downloading your data.

Let’s look at two examples:

Example 1 — Archive 100 TB of data:

  • Use the data transfer service to move 100 TB of data to the cloud storage service.
  • Accomplish the transfer within 10 days.
  • Store that 100 TB of data for 1 year.
Service | Transfer Cost | Cloud Storage | Total
Backblaze B2 | $1,650 (3 trips) | $6,000 | $7,650
Google Cloud | $300 (1 trip) | $24,000 | $24,300
Amazon S3 | $400 (2 trips) | $25,200 | $25,600

Results:

  • Using the B2 Fireball to store data in Backblaze B2 saves you $16,650 over a one-year period versus the Google solution.
  • The payback period for using a Backblaze B2 Fireball versus a Google Transfer Appliance is less than 1 month.
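
The Example 1 totals follow directly from the transfer fees plus a flat per-GB monthly storage rate held for 12 months. B2’s $0.005/GB/month was its published rate; the Google ($0.02) and Amazon ($0.021) rates here are back-calculated from the table’s totals, so treat them as assumptions:

```python
GB_PER_TB = 1_000

def one_year_storage(tb, rate_per_gb_month):
    """Cost to hold `tb` terabytes for 12 months at a flat monthly per-GB rate."""
    return tb * GB_PER_TB * rate_per_gb_month * 12

transfer_fee = {"Backblaze B2": 1650, "Google Cloud": 300, "Amazon S3": 400}
storage_rate = {"Backblaze B2": 0.005, "Google Cloud": 0.020, "Amazon S3": 0.021}

for service, fee in transfer_fee.items():
    total = fee + one_year_storage(100, storage_rate[service])
    print(f"{service}: ${total:,.0f}")  # $7,650 / $24,300 / $25,600
```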

Example 2 — Store and use 100 TB of data:

  • Use the data transfer service to move 100 TB of data to the cloud storage service.
  • Accomplish the transfer within 10 days.
  • Store that 100 TB of data for 1 year.
  • Add 5 TB a month (on average) to the total stored.
  • Delete 2 TB a month (on average) from the total stored.
  • Download 10 TB a month (on average) from the total stored.
Service | Transfer Cost | Cloud Storage | Total
Backblaze B2 | $1,650 (3 trips) | $9,570 | $11,220
Google Cloud | $300 (1 trip) | $39,684 | $39,984
Amazon S3 | $400 (2 trips) | $36,114 | $36,514

Results:

  • Using the B2 Fireball to store data in Backblaze B2 saves you $28,764 over a one-year period versus the Google solution.
  • The payback period for using a Backblaze B2 Fireball versus a Google Transfer Appliance is less than 1 month.
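
The B2 line in Example 2 can be reconstructed by billing each month-end balance (net growth is +5 − 2 = 3 TB/month) at $0.005/GB, plus downloads at B2’s $0.02/GB. Billing on the month-end balance is our inference from the published totals, not something the post states:

```python
GB_PER_TB = 1_000

def b2_year_cost(start_tb, add_tb, delete_tb, download_tb,
                 storage_rate=0.005, download_rate=0.02):
    """One year of B2 storage plus downloads, billed on each month-end balance."""
    stored, cost = start_tb, 0.0
    for _ in range(12):
        stored += add_tb - delete_tb                     # net monthly growth
        cost += stored * GB_PER_TB * storage_rate        # storage for the month
        cost += download_tb * GB_PER_TB * download_rate  # that month's downloads
    return cost

# $9,570 in storage/download + $1,650 in Fireball trips = $11,220, as in the table
print(b2_year_cost(100, 5, 2, 10) + 1650)
```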

Notes:

  • All prices listed are based on list prices from the vendor websites as of the date of this blog post.
  • We are accomplishing the transfer of your data to the device within the 10-day “free” period specified by Amazon and Google.
  • We are comparing cloud storage services that have similar performance. For example, once the data is uploaded, it is readily available for download. The data is also available for access via a Web GUI, CLI, API, and/or various applications integrated with the cloud storage service. Multiple versions of files can be kept as desired. Files can be deleted any time.

To be fair, it takes Backblaze three trips to move 100 TB, while the Google Transfer Appliance needs only one. This adds some cost to prepare, monitor, and ship three B2 Fireballs versus one Transfer Appliance. Even with that added cost, the Backblaze B2 solution is still significantly less expensive over the one-year period and beyond.

Have a Data Transfer Device Owner

Before you run out and order a transfer device, make sure the transfer process is someone’s job once the device arrives at your organization. Filling a transfer device should only take a few days, but if it is forgotten, you’ll find you’ve had the device for 2 or 3 weeks. While that’s not much of a problem with a B2 Fireball, it could start to get expensive otherwise.

Just the Beginning

As with most “new” technologies and services, you can expect other companies to jump in and provide various data ingest services. The cost will get cheaper or even free as cloud storage companies race to capture and lock up the data you have kept locally all these years. When you are evaluating cloud storage solutions, it’s best to look past the data ingest loss-leader price, and spend a few minutes to calculate the long-term cost of storing and using your data.

The post Getting Your Data into the Cloud is Just the Beginning appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Piracy Brings a New Young Audience to Def Leppard, Guitarist Says

Post Syndicated from Andy original https://torrentfreak.com/piracy-brings-a-new-young-audience-to-def-leppard-guitarist-says-170803/

For decades the debate over piracy has raged, with bands and their recording industry paymasters on one side and large swathes of the public on the other. Throughout, however, there have been those prepared to recognize that things aren’t necessarily black and white.

Over the years, many people have argued that access to free music has helped them broaden their musical horizons, dabbling in new genres and discovering new bands. This, they argue, would have been a prohibitively expensive proposition if purchases were forced on a trial and error basis.

Of course, many labels and bands believe that piracy amounts to theft, but some are prepared to put their heads above the parapet with an opinion that doesn’t necessarily toe the party line.

Formed in 1977 in Sheffield, England, rock band Def Leppard have sold more than 100 million records worldwide and have two RIAA diamond certified albums to their name. But unlike Metallica, who have sold a total of 116 million records and famously fought to shut down Napster, Def Leppard’s attitude to piracy is entirely more friendly.

In an interview with Ultimate Classic Rock, Def Leppard guitarist Vivian Campbell has been describing why he believes piracy has its upsides, particularly for enduring bands that are still trying to broaden their horizons.

“The way the band works is quite extraordinary. In recent years, we’ve been really fortunate that we’ve seen this new surge in our popularity. For the most part, that’s fueled by younger people coming to the shows,” Campbell said.

“We’ve been seeing it for the last 10, 12 or 15 years, you’d notice younger kids in the audience, but especially in the last couple of years, it’s grown exponentially. I really do believe that this is the upside of music piracy.”

Def Leppard celebrate their 40th anniversary this year, and the fact that they’re still releasing music and attracting a new audience is a real achievement for a band whose original fans only had access to vinyl and cassette tapes. But Campbell says the band isn’t negatively affected by new technology, nor people using it to obtain their content for free.

“You know, people bemoan the fact that you can’t sell records anymore, but for a band like Def Leppard at least, there is a silver lining in the fact that our music is reaching a whole new audience, and that audience is excited to hear it, and they’re coming to the shows. It’s been fantastic,” he said.

While packing out events is every band’s dream, Campbell believes that the enthusiasm these fresh fans bring to the shows is actually helping the band to improve.

“There’s a whole new energy around Leppard, in fact. I think we’re playing better than we ever have. Which you’d like to think anyway. They always say that musicians, unlike athletes, you’re supposed to get better.

“I’m not sure that anyone other than the band really notices, but I notice it and I know that the other guys do too. When I play ‘Rock of Ages’ for the 3,000,000th time, it’s not the song that excites me, it’s the energy from the audience. That’s what really lifts our performance. When you’ve got a more youthful audience coming to your shows, it only goes in one direction,” he concludes.

The thought of hundreds or even thousands of enthusiastic young pirates energizing an aging Def Leppard to the band’s delight is a real novelty. However, with so many channels for music consumption available today, are these new followers necessarily pirates?

One only has to visit Def Leppard’s official YouTube channel to see that despite being born in the late fifties and early sixties, the band are still regularly posting new content to keep fans up to date. So, given the consumption habits of young people these days, YouTube seems a more likely driver of new fans than torrents, for example.

That being said, Def Leppard are still humming along nicely on The Pirate Bay. The site lists a couple of hundred torrents, some uploaded more recently, some many years ago, including full albums, videos, and even entire discographies.

Arrr, we be Def Leppaaaaaard

Interestingly, Campbell hasn’t changed his public opinion on piracy for more than a decade. Back in 2007 he was saying similar things, and in 2011 he admitted that there were plenty of “kids out there” with the entire Def Leppard collection on their iPods.

“I am pretty sure they didn’t all pay for it. But, maybe those same kids will buy a ticket and come to a concert,” he said.

“We do not expect to sell a lot of records, we are just thankful to have people listening to our music. That is more important than having people pay for it. It will monetize itself later down the line.”

With sites like YouTube perhaps driving more traffic to bands like Def Leppard than pure piracy these days (and even diverting people away from piracy itself), it’s interesting to note that there’s still controversy around people getting paid for music.

With torrent sites slowly dropping off the record labels’ hitlists, one is much more likely to hear them criticizing YouTube itself for not giving the industry a fair deal.

Still, bands like Def Leppard seem happy, so it’s not all bad news.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

A new twist on data backup: CloudNAS

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/cloudnas-backup/

Morro CacheDrive

There are many ways for SMBs, professionals, and advanced users to back up their data. The process can be as simple as copying files to a flash drive or an external drive, or as sophisticated as using a Synology or QNAP NAS device as your primary storage device and syncing the files to a cloud storage service such as Backblaze B2.

A recent entry into the backup arena is Morro Data and their CloudNAS solution, where files are stored in the cloud, cached locally as needed, and synced globally among the other CloudNAS systems in a given organization. There are three components to the solution:

  • A Morro CacheDrive — This resides on your internal network like a NAS device and stores from 1 to 8 TB of data depending on the model
  • The CloudNAS service — This software runs on the Morro CacheDrive to keep track of and manage the data
  • Backblaze B2 Cloud Storage — Where the data is stored in the cloud

The Morro CacheDrive is installed on your local network and looks like a network share. On Windows, the share can be mounted as a letter device, M:, for example. On the Mac, the device is mounted as a Shared device (Databank in the example below).

CloudNAS software dashboard

In either case, the device works like a folder/directory, typically on your desktop. You then either drag-and-drop or save a file to the folder/directory. This places the file on the CacheDrive. Once there, the file is automatically backed up to the cloud. In the case of the CloudNAS solution, that cloud is Backblaze B2.

All that sounds pretty straightforward, but what makes the CloudNAS solution unique is that it gives you effectively unlimited storage space. For example, you can access 5 TB of data from a 1 TB CacheDrive. Confused? Let me explain. All 5 TB of the data is stored in B2, having been uploaded to B2 each time you stored data on the CacheDrive. The 1 TB CacheDrive keeps (caches) the most recent or most often used files locally. When you need a file that isn’t currently on the CacheDrive, the CloudNAS software automatically downloads it from the B2 cloud to the CacheDrive and makes it available to use as desired.
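The behavior described above is essentially a least-recently-used (LRU) cache sitting in front of an unbounded remote store. The sketch below illustrates the idea only; it is not Morro's actual implementation, and all class and method names are hypothetical (capacity here counts files, standing in for the CacheDrive's bytes):

```python
from collections import OrderedDict

class CloudBackedCache:
    """LRU cache of fixed size in front of an unbounded remote store."""

    def __init__(self, capacity, remote_store):
        self.capacity = capacity    # max cached items; stands in for the 1 TB drive
        self.remote = remote_store  # stands in for B2, which holds everything
        self.cache = OrderedDict()  # filename -> data, in LRU order

    def write(self, name, data):
        self._cache_put(name, data)   # lands on the CacheDrive first...
        self.remote[name] = data      # ...then is backed up to the cloud

    def read(self, name):
        if name in self.cache:        # cache hit: served at LAN speed
            self.cache.move_to_end(name)
            return self.cache[name]
        data = self.remote[name]      # cache miss: fetched from the cloud
        self._cache_put(name, data)
        return data

    def _cache_put(self, name, data):
        self.cache[name] = data
        self.cache.move_to_end(name)
        while len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used

store = {}
nas = CloudBackedCache(capacity=2, remote_store=store)
nas.write("a.mov", b"...")
nas.write("b.mov", b"...")
nas.write("c.mov", b"...")   # "a.mov" is evicted locally but survives in the cloud
print(sorted(store))          # all three files remain in the remote store
print("a.mov" in nas.cache)   # False: it will be re-fetched on the next read
```

A subsequent `nas.read("a.mov")` transparently pulls the file back from the remote store, which is the behavior the article describes.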

Things to know about the CloudNAS solution

  • Sharing Systems: Multiple users can mount the same CacheDrive with each being able to update and share the files.
  • Synced Systems: If you have two or more CloudNAS systems on your network, they will keep the B2 directory of files synced between all of the systems. Everyone on the network sees the same file list.
  • Unlimited Data: Regardless of the size of the CacheDrive device you purchase, you will not run out of space, as Backblaze B2 will contain all of your data. That said, you should choose a CacheDrive size that fits your operational environment.
  • Network Speed: Files are initially stored on the CacheDrive, then copied to B2. Local network connections are typically much faster than internet connections. This means your files are copied to the CacheDrive quickly, then transferred to B2 as your internet connection allows, all without slowing you down. This should be especially interesting to those of you with slower internet connections.
  • Access: The files stored using the CloudNAS solution can be accessed through the shared folder/directory on your desktop, as well as through a web-based Team Portal.

Getting Started

To start, you purchase a Morro CacheDrive. The price starts at $499.00 for a unit with 1 TB of cache storage. Next you choose a CloudNAS subscription. This starts at $10/month for the Standard plan, and lets you manage up to 10 TB of data. Finally, you connect Backblaze B2 to the Morro system to finish the set-up process. You pay Backblaze each month for the data you store in and download from B2 while using the Morro solution.

The CloudNAS solution is certainly a different approach to storing your data. You get the ability to store a nearly unlimited amount of data without having to upgrade your hardware as you go, and all of your data is readily available with just a few clicks. For users who need to store terabytes of data that need to be available anytime, the CloudNAS solution is worth a look.

The post A new twist on data backup: CloudNAS appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Create Multiple Builds from the Same Source Using Different AWS CodeBuild Build Specification Files

Post Syndicated from Prakash Palanisamy original https://aws.amazon.com/blogs/devops/create-multiple-builds-from-the-same-source-using-different-aws-codebuild-build-specification-files/

In June 2017, AWS CodeBuild announced you can now specify an alternate build specification file name or location in an AWS CodeBuild project.

In this post, I’ll show you how to use different build specification files in the same repository to create different builds. You’ll find the source code for this post in our GitHub repo.

Requirements

The AWS CLI must be installed and configured.

Solution Overview

I have created a C program (cbsamplelib.c) that will be used to create a shared library and another utility program (cbsampleutil.c) to use that library. I’ll use a Makefile to compile these files.

I need to put this sample application in RPM and DEB packages so end users can easily deploy them. I have created a build specification file for RPM. It will use make to compile this code and the RPM specification file (cbsample.rpmspec) configured in the build specification to create the RPM package. Similarly, I have created a build specification file for DEB. It will create the DEB package based on the control specification file (cbsample.control) configured in this build specification.

RPM Build Project:

The following build specification file (buildspec-rpm.yml) uses build specification version 0.2. As described in the documentation, this version has different syntax for environment variables. This build specification includes multiple phases:

  • As part of the install phase, the required packages are installed using yum.
  • During the pre_build phase, the required directories are created and the required files, including the RPM build specification file, are copied to the appropriate location.
  • During the build phase, the code is compiled, and then the RPM package is created based on the RPM specification.

As defined in the artifact section, the RPM file will be uploaded as a build artifact.

version: 0.2

env:
  variables:
    build_version: "0.1"

phases:
  install:
    commands:
      - yum install rpm-build make gcc glibc -y
  pre_build:
    commands:
      - curr_working_dir=`pwd`
      - mkdir -p ./{RPMS,SRPMS,BUILD,SOURCES,SPECS,tmp}
      - filename="cbsample-$build_version"
      - echo $filename
      - mkdir -p $filename
      - cp ./*.c ./*.h Makefile $filename
      - tar -zcvf /root/$filename.tar.gz $filename
      - cp /root/$filename.tar.gz ./SOURCES/
      - cp cbsample.rpmspec ./SPECS/
  build:
    commands:
      - echo "Triggering RPM build"
      - rpmbuild --define "_topdir `pwd`" -ba SPECS/cbsample.rpmspec
      - cd $curr_working_dir

artifacts:
  files:
    - RPMS/x86_64/cbsample*.rpm
  discard-paths: yes

Using cb-centos-project.json as a reference, create the input JSON file for the CLI command. This project uses an AWS CodeCommit repository named codebuild-multispec and a file named buildspec-rpm.yml as the build specification file. To create the RPM package, we need to specify a custom image name. I’m using the latest CentOS 7 image available on Docker Hub. I’m using a role named CodeBuildServiceRole. It contains permissions similar to those defined in CodeBuildServiceRole.json. (You need to change the resource fields in the policy, as appropriate.)

{
    "name": "rpm-build-project",
    "description": "Project which will build RPM from the source.",
    "source": {
        "type": "CODECOMMIT",
        "location": "https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/codebuild-multispec",
        "buildspec": "buildspec-rpm.yml"
    },
    "artifacts": {
        "type": "S3",
        "location": "codebuild-demo-artifact-repository"
    },
    "environment": {
        "type": "LINUX_CONTAINER",
        "image": "centos:7",
        "computeType": "BUILD_GENERAL1_SMALL"
    },
    "serviceRole": "arn:aws:iam::012345678912:role/service-role/CodeBuildServiceRole",
    "timeoutInMinutes": 15,
    "encryptionKey": "arn:aws:kms:eu-west-1:012345678912:alias/aws/s3",
    "tags": [
        {
            "key": "Name",
            "value": "RPM Demo Build"
        }
    ]
}

After the cli-input-json file is ready, execute the following command to create the build project.

$ aws codebuild create-project --name CodeBuild-RPM-Demo --cli-input-json file://cb-centos-project.json

{
    "project": {
        "name": "CodeBuild-RPM-Demo", 
        "serviceRole": "arn:aws:iam::012345678912:role/service-role/CodeBuildServiceRole", 
        "tags": [
            {
                "value": "RPM Demo Build", 
                "key": "Name"
            }
        ], 
        "artifacts": {
            "namespaceType": "NONE", 
            "packaging": "NONE", 
            "type": "S3", 
            "location": "codebuild-demo-artifact-repository", 
            "name": "CodeBuild-RPM-Demo"
        }, 
        "lastModified": 1500559811.13, 
        "timeoutInMinutes": 15, 
        "created": 1500559811.13, 
        "environment": {
            "computeType": "BUILD_GENERAL1_SMALL", 
            "privilegedMode": false, 
            "image": "centos:7", 
            "type": "LINUX_CONTAINER", 
            "environmentVariables": []
        }, 
        "source": {
            "buildspec": "buildspec-rpm.yml", 
            "type": "CODECOMMIT", 
            "location": "https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/codebuild-multispec"
        }, 
        "encryptionKey": "arn:aws:kms:eu-west-1:012345678912:alias/aws/s3", 
        "arn": "arn:aws:codebuild:eu-west-1:012345678912:project/CodeBuild-RPM-Demo", 
        "description": "Project which will build RPM from the source."
    }
}

When the project is created, run the following command to start the build. After the build has started, get the build ID. You can use the build ID to get the status of the build.

$ aws codebuild start-build --project-name CodeBuild-RPM-Demo
{
    "build": {
        "buildComplete": false, 
        "initiator": "prakash", 
        "artifacts": {
            "location": "arn:aws:s3:::codebuild-demo-artifact-repository/CodeBuild-RPM-Demo"
        }, 
        "projectName": "CodeBuild-RPM-Demo", 
        "timeoutInMinutes": 15, 
        "buildStatus": "IN_PROGRESS", 
        "environment": {
            "computeType": "BUILD_GENERAL1_SMALL", 
            "privilegedMode": false, 
            "image": "centos:7", 
            "type": "LINUX_CONTAINER", 
            "environmentVariables": []
        }, 
        "source": {
            "buildspec": "buildspec-rpm.yml", 
            "type": "CODECOMMIT", 
            "location": "https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/codebuild-multispec"
        }, 
        "currentPhase": "SUBMITTED", 
        "startTime": 1500560156.761, 
        "id": "CodeBuild-RPM-Demo:57a36755-4d37-4b08-9c11-1468e1682abc", 
        "arn": "arn:aws:codebuild:eu-west-1: 012345678912:build/CodeBuild-RPM-Demo:57a36755-4d37-4b08-9c11-1468e1682abc"
    }
}

$ aws codebuild list-builds-for-project --project-name CodeBuild-RPM-Demo
{
    "ids": [
        "CodeBuild-RPM-Demo:57a36755-4d37-4b08-9c11-1468e1682abc"
    ]
}

$ aws codebuild batch-get-builds --ids CodeBuild-RPM-Demo:57a36755-4d37-4b08-9c11-1468e1682abc
{
    "buildsNotFound": [], 
    "builds": [
        {
            "buildComplete": true, 
            "phases": [
                {
                    "phaseStatus": "SUCCEEDED", 
                    "endTime": 1500560157.164, 
                    "phaseType": "SUBMITTED", 
                    "durationInSeconds": 0, 
                    "startTime": 1500560156.761
                }, 
                {
                    "contexts": [], 
                    "phaseType": "PROVISIONING", 
                    "phaseStatus": "SUCCEEDED", 
                    "durationInSeconds": 24, 
                    "startTime": 1500560157.164, 
                    "endTime": 1500560182.066
                }, 
                {
                    "contexts": [], 
                    "phaseType": "DOWNLOAD_SOURCE", 
                    "phaseStatus": "SUCCEEDED", 
                    "durationInSeconds": 15, 
                    "startTime": 1500560182.066, 
                    "endTime": 1500560197.906
                }, 
                {
                    "contexts": [], 
                    "phaseType": "INSTALL", 
                    "phaseStatus": "SUCCEEDED", 
                    "durationInSeconds": 19, 
                    "startTime": 1500560197.906, 
                    "endTime": 1500560217.515
                }, 
                {
                    "contexts": [], 
                    "phaseType": "PRE_BUILD", 
                    "phaseStatus": "SUCCEEDED", 
                    "durationInSeconds": 0, 
                    "startTime": 1500560217.515, 
                    "endTime": 1500560217.662
                }, 
                {
                    "contexts": [], 
                    "phaseType": "BUILD", 
                    "phaseStatus": "SUCCEEDED", 
                    "durationInSeconds": 0, 
                    "startTime": 1500560217.662, 
                    "endTime": 1500560217.995
                }, 
                {
                    "contexts": [], 
                    "phaseType": "POST_BUILD", 
                    "phaseStatus": "SUCCEEDED", 
                    "durationInSeconds": 0, 
                    "startTime": 1500560217.995, 
                    "endTime": 1500560218.074
                }, 
                {
                    "contexts": [], 
                    "phaseType": "UPLOAD_ARTIFACTS", 
                    "phaseStatus": "SUCCEEDED", 
                    "durationInSeconds": 0, 
                    "startTime": 1500560218.074, 
                    "endTime": 1500560218.542
                }, 
                {
                    "contexts": [], 
                    "phaseType": "FINALIZING", 
                    "phaseStatus": "SUCCEEDED", 
                    "durationInSeconds": 4, 
                    "startTime": 1500560218.542, 
                    "endTime": 1500560223.128
                }, 
                {
                    "phaseType": "COMPLETED", 
                    "startTime": 1500560223.128
                }
            ], 
            "logs": {
                "groupName": "/aws/codebuild/CodeBuild-RPM-Demo", 
                "deepLink": "https://console.aws.amazon.com/cloudwatch/home?region=eu-west-1#logEvent:group=/aws/codebuild/CodeBuild-RPM-Demo;stream=57a36755-4d37-4b08-9c11-1468e1682abc", 
                "streamName": "57a36755-4d37-4b08-9c11-1468e1682abc"
            }, 
            "artifacts": {
                "location": "arn:aws:s3:::codebuild-demo-artifact-repository/CodeBuild-RPM-Demo"
            }, 
            "projectName": "CodeBuild-RPM-Demo", 
            "timeoutInMinutes": 15, 
            "initiator": "prakash", 
            "buildStatus": "SUCCEEDED", 
            "environment": {
                "computeType": "BUILD_GENERAL1_SMALL", 
                "privilegedMode": false, 
                "image": "centos:7", 
                "type": "LINUX_CONTAINER", 
                "environmentVariables": []
            }, 
            "source": {
                "buildspec": "buildspec-rpm.yml", 
                "type": "CODECOMMIT", 
                "location": "https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/codebuild-multispec"
            }, 
            "currentPhase": "COMPLETED", 
            "startTime": 1500560156.761, 
            "endTime": 1500560223.128, 
            "id": "CodeBuild-RPM-Demo:57a36755-4d37-4b08-9c11-1468e1682abc", 
            "arn": "arn:aws:codebuild:eu-west-1:012345678912:build/CodeBuild-RPM-Demo:57a36755-4d37-4b08-9c11-1468e1682abc"
        }
    ]
}
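The phases array in the batch-get-builds response lends itself to scripting, for example to see where build time goes. A minimal sketch follows; the JSON fragment is trimmed from a response like the one above, and in practice you would feed it the saved output of the CLI command instead:

```python
import json

def summarize_phases(build):
    """Return (phase_type, duration_in_seconds) pairs for one build.
    COMPLETED has no duration in the response, so default it to 0."""
    return [(p["phaseType"], p.get("durationInSeconds", 0))
            for p in build["phases"]]

# Trimmed fragment shaped like the batch-get-builds response above.
response = json.loads("""
{"builds": [{"phases": [
    {"phaseType": "SUBMITTED",    "durationInSeconds": 0},
    {"phaseType": "PROVISIONING", "durationInSeconds": 24},
    {"phaseType": "BUILD",        "durationInSeconds": 0},
    {"phaseType": "COMPLETED"}
]}]}
""")

for phase, secs in summarize_phases(response["builds"][0]):
    print(f"{phase:<14}{secs}s")
total = sum(s for _, s in summarize_phases(response["builds"][0]))
print("total:", total, "seconds")
```

For quick one-off checks, the AWS CLI's built-in `--query` option (or a tool like jq) can extract the same fields without a script.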

DEB Build Project:

In this project, we will use the build specification file named buildspec-deb.yml. Like the RPM build project, this specification includes multiple phases. Here I use a Debian control file to create the package in DEB format. After a successful build, the DEB package will be uploaded as a build artifact.

version: 0.2

env:
  variables:
    build_version: "0.1"

phases:
  install:
    commands:
      - apt-get install gcc make -y
  pre_build:
    commands:
      - mkdir -p ./cbsample-$build_version/DEBIAN
      - mkdir -p ./cbsample-$build_version/usr/lib
      - mkdir -p ./cbsample-$build_version/usr/include
      - mkdir -p ./cbsample-$build_version/usr/bin
      - cp -f cbsample.control ./cbsample-$build_version/DEBIAN/control
  build:
    commands:
      - echo "Building the application"
      - make
      - cp libcbsamplelib.so ./cbsample-$build_version/usr/lib
      - cp cbsamplelib.h ./cbsample-$build_version/usr/include
      - cp cbsampleutil ./cbsample-$build_version/usr/bin
      - chmod +x ./cbsample-$build_version/usr/bin/cbsampleutil
      - dpkg-deb --build ./cbsample-$build_version

artifacts:
  files:
    - cbsample-*.deb

Here we use cb-ubuntu-project.json as a reference to create the CLI input JSON file. This project uses the same AWS CodeCommit repository (codebuild-multispec) but a different buildspec file in the same repository (buildspec-deb.yml). We use the default CodeBuild image to create the DEB package. We use the same IAM role (CodeBuildServiceRole).

{
    "name": "deb-build-project",
    "description": "Project which will build DEB from the source.",
    "source": {
        "type": "CODECOMMIT",
        "location": "https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/codebuild-multispec",
        "buildspec": "buildspec-deb.yml"
    },
    "artifacts": {
        "type": "S3",
        "location": "codebuild-demo-artifact-repository"
    },
    "environment": {
        "type": "LINUX_CONTAINER",
        "image": "aws/codebuild/ubuntu-base:14.04",
        "computeType": "BUILD_GENERAL1_SMALL"
    },
    "serviceRole": "arn:aws:iam::012345678912:role/service-role/CodeBuildServiceRole",
    "timeoutInMinutes": 15,
    "encryptionKey": "arn:aws:kms:eu-west-1:012345678912:alias/aws/s3",
    "tags": [
        {
            "key": "Name",
            "value": "Debian Demo Build"
        }
    ]
}

Using the CLI input JSON file, create the project, start the build, and check the status of the project.

$ aws codebuild create-project --name CodeBuild-DEB-Demo --cli-input-json file://cb-ubuntu-project.json

$ aws codebuild list-builds-for-project --project-name CodeBuild-DEB-Demo

$ aws codebuild batch-get-builds --ids CodeBuild-DEB-Demo:e535c4b0-7067-4fbe-8060-9bb9de203789

After successful completion of the RPM and DEB builds, check the S3 bucket configured in the artifacts section for the build packages. Each build project creates a directory named after the project and copies its artifacts into it.

$ aws s3 ls s3://codebuild-demo-artifact-repository/CodeBuild-RPM-Demo/
2017-07-20 16:16:59       8108 cbsample-0.1-1.el7.centos.x86_64.rpm

$ aws s3 ls s3://codebuild-demo-artifact-repository/CodeBuild-DEB-Demo/
2017-07-20 16:37:22       5420 cbsample-0.1.deb

Override Buildspec During Build Start:

It’s also possible to override the build specification file of an existing project when starting a build. If we want to create the libs RPM package instead of the whole RPM, we will use the build specification file named buildspec-libs-rpm.yml. This build specification file is similar to the earlier RPM build. The only difference is that it uses a different RPM specification file to create libs RPM.

version: 0.2

env:
  variables:
    build_version: "0.1"

phases:
  install:
    commands:
      - yum install rpm-build make gcc glibc -y
  pre_build:
    commands:
      - curr_working_dir=`pwd`
      - mkdir -p ./{RPMS,SRPMS,BUILD,SOURCES,SPECS,tmp}
      - filename="cbsample-libs-$build_version"
      - echo $filename
      - mkdir -p $filename
      - cp ./*.c ./*.h Makefile $filename
      - tar -zcvf /root/$filename.tar.gz $filename
      - cp /root/$filename.tar.gz ./SOURCES/
      - cp cbsample-libs.rpmspec ./SPECS/
  build:
    commands:
      - echo "Triggering RPM build"
      - rpmbuild --define "_topdir `pwd`" -ba SPECS/cbsample-libs.rpmspec
      - cd $curr_working_dir

artifacts:
  files:
    - RPMS/x86_64/cbsample-libs*.rpm
  discard-paths: yes

Using the same RPM build project that we created earlier, start a new build and set the `--buildspec-override` parameter to buildspec-libs-rpm.yml.

$ aws codebuild start-build --project-name CodeBuild-RPM-Demo --buildspec-override buildspec-libs-rpm.yml
{
    "build": {
        "buildComplete": false, 
        "initiator": "prakash", 
        "artifacts": {
            "location": "arn:aws:s3:::codebuild-demo-artifact-repository/CodeBuild-RPM-Demo"
        }, 
        "projectName": "CodeBuild-RPM-Demo", 
        "timeoutInMinutes": 15, 
        "buildStatus": "IN_PROGRESS", 
        "environment": {
            "computeType": "BUILD_GENERAL1_SMALL", 
            "privilegedMode": false, 
            "image": "centos:7", 
            "type": "LINUX_CONTAINER", 
            "environmentVariables": []
        }, 
        "source": {
            "buildspec": "buildspec-libs-rpm.yml", 
            "type": "CODECOMMIT", 
            "location": "https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/codebuild-multispec"
        }, 
        "currentPhase": "SUBMITTED", 
        "startTime": 1500562366.239, 
        "id": "CodeBuild-RPM-Demo:82d05f8a-b161-401c-82f0-83cb41eba567", 
        "arn": "arn:aws:codebuild:eu-west-1:012345678912:build/CodeBuild-RPM-Demo:82d05f8a-b161-401c-82f0-83cb41eba567"
    }
}

After the build is completed successfully, check to see if the package appears in the artifact S3 bucket under the CodeBuild-RPM-Demo build project folder.

$ aws s3 ls s3://codebuild-demo-artifact-repository/CodeBuild-RPM-Demo/
2017-07-20 16:16:59       8108 cbsample-0.1-1.el7.centos.x86_64.rpm
2017-07-20 16:53:54       5320 cbsample-libs-0.1-1.el7.centos.x86_64.rpm

Conclusion

In this post, I have shown you how multiple buildspec files in the same source repository can be used to run multiple AWS CodeBuild build projects. I have also shown you how to provide a different buildspec file when starting the build.

For more information about AWS CodeBuild, see the AWS CodeBuild documentation. You can get started with AWS CodeBuild by using this step-by-step guide.


About the author

Prakash Palanisamy is a Solutions Architect for Amazon Web Services. When he is not working on Serverless, DevOps or Alexa, he will be solving problems in Project Euler. He also enjoys watching educational documentaries.

RIAA: Hip-Hop Mixtape Site Has No DMCA Safe Harbor

Post Syndicated from Ernesto original https://torrentfreak.com/riaa-hip-hop-mixtape-site-has-no-dmca-safe-harbor-170731/

Earlier this year, a group of well-known labels targeted Spinrilla, a popular hip-hop mixtape site and accompanying app with millions of users.

The coalition of record labels including Sony Music, Warner Bros. Records, and Universal Music Group, filed a lawsuit accusing the service of alleged copyright infringements.

“Spinrilla specializes in ripping off music creators by offering thousands of unlicensed sound recordings for free,” the RIAA commented at the time.

The hip-hop site countered the allegations by pointing out that it installed an RIAA-approved anti-piracy filter and actively worked with major record labels to promote their tracks. In addition, Spinrilla stressed that the DMCA’s safe harbor protects the company.

The DMCA safe harbor shields Internet services from liability for copyright-infringing users. However, to qualify for this protection, companies have to meet certain requirements. This is where Spinrilla failed, according to a filing just submitted by the record labels.

The RIAA points out that Spinrilla failed to register a designated DMCA agent with the Copyright Office, which is one of the requirements. In addition, they claim that the mixtape site took no clear action against repeat infringers, another prerequisite.

“Defendants have not registered a designated DMCA agent with the Copyright Office and have not adopted, communicated, or reasonably implemented a policy that prevents repeat infringement. Either of these undisputed facts alone renders Defendants ineligible for the protections of the DMCA,” the RIAA writes.

On the repeat infringer issue, the record labels say that some of Spinrilla’s “artist” accounts were used to upload infringing material for weeks on end.

“For example, one such ‘artist’ uploaded a new mixtape each week for over 80 consecutive weeks, each containing sound recordings that the RIAA identified to Spinrilla as infringing, including recordings by such well-known major label artists as Bruno Mars, The Weeknd, Missy Elliott, Common, and Ludacris,” RIAA notes.

Based on the above, RIAA argues that Spinrilla is not entitled to safe harbor protections under the DMCA. They ask the court for a summary judgment to render this defense inapplicable, which would be a severe blow to the hip-hop mixtape site.

“And, because Defendants have pinned their defense to liability almost entirely on the DMCA, a ruling now that Defendants are ineligible for the DMCA safe harbor will substantially streamline — if not end entirely — this litigation going forward.

“The Court should therefore grant Plaintiffs’ motion for partial summary judgment now,” the RIAA stresses (pdf).

While the case doesn’t end here, without DMCA safe harbor protection it will definitely be harder for Spinrilla to come out unscathed.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

TVStreamCMS Brings Pirate Streaming Site Clones to The Masses

Post Syndicated from Ernesto original https://torrentfreak.com/tvstreamcms-brings-pirate-streaming-site-clones-to-the-masses-170723/

In recent years many pirates have moved from more traditional download sites and tools, to streaming portals.

These streaming sites come in all shapes and sizes, and there is fierce competition among site owners to grab the most traffic. More traffic means more money, after all.

While building a streaming site from scratch is quite an undertaking, there are scripts on the market that allow virtually anyone to set up their own streaming index in just a few minutes.

TVStreamCMS is one of the leading players in this area. To find out more we spoke to one of the people behind the project, who prefers to stay anonymous, but for the sake of this article, we’ll call him Rick.

“The idea came up when I wanted to make my own streaming site. I saw that they make a lot of money, and many people had them,” Rick tells us.

After discovering that there were already a few streaming site scripts available, Rick saw an opportunity. None of the popular scripts at the time offered automatic updates with freshly pirated content, a gap that was waiting to be filled.

“I found out that TVStreamScript and others on ThemeForest like MTDB were available, but these were not automatized. Instead, they were kinda generic and hard to update. We wanted to make our own site, but as we made it, we also thought about reselling it.”

Soon after, TVStreamCMS was born. In addition to using it for his own project, Rick also decided to offer it to others who wanted to run their own streaming portal, for a monthly subscription fee.

TVStreamCMS website

According to Rick, the script’s automated content management system has been its key selling point. The buyers don’t have to update or change much themselves, as pretty much everything is automated.

This has generated hundreds of sales over the years, according to the developer. And several of the sites that run on the script are successfully “stealing” traffic from the original, such as gomovies.co, which ranks well above the real GoMovies in Google’s search results.

“Currently, a lot of the sites competing against the top level streaming sites are using our script. This includes 123movies.co, gomovies.co and putlockers.tv, keywords like yesmovies fmovies gomovies 123movies, even in different Languages like Portuguese, French and Italian,” Rick says.

The pirated videos that appear on these sites come from a database maintained by the TVStreamCMS team. These are hosted on their own servers, but also by third parties such as Google and Openload.

When we looked at one of the sites we noticed a few dead links, but according to Rick, these are regularly replaced.

“Dead links are maintained by our team, DMCA removals are re-uploaded, and so on. This allows users not to worry about re-uploading or adding content daily and weekly as movies and episodes release,” Rick explains.

While this all sounds fine and dandy for prospective pirates, there are some significant drawbacks.

Aside from the obvious legal risks that come with operating one of these sites, there is also a financial hurdle. The full package costs $399 plus a monthly fee of $99, and the basic option is $399 and $49 per month.

TVStreamCMS subscription plans

There are apparently plenty of site owners who don’t mind paying this kind of money. That said, not everyone is happy with the script. TorrentFreak spoke to a source at one of the larger streaming sites, who believes that these clones are misleading their users.

TVStreamCMS is not impressed by the criticism. They know very well what they are doing. Their users asked for these clone templates, and they are delivering them, so both sides can make more money.

“We’re are in the business to make money and grow the sales,” Rick says.

“So we have made templates looking like 123movies, Yesmovies, Fmovies and Putlocker to accommodate the demands of the buyers. A similar design gets buyers traffic and is very, very effective for new sites, as users who come from Google they think it is the real website.”

The fact that 123Movies changed its name to GoMovies and recently changed to a GoStream.is URL, only makes it easier for clones to get traffic, according to the developer.

“This provides us with a lot of business because every time they change their name the buyers come back and want another site with the new name. GoMovies, for instance, and now Gostream,” Rick notes.

Of course, the infringing nature of the clone sites means that there are many copyright holders who would rather see the script and its associated sites gone. Previously, the Hollywood group FACT managed to shut down TVstreamScript, taking down hundreds of sites that relied on it, and it’s likely that TVStreamCMS is being watched too.

For now, however, more and more clones continue to flood the web with pirated streams.


BREIN Takes Down 231 Pirate Sites in Six Months, But That’s Not All

Post Syndicated from Andy original https://torrentfreak.com/brein-takes-down-231-pirate-sites-in-six-months-but-thats-not-all-170722/

Over the years, the MPAA and RIAA have grabbed hundreds of headlines for their anti-piracy activities but recently their work has been more subtle. The same cannot be said of Dutch anti-piracy group BREIN.

BREIN is the most prominent outfit of its type in the Netherlands but it’s not uncommon for its work to be felt way beyond its geographical borders. The group’s report for the first six months of 2017 illustrates that in very clear terms.

In its ongoing efforts to reduce piracy of movies, music, TV shows, books and games, BREIN says it carried out 268 investigations during the first two quarters of 2017. That resulted in the takedown of 231 piracy-focused sites and services.

They included 45 cyberlocker linking sites, 30 streaming sites and 9 torrent platforms. The last eDonkey site in the Netherlands was among the haul after its operators reached a settlement with BREIN. The anti-piracy outfit reports that nearly all of the sites were operated anonymously so in many instances hosting providers were the ones to pull the plug, at BREIN’s request.

BREIN has also been actively tracking down people who make content available on file-sharing networks. These initial uploaders are considered to be a major part of the problem, so taking them out of the equation is another of BREIN’s goals.

In total, 14 major uploaders to torrent, streaming, and Usenet platforms were targeted by BREIN in the first six months of this year, with each given the opportunity to settle out of court or face legal action. Settlements typically involved a cash payment of between 250 and 7,500 euros but in several instances, uploaders were also required to take down the content they had uploaded.

In one interesting case, BREIN obtained an ex parte court order against a person running a “live cinema” on Facebook. He later settled with the anti-piracy group for 7,500 euros.

BREIN has also been active in a number of other areas. The group says it had almost 693,000 infringing results removed from Google search, pushing its total takedowns to more than 15.8 million. In addition, more than 2,170 listings for infringing content and devices were removed from online marketplaces and seven piracy-focused Facebook groups were taken down.

But while all of these actions have an effect locally, it is BREIN’s persistence in important legal cases that has influenced the copyright landscape across Europe.

Perhaps the most important case so far is BREIN v Filmspeler, which saw the anti-piracy group go all the way to the European Court of Justice for clarification on the law surrounding so-called “fully loaded” set-top boxes.

In a ruling earlier this year, the ECJ not only determined that selling such devices is a breach of copyright law, but also that people streaming content from an illicit source are committing an offense. Although the case began in the Netherlands, its effects will now be felt right across Europe, and that is almost completely down to BREIN.

But despite the reach of the ruling, BREIN has already been making good use of the decision locally. Not only has the operator of the Filmspeler site settled with BREIN “for a substantial amount”, but more than 200 sellers of piracy-configured set-top boxes have ceased trading since the ECJ decision. Some of the providers are the subject of further legal action.

Finally, a notable mention must go to BREIN’s determination to have The Pirate Bay blocked in the Netherlands. The battle against ISPs Ziggo and XS4ALL has been ongoing for seven years and, like the Filmspeler case, required the attention of the European Court of Justice. While it’s not over yet, it seems likely that the Supreme Court will eventually rule in BREIN’s favor.


Game of Thrones Premiere Ignites Annual Piracy Bonanza

Post Syndicated from Ernesto original https://torrentfreak.com/game-of-thrones-premiere-ignites-annual-piracy-bonanza-170717/

Yesterday, the first episode of Game of Thrones’ seventh season made its way onto the Internet. Like every year, this generated quite a bit of activity on various torrent sites.

People from all over the world virtually gathered around the various pirated copies of the show, with the first torrents appearing within minutes of the official broadcast and dozens of others soon after.

At the time of writing, more than 130,000 people are actively sharing one of the three most-popular torrents.

Part of this unofficial audience prefers piracy over a paid subscription. However, the fact that pirate copies are available before the official release in many countries doesn’t help either.

The most-shared torrent at the moment, with tens of thousands of peers, is a 772.3 MB rip from TBS uploaded by the ettv distribution group. Like every year, the total number of downloads is eventually expected to run to several million per episode.

Tracker stats for Game.of.Thrones.S07E01.WEB.h264-TBS[ettv]

Regarding the piracy numbers, Game of Thrones still beats every other TV show by a landslide. That said, it’s worth noting that torrent activity has leveled off somewhat.

The last swarm record, when over a quarter million people were simultaneously sharing a single file, dates back two years. Based on the numbers we’ve seen thus far, it’s not likely to be broken anytime soon, if ever.

That doesn’t mean that the interest from pirates is waning. Not at all. Over the past two years, streaming sites and services have exploded, and Game of Thrones is topping the charts there as well.

TorrentFreak spoke to a source at one of the larger streaming portals who informed us that some episodes get up to a million views each. This morning, the Game of Thrones season premiere generated close to 20,000 views per hour on that site. And that’s just on a single platform.

This massive demand is also reflected in the “most viewed” lists on many streaming sites, where GoT often comes out on top. In fact, on Fmovies the first six seasons of the show were all among the most viewed titles this week, soon to be followed by season 7.

Most-viewed on FMovies during the past week

Since streaming has overtaken torrents in terms of popularity, it’s safe to say that the majority of all Game of Thrones piracy is generated there as well.

In a way, pirate streaming sites and set-top boxes provide an even bigger threat to HBO’s hit series. They are generally easier and more convenient to use, which significantly broadens the audience.

Streaming aside, a lot of the mainstream attention remains directed at torrents. Over in India, for example, local broadcaster Hotstar launched a massive billboard campaign called “Torrents Morghulis,” which roughly translates to “torrents must die.”

Ironically, however, Indians had access to pirated Game of Thrones copies before the official premiere. When it finally became available on Hotstar the service crashed, something which also happened with Foxtel in Australia and HBO in several other countries.

Perhaps these broadcasters should consider peer-to-peer assisted streaming next time, we’ve heard it works quite well.


Ultrasonic pi-ano

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/ultrasonic-piano/

At the Raspberry Pi Foundation, we love a good music project. So of course we’re excited to welcome Andy Grove‘s ultrasonic piano to the collection! It is a thing of beauty… and noise. Don’t let the name fool you – this build can do so much more than sound like a piano.

Ultrasonic Pi Piano – Full Demo

The Ultrasonic Pi Piano uses HC-SR04 ultrasonic sensors for input and generates MIDI instructions that are played by fluidsynth. For more information: http://theotherandygrove.com/projects/ultrasonic-pi-piano/

What’s an ultrasonic piano?

What we have here, people of all genders, is really a theremin on steroids. The build’s eight ultrasonic distance sensors detect hand movements and, with the help of an octasonic breakout board, a Raspberry Pi 3 translates their signals into notes. But that’s not all: this digital instrument is almost endlessly customisable – you can set each sensor to a different octave, or to a different instrument.

octasonic breakout board

The breakout board designed by Andy

Andy has implemented gesture controls to allow you to switch between modes you have preset. In his video, you can see that holding your hands over the two sensors most distant from each other changes the instrument. Say you’re bored of the piano – try a xylophone! Not your jam? How about a harpsichord? Or a clarinet? In fact, there are 128 MIDI instruments and sound effects to choose from. Go nuts and compose a piece using tuba, ocarina, and the noise of a guitar fret!

How to build the ultrasonic piano

If you head over to Instructables, you’ll find the thorough write-up Andy has provided. He has also made all his scripts, written in Rust, available on GitHub. Finally, he’s even added a video on how to make a housing, so your ultrasonic piano can look more like a proper instrument, and less like a pile of electronics.

Ultrasonic Pi Piano Enclosure

Uploaded by Andy Grove on 2017-04-13.

Make your own!

If you follow us on Twitter, you may have seen photos and footage of the Raspberry Pi staff attending a Pi Towers Picademy. Like Andy*, quite a few of us are massive Whovians. Consequently, one of our final builds on the course was an ultrasonic theremin that gave off a sound rather like a dying Dalek. Take a look at our masterwork here! We loved our make so much that we’ve since turned the instructions for building it into a free resource. Go ahead and build your own! And be sure to share your compositions with us in the comments.

Sonic the hedgehog is feeling the beat

Sonic is feeling the groove as well

* He has a full-sized Dalek at home. I know, right?

The post Ultrasonic pi-ano appeared first on Raspberry Pi.

Usenet Provider Giganews Sues Perfect 10 For Fraud, Demands $20m

Post Syndicated from Andy original https://torrentfreak.com/usenet-provider-giganews-sues-perfect-10-for-fraud-demands-20m-170712/

For many years, Perfect 10 went about its business of publishing images of women in print and on the Internet. At some point along the way, however, the company decided that threatening to sue online service providers was more profitable.

Claiming copyright infringement, Perfect 10 took on a number of giants including Google, Amazon, Mastercard, and Visa, not to mention hosting providers such as LeaseWeb and OVH.

With court papers revealing that Perfect 10 owner Norman Zada worked 365 days a year on litigation and that the company acquired copyrights for use in lawsuits, it’s no surprise that around two dozen of Perfect 10’s lawsuits ended in cash settlements and defaults.

With dollar signs in mind, Perfect 10 went after another pretty big fish in 2011. The publisher claimed that Usenet provider Giganews was responsible when its users uploaded Perfect 10 images to the newsgroups. Things did not go well.

In November 2014, the U.S. District Court for the Central District of California found that Giganews was not liable for the infringing activities of its users. Perfect 10 was ordered to pay Giganews $5.6m in attorney’s fees and costs. Perfect 10 lost again at the Court of Appeals for the Ninth Circuit.

But even with all of these victories under its belt, Giganews just can’t catch a break.

The company is clearly owed millions but Perfect 10 is refusing to pay up. As a result, this week Giganews filed yet another suit, accusing Perfect 10 and Norman Zada of fraud aimed at depriving Giganews of the amounts laid out by the court.

The claims center around an alleged conspiracy in which Perfect 10 transferred its funds and assets to Zada.

“As of now (over two years since the judgment), Perfect 10 has not voluntarily paid any amount of the judgment,” the complaint begins.

“Instead, Perfect 10, through the unlawful acts of Zada and in conspiracy with him, has intentionally avoided satisfaction of the judgment through a series of fraudulent transfers of Perfect 10’s corporate assets to Zada’s personal possession.”

Giganews says these “illegal and fraudulent” transfers began back in 2014, when Perfect 10 began to realize that the fight against the Usenet provider was going bad.

For example, on November 20, 2014, around six days after the court granted summary judgment in favor of Giganews, Perfect 10 transferred $850,000 to Zada’s personal account. The Perfect 10 owner later told a Judgment Debtor’s Examination that the transfer was made due to the summary judgment orders, a statement that amounts to a confession of fraud, Giganews says.

“We had a settlement of $1.1 million in, I believe, June. I was entitled to that money,” Zada told the hearing. “And after the summary judgment orders were issued, I did not see any point in keeping more cash than we needed in the account.”

Giganews says that Perfect 10 transferred at least $1.75m in cash to Zada.

Then, within weeks of the court ordering Perfect 10 to pay $5.6m in attorney’s fees and costs, Giganews says that Zada “fraudulently transferred substantially all of Perfect 10’s physical assets” to himself for an amount that did not represent their true value.

Those assets included a car, furniture, and computer servers. When Zada was questioned why the transfers took place, he admitted that “it would have been totally disruptive to have those [assets] seized” in satisfaction of the judgment. Indeed, the complaint alleges that the assets never moved physical location.

Perhaps surprisingly given the judgment, Giganews alleges that Zada continues to run Perfect 10’s business in much the same way as he did before. The company even has copyright infringement litigation underway against AOL in Germany, despite having few assets.

This is made possible, Giganews says, by Perfect 10 calling on assets it previously transferred to Zada. When required by the company, Zada simply “gives” them back.

In summary, Giganews says these transfers display the “badges of fraud” that indicate attempts to “hinder, delay or defraud” creditors, while leaving Perfect 10 practically insolvent.

“As a consequence, Plaintiffs are entitled to a judgment against Defendants, and each of them, in the sum of the unlawfully transferred amounts of at least $1,750,000, or in an amount to be proven at trial, together with interest on that amount at the legal rate of 10% per annum from and after March 24, 2015,” the complaint reads.

But the claim doesn’t stop there. Giganews asks the court to prevent Perfect 10 from transferring any more cash or assets out of Perfect 10 to Zada or anyone acting in concert with him or on his behalf. This is rounded off with a claim for punitive and exemplary damages of $20m to be considered during a jury trial.


Pirate Bay Re-enters List of 100 Most Popular Sites on the Internet

Post Syndicated from Ernesto original https://torrentfreak.com/pirate-bay-re-enters-list-of-100-most-popular-sites-on-the-internet-170708/

When The Pirate Bay suffered over a month of downtime in late 2014, many of the site’s regular visitors went elsewhere.

This resulted in a significant traffic dip afterwards, but in recent months the notorious torrent site has seen a massive uptick in visitors.

At the beginning of the year TPB was already the largest torrent site. Today, Internet traffic ranking service Alexa lists the site among the 100 most-visited domains in the world once again, in 99th place. That’s the first time in three years.

While external traffic measurements are far from perfect, the graph below shows a steady increase in ranking since last summer. Exactly how many visitors The Pirate Bay has remains unknown, but SimilarWeb estimates it at a quarter billion ‘visits’ per month.

Keep in mind that the estimates above don’t account for the dozens of Pirate Bay proxies that serve users in countries where the site is blocked. That will likely add several millions of monthly visitors, at least.

Whether Pirate Bay’s recent resurgence is something torrent users should be happy about is another question. The recent uptick in traffic is mostly caused by the demise of other torrent sites.

Last summer both KickassTorrents and Torrentz left the scene, and ExtraTorrent followed a few weeks ago. Many of these users have flocked to The Pirate Bay, which is the prime source of user-uploaded torrents.

That the Pirate Bay is still around is somewhat of an achievement in itself. Over the years there have been numerous attempts to shut the site down.

It started in 2006, when Swedish authorities raided the site following pressure from the United States, only for the site to come back stronger. The criminal convictions of the site’s founders didn’t kill the site either, nor did any of the subsequent attempts to take it offline.

While many pirates have fallen in love with TPB’s deviant behavior, the recent downfall of other sites means that there’s a lot of pressure and responsibility on the shoulders of the site now. Many other indexers rely on TPB for their content, which is something not everyone realizes.

For now, however, TPB continues its reign.


Google Removed 2.5 Billion ‘Pirate’ Search Results

Post Syndicated from Ernesto original https://torrentfreak.com/google-removed-2-5-billion-pirate-search-results-170706/

Google is coping with a continuous increase in takedown requests from copyright holders, which target pirate sites in search results.

Just a few years ago the search engine removed ‘only’ a few thousand URLs per day, but this has since grown to millions. When added up, the numbers are truly staggering.

In its transparency report, Google now states that it has removed 2.5 billion reported links for alleged copyright infringement. This is roughly 90 percent of all requests the company received.

The chart below breaks down the takedown requests into several categories. In addition to the URLs that were removed, the search engine also received 154 million duplicate URLs and 25 million invalid URLs.

Another 80 million links remain in search results because they can’t be classified as copyright infringing, according to Google.

Google’s takedown overview

The 2.5 billion removed links are spread out over 1.1 million websites. File-storage service 4shared takes the crown with 64 million targeted URLs, followed at a distance by mp3toys.xyz, rapidgator.net, uploaded.net, and chomikuj.pl.

While rightsholders have increased their takedown efforts over the years, the major entertainment industry groups are still not happy with the current state of Google’s takedown process.

One of the main complaints has been that content which Google de-lists often reappears under new URLs.

“They need to take more proactive responsibility to reduce infringing content that appears on their platform, and, where we expressly notify infringing content to them, to ensure that they do not only take it down, but also keep it down,” a BPI spokesperson told us last month.

Ideally, rightsholders would like Google to ensure that content “stays down” while blocking the most notorious pirate sites from search results entirely. Known ‘pirate’ sites such as The Pirate Bay have no place in search results, they argue.

Google, however, believes such broad measures will lead to all sorts of problems, including over-blocking, and maintains that the current system is working as the DMCA was intended.

The search engine did implement various other initiatives to counter piracy, including the downranking of pirate sites and promoting legal options in search results, which it details in its regularly updated “How Google Fights Piracy” report.

In addition, Google and various rightsholders have signed a voluntary agreement to address “domain hopping” by pirate sites and share data to better understand how users are searching for content. For now, however, this effort is limited to the UK.


The Cost of Cloud Storage

Post Syndicated from Tim Nufire original https://www.backblaze.com/blog/cost-of-cloud-storage/

the cost of the cloud as a percentage of revenue

This week, we’re celebrating the one year anniversary of the launch of Backblaze B2 Cloud Storage. Today’s post is focused on giving you a peek behind the curtain about the costs of providing cloud storage. Why? Over the last 10 years, the most common question we get is still “how do you do it?” In this multi-billion dollar, global industry exhibiting exponential growth, none of the other major players seem to be willing to discuss the underlying costs. By exposing a chunk of the Backblaze financials, we hope to provide a better understanding of what it costs to run “the cloud,” and continue our tradition of sharing information for the betterment of the larger community.

Context
Backblaze built one of the industry’s largest cloud storage systems and we’re proud of that accomplishment. We bootstrapped the business and funded our growth through a combination of our own business operations and just $5.3M in equity financing ($2.8M of which was invested into the business – the other $2.5M was a tender offer to shareholders). To do this, we had to build our storage system efficiently and run as a real, self-sustaining, business. After over a decade in the data storage business, we have developed a deep understanding of cloud storage economics.

Definitions
I promise we’ll get into the costs of cloud storage soon, but some quick definitions first:

    Revenue: Money we collect from customers.
    Cost of Goods Sold (“COGS”): The costs associated with providing the service.
    Operating Expenses (“OpEx”): The costs associated with developing and selling the service.
    Income/Loss: What is left after subtracting COGS and OpEx from Revenue.
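
As a minimal sketch of how these definitions fit together (the $1M revenue figure is made up for illustration; the 47%/53% split is the rough breakdown described in this post):

```python
# Illustrative income statement using the definitions above.
# The revenue figure is a made-up example; the 47% COGS / 53% OpEx
# split is the approximate breakdown this post describes.
revenue = 1_000_000.00          # money collected from customers

cogs = revenue * 0.47           # cost of goods sold (47% of revenue)
opex = revenue * 0.53           # operating expenses (53% of revenue)

income = revenue - cogs - opex  # what is left after COGS and OpEx

print(f"COGS:   ${cogs:,.2f}")
print(f"OpEx:   ${opex:,.2f}")
print(f"Income: ${income:,.2f}")  # roughly break-even
```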

I’m going to focus today’s discussion on the Cost of Goods Sold (“COGS”): What goes into it, how it breaks down, and what percent of revenue it makes up. Backblaze is a roughly break-even business with COGS accounting for 47% of our revenue and the remaining 53% spent on our Operating Expenses (“OpEx”) like developing new features, marketing, sales, office rent, and other administrative costs that are required for us to be a functional company.

This post’s focus on COGS should let us answer the commonly asked question of “how do you provide cloud storage for such a low cost?”

Breaking Down Cloud COGS

Providing a cloud storage service requires the following components (COGS and OpEx – below we break out COGS):
cloud infrastructure costs as a percentage of revenue

  • Hardware: 23% of Revenue
  • Backblaze stores data on hard drives. Those hard drives are “wrapped” with servers so they can connect to the public and store data. We’ve discussed our approach to how this works with our Vaults and Storage Pods. Our infrastructure is purpose built for data storage. That is, we thought about how data storage ought to work, and then built it from the ground up. Other companies may use different storage media like Flash, SSD, or even tape. But it all serves the same function of being the thing that data actually is stored on. For today, we’ll think of all this as “hardware.”

    We buy storage hardware that, on average, will last 5 years (60 months) before needing to be replaced. To account for hardware costs in a way that can be compared to our monthly expenses, we amortize them and recognize 1/60th of the purchase price each month.

    Storage Pods and hard drives are not the only hardware in our environment. We also have to buy the cabinets and rails that hold the servers, core servers that manage accounts/billing/etc., switches, routers, power strips, cables, and more. (Our post on bringing up a data center goes into some of this detail.) However, Storage Pods and the drives inside them make up about 90% of all the hardware cost.
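
    The 60-month straight-line amortization described above can be sketched as follows; the $3,000 pod price is a hypothetical figure, not an actual Backblaze cost.

```python
# Straight-line amortization of storage hardware over 60 months,
# as described above: recognize 1/60th of the purchase price monthly.
AMORTIZATION_MONTHS = 60  # hardware is expected to last ~5 years

def monthly_amortized_cost(purchase_price: float) -> float:
    """Portion of a hardware purchase recognized as an expense each month."""
    return purchase_price / AMORTIZATION_MONTHS

pod_price = 3_000.00  # hypothetical example price for one Storage Pod
print(f"Monthly expense per pod: ${monthly_amortized_cost(pod_price):.2f}")
# 3000 / 60 = $50.00 per month
```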

  • Data Center (Space & Power): 8% of Revenue
  • “The cloud” is a great marketing term and one that has caught on for our industry. That said, all “clouds” store data on something physical like hard drives. Those hard drives (and servers) are actual, tangible things that take up actual space on earth, not in the clouds.

    At Backblaze, we lease space in colocation facilities which offer a secure, temperature controlled, reliable home for our equipment. Other companies build their own data centers. It’s the classic rent vs buy decision; but it always ends with hardware in racks in a data center.

    Hardware also needs power to function. Not everyone realizes it, but electricity is a significant cost of running cloud storage. In fact, some data center space is billed simply as a function of an electricity bill.

    Every hard drive storing data adds incremental space and power need. This is a cost that scales with storage growth.

    I also want to make a comment on taxes. We pay sales and property tax on hardware, and it is amortized as part of the hardware section above. However, it’s valuable to think about taxes when considering the data center since the location of the hardware actually drives the amount of taxes on the hardware that gets placed inside of it.

  • People: 7% of Revenue
  • Running a data center requires humans to make sure things go smoothly. The more data we store, the more human hands we need in the data center. All drives will fail eventually. When they fail, “stuff” needs to happen to get a replacement drive physically mounted inside the data center and filled with the customer data (all customer data is redundantly stored across multiple drives). The individuals that are associated specifically with managing the data center operations are included in COGS since, as you deploy more hard drives and servers, you need more of these people.

    Customer Support is the other group of people that are part of COGS. As customers use our services, questions invariably arise. To service our customers and get questions answered expediently, we staff customer support from our headquarters in San Mateo, CA. They do an amazing job! Staffing models, internally, are a function of the number of customers and the rate of acquiring new customers.

  • Bandwidth: 3% of Revenue
  • We have over 350 PB of customer data being stored across our data centers. The bulk of that has been uploaded by customers over the Internet (the other option, our Fireball service, is 6 months old and is seeing great adoption). Uploading data over the Internet requires bandwidth – basically, an Internet connection similar to the one running to your home or office. But, for a data center, instead of contracting with Time Warner or Comcast, we go “upstream.” Effectively, we’re buying wholesale.

    Understanding how that dynamic plays out with your customer base is a significant driver of how a cloud provider sets its pricing. Being in business for a decade has explicit advantages here. Because we understand our customer behavior, and have reached a certain scale, we are able to buy bandwidth in sufficient bulk to offer the industry’s best download pricing at $0.02 / Gigabyte (compared to $0.05 from Amazon, Google, and Microsoft).

    Why does optimizing download bandwidth charges matter for customers of a data storage business? Because it directly affects your ability to retrieve and use your data affordably.
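To see how the per-gigabyte difference compounds, here is a small comparison using the rates cited above ($0.02/GB for Backblaze, $0.05/GB for the large providers); the 1 TB example volume is arbitrary:

```python
# Compare egress (download) cost at two per-gigabyte prices.
# $0.02/GB is the Backblaze rate cited in the post; $0.05/GB is the
# rate it cites for Amazon, Google, and Microsoft.

def egress_cost(gigabytes, price_per_gb):
    """Total download cost in USD for a given volume and per-GB price."""
    return gigabytes * price_per_gb

one_tb = 1_000  # 1 TB expressed in decimal gigabytes
backblaze = egress_cost(one_tb, 0.02)
others = egress_cost(one_tb, 0.05)
print(round(backblaze, 2), round(others, 2))  # 20.0 50.0
```

At these rates, downloading a single terabyte costs $20 instead of $50, and the gap grows linearly with the amount of data retrieved.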

  • Other Fees: 6% of Revenue
  • We have grouped the remaining costs under “Other Fees.” This includes the fees we pay to our payment processor as well as the costs of running our Restore Return Refund program.

    A payment processor is required for businesses like ours that need to accept credit cards securely over the Internet. The bulk of the money we pay to the payment processor is actually passed through to pay the credit card companies like AmEx, Visa, and Mastercard.

    The Restore Return Refund program is a unique program for our consumer and business backup business. Customers can download any and all of their files directly from our website. We also offer customers the ability to order a hard drive with some or all of their data on it, which we then FedEx to the customer wherever in the world she is. If the customer chooses, she can return the drive to us for a full refund. Customers love the program, but it does cost Backblaze money. We choose to subsidize the cost of this service in an effort to provide the best customer experience we can.

The Big Picture

At the beginning of the post, I mentioned that Backblaze is, effectively, a break-even business. The reality is that our products drive a profitable business, but those profits are invested back into the business to fund product development and growth. That means growing our team as the size and complexity of the business expands; it also means being fortunate enough to have the cash on hand to fund “reserves” of extra hardware, bandwidth, data center space, etc. In our first few years as a bootstrapped business, having a sufficient buffer was a challenge. Having weathered that storm, we are particularly proud of being in a financial place where we can afford to make things a bit more predictable.

All this adds up to answer the question of how Backblaze has managed to carve out its slice of the cloud market – a market that is a key focus for some of the largest companies of our time. We have engineered a novel, purpose-built storage infrastructure with our Vaults and Pods. That infrastructure allows us to keep costs very, very low. Low costs enable us to offer the world’s most affordable, reliable cloud storage.

Does reliable, affordable storage matter? For a company like Vintage Aerial, it enables them to digitize 50 years’ worth of aerial photography of rural America and share that national treasure with the world. Having the best download pricing in the storage industry means Austin City Limits, a PBS show out of Austin, can digitize and preserve over 550 concerts.

We think offering purpose-built, affordable storage is important. It empowers our customers to monetize existing assets, make sure data is backed up (and not lost), and focus on their core business because we can handle their data storage needs.

The post The Cost of Cloud Storage appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.