Tag Archives: philosophy

N O D E’s Handheld Linux Terminal

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/n-o-d-es-handheld-linux-terminal/

Fit an entire Raspberry Pi-based laptop into your pocket with N O D E’s latest Handheld Linux Terminal build.

The Handheld Linux Terminal Version 3 (Portable Pi 3)

Hey everyone. Today I want to show you the new version 3 of the Handheld Linux Terminal. It’s taken a long time, but I’m finally finished. This one takes all the things I’ve learned so far, and improves on many of the features from the previous iterations.

N O D E

With interests in modding tech, exploring the boundaries of the digital world, and open source, YouTuber N O D E has become one to watch within the digital maker world. He maintains a channel focused on “the transformative power of technology.”

“Understanding that electronics isn’t voodoo is really powerful”, he explains in his Patreon video. “And learning how to build your own stuff opens up so many possibilities.”

NODE YouTube channel logo - Handheld Linux Terminal v3

The topics of his videos range from stripped-down devices, upgraded tech, and security upgrades, to the philosophy behind technology. He also provides weekly roundups of, and discussions about, new releases.

Essentially, if you like technology, you’ll like N O D E.

Handheld Linux Terminal v3

Subscribers to N O D E’s YouTube channel, of whom there are currently over 44,000, will have seen him documenting variations of this handheld build throughout the last year. By stripping down a Raspberry Pi 3, and incorporating a Zero W, he’s been able to create interesting projects while always putting functionality first.

Handheld Linux Terminal v3

With the third version of his terminal, N O D E has taken experiences gained from previous builds to create something of which he’s obviously extremely proud. And so he should be. The v3 handheld is impressively small considering he managed to incorporate a fully functional keyboard with mouse, a 3.5″ screen, and a fan within the 3D-printed body.

Handheld Linux Terminal v3

“The software side of things is where it really shines though, and the Pi 3 is more than capable of performing most non-intensive tasks,” N O D E goes on to explain. He demonstrates various applications running on Raspbian, plus other operating systems he has pre-loaded onto additional SD cards:

“I have also installed Exagear Desktop, which allows it to run x86 apps too, and this works great. I have x86 apps such as Sublime Text and Spotify running without any problems, and it’s technically possible to use Wine to also run Windows apps on the device.”

We think this is an incredibly neat build, and we can’t wait to see where N O D E takes it next!

The post N O D E’s Handheld Linux Terminal appeared first on Raspberry Pi.

Kodi ‘Trademark Troll’ Has Interesting Views on Co-Opting Other People’s Work

Post Syndicated from Andy original https://torrentfreak.com/kodi-trademark-troll-has-interesting-views-on-co-opting-other-peoples-work-170917/

The Kodi team, operating under the XBMC Foundation, announced last week that a third party had registered the Kodi trademark in Canada and was using it for their own purposes.

That person was Geoff Gavora, who had previously been in communication with the Kodi team, expressing how important the software was to his sales.

“We had hoped, given the positive nature of his past emails, that perhaps he was doing this for the benefit of the Foundation. We learned, unfortunately, that this was not the case,” XBMC Foundation President Nathan Betzen said.

According to the Kodi team, Gavora began delisting Amazon ads placed by companies selling Kodi-enabled products, based on infringement of Gavora’s trademark rights.

“[O]nly Gavora’s hardware can be sold, unless those companies pay him a fee to stay on the store,” Betzen explained.

Predictably, Gavora’s move is being viewed as highly controversial, not least since he’s effectively claiming licensing rights in Canada over what should be a free and open source piece of software. TF obtained one of the notices Amazon sent to a seller of a Kodi-enabled device in Canada, following a complaint from Gavora.

Take down Kodi from Amazon, or pay Gavora

So who is Geoff Gavora and what makes him tick? Thanks to a 2016 interview with Ali Salman of the Rapid Growth Podcast, we have a lot of information from the horse’s mouth.

It all began in 2011, when Gavora began jailbreaking Apple TVs, loading them with XBMC, and selling them to friends.

“I did it as a joke, for beer money from my friends,” Gavora told Salman.

“I’d do it for $25 to $50 and word of mouth spread that I was doing this so we could load on this media center to watch content and online streams from it.”

Intro to the interview with Ali Salman

Soon, however, word of mouth caused the business to grow wings, Gavora claims.

“So they started telling people and I start telling people it’s $50, and then I got so busy so I start telling people it’s $75. I’m getting too busy with my work and with this. And it got to the point where I was making more jailbreaking these Apple TVs than I was at my career, and I wasn’t very happy at my career at that time.”

Jailbreaking was supposed to be a side thing to tide Gavora over until another job came along, but he had a problem – he didn’t come from a technical background. What Gavora did have, however, was a background in marketing and a decent knowledge of how to succeed in customer service, so he majored on that front.

Gavora had come to learn that while people wanted his devices, they weren’t very good at operating XBMC (Kodi’s former name) which he’d loaded onto them. With this in mind, he began offering web support and phone support via a toll-free line.

“I started receiving calls from New York, Dallas, and then Australia, Hong Kong. Everyone around the world was calling me and saying ‘we hear there’s some kid in Calgary, some young child, who’s offering tech support for the Apple TV’,” Gavora said.

But with things apparently going well, a wrench was soon thrown into the works when Apple released the third variant of its Apple TV and Gavora was unable to jailbreak it. This prompted him to market his own Linux-based set-top device and his business, Raw-Media, grew from there.

While it seems likely that so-called ‘Raw Boxes’ were doing reasonably well with consumers, what was the secret of their success? Podcast host Salman asked Gavora for his ‘networking party 10-second pitch’, and the Canadian was happy to oblige.

“I get this all the time actually. I basically tell people that I sell a box that gives them free TV and movies,” he said.

This was met with laughter from the host, to which Gavora added, “That’s sort of the three-second pitch and everyone’s like ‘Oh, tell me more’.”

“Who doesn’t like free TV, come on?” Salman responded. “Yeah exactly,” Gavora said.

The image below, taken from a January 2016 YouTube unboxing video, shows one of the products sold by Gavora’s company.

Raw-Media Kodi Box packaging (note Kodi logo)

Bearing in mind the offer of free movies and TV, the tagline on the box, “Stop paying for things you don’t want to watch, watch more free tv!” initially looks quite provocative. That being said, both the device and Kodi are perfectly capable of playing plenty of legal content from free sources, so there’s no problem there.

What is surprising, however, is that the unboxing video shows the device being booted up, apparently already loaded with infamous third-party Kodi addons including PrimeWire, Genesis, Icefilms, and Navi-X.

The unboxing video showing the Kodi setup

Given that Gavora has registered the Kodi trademark in Canada and prints the official logo on his packaging, this runs counter to the official Kodi team’s aggressive stance towards boxes ready-configured with what they categorize as banned addons. Matters are compounded when one visits the product support site.

As seen in the image below, Raw-Media devices are delivered with a printed card in the packaging informing people where to get the after-sales services Gavora says he built his business upon. The cards advise people to visit No-Issue.ca, a site set up to offer text and video-based support to set-top box buyers.

No-Issue.ca (which is hosted on the same server as raw-media.ca and claimed officially as a sister site here) now redirects to No-Issue.is, as per a 2016 announcement. It has a fairly bland forum but the connected tutorial videos, found on No Issue’s YouTube channel, offer a lot more spice.

Registered under Gavora’s online nickname Gombeek (which is also used on the official Kodi forums), the channel is full of videos detailing how to install and use a wide range of addons.

The No-issue YouTube Channel tutorials

But while supplying tutorial videos is one thing, providing the actual software addons is another. Surprisingly, No-Issue does that too. Filed away under the URL http://solved.no-issue.is/ is a Kodi repository which distributes a wide range of addons, including many that specialize in infringing content, according to the Kodi team.

The No-Issue repository

A source familiar with Raw-Media’s devices informs TF that they’re no longer delivered with addons installed. However, tools hosted on No-Issue.is automate the installation process for the customer, with unlisted YouTube Videos (1,2) providing the instructions.

XBMC Foundation President Nathan Betzen says that situation isn’t ideal.

“If that really is his repo it is disappointing to see that Gavora is charging a fee or outright preventing the sale of boxes with Kodi installed that do not include infringing add-ons, while at the same time he is distributing boxes himself that do include the infringing add-ons like this,” Betzen told TF.

While the legality of this type of service is yet to be properly tested in Canada and may yet emerge as entirely permissible under local law, Gavora himself previously described his business as operating in a gray area.

“If I could go back in time four years, I would’ve been more aggressive in the beginning because there was a lot of uncertainty being in a gray market business about how far I could push it,” he said.

“I really shouldn’t say it’s a gray market because everything I do is completely above board, I just felt it was more gray market so I was a bit scared,” he added.

But, legality aside (which will be determined in due course through various cases 1,2), the situation is still problematic when it comes to the Kodi trademark.

The official Kodi team indicate they don’t want to be associated with any kind of questionable addon or even tutorials for the same. Nevertheless, several of the addons installed by No-Issue (including PrimeWire, cCloud TV, Genesis, Icefilms, MoviesHD, MuchMovies and Navi-X, to name a few), are present on the Kodi team’s official ban list.

The fact remains, however, that Gavora successfully registered the trademark in Canada (one month later it was transferred to a brand new company at the same address), and Kodi now have no control over the situation in the country, short of a settlement or some kind of legal action.

Kodi matters aside, though, we get more insight into Gavora’s attitudes towards intellectual property after learning that he studied gemology and jewelry at school. He’s a long-standing member of jewelry discussion forum Ganoskin.com (his profile links to Gavora.com, a domain Gavora owns, as per information supplied by Amazon).

Things get particularly topical in a 2006 thread titled “When your work gets ripped”. The original poster asked how people feel when their jewelry work gets copied and Gavora made his opinions known.

“I think that what most people forget to remember is that when a piece from Tiffany’s or Cartier is ripped off or copied they don’t usually just copy the work, they will stamp it with their name as well,” Gavora said.

“This is, in fact, fraud and they are deceiving clients into believing they are purchasing genuine Tiffany’s or Cartier pieces. The client is in fact more interested in purchasing from an artist than they are the piece. Laying claim to designs (unless a symbol or name is involved) is outrageous.”

Unless that ‘design’ is called Kodi, of course, then it’s possible to claim it as your own through an administrative process and begin demanding licensing fees from the public. That being said, Gavora does seem to flip back and forth a little, later suggesting that being copied is sometimes ok.

“If someone copies your design and produces it under their own name, I think one should be honored and revel in the fact that your design is successful and has caused others to imitate it and grow from it,” he wrote.

“I look forward to the day I see one of my original designs copied, that is the day I will know my design is a success.”

From their public statements, this opinion isn’t shared by the Kodi team in respect of their product. Despite the Kodi name, software and logo being all their own work, they now find themselves having to claw back rights in Canada, in order to keep the product free in the region. For now, however, that seems like a difficult task.

TorrentFreak wrote to Gavora and asked him why he felt the need to register the Kodi trademark, but we received no response. That means we didn’t get the chance to ask him why he’s taking down Amazon listings for other people’s devices, or about something else that came up in the podcast.

“My biggest weakness, I guess, is that I’m too ethical about how I do my business,” he said, referring to how he deals with customers.

Only time will tell how that philosophy will affect Gavora’s attitudes to trademarks and people’s desire not to be charged for using free, open source software.


Cloud Storage Doesn’t have to be Convoluted, Complex, or Confusing

Post Syndicated from Ahin Thomas original https://www.backblaze.com/blog/cloud-storage-pricing-comparison/

business man frustrated over cloud storage pricing

So why do many vendors make it so hard to get information about how much you’re storing and how much you’re being charged?

Cloud storage is fast becoming the central repository for mission critical information, irreplaceable memories, and in some cases entire corporate and personal histories. Given this responsibility, we believe cloud storage vendors have an obligation to be as transparent as possible in how they interact with their customers.

In that light we decided to challenge four cloud storage vendors and ask two simple questions:

  1. Can a customer understand how much data is stored?
  2. Can a customer understand the bill?

The detailed results are below, but if you wish to skip the details and the screen captures (TL;DR), we’ve summarized the results in the table below.

Summary of Cloud Storage Pricing Test

Our challenge was to upload 1 terabyte of data, store it for one month, and then download it.

Backblaze B2
  Visibility to data stored: Accurate, intuitive display of storage information.
  Ease of understanding the bill: Available on demand, and the site clearly defines what has and will be charged for.
  Cost: $25

Microsoft Azure
  Visibility to data stored: Storage is measured in KiB, but billed by the GB. Even with a calculator, it is unclear how much storage we are using.
  Ease of understanding the bill: Available, but difficult to find. The nearly 30-day lag in billing creates business and accounting challenges.
  Cost: $72

Amazon S3
  Visibility to data stored: Incomplete. From the file browsing user interface, there is no reasonable way to understand how much data is being stored.
  Ease of understanding the bill: Available on demand. While there are some line items that seem unnecessary for our test, the bill is generally straightforward to understand.
  Cost: $71

Google Cloud Service
  Visibility to data stored: Incomplete. From the file browsing user interface, there is no reasonable way to understand how much data is being stored.
  Ease of understanding the bill: Available, but provides descriptions in units that are not on the pricing table nor commonly used.
  Cost: $100

Cloud Storage Test Details

For our tests, we chose Backblaze B2, Microsoft’s Azure, Amazon’s S3, and Google Cloud Storage. Our idea was simple: upload 1 TB of data to the comparable service for each vendor, store it for 1 month, download that 1 TB, then document and share the results.

Let’s start with the most obvious observation, the cost charged by each vendor for the test:

Cost
Backblaze B2 $25
Microsoft Azure $72
Amazon S3 $71
Google Cloud Service $100

Later in this post, we’ll see if we can determine the different cost components (storage, downloading, transactions, etc.) for each vendor, but our first step is to see if we can determine how much data we stored. In some cases, the answer is not as obvious as it would seem.
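To give a rough sense of how those components can combine, here is a minimal sketch of the arithmetic behind the cheapest result in the table. The per-GB rates are assumptions chosen for illustration (they happen to reproduce the $25 Backblaze B2 total above); they are not quoted from any vendor’s current price list.

    # Minimal sketch: estimating the cost of the 1 TB upload / store / download test.
    # The rates below are illustrative assumptions, not official pricing.

    TEST_SIZE_GB = 1000                  # 1 TB expressed in decimal GB

    storage_rate_per_gb_month = 0.005    # assumed $/GB/month for storage
    download_rate_per_gb = 0.02          # assumed $/GB for download
    upload_rate_per_gb = 0.00            # uploads assumed to be free

    storage_cost = TEST_SIZE_GB * storage_rate_per_gb_month * 1   # stored for one month
    download_cost = TEST_SIZE_GB * download_rate_per_gb
    upload_cost = TEST_SIZE_GB * upload_rate_per_gb

    total = storage_cost + download_cost + upload_cost
    print(f"storage ${storage_cost:.2f} + download ${download_cost:.2f} = ${total:.2f}")
    # -> storage $5.00 + download $20.00 = $25.00

Swap in a given vendor’s published rates (plus any per-transaction charges) and the same three lines of arithmetic explain most of the differences in the cost column.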

Test 1: Can a Customer Understand How Much Data Is Stored?

At the core, a provider of a service ought to be able to tell a customer how much of the service he or she is using. In this case, one might assume that providers of Cloud Storage would be able to tell customers how much data is being stored at any given moment. It turns out, it’s not that simple.

Backblaze B2
Logging into a Backblaze B2 account, one is presented with a summary screen that displays all “buckets.” Each bucket displays key summary information, including data currently stored.

B2 Cloud Storage Buckets screenshot

Clicking into a given bucket, one can browse individual files. Each file displays its size, and multiple files can be selected to create a size summary.

B2 file tree screenshot

Summary: Accurate, intuitive display of storage information.

Microsoft Azure

Moving on to Microsoft’s Azure, things get a little more “exciting.” There was no area that we could find where one can determine the total amount of data, in GB, stored with Azure.

There’s an area entitled “usage,” but that wasn’t helpful.

Microsoft Azure cloud storage screenshot

We then moved on to “Overview,” but had a couple of challenges. The first issue was that we were presented with KiB (kibibyte) as a unit of measure. One GB (the unit of measure used in Azure’s pricing table) equates to roughly 976,563 KiB. It struck us as odd that things would be summarized by a unit of measure different from the billing unit of measure.

Microsoft Azure usage dashboard screenshot

Summary: Storage is being measured in KiB, but is billed by the GB. Even with a calculator, it is unclear how much storage we are using.
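For what it’s worth, the conversion itself is mechanical once the definitions are pinned down: a KiB is 1,024 bytes, while the GB on the pricing table is 1,000,000,000 bytes. A minimal sketch (the portal figure is a made-up example, not our actual Azure number):

    # Convert a KiB figure (as displayed in the Azure portal) to decimal GB (as billed).
    KIB = 1024                 # bytes in one kibibyte
    GB = 1_000_000_000         # bytes in one decimal gigabyte

    portal_kib = 976_562_500   # hypothetical portal reading, in KiB

    gb_stored = portal_kib * KIB / GB
    print(f"{portal_kib:,} KiB is {gb_stored:,.1f} GB")   # -> 1,000.0 GB

    print(f"1 GB is {GB / KIB:,.1f} KiB")                 # -> 976,562.5 KiB, the "roughly 976,563" above

The point stands, though: a customer shouldn’t need to do this conversion at all.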

Amazon S3

Next we checked on the data we were storing in S3. We again ran into problems.

In the bucket overview, we were able to identify our buckets. However, we could not tell how much data was being stored.

Amazon S3 cloud storage buckets screenshot

Drilling into a bucket, the detail view does tell us file size. However, there was no method for summarizing the data stored within that bucket or for multiple files.

Amazon S3 cloud storage buckets usage screenshot

Summary: Incomplete. From the file browsing user interface, there is no reasonable way to understand how much data is being stored.

Google Cloud Storage (“GCS”)

GCS proved to have its own quirks, as well.

One can easily find the “bucket” summary; however, it does not provide information on data stored.

Google Cloud Storage Bucket screenshot

Clicking into the bucket, one can see files and the size of an individual file. However, there is no way to see the total amount of data stored.

Google Cloud Storage bucket files screenshot

Summary: Incomplete. From the file browsing user interface, there is no reasonable way to understand how much data is being stored.

Test 1 Conclusions

We knew how much storage we were uploading and, in many cases, the user will have some sense of the amount of data they are uploading. However, it strikes us as odd that many vendors won’t tell you how much data you have stored. Even stranger are the vendors that provide reporting in a unit of measure that is different from the units in their pricing table.

Test 2: Can a Customer Understand The Bill?

The cloud storage industry has done itself no favors with its tiered pricing that requires a calculator to figure out what’s going on. Setting that aside for a moment, one would presume that bills would be created in clear, auditable ways.

Backblaze

Inside of the Backblaze user interface, one finds a navigation link entitled “Billing.” Clicking on that, the user is presented with line items for previous bills, payments, and an estimate for the upcoming charges.

Backblaze B2 billing screenshot

One can expand any given row to see the line item transactions composing each bill.

Backblaze B2 billing details screenshot

Summary: Available on demand, and the site clearly defines what has and will be charged for.

Azure

Trying to understand the Azure billing proved to be a bit tricky.

On August 6th, we logged into the billing console and were presented with this screen.

Microsoft Azure billing screenshot

As you can see, on Aug 6th, billing for the period of May-June was not available for download. For the period ending June 26th, we were charged nearly a month later, on July 24th. Clicking into that row item does display line item information.

Microsoft Azure cloud storage billing details screenshot

Summary: Available, but difficult to find. The nearly 30 day lag in billing creates business and accounting challenges.

Amazon S3

Amazon presents a clean billing summary and enables users to “drill down” into line items.

Going to the billing area of AWS, one can survey various monthly bills and is presented with a clean summary of billing charges.

AWS billing screenshot

Expanding into the billing detail, Amazon articulates each line item charge. Within each line item, charges are broken out into sub-line items for the different tiers of pricing.

AWS billing details screenshot

Summary: Available on demand. While there are some line items that seem unnecessary for our test, the bill is generally straight-forward to understand.

Google Cloud Storage (“GCS”)

This was an area where the GCS User Interface, which was otherwise relatively intuitive, became confusing.

Going to the Billing Overview page did not offer much in the way of an overview on charges.

Google Cloud Storage billing screenshot

However, moving down to the “Transactions” section did provide line item detail on all the charges incurred. Yet, similar to Azure introducing the concept of KiB, Google introduces the concept of the equally confusing gibibyte (GiB). While all of Google’s pricing tables are listed in terms of GB, the line items reference GiB. 1 GiB is 1.07374 GB.

Google Cloud Storage billing details screenshot

Summary: Available, but provides descriptions in units that are not on the pricing table nor commonly used.
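Reconciling those line items with the pricing table is the same kind of exercise as with Azure’s KiB, just with a different constant: a GiB is 2^30 bytes, a GB is 10^9 bytes. A minimal sketch with a made-up line item:

    # Convert a GiB line item (as shown on the GCS bill) to decimal GB (as priced).
    GIB = 2 ** 30              # bytes in one gibibyte
    GB = 1_000_000_000         # bytes in one decimal gigabyte

    print(f"1 GiB is {GIB / GB:.5f} GB")    # -> 1.07374, the figure mentioned above

    line_item_gib = 931.32     # hypothetical line item from the bill, in GiB
    gb_equivalent = line_item_gib * GIB / GB
    print(f"{line_item_gib} GiB is about {gb_equivalent:,.1f} GB")   # -> about 1,000.0 GB

Again, it works, but the customer is left doing unit conversions that the billing page could have done for them.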

Test 2 Conclusions

Clearly, some vendors do a better job than others in making their pricing available and understandable. From a transparency standpoint, it’s difficult to justify why a vendor would have their pricing table in units of X, but then put units of Y in the user interface.

Transparency: The Backblaze Way

Transparency isn’t easy. At Backblaze, we believe in investing time and energy into presenting the most intuitive user interfaces that we can create. We take pride in our heritage in the consumer backup space — servicing consumers has taught us how to make things understandable and usable. We do our best to apply those lessons to everything we do.

This philosophy reflects our desire to make our products usable, but it’s also part of a larger ethos of being transparent with our customers. We are being trusted with precious data. We want to repay that trust with, among other things, transparency.

It’s that spirit that was behind the decision to publish our hard drive performance stats, to open source the infrastructure that is behind us having the lowest cost of storage in the industry, and also to open source our erasure coding (the math that drives a significant portion of our redundancy for your data).

Why? We believe it’s not just about good user interface, it’s about the relationship we want to build with our customers.

The post Cloud Storage Doesn’t have to be Convoluted, Complex, or Confusing appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Nazis are bad

Post Syndicated from Eevee original https://eev.ee/blog/2017/08/13/nazis-are-bad/

Anonymous asks:

Could you talk about something related to the management/moderation and growth of online communities? IOW your thoughts on online community management, if any.

I think you’ve tweeted about this stuff in the past so I suspect you have thoughts on this, but if not, again, feel free to just blog about … anything 🙂

Oh, I think I have some stuff to say about community management, in light of recent events. None of it hasn’t already been said elsewhere, but I have to get this out.

Hopefully the content warning is implicit in the title.


I am frustrated.

I’ve gone on before about a particularly bothersome phenomenon that hurts a lot of small online communities: often, people are willing to tolerate the misery of others in a community, but then get up in arms when someone pushes back. Someone makes a lot of off-hand, off-color comments about women? Uses a lot of dog-whistle terms? Eh, they’re not bothering anyone, or at least not bothering me. Someone else gets tired of it and tells them to knock it off? Whoa there! Now we have the appearance of conflict, which is unacceptable, and people will turn on the person who’s pissed off — even though they’ve been at the butt end of an invisible conflict for who knows how long. The appearance of peace is paramount, even if it means a large chunk of the population is quietly miserable.

Okay, so now, imagine that on a vastly larger scale, and also those annoying people who know how to skirt the rules are Nazis.


The label “Nazi” gets thrown around a lot lately, probably far too easily. But when I see a group of people doing the Hitler salute, waving large Nazi flags, wearing Nazi armbands styled after the SS, well… if the shoe fits, right? I suppose they might have flown across the country to join a torch-bearing mob ironically, but if so, the joke is going way over my head. (Was the murder ironic, too?) Maybe they’re not Nazis in the sense that the original party doesn’t exist any more, but for ease of writing, let’s refer to “someone who espouses Nazi ideology and deliberately bears a number of Nazi symbols” as, well, “a Nazi”.

This isn’t a new thing, either; I’ve stumbled upon any number of Twitter accounts that are decorated in Nazi regalia. I suppose the trouble arises when perfectly innocent members of the alt-right get unfairly labelled as Nazis.

But hang on; this march was called “Unite the Right” and was intended to bring together various far right sub-groups. So what does their choice of aesthetic say about those sub-groups? I haven’t heard, say, alt-right coiner Richard Spencer denounce the use of Nazi symbology — extra notable since he was fucking there and apparently didn’t care to discourage it.


And so begins the rule-skirting. “Nazi” is definitely overused, but even using it to describe white supremacists who make not-so-subtle nods to Hitler is likely to earn you some sarcastic derailment. A Nazi? Oh, so is everyone you don’t like and who wants to establish a white ethno state a Nazi?

Calling someone a Nazi — or even a white supremacist — is an attack, you see. Merely expressing the desire that people of color not exist is perfectly peaceful, but identifying the sentiment for what it is causes visible discord, which is unacceptable.

These clowns even know this sort of thing and strategize around it. Or, try, at least. Maybe it wasn’t that successful this weekend — though flicking through Charlottesville headlines now, they seem to be relatively tame in how they refer to the ralliers.

I’m reminded of a group of furries — the alt-furries — who have been espousing white supremacy and wearing red armbands with a white circle containing a black… pawprint. Ah, yes, that’s completely different.


So, what to do about this?

“Ignore them” is a popular option, often espoused to bullied children by parents who have never been bullied, shortly before they resume complaining about passive-aggressive office politics. The trouble with ignoring them is that, just like in smaller communities, they have a tendency to fester. They take over large chunks of influential Internet surface area like 4chan and Reddit; they help get an inept buffoon elected; and then they start to have torch-bearing rallies and run people over with cars.

4chan illustrates a kind of corollary here. Anyone who’s steeped in Internet Culture™ is surely familiar with 4chan; I was never a regular visitor, but it had enough influence that I was still aware of it and some of its culture. It was always thick with irony, which grew into a sort of ironic detachment — perhaps one of the major sources of the recurring online trope that having feelings is bad — which proceeded into ironic racism.

And now the ironic racism is indistinguishable from actual racism, as tends to be the case. Do they “actually” “mean it”, or are they just trying to get a rise out of people? What the hell is unironic racism if not trying to get a rise out of people? What difference is there to onlookers, especially as they move to become increasingly involved with politics?

“It’s just a joke” and “it was just a thoughtless comment” are exceptionally common defenses made by people desperate to preserve the illusion of harmony, but the strain of overt white supremacy currently running rampant through the US was built on those excuses.


The other favored option is to debate them, to defeat their ideas with better ideas.

Well, hang on. What are their ideas, again? I hear they were chanting stuff like “go back to Africa” and “fuck you, faggots”. Given that this was an overtly political rally (and again, the Nazi fucking regalia), I don’t think it’s a far cry to describe their ideas as “let’s get rid of black people and queer folks”.

This is an underlying proposition: that white supremacy is inherently violent. After all, if the alt-right seized total political power, what would they do with it? If I asked the same question of Democrats or Republicans, I’d imagine answers like “universal health care” or “screw over poor people”. But people whose primary goal is to have a country full of only white folks? What are they going to do, politely ask everyone else to leave? They’re invoking the memory of people who committed genocide and also tried to take over the fucking world. They are outright saying, these are the people we look up to, this is who we think had a great idea.

How, precisely, does one defeat these ideas with rational debate?

Because the underlying core philosophy beneath all this is: “it would be good for me if everything were about me”. And that’s true! (Well, it probably wouldn’t work out how they imagine in practice, but it’s true enough.) Consider that slavery is probably fantastic if you’re the one with the slaves; the issue is that it’s reprehensible, not that the very notion contains some kind of 101-level logical fallacy. That’s probably why we had a fucking war over it instead of hashing it out over brunch.

…except we did hash it out over brunch once, and the result was that slavery was still allowed but slaves only counted as 60% of a person for the sake of counting how much political power states got. So that’s how rational debate worked out. I’m sure the slaves were thrilled with that progress.


That really only leaves pushing back, which raises the question of how to push back.

And, I don’t know. Pushing back is much harder in spaces you don’t control, spaces you’re already struggling to justify your own presence in. For most people, that’s most spaces. It’s made all the harder by that tendency to preserve illusory peace; even the tamest request that someone knock off some odious behavior can be met by pushback, even by third parties.

At the same time, I’m aware that white supremacists prey on disillusioned young white dudes who feel like they don’t fit in, who were promised the world and inherited kind of a mess. Does criticism drive them further away? The alt-right also opposes “political correctness”, i.e. “not being a fucking asshole”.

God knows we all suck at this kind of behavior correction, even within our own in-groups. Fandoms have become almost ridiculously vicious as platforms like Twitter and Tumblr amplify individual anger to deafening levels. It probably doesn’t help that we’re all just exhausted, that every new fuck-up feels like it bears the same weight as the last hundred combined.

This is the part where I admit I don’t know anything about people and don’t have any easy answers. Surprise!


The other alternative is, well, punching Nazis.

That meme kind of haunts me. It raises really fucking complicated questions about when violence is acceptable, in a culture that’s completely incapable of answering them.

America’s relationship to violence is so bizarre and two-faced as to be almost incomprehensible. We worship it. We have the biggest military in the world by an almost comical margin. It’s fairly mainstream to own deadly weapons for the express stated purpose of armed revolution against the government, should that become necessary, where “necessary” is left ominously undefined. Our movies are about explosions and beating up bad guys; our video games are about explosions and shooting bad guys. We fantasize about solving foreign policy problems by nuking someone — hell, our talking heads are currently in polite discussion about whether we should nuke North Korea and annihilate up to twenty-five million people, as punishment for daring to have the bomb that only we’re allowed to have.

But… violence is bad.

That’s about as far as the other side of the coin gets. It’s bad. We condemn it in the strongest possible terms. Also, guess who we bombed today?

I observe that the one time Nazis were a serious threat, America was happy to let them try to take over the world until their allies finally showed up on our back porch.

Maybe I don’t understand what “violence” means. In a quest to find out why people are talking about “leftist violence” lately, I found a National Review article from May that twice suggests blocking traffic is a form of violence. Anarchists have smashed some windows and set a couple fires at protests this year — and, hey, please knock that crap off? — which is called violence against, I guess, Starbucks. Black Lives Matter could be throwing a birthday party and Twitter would still be abuzz with people calling them thugs.

Meanwhile, there’s a trend of murderers with increasingly overt links to the alt-right, and everyone is still handling them with kid gloves. First it was murders by people repeating their talking points; now it’s the culmination of a torches-and-pitchforks mob. (Ah, sorry, not pitchforks; assault rifles.) And we still get this incredibly bizarre both-sides-ism, a White House that refers to the people who didn’t murder anyone as “just as violent if not more so”.


Should you punch Nazis? I don’t know. All I know is that I’m extremely dissatisfied with discourse that’s extremely alarmed by hypothetical punches — far more mundane than what you’d see after a sporting event — but treats a push for ethnic cleansing as a mere difference of opinion.

The equivalent to a punch in an online space is probably banning, which is almost laughable in comparison. It doesn’t cause physical harm, but it is a use of concrete force. Doesn’t pose quite the same moral quandary, though.

Somewhere in the middle is the currently popular pastime of doxxing (doxxxxxxing) people spotted at the rally in an attempt to get them fired or whatever. Frankly, that skeeves me out, though apparently not enough that I’m directly chastising anyone for it.


We aren’t really equipped, as a society, to deal with memetic threats. We aren’t even equipped to determine what they are. We had a fucking world war over this, and now people are outright saying “hey I’m like those people we went and killed a lot in that world war” and we give them interviews and compliment their fashion sense.

A looming question is always, what if they then do it to you? What if people try to get you fired, to punch you for your beliefs?

I think about that a lot, and then I remember that it’s perfectly legal to fire someone for being gay in half the country. (Courts are currently wrangling whether Title VII forbids this, but with the current administration, I’m not optimistic.) I know people who’ve been fired for coming out as trans. I doubt I’d have to look very far to find someone who’s been punched for either reason.

And these aren’t even beliefs; they’re just properties of a person. You can stop being a white supremacist, one of those people yelling “fuck you, faggots”.

So I have to recuse myself from this asinine question, because I can’t fairly judge the risk of retaliation when it already happens to people I care about.

Meanwhile, if a white supremacist does get punched, I absolutely still want my tax dollars to pay for their universal healthcare.


The same wrinkle comes up with free speech, which is paramount.

The ACLU reminds us that the First Amendment “protects vile, hateful, and ignorant speech”. I think they’ve forgotten that that’s a side effect, not the goal. No one sat down and suggested that protecting vile speech was some kind of noble cause, yet that’s how we seem to be treating it.

The point was to avoid a situation where the government is arbitrarily deciding what qualifies as vile, hateful, and ignorant, and was using that power to eliminate ideas distasteful to politicians. You know, like, hypothetically, if they interrogated and jailed a bunch of people for supporting the wrong economic system. Or convicted someone under the Espionage Act for opposing the draft. (Hey, that’s where the “shouting fire in a crowded theater” line comes from.)

But these are ideas that are already in the government. Bannon, a man who was chair of a news organization he himself called “the platform for the alt-right”, has the President’s ear! How much more mainstream can you get?

So again I’m having a little trouble balancing “we need to defend the free speech of white supremacists or risk losing it for everyone” against “we fairly recently were ferreting out communists and the lingering public perception is that communists are scary, not that the government is”.


This isn’t to say that freedom of speech is bad, only that the way we talk about it has become fanatical to the point of absurdity. We love it so much that we turn around and try to apply it to corporations, to platforms, to communities, to interpersonal relationships.

Look at 4chan. It’s completely public and anonymous; you only get banned for putting the functioning of the site itself in jeopardy. Nothing is stopping a larger group of people from joining its politics board and tilting sentiment the other way — except that the current population is so odious that no one wants to be around them. Everyone else has evaporated away, as tends to happen.

Free speech is great for a government, to prevent quashing politics that threaten the status quo (except it’s a joke and they’ll do it anyway). People can’t very readily just bail when the government doesn’t like them, anyway. It’s also nice to keep in mind to some degree for ubiquitous platforms. But the smaller you go, the easier it is for people to evaporate away, and the faster pure free speech will turn the place to crap. You’ll be left only with people who care about nothing.


At the very least, it seems clear that the goal of white supremacists is some form of destabilization, of disruption to the fabric of a community for purely selfish purposes. And those are the kinds of people you want to get rid of as quickly as possible.

Usually this is hard, because they act just nicely enough to create some plausible deniability. But damn, if someone is outright telling you they love Hitler, maybe skip the principled hand-wringing and eject them.

Pirate Bay Founder Wants to Save Lives With His New App

Post Syndicated from Andy original https://torrentfreak.com/pirate-bay-founder-wants-to-save-lives-with-his-new-app-170714/

Of all the early founders of The Pirate Bay, it is Peter Sunde who has remained most obviously in the public eye. Now distanced from the site, Sunde has styled himself as a public speaker and entrepreneur.

Earlier this year the Swede (who is of both Norwegian and Finnish ancestry) sold his second most famous project Flattr to the parent company of Adblock Plus. Now, however, he has another digital baby to nurture, and this one is quite interesting.

Like many countries, Sweden operates a public early warning system. Popularly known as ‘Hesa Fredrik’, it consists of extremely loud outdoor sirens accompanied by radio and television messages.

The sirens can be activated in specific areas of the country wherever the problems exist. Fire, floods, gas leaks, threats to the water system, terrorist attacks or even war could trigger the alarm.

Just recently the ‘Hesa Fredrik’ alarm was sounded in Sweden, yet there was no planned test and no emergency. The public didn’t know that, though, and as people struggled to find information, the authorities’ websites crashed under the strain. The earliest news report indicating that it was a false alarm appeared behind a news site’s paywall. The national police site published no information.

The false alarm

Although Sunde heard the sirens, it was an earlier incident that motivated him to find a better solution. Speaking with Swedish site Breakit, Sunde says he got the idea during the Västmanland wildfire, which burned for six weeks straight in 2014 and became the largest fire in Sweden for 40 years.

“I got the idea during Västmanland fire. It took several days before text messages were sent to everyone in the area but by then it was already out of control. I thought that was so very bad when it is so easy to build something better,” Sunde said.

Sunde’s solution is the Hesa Fredrika app, which is currently under development by himself and several former members of the Flattr team.

“The goal is for everyone to download the app and then forget about it,” Sunde says.

When one thinks about the problem Sunde is trying to solve (i.e. the lack of decent and timely information in a crisis), today’s mobile phones provide the perfect solution. Not only do most people have one (or are near someone who does), they provide the perfect platform to immediately deliver emergency services advice to people in a precise location.

“It is not enough for a small text to appear in the corner of the screen. I want to build something that makes the phone vibrate and sound so that you notice it properly,” Sunde told Breakit.

But while such an app could genuinely save lives in the event of a frankly rare emergency, Sunde has bigger ideas for the software that could extend its usefulness significantly.

Users will also be invited to add information about themselves, such as their doctor’s name or if they are a blood donor. The app user could then be messaged if there was an urgent need for a particular match. But while the app will be rolled out soon, it won’t be rushed.

“Since it is extremely important to the quality of the messages, we want as many partnerships as possible before we launch something,” Sunde says, adding that in true Pirate Bay style, it will be completely free for everyone.

“So it will remain forever,” he says. “My philosophy is such that I do not want people to pay for things that can save their lives.”


Developers and Ethics

Post Syndicated from Bozho original https://techblog.bozho.net/developers-and-ethics/

“What are some areas you are particularly interested in?” – recruiters (head-hunters) tend to ask that question a lot. I don’t have a good answer for that – I’ll know it when I see it. But I have a list of areas that I wouldn’t like to work in. And one of them is gambling.

Several years ago I got a very lucrative offer from a gambling company, both well paid and technically challenging. But I rejected it. Because I didn’t want to contribute to abusing people’s weaknesses for the sake of getting their money. And no, I’m not a raging Marxist, but gambling is bad. You may argue that it’s a necessary vice and people need it to suppress other internal struggles, but I’m not buying that as a motivator.

I felt it’s unethical to write code that does that. Like I feel it’s unethical to profile users’ behaviours and “read” their emails in order to target ads, or to write bots to disseminate fake news.

A few months ago I was part of the campaign HQ for a party in a parliamentary election. Cambridge Analytica had already become popular after “delivering Brexit and Trump’s victory”, so using voters’ data in order to target messages at them sounded like the new cool thing. As head of IT & data, I rejected this approach. Because it would be unethical to bait unsuspecting users into taking dumb tests in order to provide us with Facebook tokens. Yes, we didn’t have any money to hire Cambridge Analytica-like companies, but even if we had, does “outsourcing” the dubious practice change anything? If you pay someone to trick users into unknowingly giving up their personal data, it’s as if you did it yourself.

This could be a very long post about technology and ethics. But it won’t be, as this is a technical blog, not a philosophical one. It won’t be about philosophy – for interesting takes on the matter you can listen to Damon Horowitz’s TED talk or even go through all of Michael Sandel’s Justice lectures at Harvard. And it won’t be about how companies should be ethical (e.g. following the ethical design manifesto).

Instead, it will be a short post focusing on developers and their ethical choices.

I think we have the freedom to be ethical – there’s so much demand on the job market that rejecting an offer, refusing to do something, or leaving a company for ethical reasons is something we have the luxury to do without compromising our well-being. When asked to do something unethical, we can refuse (several years ago I was asked to take part in some shady interactions related to a potential future government contract, which I refused to do). When offered jobs that are slightly better paid but would have us build abusive technology, we can turn the offer down. When a new feature requires us to breach people’s privacy, we can argue against it, and ultimately not do it.

But in order to start making these ethical choices, we have to start thinking about ethics. To put ourselves in context. We, developers, are building the world of tomorrow (it sounds grandiose, but we know it’s way more mundane than that). We are the “tools” with which future products will be shaped. And yes, that’s true even for the average back-office system of an insurance company (which allows for raising the insurance for pre-existing conditions), and true for boring banking software (which allows mortgages way beyond the actual coverage the bank has), and so on.

Are these decisions ours to make? Isn’t it legislators that should define what’s allowed and what isn’t? We are just building whatever they tell us to build. Forgive me the far-fetched analogy, but Nazi Germany was an anti-humanity machine based on people who “just followed orders”. Yes, if we refuse, someone else will come along and do it, but collective ethics gets built over time.

As Hannah Arendt put it, “The sad truth is that most evil is done by people who never make up their minds to be good or evil.” We may think that as developers we don’t have a say. But without us, no software can be built. So with our individual ethical stance, a certain piece of unethical software may not get built, or may not become successful, and that’s a stance worth considering, especially when it costs us next to nothing.

The post Developers and Ethics appeared first on Bozho's tech blog.

The Pirate Bay Isn’t Affected By Adverse Court Rulings – Everyone Else Is

Post Syndicated from Andy original https://torrentfreak.com/the-pirate-bay-isnt-affected-by-adverse-court-rulings-everyone-else-is-170618/

For more than a decade The Pirate Bay has been the world’s most controversial site. Delivering huge quantities of copyrighted content to the masses, the platform is revered and reviled across the copyright spectrum.

Its reputation is one of a defiant Internet swashbuckler, but due to changes in how the site has been run in more recent times, its current philosophy is more difficult to gauge. What has never been in doubt, however, is the site’s original intent to be as provocative as possible.

Through endless publicity stunts, some real, some just for the ‘lulz’, The Pirate Bay managed to attract a massive audience, all while incurring the wrath of every major copyright holder in the world.

Make no mistake, they all queued up to strike back, but every subsequent rightsholder action was met by a Pirate Bay middle finger, two fingers, or chin flick, depending on the mood of the day. This only served to further delight the masses, who happily spread the word while keeping their torrents flowing.

This vicious circle of being targeted by the entertainment industries, mocking them, and then reaping the traffic benefits, developed into the cheapest long-term marketing campaign the Internet had ever seen. But nothing is ever truly for free and there have been consequences.

After taunting Hollywood and the music industry with its refusals to capitulate, endless legal action that the site would have ordinarily been forced to participate in largely took place without The Pirate Bay being present. It doesn’t take a law degree to work out what happened in each and every one of those cases, whatever complex route they took through the legal system. No defense, no win.

For example, the web-blocking phenomenon across the UK, Europe, Asia and Australia was driven by the site’s absolute resilience and although there would clearly have been other scapegoats had The Pirate Bay disappeared, the site was the ideal bogeyman the copyright lobby required to move forward.

Filing blocking lawsuits and bringing hosts, advertisers, and ISPs on board for anti-piracy initiatives were also made easier with the ‘evil’ Pirate Bay still online. Immune to every anti-piracy technique under the sun, the platform’s continued existence in the face of all onslaughts only strengthened the cases of those arguing for even more drastic measures.

Over a decade, this has meant a significant tightening of the sharing and streaming climate. Without any big legislative changes but plenty of case law against The Pirate Bay, web-blocking is now a walk in the park, ad hoc domain seizures are a fairly regular occurrence, and few companies want to host sharing sites. Advertisers and brands are also hesitant over where they place their ads. It’s a very different world to the one of 10 years ago.

While it would be wrong to attribute every tightening of the noose to the actions of The Pirate Bay, there’s little doubt that the site and its chaotic image played a huge role in where copyright enforcement is today. The platform set out to provoke and succeeded in every way possible, gaining supporters in their millions. It could also be argued it kicked a hole in a hornets’ nest, releasing the hell inside.

But perhaps the site’s most amazing achievement is the way it has managed to stay online, despite all the turmoil.

This week yet another ruling, this time from the powerful European Court of Justice, found that by offering links in the manner it does, The Pirate Bay and other sites are liable for communicating copyright works to the public. Of course, this prompted the usual swathe of articles claiming that this could be the final nail in the site’s coffin.

Wrong.

In common with every ruling, legal defeat, and legislative restriction put in place due to the site’s activities, this week’s decision from the ECJ will have zero effect on the Pirate Bay’s availability. For right or wrong, the site was breaking the law long before this ruling and will continue to do so until it decides otherwise.

What we have instead is a further tightened legal landscape that will have a lasting effect on everything BUT the site, including weaker torrent sites, Internet users, and user-uploaded content sites such as YouTube.

With The Pirate Bay carrying on regardless, that is nothing short of remarkable.


Teaching tech

Post Syndicated from Eevee original https://eev.ee/blog/2017/06/10/teaching-tech/

A sponsored post from Manishearth:

I would kinda like to hear about any thoughts you have on technical teaching or technical writing. Pedagogy is something I care about. But I don’t know how much you do, so feel free to ignore this suggestion 🙂

Good news: I care enough that I’m trying to write a sorta-kinda-teaching book!

Ironically, one of the biggest problems I’ve had with writing the introduction to that book is that I keep accidentally rambling on for pages about problems and difficulties with teaching technical subjects. So maybe this is a good chance to get it out of my system.

Phaser

I recently tried out a new thing. It was Phaser, but this isn’t a dig on them in particular, just a convenient example fresh in my mind. If anything, they’re better than most.

As you can see from Phaser’s website, it appears to have tons of documentation. Two of the six headings are “LEARN” and “EXAMPLES”, which seems very promising. And indeed, Phaser offers:

  • Several getting-started walkthroughs
  • Possibly hundreds of examples
  • A news feed that regularly links to third-party tutorials
  • Thorough API docs

Perfect. Beautiful. Surely, a dream.

Well, almost.

The examples are all microscopic, usually focused around a single tiny feature — many of them could be explained just as well with one line of code. There are a few example games, but they’re short aimless demos. None of them are complete games, and there’s no showcase either. Games sometimes pop up in the news feed, but most of them don’t include source code, so they’re not useful for learning from.

Likewise, the API docs are just API docs, leading to the sorts of problems you might imagine. For example, in a few places there’s a mention of a preUpdate stage that (naturally) happens before update. You might rightfully wonder what kinds of things happen in preUpdate — and more importantly, what should you put there, and why?

Let’s check the API docs for Phaser.Group.preUpdate:

The core preUpdate – as called by World.

Okay, that didn’t help too much, but let’s check what Phaser.World has to say:

The core preUpdate – as called by World.

Ah. Hm. It turns out World is a subclass of Group and inherits this method — and thus its unaltered docstring — from Group.

I did eventually find some brief docs attached to Phaser.Stage (but only by grepping the source code). It mentions what the framework uses preUpdate for, but not why, and not when I might want to use it too.


The trouble here is that there’s no narrative documentation — nothing explaining how the library is put together and how I’m supposed to use it. I get handed some brief primers and a massive reference, but nothing in between. It’s like buying an O’Reilly book and finding out it only has one chapter followed by a 500-page glossary.

API docs are great if you know specifically what you’re looking for, but they don’t explain the best way to approach higher-level problems, and they don’t offer much guidance on how to mesh nicely with the design of a framework or big library. Phaser does a decent chunk of stuff for you, off in the background somewhere, so it gives the strong impression that it expects you to build around it in a particular way… but it never tells you what that way is.

Tutorials

Ah, but this is what tutorials are for, right?

I confess I recoil whenever I hear the word “tutorial”. It conjures an image of a uniquely useless sort of post, which goes something like this:

  1. Look at this cool thing I made! I’ll teach you how to do it too.

  2. Press all of these buttons in this order. Here’s a screenshot, which looks nothing like what you have, because I’ve customized the hell out of everything.

  3. You did it!

The author is often less than forthcoming about why they made any of the decisions they did, where you might want to try something else, or what might go wrong (and how to fix it).

And this is to be expected! Writing out any of that stuff requires far more extensive knowledge than you need just to do the thing in the first place, and you need to do a good bit of introspection to sort out something coherent to say.

In other words, teaching is hard. It’s a skill, and it takes practice, and most people blogging are not experts at it. Including me!


With Phaser, I noticed that several of the third-party tutorials I tried to look at were 404s — sometimes less than a year after they were linked on the site. Pretty major downside to relying on the community for teaching resources.

But I also notice that… um…

Okay, look. I really am not trying to rag on this author. I’m not. They tried to share their knowledge with the world, and that’s a good thing, something worthy of praise. I’m glad they did it! I hope it helps someone.

But for the sake of example, here is the most recent entry in Phaser’s list of community tutorials. I have to link it, because it’s such a perfect example. Consider:

  • The post itself is a bulleted list of explanation followed by a single contiguous block of 250 lines of source code. (Not that there’s anything wrong with bulleted lists, mind you.) That code contains zero comments and zero blank lines.

  • This is only part two in what I think is a series aimed at beginners, yet the title and much of the prose focus on object pooling, a performance hack that’s easy to add later and that’s almost certainly unnecessary for a game this simple. There is no explanation of why this is done; the prose only says you’ll understand why it’s critical once you add a lot more game objects.

  • It turns out I only have two things to say here so I don’t know why I made this a bulleted list.

In short, it’s not really a guided explanation; it’s “look what I did”.

And that’s fine, and it can still be interesting. I’m not sure English is even this person’s first language, so I’m hardly going to criticize them for not writing a novel about platforming.

The trouble is that I doubt a beginner would walk away from this feeling very enlightened. They might be closer to having the game they wanted, so there’s still value in it, but it feels closer to having someone else do it for them. And an awful lot of tutorials I’ve seen — particularly of the “post on some blog” form (which I’m aware is the genre of thing I’m writing right now) — look similar.

This isn’t some huge social problem; it’s just people writing on their blog and contributing to the corpus of written knowledge. It does become a bit stickier when a large project relies on these community tutorials as its main set of teaching aids.


Again, I’m not ragging on Phaser here. I had a slightly frustrating experience with it, coming in knowing what I wanted but never finding the semantics described anywhere; even so, I do sympathize. Teaching is hard, writing documentation is hard, and programmers would usually rather program than do either of those things. For free projects that run on volunteer work, and in an industry where anything other than programming is a little undervalued, getting good docs written can be tricky.

(Then again, Phaser sells books and plugins, so maybe they could hire a documentation writer. Or maybe the whole point is for you to buy the books?)

Some pretty good docs

Python has pretty good documentation. It introduces the language with a tutorial, then documents everything else in both a library and language reference.

This sounds an awful lot like Phaser’s setup, but there’s some considerable depth in the Python docs. The tutorial is highly narrative and walks through quite a few corners of the language, stopping to mention common pitfalls and possible use cases. I clicked an arbitrary heading and found a pleasant, informative read that somehow avoids being bewilderingly dense.

The API docs also take on a narrative tone — even something as humble as the collections module offers numerous examples, use cases, patterns, recipes, and hints of interesting ways you might extend the existing types.
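
As a tiny imitation of that style (my own toy snippet, not something quoted from the docs), here’s the sort of everyday problem those pages keep solving for you, in a couple of lines each:

    from collections import Counter, defaultdict

    words = "the quick brown fox jumps over the lazy dog the end".split()

    # Counter: tally things, then ask questions about the tallies.
    tally = Counter(words)
    print(tally["the"])            # 3
    print(tally.most_common(1))    # [('the', 3)]

    # defaultdict: group items without checking for missing keys first.
    by_first_letter = defaultdict(list)
    for word in words:
        by_first_letter[word[0]].append(word)
    print(by_first_letter["t"])    # ['the', 'the', 'the']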

I’m being a little vague and hand-wavey here, but it’s hard to give specific examples without just quoting two pages of Python documentation. Hopefully you can see right away what I mean if you just take a look at them. They’re good docs, Bront.

I’ve likewise always enjoyed the SQLAlchemy documentation, which follows much the same structure as the main Python documentation. SQLAlchemy is a database abstraction layer plus ORM, so it can do a lot of subtly intertwined stuff, and the complexity of the docs reflects this. Figuring out how to do very advanced things correctly, in particular, can be challenging. But for the most part it does a very thorough job of introducing you to a large library with a particular philosophy and how to best work alongside it.

I softly contrast this with, say, the Perl documentation.

It’s gotten better since I first learned Perl, but Perl’s docs are still a bit of a strange beast. They exist as a flat collection of manpage-like documents with terse names like perlootut. The documentation is certainly thorough, but much of it has a strange… allocation of detail.

For example, perllol — the explanation of how to make a list of lists, which somehow merits its own separate documentation — offers no fewer than nine similar variations of the same code for reading a file into a nested list of the words on each line. Where Python offers examples for a variety of different problems, Perl shows you a lot of subtly different ways to do the same basic thing.
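
For contrast, here’s the entire task in Python as I’d sketch it off the top of my head (mine, not taken from either manual):

    # One nested list: each inner list holds the words from one line of the file.
    with open("data.txt") as f:
        words_per_line = [line.split() for line in f]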

A similar problem is that Perl’s docs sometimes offer far too much context; consider the references tutorial, which starts by explaining that references are a powerful “new” feature in Perl 5 (first released in 1994). It then explains why you might want to nest data structures… from a Perl 4 perspective, thus explaining why Perl 5 is so much better.

Some stuff I’ve tried

I don’t claim to be a great teacher. I like to talk about stuff I find interesting, and I try to do it in ways that are accessible to people who aren’t lugging around the mountain of context I already have. This being just some blog, it’s hard to tell how well that works, but I do my best.

I also know that I learn best when I can understand what’s going on, rather than just seeing surface-level cause and effect. Of course, with complex subjects, it’s hard to develop an understanding before you’ve seen the cause and effect a few times, so there’s a balancing act between showing examples and trying to provide an explanation. Too many concrete examples feel like rote memorization; too much abstract theory feels disconnected from anything tangible.

The attempt I’m most pleased with is probably my post on Perlin noise. It covers a fairly specific subject, which made it much easier. It builds up one step at a time from scratch, with visualizations at every point. It offers some interpretations of what’s going on. It clearly explains some possible extensions to the idea, but distinguishes those from the core concept.

It is a little math-heavy, I grant you, but that was hard to avoid with a fundamentally mathematical topic. I had to be economical with the background information, so I let the math be a little dense in places.

But the best part about it by far is that I learned a lot about Perlin noise in the process of writing it. In several places I realized I couldn’t explain what was going on in a satisfying way, so I had to dig deeper into it before I could write about it. Perhaps there’s a good guideline hidden in there: don’t try to teach as much as you know?

I’m also fairly happy with my series on making Doom maps, though they meander into tangents a little more often. It’s hard to talk about something like Doom without meandering, since it’s a convoluted ecosystem that’s grown organically over the course of 24 years and has at least three ways of doing anything.


And finally there’s the book I’m trying to write, which is sort of about game development.

One of my biggest grievances with game development teaching in particular is how often it leaves out important touches. Very few guides will tell you how to make a title screen or menu, how to handle death, how to get a Mario-style variable jump height. They’ll show you how to build a clearly unfinished demo game, then leave you to your own devices.

I realized that the only reliable way to show how to build a game is to build a real game, then write about it. So the book is laid out as a narrative of how I wrote my first few games, complete with stumbling blocks and dead ends and tiny bits of polish.

I have no idea how well this will work, or whether recapping my own mistakes will be interesting or distracting for a beginner, but it ought to be an interesting experiment.

zetcd: running ZooKeeper apps without ZooKeeper

Post Syndicated from ris original https://lwn.net/Articles/723334/rss

The CoreOS Blog introduces the first beta release, v0.0.1, of zetcd. “Distributed systems commonly rely on a distributed consensus to coordinate work. Usually the systems providing distributed consensus guarantee information is delivered in order and never suffer split-brain conflicts. The usefulness, but rich design space, of such systems is evident by the proliferation of implementations; projects such as chubby, ZooKeeper, etcd, and consul, despite differing in philosophy and protocol, all focus on serving similar basic key-value primitives for distributed consensus. As part of making etcd the most appealing foundation for distributed systems, the etcd team developed a new proxy, zetcd, to serve ZooKeeper requests with an unmodified etcd cluster.”

Operating OpenStack at Scale

Post Syndicated from mikesefanov original https://yahooeng.tumblr.com/post/159795571841

By James Penick, Cloud Architect & Gurpreet Kaur, Product Manager

A version of this byline was originally written for and appears in CIO Review.

A successful private cloud presents a consistent and reliable facade over the complexities of hyperscale infrastructure. It must simultaneously handle constant organic traffic growth, unanticipated spikes, a multitude of hardware vendors, and discordant customer demands. The depth of this complexity only increases with the age of the business, leaving a private cloud operator saddled with legacy hardware, old network infrastructure, customers dependent on legacy operating systems, and the list goes on. These are the foundations of the horror stories told by grizzled operators around the campfire.

Providing a plethora of services globally for over a billion active users requires a hyperscale infrastructure. Yahoo’s on-premises infrastructure comprises datacenters housing hundreds of thousands of physical and virtual compute resources globally, connected via a multi-terabit network backbone. As one of the very first hyperscale internet companies in the world, Yahoo’s infrastructure had grown organically – things were built, and rebuilt, as the company learned and grew. The resulting web of modern and legacy infrastructure became progressively more difficult to manage. Initial attempts to manage this via IaaS (Infrastructure-as-a-Service) taught some hard lessons. However, those lessons served us well when OpenStack was selected to manage Yahoo’s datacenters, and some of them are shared below.

Centralized team offering Infrastructure-as-a-Service

Chief amongst the lessons learned prior to OpenStack was that IaaS must be presented as a core service to the whole organization by a dedicated team. An a-la-carte-IaaS, where each user is expected to manage their own control plane and inventory, just isn’t sustainable at scale. Multiple teams tackling the same challenges involved in the curation of software, deployment, upkeep, and security within an organization is not just a duplication of effort; it removes the opportunity for improved synergy with all levels of the business. The first OpenStack cluster, with a centralized dedicated developer and service engineering team, went live in June 2012.  This model has served us well and has been a crucial piece of making OpenStack succeed at Yahoo. One of the biggest advantages to a centralized, core team is the ability to collaborate with the foundational teams upon which any business is built: Supply chain, Datacenter Site-Operations, Finance, and finally our customers, the engineering teams. Building a close relationship with these vital parts of the business provides the ability to streamline the process of scaling inventory and presenting on-demand infrastructure to the company.

Developers love instant access to compute resources

Our developer productivity clusters, named “OpenHouse,” were a huge hit. Ideation and experimentation are core to developers’ DNA at Yahoo, and OpenHouse empowers our engineers to innovate, prototype, develop, and quickly iterate on ideas. No longer is a developer reliant on a static and costly development machine under their desk. OpenHouse enables developer agility and cost savings by obviating the desktop.

Dynamic infrastructure empowers agile products

From a humble beginning of a single, small OpenStack cluster, Yahoo’s OpenStack footprint is growing beyond 100,000 VM instances globally, with our single largest virtual machine cluster running over a thousand compute nodes, without using Nova Cells.

Until this point, Yahoo’s production footprint was nearly 100% focused on baremetal – a part of the business that one cannot simply ignore. In 2013, Yahoo OpenStack Baremetal began to manage all new compute deployments. Interestingly, after moving to a common API to provision baremetal and virtual machines, there was a marked increase in demand for virtual machines.

Developers across all major business units – Yahoo Mail, Video, News, Finance, Sports, and many more – were thrilled to get instant access to compute resources and hit the ground running on their projects. Today, the OpenStack team is continuing to migrate the rest of the business onto OpenStack-managed infrastructure. Our baremetal footprint is well beyond that of our VMs, with over 100,000 baremetal instances provisioned by OpenStack Nova via Ironic.

How did Yahoo hit this scale?  

Scaling OpenStack begins with understanding how its various components work and how they communicate with one another. This topic can be very deep and for the sake of brevity, we’ll hit the high points.

1. Start at the bottom and think about the underlying hardware

Do not overlook the unique resource constraints of the services which power your cloud, nor the fashion in which those services are used, and leverage that understanding to drive hardware selection. For example, consider the role of the database server in an OpenStack cluster and the multitude of calls it handles: compute node heartbeats, instance state changes, normal user operations, and so on. This core component is extremely busy in even a modest-sized Nova cluster and needs adequate computational resources to keep up. Yet many deployers skimp on the hardware, and the performance of the whole cluster ends up bottlenecked on database I/O. Thinking ahead here can save you a lot of heartburn later on.

2. Think about how things communicate

Our cluster databases are configured to be multi-master single-writer with automated failover. Control plane services have been modified to split DB reads directly to the read slaves and only write to the write-master. This distributes load across the database servers.
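
As a rough illustration of what that read/write split can look like in application code, here is a minimal sketch using SQLAlchemy. The hostnames and credentials are placeholders, and this is a generic rendering of the pattern rather than the actual control plane changes:

    import random
    from sqlalchemy import create_engine, text

    # Placeholder DSNs: one write master, several read replicas.
    WRITE_DSN = "mysql+pymysql://nova:secret@db-writer/nova"
    READ_DSNS = [
        "mysql+pymysql://nova:secret@db-replica-1/nova",
        "mysql+pymysql://nova:secret@db-replica-2/nova",
    ]

    write_engine = create_engine(WRITE_DSN, pool_pre_ping=True)
    read_engines = [create_engine(dsn, pool_pre_ping=True) for dsn in READ_DSNS]

    def run_read(sql, **params):
        """Send read-only queries to a randomly chosen read replica."""
        engine = random.choice(read_engines)
        with engine.connect() as conn:
            return conn.execute(text(sql), params).fetchall()

    def run_write(sql, **params):
        """Send every write to the single write master."""
        with write_engine.begin() as conn:  # begin() commits on success
            conn.execute(text(sql), params)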

3. Scale wide

OpenStack has many small horizontally-scalable components which can peacefully cohabitate on the same machines: the Nova, Keystone, and Glance APIs, for example. Stripe these across several small or modestly sized machines. Some services, such as the Nova scheduler, run the risk of race conditions when running multi-active. If that risk is unacceptable, use ZooKeeper to manage leader election.
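
If you do go the ZooKeeper route, the recipe is pleasantly small. Here is a minimal sketch using the kazoo Python client; the hosts, path, and identifier are made up, and this is not taken from any OpenStack project’s code:

    from kazoo.client import KazooClient

    def run_single_active_service():
        # Only the elected leader reaches this point; do the race-prone work
        # (for example, scheduling) here, and return to give up leadership.
        print("this node is now the active instance")

    zk = KazooClient(hosts="zk1:2181,zk2:2181,zk3:2181")
    zk.start()

    election = zk.Election("/my-service/leader-election", identifier="host-42")
    election.run(run_single_active_service)  # blocks until elected, then calls the function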

4. Remove dependencies

In a Yahoo datacenter, DHCP is only used to provision baremetal servers. Because we statically declare IPs in our instances via cloud-init, our infrastructure is less prone to outages caused by a failure in the DHCP infrastructure.

5. Don’t be afraid to replace things

Neutron used Dnsmasq to provide DHCP services; however, Dnsmasq was not designed to address the complexity or scale of a dynamic environment. For example, it must be restarted for any config change, such as when a new host is being provisioned. In the Yahoo OpenStack clusters it has been replaced by ISC-DHCPD, which scales far better than Dnsmasq and allows dynamic configuration updates via an API.

6. Or split them apart

Some of the core imaging services provided by Ironic, such as DHCP, TFTP, and HTTPS, communicate with a host during the provisioning process. These services are normally part of the Ironic Conductor (IC) service. In our environment we split these services into a new and physically distinct service called the Ironic Transport Service (ITS). This brings value by:

  • Adding security: Splitting the ITS from the IC allows us to block all network traffic from production compute nodes to the IC, and other parts of our control plane. If a malicious entity attacks a node serving production traffic, they cannot escalate from it  to our control plane.
  • Scale: The ITS hosts allow us to horizontally scale the core provisioning services with which nodes communicate.
  • Flexibility: The ITS allows Yahoo to manage remote sites, such as peering points, without building a whole cluster in each of them; resources in those sites can now be managed by the nearest Yahoo owned & operated (O&O) datacenter.

Be prepared for faulty hardware!

Running IaaS reliably at hyperscale is more than just scaling the control plane. One must take a holistic look at the system and consider everything. In fact, when examining provisioning failures, our engineers determined the majority root cause was faulty hardware. For example, there are a number of machines from varying vendors whose IPMI firmware fails from time to time, leaving the host inaccessible to remote power management. Some fail within minutes or weeks of installation. These failures occur on many different models, across many generations, and across many hardware vendors. Exposing these failures to users would create a very negative experience, and the cloud must be built to tolerate this complexity.

Focus on the end state

Yahoo’s experience shows that one can run OpenStack at hyperscale, leveraging it to wrap infrastructure and remove perceived complexity. Correctly leveraged, OpenStack presents an easy, consistent, and error-free interface. Delivering this interface is core to our design philosophy as Yahoo continues to double down on our OpenStack investment. The Yahoo OpenStack team looks forward to continue collaborating with the OpenStack community to share feedback and code.

Security Orchestration and Incident Response

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/03/security_orches.html

Last month at the RSA Conference, I saw a lot of companies selling security incident response automation. Their promise was to replace people with computers – sometimes with the addition of machine learning or other artificial intelligence techniques – and to respond to attacks at computer speeds.

While this is a laudable goal, there’s a fundamental problem with doing this in the short term. You can only automate what you’re certain about, and there is still an enormous amount of uncertainty in cybersecurity. Automation has its place in incident response, but the focus needs to be on making the people effective, not on replacing them – security orchestration, not automation.

This isn’t just a choice of words – it’s a difference in philosophy. The US military went through this in the 1990s. What was called the Revolution in Military Affairs (RMA) was supposed to change how warfare was fought. Satellites, drones and battlefield sensors were supposed to give commanders unprecedented information about what was going on, while networked soldiers and weaponry would enable troops to coordinate to a degree never before possible. In short, the traditional fog of war would be replaced by perfect information, providing certainty instead of uncertainty. They, too, believed certainty would fuel automation and, in many circumstances, allow technology to replace people.

Of course, it didn’t work out that way. The US learned in Afghanistan and Iraq that there are a lot of holes in both its collection and coordination systems. Drones have their place, but they can’t replace ground troops. The advances from the RMA brought with them some enormous advantages, especially against militaries that didn’t have access to the same technologies, but never resulted in certainty. Uncertainty still rules the battlefield, and soldiers on the ground are still the only effective way to control a region of territory.

But along the way, we learned a lot about how the feeling of certainty affects military thinking. Last month, I attended a lecture on the topic by H.R. McMaster. This was before he became President Trump’s national security advisor-designate. Then, he was the director of the Army Capabilities Integration Center. His lecture touched on many topics, but at one point he talked about the failure of the RMA. He confirmed that military strategists mistakenly believed that data would give them certainty. But he took this change in thinking further, outlining the ways this belief in certainty had repercussions in how military strategists thought about modern conflict.

McMaster’s observations are directly relevant to Internet security incident response. We too have been led to believe that data will give us certainty, and we are making the same mistakes that the military did in the 1990s. In a world of uncertainty, there’s a premium on understanding, because commanders need to figure out what’s going on. In a world of certainty, knowing what’s going on becomes a simple matter of data collection.

I see this same fallacy in Internet security. Many companies exhibiting at the RSA Conference promised to collect and display more data and that the data will reveal everything. This simply isn’t true. Data does not equal information, and information does not equal understanding. We need data, but we also must prioritize understanding the data we have over collecting ever more data. Much like the problems with bulk surveillance, the “collect it all” approach provides minimal value over collecting the specific data that’s useful.

In a world of uncertainty, the focus is on execution. In a world of certainty, the focus is on planning. I see this manifesting in Internet security as well. My own Resilient Systems – now part of IBM Security – allows incident response teams to manage security incidents and intrusions. While the tool is useful for planning and testing, its real focus is always on execution.

Uncertainty demands initiative, while certainty demands synchronization. Here, again, we are heading too far down the wrong path. The purpose of all incident response tools should be to make the human responders more effective. They need both the ability and the capability to exercise it effectively.

When things are uncertain, you want your systems to be decentralized. When things are certain, centralization is more important. Good incident response teams know that decentralization goes hand in hand with initiative. And finally, a world of uncertainty prioritizes command, while a world of certainty prioritizes control. Again, effective incident response teams know this, and effective managers aren’t scared to release and delegate control.

Like the US military, we in the incident response field have shifted too much into the world of certainty. We have prioritized data collection, preplanning, synchronization, centralization and control. You can see it in the way people talk about the future of Internet security, and you can see it in the products and services offered on the show floor of the RSA Conference.

Automation, too, is fixed. Incident response needs to be dynamic and agile, because you are never certain and there is an adaptive, malicious adversary on the other end. You need a response system that has human controls and can modify itself on the fly. Automation just doesn’t allow a system to do that to the extent that’s needed in today’s environment. Just as the military shifted from trying to replace the soldier to making the best soldier possible, we need to do the same.

For some time, I have been talking about incident response in terms of OODA loops. This is a way of thinking about real-time adversarial relationships, originally developed for airplane dogfights, but much more broadly applicable. OODA stands for observe-orient-decide-act, and it’s what people responding to a cybersecurity incident do constantly, over and over again. We need tools that augment each of those four steps. These tools need to operate in a world of uncertainty, where there is never enough data to know everything that is going on. We need to prioritize understanding, execution, initiative, decentralization and command.
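
As a purely schematic sketch, an OODA-shaped response loop looks something like the code below. Every function here is a made-up stub rather than any vendor’s API; the point is the shape of the loop, and that a person owns the “decide” step while tooling feeds the other three:

    import random
    import time

    def observe():
        """Gather raw signals from sensors, logs, and alert feeds (stubbed)."""
        return {"new_alerts": random.randint(0, 3)}

    def orient(observation):
        """Add context and correlation: is this noise, or does it need a person?"""
        observation["needs_human"] = observation["new_alerts"] > 1
        return observation

    def decide(picture):
        """In practice an analyst chooses here; the stub just picks a playbook."""
        return "escalate_to_analyst" if picture["needs_human"] else "log_and_watch"

    def act(plan):
        """Execute and record the chosen response."""
        print("executing playbook:", plan)

    for _ in range(3):                # a few turns of the loop, then stop
        act(decide(orient(observe())))
        time.sleep(0.1)               # in reality the loop never really ends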

At the same time, we’re going to have to make all of this scale. If anything, the most seductive promise of a world of certainty and automation is that it allows defense to scale. The problem is that we’re not there yet. We can automate and scale parts of IT security, such as antivirus, automatic patching and firewall management, but we can’t yet scale incident response. We still need people. And we need to understand what can be automated and what can’t be.

The word I prefer is orchestration. Security orchestration represents the union of people, process and technology. It’s computer automation where it works, and human coordination where that’s necessary. It’s networked systems giving people understanding and capabilities for execution. It’s making those on the front lines of incident response the most effective they can be, instead of trying to replace them. It’s the best approach we have for cyberdefense.

Automation has its place. If you think about the product categories where it has worked, they’re all areas where we have pretty strong certainty. Automation works in antivirus, firewalls, patch management and authentication systems. None of them is perfect, but all those systems are right almost all the time, and we’ve developed ancillary systems to deal with it when they’re wrong.

Automation fails in incident response because there’s too much uncertainty. Actions can be automated once the people understand what’s going on, but people are still required. For example, IBM’s Watson for Cyber Security provides insights for incident response teams based on its ability to ingest and find patterns in an enormous amount of freeform data. It does not attempt a level of understanding necessary to take people out of the equation.

From within an orchestration model, automation can be incredibly powerful. But it’s the human-centric orchestration model – the dashboards, the reports, the collaboration – that makes automation work. Otherwise, you’re blindly trusting the machine. And when an uncertain process is automated, the results can be dangerous.

Technology continues to advance, and this is all a changing target. Eventually, computers will become intelligent enough to replace people at real-time incident response. My guess, though, is that computers are not going to get there by collecting enough data to be certain. More likely, they’ll develop the ability to exhibit understanding and operate in a world of uncertainty. That’s a much harder goal.

Yes, today, this is all science fiction. But it’s not stupid science fiction, and it might become reality during the lifetimes of our children. Until then, we need people in the loop. Orchestration is a way to achieve that.

This essay previously appeared on the Security Intelligence blog.

A Day in the Life of a Data Center

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/day-life-datacenter-part/

Editor’s note: We’ve reposted this very popular 2016 blog entry because we often get questions about how Backblaze stores data, and we wanted to give you a look inside!

A data center is part of the “cloud”; as in cloud backup, cloud storage, cloud computing, and so on. It is often where your data goes or goes through, once it leaves your home, office, mobile phone, tablet, etc. While many of you have never been inside a data center, chances are you’ve seen one. Cleverly disguised to fit in, data centers are often nondescript buildings with few if any windows and little if any signage. They can be easy to miss. There are exceptions of course, but most data centers are happy to go completely unnoticed.

We’re going to take a look at a typical day in the life of a data center.

Getting Inside A Data Center

As you approach a data center, you’ll notice there isn’t much to notice. There’s no “here’s my datacenter” signage, and the parking lot is nearly empty. You might wonder, “is this the right place?” While larger, more prominent data centers will have armed guards and gates, most data centers have a call-box outside of a locked door. In either case, data centers don’t like drop-in visitors, so unless you’ve already made prior arrangements, you’re going to be turned away. In short, regardless of whether it is a call-box or an armed guard, a primary line of defense is to know everyone you let in the door.

Once inside the building, you’re still a long way from being in the real data center. You’ll start by presenting the proper identification to the guard and filling out some paperwork. Depending on the facility and your level of access, you may have to provide a fingerprint for biometric access/exit confirmation. Eventually, you get a badge or other form of visual identification that shows your level of access. For example, you could have free range of the place (highly doubtful), be allowed in certain defined areas (doubtful), or need an escort wherever you go (likely). For this post, we’ll give you access to the Backblaze areas in the data center, accompanied of course.

We’re ready to go inside, so attach your badge with your picture on it, get your finger ready to be scanned, and remember to smile for the cameras as you pass through the “box.” While not the only method, the “box” is a widely used security technique that allows one person at a time to pass through a room where they are recorded on video and visually approved before they can leave. Speaking of being on camera, by the time you get to this point, you will have passed dozens of cameras – hidden, visible, behind one-way glass, and so on.

Once past the “box,” you’re in the data center, right? Probably not. Data centers can be divided into areas or blocks each with different access codes and doors. Once out of the box, you still might only be able to access the snack room and the bathrooms. These “rooms” are always located outside of the data center floor. Let’s step inside, “badge in please.”

Inside the Data Center

While every data center is different, there are three things that most people find common in their experience: how clean it is, the noise level, and the temperature.

Data Centers are Clean

From the moment you walk into a typical data center, you’ll notice that it is clean. While most data centers are not cleanrooms by definition, they do ensure the environment is suitable for the equipment housed there.

Data center Entry Mats

Cleanliness starts at the door. Mats like this one capture the dirt from the bottom of your shoes. These mats get replaced regularly. As you look around, you might notice that there are no trashcans on the data center floor. That’s because the data center staff follows the “whatever you bring in, you bring out” philosophy, sort of like hiking in the woods. Most data centers won’t allow food or drink on the data center floor, either. Instead, one has to leave the datacenter floor to have a snack or use the restroom.

Besides being visually clean, the air in a data center is also amazingly clean: Filtration systems filter particulates to the sub-micron level. Data center filters have a 99.97% (or higher) efficiency rating in removing 0.3-micron particles. In comparison, your typical home filter provides a 70% sub-micron efficiency level. That might explain the dust bunnies behind your gaming tower.

Data Centers Are Noisy

Data center Noise Levels

The decibel level in a given data center can vary considerably. As you can see, the Backblaze datacenter is between 76 and 78 decibels. This is the level when you are near the racks of Storage Pods. How loud is 78dB? Normal conversation is 60dB, a barking dog is 70dB, and a screaming child is only 80dB. In the US, OSHA has established 85dB as the lower threshold for potential noise damage. Still, 78dB is loud enough that we insist our data center staff wear ear protection on the floor. Their favorite earphones are Bose’s noise reduction models. They are a bit costly but well worth it.

The noise comes from a combination of the systems needed to operate the data center: air filtration, heating, cooling, electrical, and other systems all running at once. The 6,000 spinning 3-inch fans in the Storage Pods alone produce a lot of noise.

Data Centers Are Hot and Cold

As noted, part of the noise comes from heating and air-conditioning systems, mostly air-conditioning. As you walk through the racks and racks of equipment in many data centers, you’ll alternate between warm aisles and cold aisles. In a typical raised floor data center, cold air rises from vents in the floor in front of each rack. Fans inside the servers in the racks, in our case Storage Pods, pull the air in from the cold aisle and through the server. By the time the air reaches the other side, the warm aisle, it is warmer and is sucked away by vents in the ceiling or above the racks.

There was a time when data centers were like meat lockers with some kept as cold as 55°F (12.8°C). Warmer heads prevailed, and over the years the average temperature has risen to over 80°F (26.7°C) with some companies pushing that even higher. That works for us, but in our case, we are more interested in the temperature inside our Storage Pods and more precisely the hard drives within. Previously we looked at the correlation between hard disk temperature and failure rate. The conclusion: As long as you run drives well within their allowed range of operating temperatures, there is no problem operating a data center at 80°F (26.7°C) or even higher. As for the employees, if they get hot they can always work in the cold aisle for a while and vice-versa.

Getting Out of a Data Center

When you’re finished visiting the data center, remember to leave yourself a few extra minutes to get out. The first challenge is to find your way back to the entrance. If an escort accompanies you, there’s no issue, but if you’re on your own, I hope you paid attention to the way inside. It’s amazing how all the walls and doors look alike as you’re wandering around looking for the exit, and with data centers getting larger and larger the task won’t get any easier. For example, the Switch SUPERNAP datacenter complex in Reno Nevada will be over 6.4 million square feet, roughly the size of the Pentagon. Having worked in the Pentagon, I can say that finding your way around a facility that large can be daunting. Of course, a friendly security guard is likely to show up to help if you get lost or curious.

On your way back out you’ll pass through the “box” once again for your exit cameo. Also, if you are trying to leave with more than you came in with you will need a fair bit of paperwork before you can turn in your credentials and exit the building. Don’t forget to wave at the cameras in the parking lot.

The post A Day in the Life of a Data Center appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

The Cloud’s Software: A Look Inside Backblaze

Post Syndicated from Peter Cohen original https://www.backblaze.com/blog/the-clouds-software-a-look-inside-backblaze/

When most of us think about “the cloud,” we have an abstract idea that it’s computers in a data center somewhere – racks of blinking lights and lots of loud fans. There’s truth to that. Have a look inside our datacenter to get an idea. But besides the impressive hardware – and the skilled techs needed to keep it running – there’s software involved. Let’s take a look at a few of the software tools that keep our operation working.

Our data center is populated with Storage Pods, the servers that hold the data you entrust to us if you’re a Backblaze customer or you use B2 Cloud Storage. Inside each Storage Pod are dozens of 3.5-inch spinning hard disk drives – the same kind you’ll find inside a desktop PC. Storage Pods are mounted on racks inside the data center. Those Storage Pods work together in Vaults.

Vault Software

The Vault software that keeps those Storage Pods humming is one of the backbones of our operation. It’s what makes it possible for us to scale our services to meet your needs with durability, scalability, and fast performance.

The Vault software distributes data evenly across 20 different Storage Pods. Drives in the same position inside each Storage Pod are grouped together in software in what we call a “tome.” When a file gets uploaded to Backblaze, it’s split into pieces we call “shards” and distributed across all 20 drives.

Each file is stored as 20 shards: 17 data shards and three parity shards. As the name implies, the data shards comprise the information in the files you upload to Backblaze. Parity shards add redundancy so that a file can be completely restored from a Vault even if some of the pieces are not available.

Because those shards are distributed across 20 Storage Pods in 20 cabinets, a Storage Pod can go down and the Vault will still operate unimpeded. An entire cabinet can lose power and the Vault will still work fine.

Files can be written to the Vault even if a Storage Pod is down, with two parity shards still protecting the data. Even in the extreme — and unlikely — case where three Storage Pods in a Vault are offline, the files in the Vault are still available because they can be reconstructed from the 17 available pieces.

Reed-Solomon Erasure Coding

Erasure coding makes it possible to rebuild a data file even if parts of the original are lost. Having effective erasure coding is vital in a distributed environment like a Backblaze Vault. It helps us keep your data safe even when the hardware that the data is stored on needs to be serviced.

We use Reed-Solomon erasure encoding. It’s a proven technique used in Linux RAID systems, by Microsoft in its Azure cloud storage, and by Facebook too. The Backblaze Vault Architecture is capable of delivering 99.99999% annual durability thanks in part to our Reed-Solomon erasure coding implementation.
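
To give a feel for the idea (without reproducing Backblaze’s actual implementation, which is a Java library using GF(256) arithmetic), here is a toy sketch of the polynomial trick behind Reed-Solomon-style erasure coding, written over a small prime field. The 17-plus-3 layout mirrors the Vault description above; everything else is simplified for illustration:

    # Toy Reed-Solomon-style erasure coding over the prime field GF(257).
    P = 257  # prime field; each data symbol is one byte (0-255)

    def interpolate_at(points, x):
        """Lagrange-interpolate through points [(xi, yi), ...] and evaluate at x (mod P)."""
        total = 0
        for xi, yi in points:
            num, den = 1, 1
            for xj, _ in points:
                if xj != xi:
                    num = num * (x - xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, P - 2, P)) % P  # pow(den, P-2, P) is the modular inverse
        return total

    def encode(data, k, n):
        """Turn k data symbols into n shards: shards 1..k hold the data itself,
        shards k+1..n hold parity values of the degree-(k-1) polynomial through them."""
        assert len(data) == k
        points = list(zip(range(1, k + 1), data))
        return [(x, data[x - 1]) if x <= k else (x, interpolate_at(points, x))
                for x in range(1, n + 1)]

    def decode(shards, k):
        """Recover the k original data symbols from any k surviving shards."""
        assert len(shards) >= k
        surviving = shards[:k]
        return [interpolate_at(surviving, x) for x in range(1, k + 1)]

    # Mirror the Vault layout in miniature: 17 data shards plus 3 parity shards.
    data = list(b"this is 17 bytes!")        # exactly 17 symbols
    shards = encode(data, k=17, n=20)
    still_here = shards[3:]                  # lose any three shards (here, the first three)
    assert decode(still_here, k=17) == data  # the file comes back intact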

Here’s our own Brian Beach with an explanation of how Reed-Solomon encoding works:

We threw out the Linux RAID software we had been using prior to the implementation of the Vaults and wrote our own Reed-Solomon implementation from scratch. We’re very proud of it. So much so that we’ve released it as open source, so you can use it in your own projects if you wish.

We developed our Reed-Solomon implementation as a Java library. Why? When we first started this project, we assumed that we would need to write it in C to make it run as fast as we needed. It turns out that the modern Java virtual machines running on our servers are great, and the just-in-time compiler produces code that runs pretty quickly.

All the work we’ve done to build a reliable, scalable, affordable solution for storing data in a “cloud” led to the creation of B2 Cloud Storage. B2 lets you store your data in the cloud for a fraction of what you’d spend elsewhere – 1/4 the price of Amazon S3, for example.

Using Our Storage

Having over 300 Petabytes of data storage available isn’t very useful unless we can store data and reliably restore it too. We offer two ways to store data with Backblaze: via a client application or via direct access. Our client application, Backblaze Computer Backup, is installed on your Mac or Windows system and basically does everything related to automatically backing up your computer. We locate the files that are new or changed and back them up. We manage versions, deduplicate files, and more. The Backblaze app does all the work behind the scenes.

The other way to use our storage is via direct access. You can use a Web GUI, a Command Line Interface (CLI) or an Application Programming Interface (API). With any of these methods, you are in charge of what gets stored in the Backblaze cloud. This is what Backblaze B2 is all about. You can log into B2 and use the Web GUI to drag and drop files that are stored in the Backblaze cloud. You decide what gets added and deleted, and how many versions of a file you want to keep. Think of B2 as your very own bucket in the cloud where you can store your files.
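
As an illustration of the API route, here is roughly what storing a file in a B2 bucket looks like from Python using the b2sdk library. The key, bucket, and file names below are placeholders, and the exact calls can differ between SDK versions, so treat this as a sketch and check the current B2 documentation for the authoritative details:

    from b2sdk.v2 import InMemoryAccountInfo, B2Api

    # Authorize with an application key (placeholder credentials shown).
    info = InMemoryAccountInfo()
    b2_api = B2Api(info)
    b2_api.authorize_account("production", "YOUR_KEY_ID", "YOUR_APPLICATION_KEY")

    # Upload a local file into a bucket you own.
    bucket = b2_api.get_bucket_by_name("my-backup-bucket")
    bucket.upload_local_file(local_file="notes.txt", file_name="backups/notes.txt")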

We also have mobile apps for iOS and Android devices to help you view and share any backed up files you have on the go. You can download them, play back or view media files, and share them as you need.

We focused on creating a native, integrated experience for you when you use our software. We didn’t take a shortcut to create a Java app for the desktop. On the Mac our app is built using Xcode and on the PC it was built using C. The app is designed for lightweight, unobtrusive performance. If you do need to adjust its performance, we give you that ability. You have control over throttling the backup rate. You can even adjust the number of CPU threads dedicated to Backblaze, if you choose.

When we first released the software almost a decade ago we had no idea that we’d iterate it more than 1,000 times. That’s the threshold we reached late last year, however! We released version 4.3.0 in December. We’re still plugging away at it and have plans for the future, too.

Our Philosophy: Keep It Simple

“Keep It Simple” is the philosophy that underlies all of the technology that powers our hardware. It makes it possible for you to affordably, reliably back up your computers and store data in the cloud.

We’re not interested in creating elaborate, difficult-to-implement solutions or pricing schemes that confuse and confound you. Our backup service is unlimited and unthrottled for one low price. We offer cloud storage for 1/4th the price of the competition. And we make it easy to access with desktop, mobile, and web interfaces, command line tools, and APIs.

Hopefully we’ve shed some light on the software that lets our cloud services operate. Have questions? Join the discussion and let us know.

The post The Cloud’s Software: A Look Inside Backblaze appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

BitChute is a BitTorrent-Powered YouTube Alternative

Post Syndicated from Andy original https://torrentfreak.com/bitchute-is-a-bittorrent-powered-youtube-alternative-170129/

YouTube attracts over a billion visitors every month, with many flocking to the platform to view original content uploaded by thousands of contributors. However, those contributors aren’t completely free to upload and make money from whatever they like.

Since it needs to please its advertisers, YouTube has rules in place over what kind of content can be monetized, something which caused a huge backlash last year alongside claims of censorship.

But what if there was an alternative to YouTube, one that doesn’t impose the same kinds of restrictions on uploaders? Enter BitChute, a BitTorrent-powered video platform that seeks to hand freedom back to its users.

“The idea comes from seeing the increased levels of censorship by the large social media platforms in the last couple of years. Bannings, demonetization, and tweaking algorithms to send certain content into obscurity, and wanting to do something about it,” BitChute founder Ray Vahey informs TorrentFreak.

“I knew building a clone wasn’t the answer, many have tried and failed. And it would inevitably grow into an organization with the same problems anyway.”

As seen in the image below, the site has a familiar layout for anyone used to YouTube-like video platforms. It has similar video controls, view counts, and the ability to vote on content. It also has a fully-functioning comment section.

bitchute

Of course, one of the main obstacles for video content hosting platforms is the obscene amounts of bandwidth they consume. Any level of success is usually accompanied by big hosting bills. But along with its people-powered philosophy, BitChute does things a little differently.

Instead of utilizing central servers, BitChute uses WebTorrent, a system which allows people to share videos directly from their browser, without having to configure or install anything. Essentially this means that the site’s users become hosts of the videos they’re watching, which slams BitChute’s hosting costs into the ground.

“Distributed systems and WebTorrent invert the scalability advantage the Googles and Facebooks have. The bigger our user base grows, the more efficiently it can serve while retaining the simplicity of the web browser,” Vahey says.

“Also by the nature of all torrent technology, we are not locking users into a single site, and they have the choice to retain and continue sharing the files they download. That puts more power back in the hands of the consumer where it should be.”

The only hints that BitChute is using peer-to-peer technology are the peer counts under each video and a short delay before a selected video begins to play. This is necessary for the system to find peers but thankfully it isn’t too intrusive.

As far as we know, BitChute is the first attempt at a YouTube-like platform that leverages peer-to-peer technology. It’s only been in operation for a short time but according to its founder, things are going well.

“As far as I could tell, no one had yet run with this idea as a service, so that’s what myself and few like-minded people decided. To put it out there and see what people think. So far it’s been an amazingly positive response from people who understand and agree with what we’re doing,” Vahey explains.

“Just over three weeks ago we launched with limited upload access on a first come first served basis. We are flat out busy working on the next version of the site; I have two other co-founders based out of the UK who are supporting me, watch this space,” he concludes.

Certainly, people will be cheering the team on. Last September, popular YouTuber Bluedrake experimented with WebTorrent to distribute his videos after becoming frustrated with YouTube’s policies.

“All I want is a site where people can say what they want,” he said at the time. “I want a site where people can operate their business without having somebody else step in and take away their content when they say something they don’t like.”

For now, BitChute is still under development, but so far it has impressed Feross Aboukhadijeh, the Stanford University graduate who invented WebTorrent.

“BitChute is an exciting new product,” he told TF this week. “This is exactly the kind of ‘people-powered’ website that WebTorrent technology was designed to enable. I’m eager to see where the team takes it.”

BitChute can be found here.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

The Raspberry Pi Foundation’s Digital Making Curriculum

Post Syndicated from Carrie Anne Philbin original https://www.raspberrypi.org/blog/digital-making-curriculum/

At Raspberry Pi, we’re determined in our ambition to put the power of digital making into the hands of people all over the world: one way we pursue this is by developing high-quality learning resources to support a growing community of educators. We spend a lot of time thinking hard about what you can learn by tinkering and making with a Raspberry Pi, and other devices and platforms, in order to become skilled in computer programming, electronics, and physical computing.

Now, we’ve taken an exciting step in this journey by defining our own digital making curriculum that will help people everywhere learn new skills.

A PDF version of the curriculum is also available to download.

Who is it for?

We have a large and diverse community of people who are interested in digital making. Some might use the curriculum to help guide and inform their own learning, or perhaps their children’s learning. People who run digital making clubs at schools, community centres, and Raspberry Jams may draw on it for extra guidance on activities that will engage their learners. Some teachers may wish to use the curriculum as inspiration for what to teach their students.

Raspberry Pi produces an extensive and varied range of online learning resources and delivers a huge teacher training program. In creating this curriculum, we have produced our own guide that we can use to help plan our resources and make sure we cover the broad spectrum of learners’ needs.

Progression

Learning anything involves progression. You start with certain skills and knowledge and then, with guidance, practice, and understanding, you gradually progress towards broader and deeper knowledge and competence. Our digital making curriculum is structured around this progression, and in representing it, we wanted to avoid the age-related and stage-related labels that are often associated with a learner’s progress and the preconceptions these labels bring. We came up with our own, using characters to represent different levels of competence, starting with Creator and moving onto Builder and Developer before becoming a Maker.

Progress through our curriculum and become a digital maker

Strands

We want to help people to make things so that they can become the inventors, creators, and makers of tomorrow. Digital making, STEAM, project-based learning, and tinkering are at the core of our teaching philosophy, which can be summed up simply as ‘we learn best by doing’.

We’ve created five strands which we think encapsulate key concepts and skills in digital making: Design, Programming, Physical Computing, Manufacture, and Community and Sharing.

Computational thinking

One of the Raspberry Pi Foundation’s aims is to help people to learn about computer science and how to make things with computers. We believe that learning how to create with digital technology will help people shape an increasingly digital world, and prepare them for the work of the future.

Computational thinking is at the heart of the learning that we advocate. It’s the thought process that underpins computing and digital making: formulating a problem and expressing its solution in such a way that a computer can effectively carry it out. Computational thinking covers a broad range of knowledge and skills including, but not limited to:

  • Logical reasoning
  • Algorithmic thinking
  • Pattern recognition
  • Abstraction
  • Decomposition
  • Debugging
  • Problem solving

By progressing through our curriculum, learners will develop computational thinking skills and put them into practice.

What’s not on our curriculum?

If there’s one thing we learned from our extensive work in formulating this curriculum, it’s that no two educators or experts can agree on the best approach to progression and learning in the field of digital making. Our curriculum is intended to represent the skills and thought processes essential to making things with technology. We’ve tried to keep the headline outcomes as broad as possible, and then provide further examples as a guide to what could be included.

Our digital making curriculum is not intended to be a replacement for computer science-related curricula around the world, such as the ‘Computing Programme of Study’ in England or the ‘Digital Technologies’ curriculum in Australia. We hope that following our learning pathways will support the study of formal curricular and exam specifications in a fun and tangible way. As we continue to expand our catalogue of free learning resources, we expect our curriculum will grow and improve, and your input into that process will be vital.

Get involved

We’re proud to be part of a movement that aims to empower people to shape their world through digital technologies. We value the support of our community of makers, educators, volunteers, and enthusiasts. With this in mind, we’re interested to hear your thoughts on our digital making curriculum. Add your feedback to this form, or talk to us at one of the events that Raspberry Pi will attend in 2017.

The post The Raspberry Pi Foundation’s Digital Making Curriculum appeared first on Raspberry Pi.

Backblaze Begins: Celebrating A Very Special Anniversary

Post Syndicated from Peter Cohen original https://www.backblaze.com/blog/backblaze-begins/

2017 is a milestone year for us at Backblaze: This year is our tenth anniversary. Our official birthday – based on our papers of incorporation – falls in April. But January 15th was another important date: It’s when the work on what would become Backblaze first started in earnest.

Brian Wilson, our intrepid CTO and CFO, began coding Backblaze full-time on January 15, 2007, in a corner of his living room. Brian had already been coding for a couple of decades, and like our other founders is a serial entrepreneur. But Backblaze was a bit more personal.


Brian is known by family and friends to have a pretty elaborate and extensive – some might even say a bit insane – backup regimen. That expertise would occasionally lead to questions about data backup and recovery. Then one day Jeanine’s computer crashed. She begged Brian for help recovering her files, but without a backup to restore from, Brian couldn’t help.

While Brian felt bad for Jeanine, he quickly realized that this was just the beginning as photos, movies, and personal and work documents were all going digital. Purely digital. And like Jeanine, most people were not backing up their computers. The amount of data that was at risk of being lost was staggering.

Why were so few people backing up their computers? Managing backups can be difficult, especially for people who don’t live on their computers all the time but still rely on them. And while there were already online backup services, they were confusing, expensive, and required you to select the files you wanted to back up.

So we started Backblaze to solve that impending disaster. Our guiding philosophy is that backups should be easy, automatic, and affordable.

Backblaze caught fire very quickly, if you’ll pardon the pun. By early February we had already begun stringing together a Gigabit Ethernet network (in Brian’s home) to connect the servers and other gear we’d need to get Backblaze up and running. By early April we had working code. That’s when we called the lawyers and got the letters of incorporation rolling, which is why our “official” birthday is in April. April 20th, 2007 in fact.

So why did we call this new venture Backblaze? Back in 2001, Brian had incorporated “Codeblaze” for his consulting business. “Backblaze” was a nod to that previous effort, with an acknowledgment that this new project would focus on backups. April 20th is our official birthday.

4/20 is familiar to marijuana enthusiasts everywhere, and our birthday is a source of occasional mirth and merriment to those in the know. Sorry to be a buzzkill, but it’s totally coincidental: The great state of Delaware decided on our incorporation date.

It’s interesting how one great idea can lead to another. When Backblaze was still coming together, we thought we’d store backup data using an existing cloud storage service. So we went to Amazon S3. We found that S3 was way too expensive for us. Heck, it’s still too expensive, years later.

So we decided to roll our own storage instead. We called it the Storage Pod. We really liked the idea – so much that we decided to open-source the design so you can build your own if you want to. We’ve iterated it many times since then, most recently with our Storage Pod 6.0. We’re now putting 60 drives in each rack-mounted chassis in our data center, for a total of 480 Terabytes of storage in each one.

As our work with Storage Pods continued and as we built out our data center, we realized that we could offer cloud storage to our customers for 1/4 the price that Amazon does. And that’s what led us to start B2 Cloud Storage. You’ve enthusiastically adopted B2, along with a growing cadre of integration partners.

Ten years later, we're astonished, humbled and thrilled with what we've accomplished. We've restored more than 20 billion files, and the Storage Pods in our data center now store more than 300 petabytes of data. The future is bright and the sky's the limit. We can't wait to see what the next ten years have in store.

The post Backblaze Begins: Celebrating A Very Special Anniversary appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Top Spotify Lawyer: Attracting Pirates is in Our DNA

Post Syndicated from Andy original https://torrentfreak.com/top-spotify-lawyer-attracting-pirates-is-in-our-dna-161226/

Almost eight years ago, just months after Spotify's release, TF published an article which pondered whether the fledgling service could become a true alternative to Internet piracy.

From the beginning, one of the key software engineers at Spotify has been Ludvig Strigeus, the creator of uTorrent, so clearly the company already knew a lot about file-sharers. In the early days the company was fairly open about its aim to provide an alternative to piracy, but perhaps one of the earliest indications of growing success came when early invites were shared among users of private torrent sites.

Today Spotify is indeed huge. The service has an estimated 100 million users, many of them taking advantage of its ad-supported free tier. This is the gateway for many subscribers, including millions of former and even current pirates who augment their sharing with the desirable service.

Over the years, Spotify has made no secret of its desire to recruit more pirates to its service. In 2014, Spotify Australia managing director Kate Vale said it was one of their key aims.

“People that are pirating music and not paying for it, they are the ones we want on our platform. It’s important for us to be reaching these individuals that have never paid for music before in their life, and get them onto a service that’s legal and gives money back to the rights holders,” Vale said.

Now, in a new interview with The Journal on Sports and Entertainment Law, General Counsel of Spotify Horacio Gutierrez reveals just how deeply this philosophy runs in the company. It’s absolutely fundamental to its being, he explains.

“One of the things that inspired the creation of Spotify and is part of the DNA of the company from the day it launched (and remember the service was launched for the first time around 8 years ago) was addressing one of the biggest questions that everyone in the music industry had at the time — how would one tackle and combat online piracy in music?” Gutierrez says.

“Spotify was determined from the very beginning to provide a fully licensed, legal alternative for online music consumption that people would prefer over piracy.”

The signs that this just might be possible came very early on. Just months after Spotify's initial launch, the quality of its service was celebrated on what was to become the world's best music torrent site, What.cd.

“Honestly it’s going to be huge,” a What.cd user predicted in 2008.

“I’ve been browsing and playing from its seemingly endless music catalogue all afternoon, it loads as if it’s playing from local files, so fast, so easy. If it’s this great in such early beta stages then I can’t imagine where it’s going. I feel like buying another laptop to have permanently rigged.”

Of course, hardcore pirates aren't always easily persuaded to part with their cash, so Spotify needed an equivalent to the no-cost approach of many torrent sites. That is still being achieved today via its ad-supported free tier, Gutierrez says.

“I think one just has to look at data to recognize that the freemium model for online music consumption works. Our free tier is a key to attracting users away from online piracy, and Spotify’s success is proof that the model works.

“We have data around the world that shows that it works, that in fact we are making inroads against piracy because we offer an ability for those users to have a better experience with higher quality content, variety, richer catalogue, and a number of other user-minded features that make the experience much better for the user.”

Spotify’s general counsel says that the company is enjoying success, not only by bringing pirates onboard, but also by converting them to premium customers via a formula that benefits everyone in the industry.

“If you look at what has happened since the launch of the Spotify service, we have been incredibly successful on that score. Figures coming out of the music industry show that after 15 years of revenue losses, the music industry is once again growing thanks to music streaming,” he concludes.

With the shutdown of What.cd in recent weeks, it’s likely that former users will be considering the Spotify option again this Christmas, if they aren’t customers already.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Amazon ECS Service Auto Scaling Enables Rent-A-Center SAP Hybris Solution

Post Syndicated from Chris Barclay original https://aws.amazon.com/blogs/compute/amazon-ecs-service-auto-scaling-enables-rent-a-center-sap-hybris-solution/

This is a guest post from Troy Washburn, Sr. DevOps Manager @ Rent-A-Center, Inc., and Ashay Chitnis, Flux7 architect.

—–

Rent-A-Center in their own words: Rent-A-Center owns and operates more than 3,000 rent-to-own retail stores for name-brand furniture, electronics, appliances and computers across the US, Canada, and Puerto Rico.

Rent-A-Center (RAC) wanted to roll out an ecommerce platform that would support the entire online shopping workflow using SAP’s Hybris platform. The goal was to implement a cloud-based solution with a cluster of Hybris servers which would cater to online web-based demand.

The challenge: to run the Hybris clusters in a microservices architecture. A microservices approach has several advantages, including the ability for each service to scale up and down independently to meet fluctuating demand. RAC also wanted to use Docker containers to package the application in a format that is easily portable and immutable. Four types of containers were necessary for the architecture, each corresponding to a particular service:

1. Apache: Received requests from the external Elastic Load Balancing load balancer. Apache was used to set certain HTTP rewrite and proxy rules.
2. Hybris: An external Tomcat was the frontend for the Hybris platform.
3. Solr Master: A product indexing service for quick lookup.
4. Solr Slave: Replication of master cache to directly serve product searches from Hybris.

To deploy the containers in a microservices architecture, RAC and AWS consultants at Flux7 started by launching Amazon ECS resources with AWS CloudFormation templates. Running containers on ECS requires three primary resources: clusters, services, and task definitions. Each container refers to its task definition for properties such as CPU and memory, and each of the above services stored its container images in an Amazon ECR repository.
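As a rough illustration of how those three resources fit together – not taken from the RAC templates, and with hypothetical names, image URIs, and sizing – the same structure can be sketched with boto3:

    import boto3

    ecs = boto3.client("ecs")

    # A cluster groups the EC2 container instances that will run the tasks.
    ecs.create_cluster(clusterName="hybris-cluster")

    # A task definition describes one container's image, CPU, memory, and ports.
    ecs.register_task_definition(
        family="hybris",
        containerDefinitions=[{
            "name": "hybris",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/hybris:latest",
            "cpu": 1024,        # CPU units
            "memory": 4096,     # MiB
            "essential": True,
            "portMappings": [{"containerPort": 8080}],
        }],
    )

    # A service keeps a desired number of copies of that task running in the cluster.
    ecs.create_service(
        cluster="hybris-cluster",
        serviceName="hybris",
        taskDefinition="hybris",
        desiredCount=2,
    )

The Apache, Solr master, and Solr slave services follow the same pattern with their own task definitions and services.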

This post describes the architecture that we created and implemented.

Auto Scaling

At first glance, scaling on ECS can seem confusing. But the Flux7 philosophy is that complex systems only work when they are a combination of well-designed simple systems that break the problem down into smaller pieces. The key insight that helped us design our solution was understanding that there are two very different scaling operations happening: the first is the scaling of the number of tasks in each service, and the second is the scaling of the underlying cluster of Amazon EC2 instances.

During implementation, the AWS team released Service Auto Scaling, so we researched how to fold task scaling into the existing solution. Because the solution was built with AWS CloudFormation, task scaling needed to be handled the same way. However, the new scaling feature was not yet available through CloudFormation, so the natural course was to implement it using AWS Lambda-backed custom resources.

The corresponding Lambda function is implemented in Node.js 4.3, and automatic scaling is driven by the CPUUtilization Amazon CloudWatch metric. The ECS scaling policies are registered with CloudWatch alarms that are triggered when specific thresholds are crossed. The MemoryUtilization CloudWatch metric can similarly be used to scale the services in and out.

The Lambda function and CloudFormation custom resource JSON are available in the Flux7 GitHub repository: https://github.com/Flux7Labs/blog-code-samples/tree/master/2016-10-ecs-enables-rac-sap-hybris
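The repository above contains the actual Lambda function and custom resource JSON. Purely as a hedged sketch of the Application Auto Scaling and CloudWatch calls such a setup boils down to – the service and cluster names, capacity bounds, and the 75% CPU threshold below are illustrative assumptions, not values from the RAC deployment – it might look like this in boto3:

    import boto3

    aas = boto3.client("application-autoscaling")
    cw = boto3.client("cloudwatch")

    # Hypothetical cluster and service names used throughout this sketch.
    resource_id = "service/hybris-cluster/hybris"

    # Make the ECS service's DesiredCount scalable between 2 and 10 tasks.
    aas.register_scalable_target(
        ServiceNamespace="ecs",
        ResourceId=resource_id,
        ScalableDimension="ecs:service:DesiredCount",
        MinCapacity=2,
        MaxCapacity=10,
    )

    # Step scaling policy that adds one task each time it is triggered.
    policy = aas.put_scaling_policy(
        PolicyName="hybris-scale-out",
        ServiceNamespace="ecs",
        ResourceId=resource_id,
        ScalableDimension="ecs:service:DesiredCount",
        PolicyType="StepScaling",
        StepScalingPolicyConfiguration={
            "AdjustmentType": "ChangeInCapacity",
            "Cooldown": 300,
            "StepAdjustments": [{"MetricIntervalLowerBound": 0.0, "ScalingAdjustment": 1}],
        },
    )

    # CloudWatch alarm on the service's CPUUtilization that fires the policy.
    cw.put_metric_alarm(
        AlarmName="hybris-cpu-high",
        Namespace="AWS/ECS",
        MetricName="CPUUtilization",
        Dimensions=[
            {"Name": "ClusterName", "Value": "hybris-cluster"},
            {"Name": "ServiceName", "Value": "hybris"},
        ],
        Statistic="Average",
        Period=60,
        EvaluationPeriods=3,
        Threshold=75.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[policy["PolicyARN"]],
    )

A mirror-image "scale in" policy and low-CPU alarm, or a MemoryUtilization pair, follow the same shape.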

Scaling ECS services and EC2 instances automatically

The key to understanding cluster scaling is to start by understanding the problem. We are no longer running a homogeneous workload in a simple environment. We have a cluster hosting a heterogeneous workload with different requirements and different demands on the system.

This clicked for us after we phrased the problem as, “Make sure the cluster has enough capacity to launch ‘x’ more instances of a task.” This led us to realize that we were no longer looking at an overall average resource utilization problem, but rather a discrete bin packing problem.

The problem is inherently more complex. (Anyone remember from algorithms class how the discrete knapsack problem is NP-hard, while the continuous knapsack problem can easily be solved in polynomial time? Same thing.) So we have to check, for each individual instance, whether a particular task can be scheduled on it, and if the cluster can't fit the required number of additional copies of any task, we need to allocate more instance capacity.
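To make the bin packing idea concrete, here is a minimal sketch of that capacity check, assuming we already know each instance's remaining CPU units and memory (the real implementation reads these from the ECS API) and using a simple first-fit placement:

    def schedulable_count(instances, task_cpu, task_mem):
        """Count how many more copies of a task fit on the cluster,
        placing each copy on the first instance with enough headroom."""
        count = 0
        # Work on copies so we can "reserve" capacity as we place tasks.
        remaining = [dict(i) for i in instances]
        placed = True
        while placed:
            placed = False
            for inst in remaining:
                if inst["cpu"] >= task_cpu and inst["mem"] >= task_mem:
                    inst["cpu"] -= task_cpu
                    inst["mem"] -= task_mem
                    count += 1
                    placed = True
                    break
        return count

    # Example with made-up numbers: remaining capacity per EC2 instance,
    # in CPU units and MiB.
    cluster = [
        {"cpu": 2048, "mem": 7500},
        {"cpu": 512, "mem": 2000},
    ]
    headroom = schedulable_count(cluster, task_cpu=1024, task_mem=4096)
    if headroom < 2:  # "x" extra tasks we want to be able to launch
        print("Not enough headroom; grow the Auto Scaling group")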

To ensure that ECS scaling always has enough resources to scale out, and just enough resources after scaling in, the Auto Scaling group needed to scale according to three criteria:

1. ECS task count in relation to the host EC2 instance count in a cluster
2. Memory reservation
3. CPU reservation

We implemented the first criterion for the Auto Scaling group. Instead of using the default scaling abilities, we set group scale-in and scale-out using Lambda functions triggered periodically by a combination of AWS::Lambda::Permission and AWS::Events::Rule resources, because we wanted specific criteria for scaling.
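As an illustrative sketch of what such a periodically triggered function can do for the first criterion – the cluster name, Auto Scaling group name, and tasks-per-instance ratio are hypothetical, and the production function in the repository linked below differs – the logic is roughly:

    import boto3

    ecs = boto3.client("ecs")
    asg = boto3.client("autoscaling")

    CLUSTER = "hybris-cluster"          # hypothetical names
    ASG_NAME = "hybris-ecs-asg"
    TASKS_PER_INSTANCE = 4              # rough packing ratio assumed for illustration

    def handler(event, context):
        # Pagination omitted for brevity; list_* return up to 100 ARNs per call.
        tasks = len(ecs.list_tasks(cluster=CLUSTER)["taskArns"])
        instances = len(ecs.list_container_instances(cluster=CLUSTER)["containerInstanceArns"])

        # Enough instances for the current tasks, plus one instance of spare headroom.
        wanted = max(1, -(-tasks // TASKS_PER_INSTANCE) + 1)  # ceiling division + 1

        group = asg.describe_auto_scaling_groups(
            AutoScalingGroupNames=[ASG_NAME]
        )["AutoScalingGroups"][0]
        wanted = min(max(wanted, group["MinSize"]), group["MaxSize"])

        if wanted != instances:
            asg.set_desired_capacity(
                AutoScalingGroupName=ASG_NAME,
                DesiredCapacity=wanted,
                HonorCooldown=True,
            )
        return {"tasks": tasks, "instances": instances, "desired": wanted}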

The Lambda function is available in the Flux7 GitHub repository: https://github.com/Flux7Labs/blog-code-samples/tree/master/2016-10-ecs-enables-rac-sap-hybris

Future versions of this piece of code will incorporate the other two criteria along with the ability to use CloudWatch alarms to trigger scaling.

Conclusion

Using advanced ECS features like Service Auto Scaling in conjunction with Lambda to meet RAC’s business requirements, RAC and Flux7 were able to Dockerize SAP Hybris in production for the first time ever.

Further, ECS and CloudFormation give users the ability to implement robust solutions while still providing the ability to roll back in case of failures. With ECS as a backbone technology, RAC has been able to deploy a Hybris setup with automatic scaling, self-healing, one-click deployment, CI/CD, and PCI compliance consistent with the company’s latest technology guidelines and meeting the requirements of their newly-formed culture of DevOps and extreme agility.

If you have any questions or suggestions, please comment below.

Noisia Handle Their Album Leak Without Blaming Fans

Post Syndicated from Andy original https://torrentfreak.com/noisia-handle-their-album-leak-without-blaming-fans-160806/

For close to 20 years, online piracy has been the bane of the music industry. Starting off relatively small in the days of slow Internet connections, the phenomenon boomed as broadband took hold.

In the years that followed, countless millions of tracks were shared among like-minded Internet users eager to keep up with their favorite artists and to discover those they never knew existed.

But with 2016 almost two-thirds done, legal music availability has never been better and the excuses for obtaining all content illegally are slowly disappearing over the horizon, for those who can afford it at least.

Nevertheless, one type of piracy refuses to go away, no matter how well-off the consumer. By definition, pre-release music is officially unavailable to buy, so money is completely out of the equation. Such leaks are the ultimate forbidden fruit for hardcore fans of all standing, including those most likely to shell out for the real thing.

This means that the major labels regularly cite pre-release leaks as the most damaging form of piracy, as they are unable to compete with the unauthorized copies already available. Release plans and marketing drives are planned and paid for in advance, and record companies are reluctant to change them.

As a result, such leaks are often followed by extreme outbursts from labels which slam leakers and downloaders alike. That's perhaps understandable, but there are better ways of getting the message across without attacking fans. Last week, drum and bass, dubstep, breakbeat and house stars Noisia showed how it should be done after their long-awaited album Outer Edges leaked online six weeks early.

“Friday night, while we were in the final minutes of setting up the stage for our first ever Outer Edges show, we received the news that our album had been leaked. We think you can imagine how bad we felt at that moment,” the trio told fans on Facebook.

“We realize it’s 2016, and things like these happen all the time. Still, it’s quite a setback. All the plans we’ve made have to be scrapped and replaced by something less ideal, because we have to react to this unfortunate situation.”

In the DnB scene Noisia are absolutely huge, but despite their success they have chosen to stay close to their fans. The trio run their own label (Vision Recordings), which gives them more control than many artists and allows them to give fans unprecedented access to their music.

“So far in releasing this album, we’ve made a conscious effort to make every track that’s available somewhere, available everywhere. If you pre-ordered the album on iTunes or our webstore, you received the new track the same day it was first premiered on Soundcloud,” Noisia continue.

“We believe that users of all platforms should be able to listen to our music pretty much the minute it’s available anywhere else. In this philosophy, the availability of our whole album on illegal download sites means that we have to make it available on all platforms.”

So, instead of going off on a huge rant, Noisia leapt into action.

“We have immediately changed all our previous plans and made the whole album available to buy on our web store right now,” the band announced on the day of the leak.

Sadly, other digital platforms couldn't move as quickly, so most only got the release on Friday. The physical release couldn't be brought forward either, due to production limitations, so it will appear on the original schedule.

“Even though we are unhappy about this leak, we’re still really happy with the music. We really hope you will enjoy it as much as we’ve enjoyed making it,” Noisia said.

With this reaction to what must’ve been a hugely disappointing leak, Noisia showed themselves to be professionals. No crying, no finger pointing, just actions to limit the impact of the situation alongside a decision to compete with free, both before and after the leak.

Nevertheless, Thijs, Martijn and Nik are human after all, so they couldn’t resist taking a little shot at whoever leaked their music. No threats of lawsuits of course, but a decent helping of sarcasm and dark humor.

“Thanks to whoever leaked our album, next time please do it after the album is out, maybe we can coordinate? Oh wait, that wouldn’t really be leaking… And besides, we don’t negotiate with terror..leaking persons,” they said.

“No, instead we will fold, and adjust our entire strategy. Take that! We hope you get stung by a lot of mosquitoes this summer, and maybe also next summer.”

Noisia’s next album will probably take years to arrive (Outer Edges took six years) but when it does, don’t expect to be warned months in advance. The trio are already vowing to do things differently next time, so expect the unexpected.

Noisia make a lot of their music available for free on Soundcloud, YouTube and ad-supported Spotify.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.