Tag Archives: philosophy

The Pirate Bay Isn’t Affected By Adverse Court Rulings – Everyone Else Is

Post Syndicated from Andy original https://torrentfreak.com/the-pirate-bay-isnt-affected-by-adverse-court-rulings-everyone-else-is-170618/

For more than a decade The Pirate Bay has been the world’s most controversial site. Delivering huge quantities of copyrighted content to the masses, the platform is revered and reviled across the copyright spectrum.

Its reputation is one of a defiant Internet swashbuckler, but due to changes in how the site has been run in more recent times, its current philosophy is more difficult to gauge. What has never been in doubt, however, is the site’s original intent to be as provocative as possible.

Through endless publicity stunts, some real, some just for the ‘lulz’, The Pirate Bay managed to attract a massive audience, all while incurring the wrath of every major copyright holder in the world.

Make no mistake, they all queued up to strike back, but every subsequent rightsholder action was met by a Pirate Bay middle finger, two fingers, or chin flick, depending on the mood of the day. This only served to further delight the masses, who happily spread the word while keeping their torrents flowing.

This vicious circle of being targeted by the entertainment industries, mocking them, and then reaping the traffic benefits, developed into the cheapest long-term marketing campaign the Internet had ever seen. But nothing is ever truly for free and there have been consequences.

After years of taunting Hollywood and the music industry with its refusals to capitulate, the endless legal action that the site would ordinarily have been forced to participate in largely took place without The Pirate Bay being present. It doesn’t take a law degree to work out what happened in each and every one of those cases, whatever complex route they took through the legal system. No defense, no win.

For example, the web-blocking phenomenon across the UK, Europe, Asia and Australia was driven by the site’s absolute resilience and although there would clearly have been other scapegoats had The Pirate Bay disappeared, the site was the ideal bogeyman the copyright lobby required to move forward.

Filing blocking lawsuits and bringing hosts, advertisers, and ISPs on board for anti-piracy initiatives was also made easier with the ‘evil’ Pirate Bay still online. Immune to every anti-piracy technique under the sun, the platform’s continued existence in the face of all onslaughts only strengthened the cases of those arguing for even more drastic measures.

Over a decade, this has meant a significant tightening of the sharing and streaming climate. Without any big legislative changes but plenty of case law against The Pirate Bay, web-blocking is now a walk in the park, ad hoc domain seizures are a fairly regular occurrence, and few companies want to host sharing sites. Advertisers and brands are also hesitant over where they place their ads. It’s a very different world to the one of 10 years ago.

While it would be wrong to attribute every tightening of the noose to the actions of The Pirate Bay, there’s little doubt that the site and its chaotic image played a huge role in where copyright enforcement is today. The platform set out to provoke and succeeded in every way possible, gaining supporters in their millions. It could also be argued it kicked a hole in a hornets’ nest, releasing the hell inside.

But perhaps the site’s most amazing achievement is the way it has managed to stay online, despite all the turmoil.

This week yet another ruling, this time from the powerful European Court of Justice, found that by offering links in the manner it does, The Pirate Bay and other sites are liable for communicating copyright works to the public. Of course, this prompted the usual swathe of articles claiming that this could be the final nail in the site’s coffin.


In common with every ruling, legal defeat, and legislative restriction put in place due to the site’s activities, this week’s decision from the ECJ will have zero effect on the Pirate Bay’s availability. For right or wrong, the site was breaking the law long before this ruling and will continue to do so until it decides otherwise.

What we have instead is a further tightened legal landscape that will have a lasting effect on everything BUT the site, including weaker torrent sites, Internet users, and user-uploaded content sites such as YouTube.

With The Pirate Bay carrying on regardless, that is nothing short of remarkable.


Teaching tech

Post Syndicated from Eevee original https://eev.ee/blog/2017/06/10/teaching-tech/

A sponsored post from Manishearth:

I would kinda like to hear about any thoughts you have on technical teaching or technical writing. Pedagogy is something I care about. But I don’t know how much you do, so feel free to ignore this suggestion 🙂

Good news: I care enough that I’m trying to write a sorta-kinda-teaching book!

Ironically, one of the biggest problems I’ve had with writing the introduction to that book is that I keep accidentally rambling on for pages about problems and difficulties with teaching technical subjects. So maybe this is a good chance to get it out of my system.


I recently tried out a new thing. It was Phaser, but this isn’t a dig on them in particular, just a convenient example fresh in my mind. If anything, they’re better than most.

As you can see from Phaser’s website, it appears to have tons of documentation. Two of the six headings are “LEARN” and “EXAMPLES”, which seems very promising. And indeed, Phaser offers:

  • Several getting-started walkthroughs
  • Possibly hundreds of examples
  • A news feed that regularly links to third-party tutorials
  • Thorough API docs

Perfect. Beautiful. Surely, a dream.

Well, almost.

The examples are all microscopic, usually focused around a single tiny feature — many of them could be explained just as well with one line of code. There are a few example games, but they’re short aimless demos. None of them are complete games, and there’s no showcase either. Games sometimes pop up in the news feed, but most of them don’t include source code, so they’re not useful for learning from.

Likewise, the API docs are just API docs, leading to the sorts of problems you might imagine. For example, in a few places there’s a mention of a preUpdate stage that (naturally) happens before update. You might rightfully wonder what kinds of things happen in preUpdate — and more importantly, what should you put there, and why?

Let’s check the API docs for Phaser.Group.preUpdate:

The core preUpdate – as called by World.

Okay, that didn’t help too much, but let’s check what Phaser.World has to say:

The core preUpdate – as called by World.

Ah. Hm. It turns out World is a subclass of Group and inherits this method — and thus its unaltered docstring — from Group.

I did eventually find some brief docs attached to Phaser.Stage (but only by grepping the source code). It mentions what the framework uses preUpdate for, but not why, and not when I might want to use it too.

The trouble here is that there’s no narrative documentation — nothing explaining how the library is put together and how I’m supposed to use it. I get handed some brief primers and a massive reference, but nothing in between. It’s like buying an O’Reilly book and finding out it only has one chapter followed by a 500-page glossary.

API docs are great if you know specifically what you’re looking for, but they don’t explain the best way to approach higher-level problems, and they don’t offer much guidance on how to mesh nicely with the design of a framework or big library. Phaser does a decent chunk of stuff for you, off in the background somewhere, so it gives the strong impression that it expects you to build around it in a particular way… but it never tells you what that way is.
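For what it’s worth, the structure the docs imply seems to be a two-phase loop: the framework does its own bookkeeping in preUpdate (physics, transforms), then hands control to your update. Here’s a minimal sketch of that pattern — the names and details are my guesses, not Phaser’s actual internals:

```python
class Entity:
    """A toy sprite living in a Phaser-like two-phase update loop."""

    def __init__(self, x, vx):
        self.x, self.vx = x, vx

    def pre_update(self, dt):
        # Framework phase: the bookkeeping the engine does for you,
        # e.g. integrating velocity into position.
        self.x += self.vx * dt

    def update(self, dt):
        # Game phase: your logic, which gets to see post-physics state.
        if self.x > 100:
            self.vx = -abs(self.vx)  # bounce off a wall at x = 100


def step_world(entities, dt):
    # The engine runs *all* pre_updates before *any* updates, so game
    # logic always sees a consistent, fully-advanced world state.
    for e in entities:
        e.pre_update(dt)
    for e in entities:
        e.update(dt)


e = Entity(x=99.5, vx=10.0)
step_world([e], dt=0.1)
# After one step: the physics phase moved e past the wall (x == 100.5),
# and its own update logic then reversed the velocity (vx == -10.0).
```

That ordering — everyone’s pre_update, then everyone’s update — would explain why the split exists at all, but it’s exactly the kind of thing the API docs never spell out.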


Ah, but this is what tutorials are for, right?

I confess I recoil whenever I hear the word “tutorial”. It conjures an image of a uniquely useless sort of post, which goes something like this:

  1. Look at this cool thing I made! I’ll teach you how to do it too.

  2. Press all of these buttons in this order. Here’s a screenshot, which looks nothing like what you have, because I’ve customized the hell out of everything.

  3. You did it!

The author is often less than forthcoming about why they made any of the decisions they did, where you might want to try something else, or what might go wrong (and how to fix it).

And this is to be expected! Writing out any of that stuff requires far more extensive knowledge than you need just to do the thing in the first place, and you need to do a good bit of introspection to sort out something coherent to say.

In other words, teaching is hard. It’s a skill, and it takes practice, and most people blogging are not experts at it. Including me!

With Phaser, I noticed that several of the third-party tutorials I tried to look at were 404s — sometimes less than a year after they were linked on the site. Pretty major downside to relying on the community for teaching resources.

But I also notice that… um…

Okay, look. I really am not trying to rag on this author. I’m not. They tried to share their knowledge with the world, and that’s a good thing, something worthy of praise. I’m glad they did it! I hope it helps someone.

But for the sake of example, here is the most recent entry in Phaser’s list of community tutorials. I have to link it, because it’s such a perfect example. Consider:

  • The post itself is a bulleted list of explanation followed by a single contiguous block of 250 lines of source code. (Not that there’s anything wrong with bulleted lists, mind you.) That code contains zero comments and zero blank lines.

  • This is only part two in what I think is a series aimed at beginners, yet the title and much of the prose focus on object pooling, a performance hack that’s easy to add later and that’s almost certainly unnecessary for a game this simple. There is no explanation of why this is done; the prose only says you’ll understand why it’s critical once you add a lot more game objects.

  • It turns out I only have two things to say here so I don’t know why I made this a bulleted list.

In short, it’s not really a guided explanation; it’s “look what I did”.

And that’s fine, and it can still be interesting. I’m not sure English is even this person’s first language, so I’m hardly going to criticize them for not writing a novel about platforming.

The trouble is that I doubt a beginner would walk away from this feeling very enlightened. They might be closer to having the game they wanted, so there’s still value in it, but it feels closer to having someone else do it for them. And an awful lot of tutorials I’ve seen — particularly of the “post on some blog” form (which I’m aware is the genre of thing I’m writing right now) — look similar.

This isn’t some huge social problem; it’s just people writing on their blog and contributing to the corpus of written knowledge. It does become a bit stickier when a large project relies on these community tutorials as its main set of teaching aids.

Again, I’m not ragging on Phaser here. I had a slightly frustrating experience with it, coming in knowing what I wanted but unable to find a description of the semantics anywhere, but I do sympathize. Teaching is hard, writing documentation is hard, and programmers would usually rather program than do either of those things. For free projects that run on volunteer work, and in an industry where anything other than programming is a little undervalued, getting good docs written can be tricky.

(Then again, Phaser sells books and plugins, so maybe they could hire a documentation writer. Or maybe the whole point is for you to buy the books?)

Some pretty good docs

Python has pretty good documentation. It introduces the language with a tutorial, then documents everything else in both a library and language reference.

This sounds an awful lot like Phaser’s setup, but there’s some considerable depth in the Python docs. The tutorial is highly narrative and walks through quite a few corners of the language, stopping to mention common pitfalls and possible use cases. I clicked an arbitrary heading and found a pleasant, informative read that somehow avoids being bewilderingly dense.

The API docs also take on a narrative tone — even something as humble as the collections module offers numerous examples, use cases, patterns, recipes, and hints of interesting ways you might extend the existing types.

I’m being a little vague and hand-wavey here, but it’s hard to give specific examples without just quoting two pages of Python documentation. Hopefully you can see right away what I mean if you just take a look at them. They’re good docs, Bront.
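To give one tiny taste anyway (my example, not lifted from the docs): the Counter writeup doesn’t just list methods, it shows you the patterns the type is for — tallying, ranking, and the fact that missing keys count as zero.

```python
from collections import Counter

# The sort of recipe the collections docs actually walk you through,
# rather than leaving you to infer it from a method list.
words = "the quick brown fox jumps over the lazy dog the end".split()
tally = Counter(words)

print(tally.most_common(1))  # [('the', 3)]
print(tally["fox"])          # 1
print(tally["missing"])      # 0 -- missing keys count as zero, by design
```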

I’ve likewise always enjoyed the SQLAlchemy documentation, which follows much the same structure as the main Python documentation. SQLAlchemy is a database abstraction layer plus ORM, so it can do a lot of subtly intertwined stuff, and the complexity of the docs reflects this. Figuring out how to do very advanced things correctly, in particular, can be challenging. But for the most part it does a very thorough job of introducing you to a large library with a particular philosophy and how to best work alongside it.

I softly contrast this with, say, the Perl documentation.

It’s gotten better since I first learned Perl, but Perl’s docs are still a bit of a strange beast. They exist as a flat collection of manpage-like documents with terse names like perlootut. The documentation is certainly thorough, but much of it has a strange… allocation of detail.

For example, perllol — the explanation of how to make a list of lists, which somehow merits its own separate documentation — offers no fewer than nine similar variations of the same code for reading a file into a nested list of the words on each line. Where Python offers examples for a variety of different problems, Perl shows you a lot of subtly different ways to do the same basic thing.
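For contrast, the single idiom that covers all nine of perllol’s variations fits in one line of Python (the input here is a stand-in list, where perllol reads from a file):

```python
# Read "a file" into a list of lists: one inner list of words per line.
# (perllol spends nine code samples on variations of exactly this.)
lines = ["hello brave new world", "foo bar", "one"]  # stand-in for a file's lines
lol = [line.split() for line in lines]
print(lol)  # [['hello', 'brave', 'new', 'world'], ['foo', 'bar'], ['one']]
```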

A similar problem is that Perl’s docs sometimes offer far too much context; consider the references tutorial, which starts by explaining that references are a powerful “new” feature in Perl 5 (first released in 1994). It then explains why you might want to nest data structures… from a Perl 4 perspective, thus explaining why Perl 5 is so much better.

Some stuff I’ve tried

I don’t claim to be a great teacher. I like to talk about stuff I find interesting, and I try to do it in ways that are accessible to people who aren’t lugging around the mountain of context I already have. This being just some blog, it’s hard to tell how well that works, but I do my best.

I also know that I learn best when I can understand what’s going on, rather than just seeing surface-level cause and effect. Of course, with complex subjects, it’s hard to develop an understanding before you’ve seen the cause and effect a few times, so there’s a balancing act between showing examples and trying to provide an explanation. Too many concrete examples feel like rote memorization; too much abstract theory feels disconnected from anything tangible.

The attempt I’m most pleased with is probably my post on Perlin noise. It covers a fairly specific subject, which made it much easier. It builds up one step at a time from scratch, with visualizations at every point. It offers some interpretations of what’s going on. It clearly explains some possible extensions to the idea, but distinguishes those from the core concept.

It is a little math-heavy, I grant you, but that was hard to avoid with a fundamentally mathematical topic. I had to be economical with the background information, so I let the math be a little dense in places.
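The core construction that post builds up — random gradients at lattice points, each one’s influence measured against the offset from that point, smoothly interpolated — is compact enough to sketch here in the 1D case (the post itself does 2D, and this condensed version glosses over plenty):

```python
import math
import random


def fade(t):
    # Perlin's quintic smoothing curve: has zero first and second
    # derivatives at t = 0 and t = 1, so cells join up smoothly.
    return t * t * t * (t * (t * 6 - 15) + 10)


def perlin1d(x, gradients):
    i = math.floor(x)
    t = x - i                      # position within the cell, in [0, 1)
    g0, g1 = gradients[i], gradients[i + 1]
    v0 = g0 * t                    # influence of the left lattice gradient
    v1 = g1 * (t - 1)              # influence of the right lattice gradient
    u = fade(t)
    return v0 + u * (v1 - v0)      # blend between the two influences


random.seed(0)
grads = [random.choice((-1.0, 1.0)) for _ in range(11)]  # one per lattice point
# A signature property: Perlin noise is exactly zero at every lattice
# point, no matter what the gradients are, because both offsets vanish there.
print([perlin1d(i, grads) for i in range(10)])
```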

But the best part about it by far is that I learned a lot about Perlin noise in the process of writing it. In several places I realized I couldn’t explain what was going on in a satisfying way, so I had to dig deeper into it before I could write about it. Perhaps there’s a good guideline hidden in there: don’t try to teach as much as you know?

I’m also fairly happy with my series on making Doom maps, though they meander into tangents a little more often. It’s hard to talk about something like Doom without meandering, since it’s a convoluted ecosystem that’s grown organically over the course of 24 years and has at least three ways of doing anything.

And finally there’s the book I’m trying to write, which is sort of about game development.

One of my biggest grievances with game development teaching in particular is how often it leaves out important touches. Very few guides will tell you how to make a title screen or menu, how to handle death, how to get a Mario-style variable jump height. They’ll show you how to build a clearly unfinished demo game, then leave you to your own devices.

I realized that the only reliable way to show how to build a game is to build a real game, then write about it. So the book is laid out as a narrative of how I wrote my first few games, complete with stumbling blocks and dead ends and tiny bits of polish.

I have no idea how well this will work, or whether recapping my own mistakes will be interesting or distracting for a beginner, but it ought to be an interesting experiment.

zetcd: running ZooKeeper apps without ZooKeeper

Post Syndicated from ris original https://lwn.net/Articles/723334/rss

The CoreOS Blog introduces the first
beta release, v0.0.1, of zetcd. “Distributed systems commonly rely
on a distributed consensus to coordinate work. Usually the systems
providing distributed consensus guarantee information is delivered in order
and never suffer split-brain conflicts. The usefulness, but rich design
space, of such systems is evident by the proliferation of implementations;
projects such as chubby, ZooKeeper, etcd, and consul, despite differing in philosophy
and protocol, all focus on serving similar basic key-value primitives for
distributed consensus. As part of making etcd the most appealing foundation
for distributed systems, the etcd team developed a new proxy, zetcd, to
serve ZooKeeper requests with an unmodified etcd cluster.”

Operating OpenStack at Scale

Post Syndicated from mikesefanov original https://yahooeng.tumblr.com/post/159795571841

By James Penick, Cloud Architect & Gurpreet Kaur, Product Manager

A version of this byline was originally written for and appears in CIO Review.

A successful private cloud presents a consistent and reliable facade over the complexities of hyperscale infrastructure. It must simultaneously handle constant organic traffic growth, unanticipated spikes, a multitude of hardware vendors, and discordant customer demands. The depth of this complexity only increases with the age of the business, leaving a private cloud operator saddled with legacy hardware, old network infrastructure, customers dependent on legacy operating systems, and the list goes on. These are the foundations of the horror stories told by grizzled operators around the campfire.

Providing a plethora of services globally for over a billion active users requires a hyperscale infrastructure. Yahoo’s on-premises infrastructure comprises datacenters housing hundreds of thousands of physical and virtual compute resources globally, connected via a multi-terabit network backbone. As one of the very first hyperscale internet companies in the world, Yahoo’s infrastructure had grown organically – things were built, and rebuilt, as the company learned and grew. The resulting web of modern and legacy infrastructure became progressively more difficult to manage. Initial attempts to manage this via IaaS (Infrastructure-as-a-Service) taught some hard lessons. However, those lessons served us well when OpenStack was selected to manage Yahoo’s datacenters; some of them are shared below.

Centralized team offering Infrastructure-as-a-Service

Chief amongst the lessons learned prior to OpenStack was that IaaS must be presented as a core service to the whole organization by a dedicated team. An a-la-carte IaaS, where each user is expected to manage their own control plane and inventory, just isn’t sustainable at scale. Multiple teams tackling the same challenges involved in the curation of software, deployment, upkeep, and security within an organization is not just a duplication of effort; it removes the opportunity for improved synergy with all levels of the business. The first OpenStack cluster, with a centralized dedicated developer and service engineering team, went live in June 2012. This model has served us well and has been a crucial piece of making OpenStack succeed at Yahoo. One of the biggest advantages to a centralized, core team is the ability to collaborate with the foundational teams upon which any business is built: Supply chain, Datacenter Site-Operations, Finance, and finally our customers, the engineering teams. Building a close relationship with these vital parts of the business provides the ability to streamline the process of scaling inventory and presenting on-demand infrastructure to the company.

Developers love instant access to compute resources

Our developer productivity clusters, named “OpenHouse,” were a huge hit. Ideation and experimentation are core to developers’ DNA at Yahoo, and OpenHouse empowers our engineers to innovate, prototype, develop, and quickly iterate on ideas. No longer is a developer reliant on a static and costly development machine under their desk. OpenHouse enables developer agility and cost savings by obviating the desktop.

Dynamic infrastructure empowers agile products

From a humble beginning of a single, small OpenStack cluster, Yahoo’s OpenStack footprint is growing beyond 100,000 VM instances globally, with our single largest virtual machine cluster running over a thousand compute nodes, without using Nova Cells.

Until this point, Yahoo’s production footprint was nearly 100% focused on baremetal – a part of the business that one cannot simply ignore. In 2013, Yahoo OpenStack Baremetal began to manage all new compute deployments. Interestingly, after moving to a common API to provision baremetal and virtual machines, there was a marked increase in demand for virtual machines.

Developers across all major business units, ranging from Yahoo Mail, Video, News, Finance, Sports and many more, were thrilled to get instant access to compute resources and hit the ground running on their projects. Today, the OpenStack team continues to migrate the rest of the business to OpenStack-managed infrastructure. Our baremetal footprint is well beyond that of our VMs, with over 100,000 baremetal instances provisioned by OpenStack Nova via Ironic.

How did Yahoo hit this scale?  

Scaling OpenStack begins with understanding how its various components work and how they communicate with one another. This topic can be very deep, so for the sake of brevity we’ll hit the high points.

1. Start at the bottom and think about the underlying hardware

Do not overlook the unique resource constraints for the services which power your cloud, nor the fashion in which those services are to be used. Leverage that understanding to drive hardware selection. For example, consider the role of the database server in an OpenStack cluster and the multitudinous calls made to it: compute node heartbeats, instance state changes, normal user operations, and so on. Even in a modest-sized Nova cluster, this core component is extremely busy and needs adequate computational resources to perform. Yet many deployers skimp on the hardware, and the performance of the whole cluster ends up bottlenecked by DB I/O. By thinking ahead you can save yourself a lot of heartburn later on.

2. Think about how things communicate

Our cluster databases are configured to be multi-master single-writer with automated failover. Control plane services have been modified to split DB reads directly to the read slaves and only write to the write-master. This distributes load across the database servers.
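As an illustration of the routing policy (the hostnames and the string-matching shortcut are placeholders of ours for this sketch, not Yahoo’s actual implementation):

```python
import random


class RoutingSession:
    """Toy model of a control-plane DB session: writes always go to the
    single write-master, reads are spread across the read replicas."""

    def __init__(self, master, replicas):
        self.master = master
        self.replicas = replicas

    def pick(self, statement):
        # SELECTs can be served by any replica; everything else must go
        # to the write-master, keeping a single writer despite the
        # multi-master configuration.
        if statement.lstrip().upper().startswith("SELECT"):
            return random.choice(self.replicas)
        return self.master


db = RoutingSession(master="db-master", replicas=["db-read-1", "db-read-2"])
print(db.pick("SELECT * FROM instances"))   # one of the replicas
print(db.pick("UPDATE instances SET ..."))  # always db-master
```

The payoff is exactly what the paragraph above describes: read load, which dominates in a busy Nova cluster, is fanned out across replicas while write ordering stays trivially consistent.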

3. Scale wide

OpenStack has many small horizontally-scalable components which can peacefully cohabitate on the same machines: the Nova, Keystone, and Glance APIs, for example. Stripe these across several small or modest machines. Some services, such as the Nova scheduler, run the risk of race conditions when running multi-active. If the risk of race conditions is unacceptable, use ZooKeeper to manage leader election.

4. Remove dependencies

In a Yahoo datacenter, DHCP is only used to provision baremetal servers. By statically declaring IPs in our instances via cloud-init, our infrastructure is less prone to outage from a failure in the DHCP infrastructure.
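A static declaration of this kind looks roughly like the following cloud-init network-config fragment (v1 schema; the interface name and addresses are placeholders, not Yahoo’s):

```yaml
# Hypothetical cloud-init network-config (v1 schema); addresses are
# illustrative only. With this in place, the instance never needs DHCP.
version: 1
config:
  - type: physical
    name: eth0
    subnets:
      - type: static
        address: 192.0.2.10/24
        gateway: 192.0.2.1
        dns_nameservers:
          - 192.0.2.53
```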

5. Don’t be afraid to replace things

Neutron used Dnsmasq to provide DHCP services; however, Dnsmasq was not designed to address the complexity or scale of a dynamic environment. For example, Dnsmasq must be restarted for any config change, such as when a new host is being provisioned. In the Yahoo OpenStack clusters it has been replaced by ISC-DHCPD, which scales far better than Dnsmasq and allows dynamic configuration updates via an API.

6. Or split them apart

Some of the core imaging services provided by Ironic, such as DHCP, TFTP, and HTTPS, communicate with a host during the provisioning process. These services are normally part of the Ironic Conductor (IC) service. In our environment we split these services into a new and physically-distinct service called the Ironic Transport Service (ITS). This brings value by:

  • Adding security: Splitting the ITS from the IC allows us to block all network traffic from production compute nodes to the IC and other parts of our control plane. If a malicious entity attacks a node serving production traffic, they cannot escalate from it to our control plane.
  • Scale: The ITS hosts allow us to horizontally scale the core provisioning services with which nodes communicate.
  • Flexibility: ITS allows Yahoo to manage remote sites, such as peering points, without building a whole cluster in each site. Resources in those sites can now be managed by the nearest Yahoo owned & operated (O&O) datacenter.

Be prepared for faulty hardware!

Running IaaS reliably at hyperscale is more than just scaling the control plane. One must take a holistic look at the system and consider everything. In fact, when examining provisioning failures, our engineers determined the majority root cause was faulty hardware. For example, there are a number of machines from varying vendors whose IPMI firmware fails from time to time, leaving the host inaccessible to remote power management. Some fail within minutes or weeks of installation. These failures occur on many different models, across many generations, and across many hardware vendors. Exposing these failures to users would create a very negative experience, and the cloud must be built to tolerate this complexity.

Focus on the end state

Yahoo’s experience shows that one can run OpenStack at hyperscale, leveraging it to wrap infrastructure and remove perceived complexity. Correctly leveraged, OpenStack presents an easy, consistent, and error-free interface. Delivering this interface is core to our design philosophy as Yahoo continues to double down on our OpenStack investment. The Yahoo OpenStack team looks forward to continue collaborating with the OpenStack community to share feedback and code.

Security Orchestration and Incident Response

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/03/security_orches.html

Last month at the RSA Conference, I saw a lot of companies selling security incident response automation. Their promise was to replace people with computers – sometimes with the addition of machine learning or other artificial intelligence techniques – and to respond to attacks at computer speeds.

While this is a laudable goal, there’s a fundamental problem with doing this in the short term. You can only automate what you’re certain about, and there is still an enormous amount of uncertainty in cybersecurity. Automation has its place in incident response, but the focus needs to be on making the people effective, not on replacing them – security orchestration, not automation.

This isn’t just a choice of words – it’s a difference in philosophy. The US military went through this in the 1990s. What was called the Revolution in Military Affairs (RMA) was supposed to change how warfare was fought. Satellites, drones and battlefield sensors were supposed to give commanders unprecedented information about what was going on, while networked soldiers and weaponry would enable troops to coordinate to a degree never before possible. In short, the traditional fog of war would be replaced by perfect information, providing certainty instead of uncertainty. They, too, believed certainty would fuel automation and, in many circumstances, allow technology to replace people.

Of course, it didn’t work out that way. The US learned in Afghanistan and Iraq that there are a lot of holes in both its collection and coordination systems. Drones have their place, but they can’t replace ground troops. The advances from the RMA brought with them some enormous advantages, especially against militaries that didn’t have access to the same technologies, but never resulted in certainty. Uncertainty still rules the battlefield, and soldiers on the ground are still the only effective way to control a region of territory.

But along the way, we learned a lot about how the feeling of certainty affects military thinking. Last month, I attended a lecture on the topic by H.R. McMaster. This was before he became President Trump’s national security advisor-designate. Then, he was the director of the Army Capabilities Integration Center. His lecture touched on many topics, but at one point he talked about the failure of the RMA. He confirmed that military strategists mistakenly believed that data would give them certainty. But he took this change in thinking further, outlining the ways this belief in certainty had repercussions in how military strategists thought about modern conflict.

McMaster’s observations are directly relevant to Internet security incident response. We too have been led to believe that data will give us certainty, and we are making the same mistakes that the military did in the 1990s. In a world of uncertainty, there’s a premium on understanding, because commanders need to figure out what’s going on. In a world of certainty, knowing what’s going on becomes a simple matter of data collection.

I see this same fallacy in Internet security. Many companies exhibiting at the RSA Conference promised to collect and display more data and that the data will reveal everything. This simply isn’t true. Data does not equal information, and information does not equal understanding. We need data, but we also must prioritize understanding the data we have over collecting ever more data. Much like the problems with bulk surveillance, the “collect it all” approach provides minimal value over collecting the specific data that’s useful.

In a world of uncertainty, the focus is on execution. In a world of certainty, the focus is on planning. I see this manifesting in Internet security as well. My own Resilient Systems – now part of IBM Security – allows incident response teams to manage security incidents and intrusions. While the tool is useful for planning and testing, its real focus is always on execution.

Uncertainty demands initiative, while certainty demands synchronization. Here, again, we are heading too far down the wrong path. The purpose of all incident response tools should be to make the human responders more effective. They need both the ability to take initiative and the capability to exercise it effectively.

When things are uncertain, you want your systems to be decentralized. When things are certain, centralization is more important. Good incident response teams know that decentralization goes hand in hand with initiative. And finally, a world of uncertainty prioritizes command, while a world of certainty prioritizes control. Again, effective incident response teams know this, and effective managers aren’t scared to release and delegate control.

Like the US military, we in the incident response field have shifted too much into the world of certainty. We have prioritized data collection, preplanning, synchronization, centralization and control. You can see it in the way people talk about the future of Internet security, and you can see it in the products and services offered on the show floor of the RSA Conference.

Automation, too, is fixed: it can only do what it was designed to do in advance. Incident response needs to be dynamic and agile, because you are never certain and there is an adaptive, malicious adversary on the other end. You need a response system that has human controls and can modify itself on the fly. Automation just doesn’t allow a system to do that to the extent that’s needed in today’s environment. Just as the military shifted from trying to replace the soldier to making the best soldier possible, we need to do the same.

For some time, I have been talking about incident response in terms of OODA loops. This is a way of thinking about real-time adversarial relationships, originally developed for airplane dogfights, but much more broadly applicable. OODA stands for observe-orient-decide-act, and it’s what people responding to a cybersecurity incident do constantly, over and over again. We need tools that augment each of those four steps. These tools need to operate in a world of uncertainty, where there is never enough data to know everything that is going on. We need to prioritize understanding, execution, initiative, decentralization and command.
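The observe-orient-decide-act cycle described above can be sketched in code. Everything in this sketch (the event shapes, the "malware" rule, the "isolate" action) is a hypothetical placeholder for illustration, not any real incident-response product's API:

```python
# A minimal OODA-loop skeleton for incident response. The event shapes,
# the "isolate" action, and the malware rule are hypothetical
# placeholders, not any real product's API.

def observe(event_queue):
    """Observe: pull in the next raw event (alert, log line, report)."""
    return [event_queue.pop(0)] if event_queue else []

def orient(events, context):
    """Orient: fold new events into our picture of the incident."""
    for e in events:
        context.setdefault(e["host"], []).append(e["type"])
    return context

def decide(context):
    """Decide: choose responses; humans stay in this step under uncertainty."""
    return [("isolate", host) for host, types in context.items()
            if "malware" in types]

def respond(event_queue):
    """Run the loop until there is nothing left to observe."""
    context, executed = {}, []
    while event_queue:
        context = orient(observe(event_queue), context)
        for action in decide(context):
            if action not in executed:
                executed.append(action)   # Act on each new decision once
    return executed

queue = [{"host": "db1", "type": "malware"}, {"host": "web1", "type": "scan"}]
print(respond(queue))   # [('isolate', 'db1')]
```

The point of the skeleton is that the loop never ends while events keep arriving; the tools augment each step rather than replace the person running it.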

At the same time, we’re going to have to make all of this scale. If anything, the most seductive promise of a world of certainty and automation is that it allows defense to scale. The problem is that we’re not there yet. We can automate and scale parts of IT security, such as antivirus, automatic patching and firewall management, but we can’t yet scale incident response. We still need people. And we need to understand what can be automated and what can’t be.

The word I prefer is orchestration. Security orchestration represents the union of people, process and technology. It’s computer automation where it works, and human coordination where that’s necessary. It’s networked systems giving people understanding and capabilities for execution. It’s making those on the front lines of incident response the most effective they can be, instead of trying to replace them. It’s the best approach we have for cyberdefense.

Automation has its place. If you think about the product categories where it has worked, they’re all areas where we have pretty strong certainty. Automation works in antivirus, firewalls, patch management and authentication systems. None of them is perfect, but all those systems are right almost all the time, and we’ve developed ancillary systems to deal with it when they’re wrong.

Automation fails in incident response because there’s too much uncertainty. Actions can be automated once the people understand what’s going on, but people are still required. For example, IBM’s Watson for Cyber Security provides insights for incident response teams based on its ability to ingest and find patterns in an enormous amount of freeform data. It does not attempt a level of understanding necessary to take people out of the equation.

From within an orchestration model, automation can be incredibly powerful. But it’s the human-centric orchestration model – the dashboards, the reports, the collaboration – that makes automation work. Otherwise, you’re blindly trusting the machine. And when an uncertain process is automated, the results can be dangerous.

Technology continues to advance, and this is all a changing target. Eventually, computers will become intelligent enough to replace people at real-time incident response. My guess, though, is that computers are not going to get there by collecting enough data to be certain. More likely, they’ll develop the ability to exhibit understanding and operate in a world of uncertainty. That’s a much harder goal.

Yes, today, this is all science fiction. But it’s not stupid science fiction, and it might become reality during the lifetimes of our children. Until then, we need people in the loop. Orchestration is a way to achieve that.

This essay previously appeared on the Security Intelligence blog.

A Day in the Life of a Data Center

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/day-life-datacenter-part/

Editor’s note: We’ve reposted this very popular 2016 blog entry because we often get questions about how Backblaze stores data, and we wanted to give you a look inside!

A data center is part of the “cloud,” as in cloud backup, cloud storage, cloud computing, and so on. It is often where your data goes or goes through, once it leaves your home, office, mobile phone, tablet, etc. While many of you have never been inside a data center, chances are you’ve seen one. Cleverly disguised to fit in, data centers are often nondescript buildings with few if any windows and little if any signage. They can be easy to miss. There are exceptions of course, but most data centers are happy to go completely unnoticed.

We’re going to take a look at a typical day in the life of a data center.

Getting Inside A Data Center

As you approach a data center, you’ll notice there isn’t much to notice. There’s no “here’s my datacenter” signage, and the parking lot is nearly empty. You might wonder, “is this the right place?” While larger, more prominent, data centers will have armed guards and gates, most data centers have a call-box outside of a locked door. In either case, data centers don’t like drop-in visitors, so unless you’ve already made prior arrangements, you’re going to be turned away. In short, regardless of whether it is a call-box or an armed guard, a primary line of defense is to know everyone whom you let in the door.

Once inside the building, you’re still a long way from being in the real data center. You’ll start by presenting the proper identification to the guard and filling out some paperwork. Depending on the facility and your level of access, you may have to provide a fingerprint for biometric access/exit confirmation. Eventually, you get a badge or other form of visual identification that shows your level of access. For example, you could have free range of the place (highly doubtful), or be allowed in certain defined areas (doubtful), or need an escort wherever you go (likely). For this post, we’ll give you access to the Backblaze areas in the data center, accompanied of course.

We’re ready to go inside, so attach your badge with your picture on it, get your finger ready to be scanned, and remember to smile for the cameras as you pass through the “box.” While not the only method, the “box” is a widely used security technique that allows one person at a time to pass through a room where they are recorded on video and visually approved before they can leave. Speaking of being on camera, by the time you get to this point, you will have passed dozens of cameras – hidden, visible, behind one-way glass, and so on.

Once past the “box,” you’re in the data center, right? Probably not. Data centers can be divided into areas or blocks each with different access codes and doors. Once out of the box, you still might only be able to access the snack room and the bathrooms. These “rooms” are always located outside of the data center floor. Let’s step inside, “badge in please.”

Inside the Data Center

While every data center is different, there are three things that most visitors notice: the cleanliness, the noise level, and the temperature.

Data Centers are Clean

From the moment you walk into a typical data center, you’ll notice that it is clean. While most data centers are not cleanrooms by definition, they do ensure the environment is suitable for the equipment housed there.

Data center Entry Mats

Cleanliness starts at the door. Mats like this one capture the dirt from the bottom of your shoes. These mats get replaced regularly. As you look around, you might notice that there are no trashcans on the data center floor. As a consequence, the data center staff follows the “whatever you bring in, you bring out” philosophy, sort of like hiking in the woods. Most data centers won’t allow food or drink on the data center floor, either. Instead, one has to leave the datacenter floor to have a snack or use the restroom.

Besides being visually clean, the air in a data center is also amazingly clean: Filtration systems remove particulates down to the sub-micron level. Data center filters have a 99.97% (or higher) efficiency rating in removing 0.3-micron particles. In comparison, your typical home filter provides a 70% sub-micron efficiency level. That might explain the dust bunnies behind your gaming tower.

Data Centers Are Noisy

Data center Noise Levels

The decibel level in a given data center can vary considerably. As you can see, the Backblaze datacenter is between 76 and 78 decibels. This is the level when you are near the racks of Storage Pods. How loud is 78dB? Normal conversation is 60dB, a barking dog is 70dB, and a screaming child is only 80dB. In the US, OSHA has established 85dB as the lower threshold for potential noise damage. Still, 78dB is loud enough that we insist our data center staff wear ear protection on the floor. Their favorite earphones are Bose’s noise reduction models. They are a bit costly but well worth it.
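Those comparisons are easier to appreciate once you remember that the decibel scale is logarithmic: every 10 dB is a tenfold increase in sound intensity. A quick, purely illustrative calculation using the levels quoted above:

```python
# Decibels are logarithmic: +10 dB means 10x the sound intensity.

def intensity_ratio(db_a, db_b):
    """How many times more sound intensity db_a represents than db_b."""
    return 10 ** ((db_a - db_b) / 10)

print(round(intensity_ratio(78, 60)))   # data center floor vs. conversation: 63
print(round(intensity_ratio(85, 78)))   # OSHA threshold vs. the floor: 5
```

In other words, the 78 dB floor carries roughly 63 times the sound intensity of a normal conversation, which is why the ear protection isn't optional.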

The noise comes from a combination of the systems needed to operate the data center: air filtration, heating, cooling, electrical, and other systems. Add in the 6,000 spinning 3-inch fans in the Storage Pods, and you get a lot of noise.

Data Centers Are Hot and Cold

As noted, part of the noise comes from heating and air-conditioning systems, mostly air-conditioning. As you walk through the racks and racks of equipment in many data centers, you’ll alternate between warm aisles and cold aisles. In a typical raised floor data center, cold air rises from vents in the floor in front of each rack. Fans inside the servers in the racks, in our case Storage Pods, pull the air in from the cold aisle and through the server. By the time the air reaches the other side, the warm aisle, it is warmer and is sucked away by vents in the ceiling or above the racks.

There was a time when data centers were like meat lockers with some kept as cold as 55°F (12.8°C). Warmer heads prevailed, and over the years the average temperature has risen to over 80°F (26.7°C) with some companies pushing that even higher. That works for us, but in our case, we are more interested in the temperature inside our Storage Pods and more precisely the hard drives within. Previously we looked at the correlation between hard disk temperature and failure rate. The conclusion: As long as you run drives well within their allowed range of operating temperatures, there is no problem operating a data center at 80°F (26.7°C) or even higher. As for the employees, if they get hot they can always work in the cold aisle for a while and vice-versa.

Getting Out of a Data Center

When you’re finished visiting the data center, remember to leave yourself a few extra minutes to get out. The first challenge is to find your way back to the entrance. If an escort accompanies you, there’s no issue, but if you’re on your own, I hope you paid attention to the way inside. It’s amazing how all the walls and doors look alike as you’re wandering around looking for the exit, and with data centers getting larger and larger the task won’t get any easier. For example, the Switch SUPERNAP datacenter complex in Reno, Nevada will be over 6.4 million square feet, roughly the size of the Pentagon. Having worked in the Pentagon, I can say that finding your way around a facility that large can be daunting. Of course, a friendly security guard is likely to show up to help if you get lost or curious.

On your way back out you’ll pass through the “box” once again for your exit cameo. Also, if you are trying to leave with more than you came in with you will need a fair bit of paperwork before you can turn in your credentials and exit the building. Don’t forget to wave at the cameras in the parking lot.

The post A Day in the Life of a Data Center appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

The Cloud’s Software: A Look Inside Backblaze

Post Syndicated from Peter Cohen original https://www.backblaze.com/blog/the-clouds-software-a-look-inside-backblaze/

When most of us think about “the cloud,” we have an abstract idea that it’s computers in a data center somewhere – racks of blinking lights and lots of loud fans. There’s truth to that. Have a look inside our datacenter to get an idea. But besides the impressive hardware – and the skilled techs needed to keep it running – there’s software involved. Let’s take a look at a few of the software tools that keep our operation working.

Our data center is populated with Storage Pods, the servers that hold the data you entrust to us if you’re a Backblaze customer or you use B2 Cloud Storage. Inside each Storage Pod are dozens of 3.5-inch spinning hard disk drives – the same kind you’ll find inside a desktop PC. Storage Pods are mounted on racks inside the data center. Those Storage Pods work together in Vaults.

Vault Software

The Vault software that keeps those Storage Pods humming is one of the backbones of our operation. It’s what makes it possible for us to scale our services to meet your needs while delivering durability, scalability, and fast performance.

The Vault software spreads data evenly across 20 different Storage Pods. Drives in the same position inside each Storage Pod are grouped together in software in what we call a “tome.” When a file gets uploaded to Backblaze, it’s split into pieces we call “shards” and distributed across all 20 drives of a tome.

Each file is stored as 20 shards: 17 data shards and three parity shards. As the name implies, the data shards comprise the information in the files you upload to Backblaze. Parity shards add redundancy so that a file can be completely restored from a Vault even if some of the pieces are not available.

Because those shards are distributed across 20 Storage Pods in 20 cabinets, a Storage Pod can go down and the Vault will still operate unimpeded. An entire cabinet can lose power and the Vault will still work fine.

Files can still be written to the Vault even when a Storage Pod is down; in that case, two parity shards protect the data instead of three. Even in the extreme — and unlikely — case where three Storage Pods in a Vault are offline, the files in the Vault are still available because they can be reconstructed from the 17 available shards.

Reed-Solomon Erasure Coding

Erasure coding makes it possible to rebuild a data file even if parts of the original are lost. Having effective erasure coding is vital in a distributed environment like a Backblaze Vault. It helps us keep your data safe even when the hardware that the data is stored on needs to be serviced.

We use Reed-Solomon erasure encoding. It’s a proven technique used in Linux RAID systems, by Microsoft in its Azure cloud storage, and by Facebook too. The Backblaze Vault Architecture is capable of delivering 99.99999% annual durability thanks in part to our Reed-Solomon erasure coding implementation.
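A back-of-the-envelope way to see where durability numbers like that come from: with 17-of-20 coding, a file is lost only if four or more of its 20 shards fail before they can be repaired. Under a simple independent-failure model that probability is a binomial tail sum. The per-shard failure probability below is a made-up illustrative number, not Backblaze's actual figure:

```python
# Rough durability math for a 17-of-20 erasure-coding scheme.
# p_shard is an invented illustrative input, not a real measured rate.
from math import comb

def p_file_loss(n=20, k=17, p_shard=0.001):
    """P(more than n-k of n shards fail), assuming independent failures."""
    return sum(comb(n, i) * p_shard**i * (1 - p_shard)**(n - i)
               for i in range(n - k + 1, n + 1))

print(p_file_loss())   # roughly 5e-9 with these made-up inputs
```

Even with a 0.1% chance of any given shard failing in a repair window, needing four simultaneous failures drives the loss probability down into the billionths — which is the shape of the argument behind "lots of nines" durability claims.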

Here’s our own Brian Beach with an explanation of how Reed-Solomon encoding works:

We threw out the Linux RAID software we had been using prior to the implementation of the Vaults and wrote our own Reed-Solomon implementation from scratch. We’re very proud of it. So much so that we’ve released it as open source that you can use in your own projects, if you wish.

We developed our Reed-Solomon implementation as a Java library. Why? When we first started this project, we assumed that we would need to write it in C to make it run as fast as we needed. It turns out that the modern Java virtual machines running on our servers are great, and their just-in-time compilers produce code that runs pretty quickly.

All the work we’ve done to build a reliable, scalable, affordable solution for storing data in a “cloud” led to the creation of B2 Cloud Storage. B2 lets you store your data in the cloud for a fraction of what you’d spend elsewhere – 1/4 the price of Amazon S3, for example.

Using Our Storage

Having over 300 Petabytes of data storage available isn’t very useful unless we can store data and reliably restore it too. We offer two ways to store data with Backblaze: via a client application or via direct access. Our client application, Backblaze Computer Backup, is installed on your Mac or Windows system and basically does everything related to automatically backing up your computer. We locate the files that are new or changed and back them up. We manage versions, deduplicate files, and more. The Backblaze app does all the work behind the scenes.
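One common way a backup client finds "new or changed" files is to keep an index of content hashes and re-upload only what differs from the index. The sketch below is a simplification for illustration — it is not Backblaze's actual algorithm, and the function names are invented:

```python
# Illustrative change detection via content hashing. This is a
# simplified sketch, not Backblaze's actual backup algorithm.
import hashlib
import os

def file_digest(path):
    """SHA-256 of a file, read in 1 MB chunks so large files stay cheap."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def files_to_back_up(root, known):
    """Yield paths whose content changed since the last pass over `known`."""
    for dirpath, _, names in os.walk(root):
        for name in sorted(names):
            path = os.path.join(dirpath, name)
            digest = file_digest(path)
            if known.get(path) != digest:
                known[path] = digest   # remember the new state
                yield path
```

Hashing also gives deduplication for free: two files with the same digest only need to be stored once, whatever their names.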

The other way to use our storage is via direct access. You can use a Web GUI, a Command Line Interface (CLI) or an Application Programming Interface (API). With any of these methods, you are in charge of what gets stored in the Backblaze cloud. This is what Backblaze B2 is all about. You can log into B2 and use the Web GUI to drag and drop files that are stored in the Backblaze cloud. You decide what gets added and deleted, and how many versions of a file you want to keep. Think of B2 as your very own bucket in the cloud where you can store your files.

We also have mobile apps for iOS and Android devices to help you view and share any backed up files you have on the go. You can download them, play back or view media files, and share them as you need.

We focused on creating a native, integrated experience for you when you use our software. We didn’t take a shortcut to create a Java app for the desktop. On the Mac our app is built using Xcode, and on the PC it’s built using C. The app is designed for lightweight, unobtrusive performance. If you do need to adjust its performance, we give you that ability. You have control over throttling the backup rate. You can even adjust the number of CPU threads dedicated to Backblaze, if you choose.

When we first released the software almost a decade ago we had no idea that we’d iterate it more than 1,000 times. That’s the threshold we reached late last year, however! We released version 4.3.0 in December. We’re still plugging away at it and have plans for the future, too.

Our Philosophy: Keep It Simple

“Keep It Simple” is the philosophy that underlies all of the technology that powers our hardware. It makes it possible for you to affordably, reliably back up your computers and store data in the cloud.

We’re not interested in creating elaborate, difficult-to-implement solutions or pricing schemes that confuse and confound you. Our backup service is unlimited and unthrottled for one low price. We offer cloud storage for 1/4 the price of the competition. And we make it easy to access with desktop, mobile and web interfaces, command line tools and APIs.

Hopefully we’ve shed some light on the software that lets our cloud services operate. Have questions? Join the discussion and let us know.

The post The Cloud’s Software: A Look Inside Backblaze appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

BitChute is a BitTorrent-Powered YouTube Alternative

Post Syndicated from Andy original https://torrentfreak.com/bitchute-is-a-bittorrent-powered-youtube-alternative-170129/

YouTube attracts over a billion visitors every month, with many flocking to the platform to view original content uploaded by thousands of contributors. However, those contributors aren’t completely free to upload and make money from whatever they like.

Since it needs to please its advertisers, YouTube has rules in place over what kind of content can be monetized, something which caused a huge backlash last year alongside claims of censorship.

But what if there was an alternative to YouTube, one that doesn’t impose the same kinds of restrictions on uploaders? Enter BitChute, a BitTorrent-powered video platform that seeks to hand freedom back to its users.

“The idea comes from seeing the increased levels of censorship by the large social media platforms in the last couple of years. Bannings, demonetization, and tweaking algorithms to send certain content into obscurity, and wanting to do something about it,” BitChute founder Ray Vahey informs TorrentFreak.

“I knew building a clone wasn’t the answer, many have tried and failed. And it would inevitably grow into an organization with the same problems anyway.”

As seen in the image below, the site has a familiar layout for anyone used to YouTube-like video platforms. It has similar video controls, view counts, and the ability to vote on content. It also has a fully-functioning comment section.


Of course, one of the main obstacles for video content hosting platforms is the obscene amounts of bandwidth they consume. Any level of success is usually accompanied by big hosting bills. But along with its people-powered philosophy, BitChute does things a little differently.

Instead of utilizing central servers, BitChute uses WebTorrent, a system which allows people to share videos directly from their browser, without having to configure or install anything. Essentially this means that the site’s users become hosts of the videos they’re watching, which slams BitChute’s hosting costs into the ground.

“Distributed systems and WebTorrent invert the scalability advantage the Googles and Facebooks have. The bigger our user base grows, the more efficiently it can serve while retaining the simplicity of the web browser,” Vahey says.

“Also by the nature of all torrent technology, we are not locking users into a single site, and they have the choice to retain and continue sharing the files they download. That puts more power back in the hands of the consumer where it should be.”

The only hints that BitChute is using peer-to-peer technology are the peer counts under each video and a short delay before a selected video begins to play. This is necessary for the system to find peers but thankfully it isn’t too intrusive.

As far as we know, BitChute is the first attempt at a YouTube-like platform that leverages peer-to-peer technology. It’s only been in operation for a short time but according to its founder, things are going well.

“As far as I could tell, no one had yet run with this idea as a service, so that’s what myself and a few like-minded people decided. To put it out there and see what people think. So far it’s been an amazingly positive response from people who understand and agree with what we’re doing,” Vahey explains.

“Just over three weeks ago we launched with limited upload access on a first come first served basis. We are flat out busy working on the next version of the site; I have two other co-founders based out of the UK who are supporting me, watch this space,” he concludes.

Certainly, people will be cheering the team on. Last September, popular YouTuber Bluedrake experimented with WebTorrent to distribute his videos after becoming frustrated with YouTube’s policies.

“All I want is a site where people can say what they want,” he said at the time. “I want a site where people can operate their business without having somebody else step in and take away their content when they say something they don’t like.”

For now, BitChute is still under development, but so far it has impressed Feross Aboukhadijeh, the Stanford University graduate who invented WebTorrent.

“BitChute is an exciting new product,” he told TF this week. “This is exactly the kind of ‘people-powered’ website that WebTorrent technology was designed to enable. I’m eager to see where the team takes it.”

BitChute can be found here.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

The Raspberry Pi Foundation’s Digital Making Curriculum

Post Syndicated from Carrie Anne Philbin original https://www.raspberrypi.org/blog/digital-making-curriculum/

At Raspberry Pi, we’re determined in our ambition to put the power of digital making into the hands of people all over the world: one way we pursue this is by developing high-quality learning resources to support a growing community of educators. We spend a lot of time thinking hard about what you can learn by tinkering and making with a Raspberry Pi, and other devices and platforms, in order to become skilled in computer programming, electronics, and physical computing.

Now, we’ve taken an exciting step in this journey by defining our own digital making curriculum that will help people everywhere learn new skills.

A PDF version of the curriculum is also available to download.

Who is it for?

We have a large and diverse community of people who are interested in digital making. Some might use the curriculum to help guide and inform their own learning, or perhaps their children’s learning. People who run digital making clubs at schools, community centres, and Raspberry Jams may draw on it for extra guidance on activities that will engage their learners. Some teachers may wish to use the curriculum as inspiration for what to teach their students.

Raspberry Pi produces an extensive and varied range of online learning resources and delivers a huge teacher training program. In creating this curriculum, we have produced our own guide that we can use to help plan our resources and make sure we cover the broad spectrum of learners’ needs.


Learning anything involves progression. You start with certain skills and knowledge and then, with guidance, practice, and understanding, you gradually progress towards broader and deeper knowledge and competence. Our digital making curriculum is structured around this progression, and in representing it, we wanted to avoid the age-related and stage-related labels that are often associated with a learner’s progress and the preconceptions these labels bring. We came up with our own, using characters to represent different levels of competence, starting with Creator and moving onto Builder and Developer before becoming a Maker.

Progress through our curriculum and become a digital maker


We want to help people to make things so that they can become the inventors, creators, and makers of tomorrow. Digital making, STEAM, project-based learning, and tinkering are at the core of our teaching philosophy, which can be summed up simply as ‘we learn best by doing’.

We’ve created five strands which we think encapsulate key concepts and skills in digital making: Design, Programming, Physical Computing, Manufacture, and Community and Sharing.

Computational thinking

One of the Raspberry Pi Foundation’s aims is to help people to learn about computer science and how to make things with computers. We believe that learning how to create with digital technology will help people shape an increasingly digital world, and prepare them for the work of the future.

Computational thinking is at the heart of the learning that we advocate. It’s the thought process that underpins computing and digital making: formulating a problem and expressing its solution in such a way that a computer can effectively carry it out. Computational thinking covers a broad range of knowledge and skills including, but not limited to:

  • Logical reasoning
  • Algorithmic thinking
  • Pattern recognition
  • Abstraction
  • Decomposition
  • Debugging
  • Problem solving

By progressing through our curriculum, learners will develop computational thinking skills and put them into practice.
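As a concrete illustration, even a tiny program exercises several of the skills listed above at once. This example (ours, not part of the Foundation's curriculum materials) decomposes the question "what is the most common word in some text?" into small, computable steps:

```python
# A small worked example of computational thinking: find the most
# common word in a text, decomposed into the kinds of steps above.

def most_common_word(text):
    # Decomposition: break the problem into smaller steps.
    words = text.lower().split()          # Abstraction: ignore case and layout
    counts = {}
    for word in words:                    # Algorithmic thinking: count each word
        counts[word] = counts.get(word, 0) + 1
    # Pattern recognition: "find the key with the largest value".
    return max(counts, key=counts.get)

print(most_common_word("the cat and the hat"))   # prints "the"
```

Debugging, the remaining skill, shows up the moment the function meets real text (what about punctuation?) — which is exactly the kind of tinkering the curriculum encourages.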

What’s not on our curriculum?

If there’s one thing we learned from our extensive work in formulating this curriculum, it’s that no two educators or experts can agree on the best approach to progression and learning in the field of digital making. Our curriculum is intended to represent the skills and thought processes essential to making things with technology. We’ve tried to keep the headline outcomes as broad as possible, and then provide further examples as a guide to what could be included.

Our digital making curriculum is not intended to be a replacement for computer science-related curricula around the world, such as the ‘Computing Programme of Study’ in England or the ‘Digital Technologies’ curriculum in Australia. We hope that following our learning pathways will support the study of formal curricular and exam specifications in a fun and tangible way. As we continue to expand our catalogue of free learning resources, we expect our curriculum will grow and improve, and your input into that process will be vital.

Get involved

We’re proud to be part of a movement that aims to empower people to shape their world through digital technologies. We value the support of our community of makers, educators, volunteers, and enthusiasts. With this in mind, we’re interested to hear your thoughts on our digital making curriculum. Add your feedback to this form, or talk to us at one of the events that Raspberry Pi will attend in 2017.

The post The Raspberry Pi Foundation’s Digital Making Curriculum appeared first on Raspberry Pi.

Backblaze Begins: Celebrating A Very Special Anniversary

Post Syndicated from Peter Cohen original https://www.backblaze.com/blog/backblaze-begins/

2017 is a milestone year for us at Backblaze: This year is our tenth anniversary. Our official birthday – based on our papers of incorporation – places that date in April. But January 15th was another important date: It’s when the work on what would become Backblaze first started in earnest.

Brian Wilson, our intrepid CTO and CFO, began coding Backblaze full-time on January 15, 2007, in a corner of his living room. Brian had already been coding for a couple of decades, and like our other founders is a serial entrepreneur. But Backblaze was a bit more personal.


Brian is known by family and friends to have a pretty elaborate and extensive – some might even say a bit insane – backup regimen. That expertise would occasionally lead to questions about data backup and recovery. Then Jeanine’s computer crashed. She begged Brian for help to recover her files. But without a backup to restore from, Brian couldn’t help.

While Brian felt bad for Jeanine, he quickly realized that this was just the beginning as photos, movies, and personal and work documents were all going digital. Purely digital. And like Jeanine, most people were not backing up their computers. The amount of data that was at risk of being lost was staggering.

Why were so few people backing up their computers? Managing backups can be difficult, especially for people who don’t live on their computers all the time, but still rely on them. And while there were already online backup services, they were confusing, expensive, and required you to select the files you wanted to back up.

So we started Backblaze to solve that impending disaster. Our guiding philosophy is that backups should be easy, automatic, and affordable.

Backblaze caught fire very quickly, if you’ll pardon the pun. By early February we had already begun stringing together a Gigabit Ethernet network (in Brian’s home) to connect the servers and other gear we’d need to get Backblaze up and running. By early April we had working code. That’s when we called the lawyers and got the letters of incorporation rolling, which is why our “official” birthday is in April. April 20th, 2007 in fact.

So why did we call this new venture Backblaze? Back in 2001, Brian had incorporated “Codeblaze” for his consulting business. “Backblaze” was a nod to that previous effort, with an acknowledgment that this new project would focus on backups.

4/20 is familiar to marijuana enthusiasts everywhere, and our birthday is a source of occasional mirth and merriment to those in the know. Sorry to be a buzzkill, but it’s totally coincidental: the great state of Delaware decided on our incorporation date.

It’s interesting how one great idea can lead to another. When Backblaze was still coming together, we thought we’d store backup data using an existing cloud storage service. So we went to Amazon S3. We found that S3 was way too expensive for us. Heck, it’s still too expensive, years later.

So we decided to roll our own storage instead. We called it the Storage Pod. We really liked the idea – so much that we decided to open-source the design so you can build your own if you want to. We’ve iterated it many times since then, most recently with our Storage Pod 6.0. We’re now putting 60 drives in each rack-mounted chassis in our data center, for a total of 480 Terabytes of storage in each one.

As our work with Storage Pods continued and as we built out our data center, we realized that we could offer cloud storage to our customers for 1/4 the price that Amazon does. And that’s what led us to start B2 Cloud Storage. You’ve enthusiastically adopted B2, along with a growing cadre of integration partners.

Ten years later, we’re astonished, humbled and thrilled with what we’ve accomplished. We’ve restored more than 20 billion files, and the Storage Pods in our data center now hold more than 300 Petabytes of data. The future is bright and the sky’s the limit. We can’t wait to see what the next ten years have in store.

The post Backblaze Begins: Celebrating A Very Special Anniversary appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Top Spotify Lawyer: Attracting Pirates is in Our DNA

Post Syndicated from Andy original https://torrentfreak.com/top-spotify-lawyer-attracting-pirates-is-in-our-dna-161226/

Almost eight years ago, and just months after its release, TF published an article which pondered whether the fledgling Spotify service could become a true alternative to Internet piracy.

From the beginning, one of the key software engineers at Spotify has been Ludvig Strigeus, the creator of uTorrent, so clearly the company already knew a lot about file-sharers. In the early days the company was fairly open about its aim to provide an alternative to piracy, but perhaps one of the earliest indications of growing success came when early invites were shared among users of private torrent sites.

Today Spotify is indeed huge. The service has an estimated 100 million users, many of them taking advantage of its ad-supported free tier. This is the gateway for many subscribers, including millions of former and even current pirates who augment their sharing with the desirable service.

Over the years, Spotify has made no secret of its desire to recruit more pirates to its service. In 2014, Spotify Australia managing director Kate Vale said it was one of their key aims.

“People that are pirating music and not paying for it, they are the ones we want on our platform. It’s important for us to be reaching these individuals that have never paid for music before in their life, and get them onto a service that’s legal and gives money back to the rights holders,” Vale said.

Now, in a new interview with The Journal on Sports and Entertainment Law, General Counsel of Spotify Horacio Gutierrez reveals just how deeply this philosophy runs in the company. It’s absolutely fundamental to its being, he explains.

“One of the things that inspired the creation of Spotify and is part of the DNA of the company from the day it launched (and remember the service was launched for the first time around 8 years ago) was addressing one of the biggest questions that everyone in the music industry had at the time — how would one tackle and combat online piracy in music?” Gutierrez says.

“Spotify was determined from the very beginning to provide a fully licensed, legal alternative for online music consumption that people would prefer over piracy.”

The signs that this just might be possible came very early on. Just months after Spotify’s initial launch the quality of its service was celebrated on what was to become the world’s best music torrent site, What.cd.

“Honestly it’s going to be huge,” a What.cd user predicted in 2008.

“I’ve been browsing and playing from its seemingly endless music catalogue all afternoon, it loads as if it’s playing from local files, so fast, so easy. If it’s this great in such early beta stages then I can’t imagine where it’s going. I feel like buying another laptop to have permanently rigged.”

Of course, hardcore pirates aren’t always easily encouraged to part with their cash, so Spotify needed an equivalent to the no-cost approach of many torrent sites. That is still being achieved today via its ad-supported entry level, Gutierrez says.

“I think one just has to look at data to recognize that the freemium model for online music consumption works. Our free tier is a key to attracting users away from online piracy, and Spotify’s success is proof that the model works.

“We have data around the world that shows that it works, that in fact we are making inroads against piracy because we offer an ability for those users to have a better experience with higher quality content, a richer catalogue, and a number of other user-minded features that make the experience much better for the user.”

Spotify’s general counsel says that the company is enjoying success, not only by bringing pirates onboard, but also by converting them to premium customers via a formula that benefits everyone in the industry.

“If you look at what has happened since the launch of the Spotify service, we have been incredibly successful on that score. Figures coming out of the music industry show that after 15 years of revenue losses, the music industry is once again growing thanks to music streaming,” he concludes.

With the shutdown of What.cd in recent weeks, it’s likely that former users will be considering the Spotify option again this Christmas, if they aren’t customers already.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Amazon ECS Service Auto Scaling Enables Rent-A-Center SAP Hybris Solution

Post Syndicated from Chris Barclay original https://aws.amazon.com/blogs/compute/amazon-ecs-service-auto-scaling-enables-rent-a-center-sap-hybris-solution/

This is a guest post from Troy Washburn, Sr. DevOps Manager @ Rent-A-Center, Inc., and Ashay Chitnis, Flux7 architect.


Rent-A-Center in their own words: Rent-A-Center owns and operates more than 3,000 rent-to-own retail stores for name-brand furniture, electronics, appliances and computers across the US, Canada, and Puerto Rico.

Rent-A-Center (RAC) wanted to roll out an ecommerce platform that would support the entire online shopping workflow using SAP’s Hybris platform. The goal was to implement a cloud-based solution with a cluster of Hybris servers which would cater to online web-based demand.

The challenge: to run the Hybris clusters in a microservices architecture. A microservices approach has several advantages including the ability for each service to scale up and down to meet fluctuating changes in demand independently. RAC also wanted to use Docker containers to package the application in a format that is easily portable and immutable. There were four types of containers necessary for the architecture. Each corresponded to a particular service:

1. Apache: Received requests from the external Elastic Load Balancing load balancer. Apache was used to set certain rewrite and HTTP proxy rules.
2. Hybris: An external Tomcat was the frontend for the Hybris platform.
3. Solr Master: A product indexing service for quick lookup.
4. Solr Slave: Replication of master cache to directly serve product searches from Hybris.

To deploy the containers in a microservices architecture, RAC and AWS consultants at Flux7 started by launching Amazon ECS resources with AWS CloudFormation templates. Running containers on ECS requires the use of three primary resources: clusters, services, and task definitions. Each container refers to its task definition for the container properties, such as CPU and memory. And, each of the above services stored its container images in Amazon ECR repositories.
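As a rough illustration of how those three resources relate in a CloudFormation template, here is a minimal JSON sketch. The logical IDs, ECR image URI, and CPU/memory sizes below are placeholders for illustration, not RAC’s actual values:

```json
{
  "HybrisTaskDefinition": {
    "Type": "AWS::ECS::TaskDefinition",
    "Properties": {
      "ContainerDefinitions": [
        {
          "Name": "hybris",
          "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/hybris:latest",
          "Cpu": 1024,
          "Memory": 4096,
          "PortMappings": [{ "ContainerPort": 8080 }]
        }
      ]
    }
  },
  "HybrisService": {
    "Type": "AWS::ECS::Service",
    "Properties": {
      "Cluster": { "Ref": "EcsCluster" },
      "TaskDefinition": { "Ref": "HybrisTaskDefinition" },
      "DesiredCount": 2
    }
  }
}
```

Each of the four container types described above would get its own task definition and service following this same pattern, with the task definition carrying the per-container CPU and memory properties.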

This post describes the architecture that we created and implemented.

Auto Scaling

At first glance, scaling on ECS can seem confusing. But the Flux7 philosophy is that complex systems only work when they are a combination of well-designed simple systems that break the problem down into smaller pieces. The key insight that helped us design our solution was understanding that there are two very different scaling operations happening. The first is the scaling up of individual tasks in each service and the second is the scaling up of the cluster of Amazon EC2 instances.

During implementation, the AWS team released Service Auto Scaling, so we researched how to incorporate task scaling into the existing solution. As we were implementing the solution through AWS CloudFormation, task scaling needed to be done the same way. However, the new scaling feature was not yet available through CloudFormation, so the natural course was to implement it using AWS Lambda-backed custom resources.

The corresponding Lambda function is implemented in Node.js 4.3, and automatic scaling is driven by monitoring the CPUUtilization Amazon CloudWatch metric. The ECS scaling policies are registered with CloudWatch alarms that are triggered when specific thresholds are crossed. Similarly, by using the MemoryUtilization CloudWatch metric, ECS services can be made to scale in and out as well.
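To make the thresholding concrete, here is a hedged JavaScript sketch of the decision logic such an alarm-driven policy encodes. The 70%/25% thresholds and the step size of one task are illustrative values we chose for this sketch, not the ones used in RAC’s deployment:

```javascript
// Decide a scaling action for an ECS service from a CloudWatch metric sample.
// A high-CPU alarm triggers scale-out, a low-CPU alarm triggers scale-in,
// and the desired count is clamped to the [minCount, maxCount] range.
const SCALE_OUT_THRESHOLD = 70; // percent CPUUtilization (illustrative)
const SCALE_IN_THRESHOLD = 25;  // percent CPUUtilization (illustrative)

function decideScaling(cpuUtilization, desiredCount, minCount, maxCount) {
  if (cpuUtilization > SCALE_OUT_THRESHOLD && desiredCount < maxCount) {
    return { action: 'scale-out', newDesiredCount: desiredCount + 1 };
  }
  if (cpuUtilization < SCALE_IN_THRESHOLD && desiredCount > minCount) {
    return { action: 'scale-in', newDesiredCount: desiredCount - 1 };
  }
  return { action: 'none', newDesiredCount: desiredCount };
}

// Example: 85% CPU with 2 running tasks (bounds 1..10) scales out to 3 tasks.
console.log(decideScaling(85, 2, 1, 10));
```

In the real custom resource, the output of a decision like this would be applied by updating the service’s desired count; swapping the metric for MemoryUtilization leaves the structure unchanged.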

The Lambda function and CloudFormation custom resource JSON are available in the Flux7 GitHub repository: https://github.com/Flux7Labs/blog-code-samples/tree/master/2016-10-ecs-enables-rac-sap-hybris

Scaling ECS services and EC2 instances automatically

The key to understanding cluster scaling is to start by understanding the problem. We are no longer running a homogeneous workload in a simple environment. We have a cluster hosting a heterogeneous workload with different requirements and different demands on the system.

This clicked for us after we phrased the problem as, “Make sure the cluster has enough capacity to launch ‘x’ more instances of a task.” This led us to realize that we were no longer looking at an overall average resource utilization problem, but rather a discrete bin packing problem.

The problem is inherently more complex. (Anyone remember from algorithms class how the discrete knapsack problem is NP-hard, while the continuous knapsack problem can easily be solved in polynomial time? Same thing.) So we have to check, for each individual instance, whether a particular task can be scheduled on it; if the cluster as a whole cannot accommodate the required number of additional tasks, we need to allocate more instance capacity.
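This capacity check can be sketched as a first-fit feasibility test. The JavaScript below is a simplification we wrote for this post (the field names and the first-fit strategy are assumptions, not RAC’s production code); it counts how many more copies of a task the cluster can still schedule:

```javascript
// First-fit feasibility check: how many more copies of `task` fit on the
// cluster, given each instance's remaining CPU units and memory (MiB)?
// This mirrors the "discrete bin packing" view: tasks are indivisible items.
function schedulableTaskCount(instances, task) {
  // Work on a copy so we can subtract capacity as we "place" tasks.
  const remaining = instances.map(i => ({ cpu: i.cpu, memory: i.memory }));
  let placed = 0;
  let fitted = true;
  while (fitted) {
    fitted = false;
    for (const inst of remaining) {
      if (inst.cpu >= task.cpu && inst.memory >= task.memory) {
        inst.cpu -= task.cpu;
        inst.memory -= task.memory;
        placed++;
        fitted = true;
        break;
      }
    }
  }
  return placed;
}

// Scale out when the cluster can't fit `headroom` more copies of the task.
function needsMoreInstances(instances, task, headroom) {
  return schedulableTaskCount(instances, task) < headroom;
}
```

Note how this captures the “discrete, not average” point: two instances with 300 CPU units free each have 600 units in total, yet a 400-unit task fits on neither, so the average looks healthy while the schedulable count is zero.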

To ensure that ECS scaling always has enough resources to scale out and has just enough resources after scaling in, it was necessary that the Auto Scaling group scales according to three criteria:

1. ECS task count in relation to the host EC2 instance count in a cluster
2. Memory reservation
3. CPU reservation

We implemented the first criterion for the Auto Scaling group. Instead of using the default scaling abilities, we set up scaling in and out using Lambda functions triggered periodically by a combination of AWS::Lambda::Permission and AWS::Events::Rule resources, as we wanted specific criteria for scaling.
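For reference, the periodic trigger can be wired up in CloudFormation roughly as follows. This JSON is a sketch under assumed logical IDs (ScalerFunction is a placeholder for the scaling Lambda), and the five-minute rate is illustrative:

```json
{
  "ScalerScheduleRule": {
    "Type": "AWS::Events::Rule",
    "Properties": {
      "ScheduleExpression": "rate(5 minutes)",
      "Targets": [
        { "Id": "ClusterScaler", "Arn": { "Fn::GetAtt": ["ScalerFunction", "Arn"] } }
      ]
    }
  },
  "ScalerInvokePermission": {
    "Type": "AWS::Lambda::Permission",
    "Properties": {
      "Action": "lambda:InvokeFunction",
      "FunctionName": { "Ref": "ScalerFunction" },
      "Principal": "events.amazonaws.com",
      "SourceArn": { "Fn::GetAtt": ["ScalerScheduleRule", "Arn"] }
    }
  }
}
```

The rule invokes the function on a schedule, and the permission allows CloudWatch Events to perform that invocation.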

The Lambda function is available in the Flux7 GitHub repository: https://github.com/Flux7Labs/blog-code-samples/tree/master/2016-10-ecs-enables-rac-sap-hybris

Future versions of this piece of code will incorporate the other two criteria along with the ability to use CloudWatch alarms to trigger scaling.


Using advanced ECS features like Service Auto Scaling in conjunction with Lambda to meet RAC’s business requirements, RAC and Flux7 were able to Dockerize SAP Hybris in production for the first time ever.

Further, ECS and CloudFormation give users the ability to implement robust solutions while still providing the ability to roll back in case of failures. With ECS as a backbone technology, RAC has been able to deploy a Hybris setup with automatic scaling, self-healing, one-click deployment, CI/CD, and PCI compliance consistent with the company’s latest technology guidelines and meeting the requirements of their newly-formed culture of DevOps and extreme agility.

If you have any questions or suggestions, please comment below.

Noisia Handle Their Album Leak Without Blaming Fans

Post Syndicated from Andy original https://torrentfreak.com/noisia-handle-their-album-leak-without-blaming-fans-160806/

For close to 20 years, online piracy has been the bane of the music industry. Starting off relatively small due to limited-speed Internet connections, the phenomenon boomed as broadband took hold.

In the years that followed, countless millions of tracks were shared among like-minded Internet users eager to keep up with their favorite artists and to discover those they never knew existed.

But with 2016 almost two-thirds done, legal music availability has never been better and the excuses for obtaining all content illegally are slowly disappearing over the horizon, for those who can afford it at least.

Nevertheless, one type of piracy refuses to go away, no matter how well-off the consumer. By definition, pre-release music is officially unavailable to buy, so money is completely out of the equation. Such leaks are the ultimate forbidden fruit for hardcore fans of all standing, including those most likely to shell out for the real thing.

This means that the major labels regularly cite pre-release leaks as the most damaging form of piracy, as they are unable to compete with the unauthorized copies already available. Release plans and marketing drives are planned and paid for in advance, and record companies are reluctant to change them.

As a result, such leaks are often followed by extreme outbursts from labels which slam leakers and downloaders alike. That’s perhaps understandable but there are better ways of getting the message over without attacking fans. Last week, drum and bass, dubstep, breakbeat and house stars Noisia showed how it should be done after their long-awaited album Outer Edges leaked online six weeks early.

“Friday night, while we were in the final minutes of setting up the stage for our first ever Outer Edges show, we received the news that our album had been leaked. We think you can imagine how bad we felt at that moment,” the trio told fans on Facebook.

“We realize it’s 2016, and things like these happen all the time. Still, it’s quite a setback. All the plans we’ve made have to be scrapped and replaced by something less ideal, because we have to react to this unfortunate situation.”

In the DnB scene Noisia are absolutely huge but despite their success have chosen to stay close to their fans. The trio run their own label (Vision Recordings) so have more control than many artists, thus allowing them to give fans unprecedented access to their music.

“So far in releasing this album, we’ve made a conscious effort to make every track that’s available somewhere, available everywhere. If you pre-ordered the album on iTunes or our webstore, you received the new track the same day it was first premiered on Soundcloud,” Noisia continue.

“We believe that users of all platforms should be able to listen to our music pretty much the minute it’s available anywhere else. In this philosophy, the availability of our whole album on illegal download sites means that we have to make it available on all platforms.”

So, instead of going off on a huge rant, Noisia leapt into action.

“We have immediately changed all our previous plans and made the whole album available to buy on our web store right now,” the band announced on the day of the leak.

Sadly, other digital platforms couldn’t move as quickly, so most only got the release on Friday. Physical products couldn’t be moved either due to production limitations, so they will appear on the original schedule.

“Even though we are unhappy about this leak, we’re still really happy with the music. We really hope you will enjoy it as much as we’ve enjoyed making it,” Noisia said.

With this reaction to what must’ve been a hugely disappointing leak, Noisia showed themselves to be professionals. No crying, no finger pointing, just actions to limit the impact of the situation alongside a decision to compete with free, both before and after the leak.

Nevertheless, Thijs, Martijn and Nik are human after all, so they couldn’t resist taking a little shot at whoever leaked their music. No threats of lawsuits of course, but a decent helping of sarcasm and dark humor.

“Thanks to whoever leaked our album, next time please do it after the album is out, maybe we can coordinate? Oh wait, that wouldn’t really be leaking… And besides, we don’t negotiate with terror..leaking persons,” they said.

“No, instead we will fold, and adjust our entire strategy. Take that! We hope you get stung by a lot of mosquitoes this summer, and maybe also next summer.”

Noisia’s next album will probably take years to arrive (Outer Edges took six years) but when it does, don’t expect to be warned months in advance. The trio are already vowing to do things differently next time, so expect the unexpected.

Noisia make a lot of their music available for free on Soundcloud, YouTube and ad-supported Spotify.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Really? Buy a Pirate TV Box, Get a Free Cruise?

Post Syndicated from Andy original https://torrentfreak.com/reallybuy-a-pirate-tv-box-get-a-free-cruise-160703/

Advertising and marketing efforts are all around us. Every waking hour of every day, someone somewhere tries to get us interested in their latest product or service.

While TV and radio have dominated over the years, increasingly the Internet is the go-to platform for companies determined to portray their product as the next big thing.

The Internet has many great qualities and for those looking to do something a bit different or wild, its unregulated nature means you can do whatever the hell you want. Or at the least shoot first, worry about the consequences later.

That appears to be the philosophy of the company behind the Z Stream Box, the next ‘big’ thing in audio-visual consumption. Promoted via a glossy website and numerous online videos, the set-top Z Stream Box aims to fulfil the dreams of every movie, TV show, sports and music junkie.

“Get the Biggest Shows, the Latest Movies, Stream the biggest blockbusters here first. Watch every episode of your favorite shows, past and present, Live and on demand. Enjoy the latest series and specials as they premiere without waiting,” the advertising reads.

“Break free of annual contracts, surprise fees and TV that ties you down. With Z Stream Box® get the TV you love over 100 of your favorite channels, Hit movies, Documentaries, Sports and more! NO contracts and NO monthly payments ever.”

While these kinds of claims are usually the sole preserve of pirate devices, there are various indicators on the Z Stream site suggesting that this must be a legitimate offer. Firstly, it has celebrity endorsements. Here’s a nice image of singer, songwriter and actress Christina Milian enjoying the device.


And to make sure that the cord-cutting phenomenon resonates with the younger generation, here’s YouTube star Jordyn Jones holding a Z Stream Box and looking surprised at how much it can do.


For those who haven’t already guessed, the basic premise of the Z Stream Box is that people can stop paying their expensive cable bills and get all their content online. It’s an Internet sensation!

Actually, let’s cut the nonsense. Z Stream Box is nothing more than a Kodi-enabled Android box with all the best pirate addons such as Genesis and Icefilms fully installed.


While that probably isn’t much of a shock by now, the way this device is being marketed is nothing short of remarkable.

Claimed celebrity endorsements aside, the people at Z Stream have commissioned a full-blown 18-minute infomercial for their device which must have cost a small fortune and would be at home on any shopping channel.


Seriously, this gig has absolutely everything – several glossy presenters, many actors, a perfect family, potential and existing ‘customers’ who can’t quite believe how good the device is, and much much more.

Of course, you’re probably wondering how much all this costs. Well, it’s the equivalent of just a few months’ cable, apparently. Admittedly that’s quite a lot of cash, but it’s the savings that are important, Z Stream say.

In the end it’s revealed the unit costs ‘just’ $295.95. That’s almost $300 for a box that would cost less than $100 if people looked around for something similar on eBay or Amazon. But do those products come with a free five-day cruise for two around the Bahamas, including all onboard meals and entertainment? Thought not. (18 minutes into the video below)


The full and quite unbelievable infomercial is embedded below and for those interested in just how far pirate advertising can go, the Z Stream Box website can be found here. Facebook here, YouTube account here.

Update: The Z Stream Box website has been taken down. Google Cache to the rescue, with an Archive.is backup.

Video mirror here

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Pomerantz and Peek: Fifty shades of open

Post Syndicated from ris original http://lwn.net/Articles/687626/rss

Jeffrey Pomerantz and Robin Peek seek to untangle the word “open”, as it is used or misused today. Examples include open source, open access, open society, open knowledge, open government, and so on.
From the common ancestor Free Software, the term “open” diversified, filling a wide range of niches. The Open Source Definition gave rise to a number of other definitions, articulating openness for everything from hardware to knowledge. Inspired by the political philosophy of openness, the Open Society Institute funded the meeting at which the Budapest Open Access Initiative declaration was created. Open Access then gave rise to a wide range of other opens concerned with scholarship, publication, and cultural heritage generally. This spread of openness can be seen as the diversification of a powerful idea into a wide range of resources and services. It can also be seen more importantly as the arrival, society-wide, of an idea whose time has come … an idea with political, legal, and cultural impacts.
(Thanks to Paul Wise)

Wanted: Java Programmer

Post Syndicated from Yev original https://www.backblaze.com/blog/wanted-java-programmer/

Backblaze Jobs

Want to work at a company that helps customers in over 150 countries around the world protect the memories they hold dear? A company that stores over 200 petabytes of customers’ photos, music, documents and work files in a purpose-built cloud storage system? Well here’s your chance. Backblaze is looking for a Java Programmer!

You will work on the server side APIs that authenticate users when they log in, accept the backups, manage the data, and prepare restored data for customers. You will work with artists and designers to create new HTML web pages that customers use every day. And you will help build new features as well as support tools to help chase down and diagnose customer issues.

Must be proficient in:

  • Java
  • XML
  • Apache Tomcat
  • Struts
  • JSON
  • UTF-8, Java Properties, and Localized HTML (Backblaze runs in 11 languages)
  • Large scale systems supporting thousands of servers and millions of customers
  • Cross platform (Linux/Macintosh/Windows) — don’t need to be an expert on all three, but cannot be afraid of any.
  • Cassandra experience a plus
  • JavaScript a plus

Looking for an attitude of:

  • Passionate about building friendly, easy-to-use interfaces and APIs.
  • Must be interested in NoSQL databases.
  • Has to believe NoSQL is an OK philosophy for building enormously scalable systems.
  • Likes to work closely with other engineers, support, and sales to help customers.
  • Believes the whole world needs backup, not just English speakers in the USA.
  • Customer Focused (!!) — always focus on the customer’s point of view and how to solve their problem!

Required for all Backblaze Employees:

  • Good attitude and willingness to do whatever it takes to get the job done
  • Strong desire to work for a small fast paced company
  • Desire to learn and adapt to rapidly changing technologies and work environment
  • Occasional visits to Backblaze datacenters necessary
  • Rigorous adherence to best practices
  • Relentless attention to detail
  • Excellent interpersonal skills and good oral/written communication
  • Excellent troubleshooting and problem solving skills
  • OK with pets in office

This position is located in San Mateo, California. Regular attendance in the office is expected.
Backblaze is an Equal Opportunity Employer and we offer competitive salary and benefits, including our no policy vacation policy.

If this sounds like you — follow these steps:

  • Send an email to [email protected] with the position in the subject line.
  • Include your resume.
  • Tell us a bit about your programming experience.

The post Wanted: Java Programmer appeared first on Backblaze Blog | The Life of a Cloud Backup Company.

KDE’s “Kirigami UI”

Post Syndicated from corbet original http://lwn.net/Articles/681725/rss

The KDE project has announced
a new framework called the Kirigami UI; it appears to be oriented toward
the needs of mobile applications. “Kirigami UI isn’t just a set of
components, it’s also a philosophy: It defines precise UI/UX patterns to
allow developers to quickly develop intuitive and consistent apps that
provide a great user experience.”

Kuhn’s Paradox

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2016/02/19/kuhns-paradox.html

I’ve been making the following social observation frequently in my talks
and presentations for the last two years. While I suppose it’s rather
forward of me to do so, I’ve decide to name this principle:

Kuhn’s Paradox

For some time now, this paradoxical principle appears to hold: each
day, more lines of freely licensed code exist than ever before in human
history; yet, it also becomes increasingly more difficult each day
for users to successfully avoid proprietary software while completing their
necessary work on a computer.

Kuhn’s View On Motivations & Causes of Kuhn’s Paradox

I believe this paradox is primarily driven by the cooption of software
freedom by companies that ostensibly support Open Source, but have adopted
the (now popular) “open source almost everything” approach.

For certain areas of software endeavor, companies dedicate enormous
resources toward the authorship of new Free Software for particular narrow
tasks. Often, these core systems provide underpinnings and fuel the growth
of proprietary systems built on top of them. An obvious example here is
OpenStack: a fully Free Software platform, but most deployments of
OpenStack add proprietary features not available from a pure upstream
OpenStack installation.

Meanwhile, in other areas, projects struggle for meager resources to
compete with the largest proprietary behemoths. Large user-facing,
server-based applications of
the Service
as a Software Substitute
variety, along with massive social media sites
like Twitter and Facebook that actively work against federated social
network systems, are the two classes of most difficult culprits on this
point. Even worse, most traditional web sites have now become a mix of
mundane content (i.e., HTML) and proprietary Javascript programs, which are
installed on-demand into the users’ browser all day long, even while most
of those servers run a primarily Free Software operating system.

Finally, much (possibly a majority of) computer use in industrialized
society is via hand-held mobile devices
(usually inaccurately
described as “mobile phones”
). While some of these devices
have Free Software operating systems (i.e., Android/Linux), nearly all the
applications for all of these devices are proprietary software.

The explosion of for-profit interest in “Open Source” over the
last decade has led us to this paradoxical problem, which increases daily,
because the gap between “software under a license that respects my
rights to copy, share, and modify” and “software that’s
essential for my daily activities” grows linearly wider with each
passing day.

I propose herein no panacea; I wish I had one to offer. However, I
believe the problem is exacerbated by our community’s tendency to ignore
this paradox, and its pace even accelerates due to many developers’ belief
that having a job writing any old Free Software replaces the need for
volunteer labor to author more strategic code that advances software
freedom.

Linksvayer’s View On Motivations & Causes of Kuhn’s Paradox

Linksvayer agrees the paradox is observable, but disagrees with me
regarding the primary motivations and causes. Linksvayer claims the
following are the primary motivations and causes of Kuhn’s paradox:

Software is becoming harder to avoid.

Proprietary vendors outcompete relatively decentralized free
software efforts to put software in the hands of people.

The latter may be increasing or decreasing. But even if the latter is
decreasing, the former trumps it.

Note the competition includes competition to control policy,
particularly public policy. Unfortunately most Free Software activists
appear to be focused on individual (thus dwarfish) heroism and insider
politics rather than collective action.

I rewrote Linksvayer’s text slightly from a comment made to this blog post
to include it in the main text, as I find his arguments regarding causes as
equally plausible as mine.

As an apologia for the possibility that Linksvayer means I spend too
much time on insider politics: I believe that the cooption I discussed above means
that the seemingly broad base of support we could use for the collective
action Linksvayer recommends is actually tiny. In other words, most
people involved with Free Software development now are not Free Software
activists. (Compare it to 20 years ago, when rarely did you find a Free
Software developer who wasn’t also a Free Software activist.) Therefore,
one central part of my insider politics work is to recruit moderate Open
Source enthusiasts to become radical Free Software activists.