Sophisticated spyware, sold by surveillance tech companies to Mexican government agencies, is ending up in the hands of drug cartels:
As many as 25 private companies — including the Israeli company NSO Group and the Italian firm Hacking Team — have sold surveillance software to Mexican federal and state police forces, but there is little or no regulation of the sector — and no way to control where the spyware ends up, said the officials.
Lots of details in the article. The cyberweapons arms business is immoral in many ways. This is just one of them.
Regardless of your career path, there’s no denying that attending industry events can provide helpful career development opportunities — not only for improving and expanding your skill sets, but for networking as well. According to this article from PayScale.com, experts estimate that somewhere between 70% and 85% of new positions are landed through networking.
If you want to network with cloud computing professionals who are tackling some of today’s most innovative and exciting big data solutions, attending big data-focused sessions at an AWS Global Summit is a great place to start.
AWS Global Summits are free events that bring the cloud computing community together to connect, collaborate, and learn about AWS. As the name suggests, these summits are held in major cities around the world, and attract technologists from all industries and skill levels who are interested in hearing from AWS leaders, experts, partners, and customers.
In addition to networking opportunities with top cloud technology providers, consultants and your peers in our Partner and Solutions Expo, you’ll also hone your AWS skills by attending and participating in a multitude of education and training opportunities.
Here’s a brief sampling of some of the upcoming sessions relevant to big data professionals:
Be sure to check out the main page for AWS Global Summits, where you can see which cities have AWS Summits planned for 2018, register to attend an upcoming event, or provide your information to be notified when registration opens for a future event.
This column is from The MagPi issue 59. You can download a PDF of the full issue for free, or subscribe to receive the print edition through your letterbox or the digital edition on your tablet. All proceeds from the print and digital editions help the Raspberry Pi Foundation achieve our charitable goals.
“Hey, world!” Estefannie exclaims, a wide grin across her face as the camera begins to roll for another YouTube tutorial video. With a growing number of followers and wonderful support from her fans, Estefannie is building a solid reputation as an online maker, creating unique, fun content accessible to all.
It’s as if she was born into performing and making for an audience, but this fun, enjoyable journey to social media stardom came not from a desire to be in front of the camera, but rather as a unique approach to her own learning. While studying, Estefannie decided the best way to confirm her knowledge of a subject was to create an educational video explaining it. If she could teach a topic successfully, she knew she’d retained the information. And so her YouTube channel, Estefannie Explains It All, came into being.
Her first videos featured pages of notes with voice-over explanations of data structure and algorithm analysis. Then she moved in front of the camera, and expanded her skills in the process.
But YouTube isn’t her only outlet. With nearly 50,000 followers, Estefannie’s Instagram game is strong, adding to an increasing number of female coders taking to the platform. Across her Instagram grid, you’ll find insights into her daily routine, from programming on location for work to behind-the-scenes troubleshooting as she begins to create another tutorial video. It’s hard work, with content creation for both Instagram and YouTube forever on her mind as she continues to work and progress successfully as a software engineer.
As a thank you to her Instagram fans for helping her reach 10,000 followers, Estefannie created a free game for Android and iOS called Gravitris — imagine Tetris with balance issues!
Estefannie was born and raised in Mexico, with ambitions to become a graphic designer and animator. However, a documentary on coding at Pixar, and the beauty of Merida’s hair in Brave, opened her mind to the opportunities of software engineering in animation. She altered her career path, moved to the United States, and switched to a Computer Science course.
With a constant desire to make and to learn, Estefannie combines her software engineering profession with her hobby to create fun, exciting content for YouTube.
While studying, Estefannie started a Computer Science Girls Club at the University of Houston, Texas, and she found herself eager to put more time and effort into the movement to increase the percentage of women in the industry. The club was a success, and still is to this day. While Estefannie has handed over the reins, she’s still very involved in the cause.
Through her YouTube videos, Estefannie continues the theme of inclusion, with every project offering a warm sense of approachability for all, regardless of age, gender, or skill. From exploring Scratch and Makey Makey with her young niece and nephew to creating her own Disney ‘Made with Magic’ backpack for a trip to Disney World, Florida, Estefannie’s videos are essentially a documentary of her own learning process, produced so viewers can learn with her — and learn from her mistakes — to create their own tech wonders.
Estefannie’s automated gingerbread house project was a labour of love, with electronics, wires, and candy strewn across both her living room and kitchen for weeks before completion. While she already was a skilled programmer, the world of physical digital making was still fairly new for Estefannie. Having ditched her hot glue gun in favour of a soldering iron in a previous video, she continued to experiment and try out new, interesting techniques that are now second nature to many members of the maker community. With the gingerbread house, Estefannie was able to research and apply techniques such as light controls, servos, and app making, although the latter was already firmly within her skill set. The result? A fun video of ups and downs that resulted in a wonderful, festive treat. She even gave her holiday home its own solar panel!
1,910 Likes, 43 Comments – Estefannie Explains It All (@estefanniegg) on Instagram: “A DAY AT RASPBERRY PI TOWERS!! LINK IN BIO @raspberrypifoundation”
And that’s just the beginning of her adventures with Pi…but we won’t spoil her future plans by telling you what’s coming next. Sorry! However, since this article was written last year, Estefannie has released a few more Pi-based project videos, plus some awesome interviews and live-streams with other members of the maker community such as Simone Giertz. She even made us an awesome video for our Raspberry Pi YouTube channel! So be sure to check out her latest releases.
2,264 Likes, 56 Comments – Estefannie Explains It All (@estefanniegg) on Instagram: “Best day yet!! I got to hangout, play Jenga with a huge arm robot, and have afternoon tea with…”
While many wonderful maker videos show off a project without much explanation, or expect a certain level of skill from viewers hoping to recreate the project, Estefannie’s videos exist almost within their own category. We can’t wait to see where Estefannie Explains It All goes next!
We’re just over three weeks away from the Raspberry Jam Big Birthday Weekend 2018, our community celebration of Raspberry Pi’s sixth birthday. Instead of an event in Cambridge, as we’ve held in the past, we’re coordinating Raspberry Jam events to take place around the world on 3–4 March, so that as many people as possible can join in. Well over 100 Jams have been confirmed so far.
Find a Jam near you
There are Jams planned in Argentina, Australia, Bolivia, Brazil, Bulgaria, Cameroon, Canada, Colombia, Dominican Republic, France, Germany, Greece, Hungary, India, Iran, Ireland, Italy, Japan, Kenya, Malaysia, Malta, Mexico, Netherlands, Norway, Papua New Guinea, Peru, Philippines, Poland, South Africa, Spain, Taiwan, Turkey, United Kingdom, United States, and Zimbabwe.
Take a look at the events map and the full list (including those who haven’t added their event to the map quite yet).
We will have Raspberry Jams in 35 countries across six continents
We had some special swag made especially for the birthday, including these T-shirts, which we’ve sent to Jam organisers:
There is also a poster with a list of participating Jams, which you can download:
Raspberry Jam photo booth
I created a Raspberry Jam photo booth that overlays photos with the Big Birthday Weekend logo and then tweets the picture from your Jam’s account — you’ll be seeing plenty of those if you follow the #PiParty hashtag on 3–4 March.
Check out the project on GitHub, and feel free to set up your own booth, or modify it to your own requirements. We’ve included text annotations in several languages, and more contributions are very welcome.
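If you’d like a feel for how the overlay step of such a booth might work, here’s a minimal sketch using the Pillow imaging library — this is an illustration, not the actual booth code (which lives on GitHub and also handles the camera and tweeting); the file names and corner placement are assumptions:

```python
from PIL import Image

def overlay_logo(photo_path: str, logo_path: str, out_path: str) -> None:
    """Paste a transparent logo onto a photo and save the result."""
    photo = Image.open(photo_path).convert("RGBA")
    logo = Image.open(logo_path).convert("RGBA")
    # Bottom-right corner, using the logo's alpha channel as the paste mask
    position = (photo.width - logo.width, photo.height - logo.height)
    photo.paste(logo, position, logo)
    photo.convert("RGB").save(out_path, "JPEG")
```

From there, posting the result is a matter of handing the saved file to your Twitter client of choice.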
There’s still time…
If you can’t find a Jam near you, there’s still time to organise one for the Big Birthday Weekend. All you need to do is find a venue — a room in a school or library will do — and think about what you’d like to do at the event. Some Jams have Raspberry Pis set up for workshops and practical activities, some arrange tech talks, some put on show-and-tell — it’s up to you. To help you along, there’s the Raspberry Jam Guidebook full of advice and tips from Jam organisers.
They packed. And they packed. And they packed some more. Who’s expecting one of these #rjam kits for the Raspberry Jam Big Birthday Weekend?
Download the Raspberry Jam branding pack, and the special birthday branding pack, where you’ll find logos, graphical assets, flyer templates, worksheets, and more. When you’re ready to announce your event, create a webpage for it — you can use a site like Eventbrite or Meetup — and submit your Jam to us so it will appear on the Jam map!
We are six
We’re really looking forward to celebrating our birthday with thousands of people around the world. Over 48 hours, people of all ages will come together at more than 100 events to learn, share ideas, meet people, and make things during our Big Birthday Weekend.
Since we released the first Raspberry Pi in 2012, we’ve sold 17 million of them. We’re also reaching almost 200,000 children in 130 countries around the world through Code Club and CoderDojo, we’ve trained over 1500 Raspberry Pi Certified Educators, and we’ve sent code written by more than 6800 children into space. Our magazines are read by a quarter of a million people, and millions more use our free online learning resources. There’s plenty to celebrate and even more still to do: we really hope you’ll join us from a Jam near you on 3–4 March.
We recently launched AWS Architecture Monthly, a new subscription service on Kindle that will push a selection of the best content around cloud architecture from AWS, with a few pointers to other content you might also enjoy.
From building a simple website to crafting an AI-based chat bot, the choices of technologies and the best practices in how to apply them are constantly evolving. Our goal is to supply you each month with a broad selection of the best new tech content from AWS — from deep-dive tutorials to industry-trend articles.
With your free subscription, you can look forward to fresh content delivered directly to your Kindle device or Kindle app, including:
– Technical whitepapers
– Reference architectures
– New solutions and implementation guides
– Training and certification opportunities
– Industry trends
The January issue is now live. This month includes:
– AWS Architecture Blog: Glenn Gore’s Take on re:Invent 2017 (Chief Architect for AWS)
– AWS Reference Architectures: Java Microservices Deployed on EC2 Container Service; Node.js Microservices Deployed on EC2 Container Service
– AWS Training & Certification: AWS Certified Solutions Architect – Associate
– Sample Code: aws-serverless-express
– Technical Whitepaper: Serverless Architectures with AWS Lambda – Overview and Best Practices
At this time, Architecture Monthly annual subscriptions are only available in France (new), the US, the UK, and Germany. As more countries become available, we’ll update you here on the blog. For Amazon.com countries not listed above, we are offering single-issue downloads — also accessible from our landing page. The content is the same as in the subscription but requires individual-issue downloads.
FAQ

Do I have to submit my credit card information for a free subscription? While you do have to submit your card information at this time (as you would for a free book in the Kindle store), it won’t be charged. This will remain a free, annual subscription and includes all 10 issues for the year.
Why isn’t the subscription available everywhere? As new countries get added to Kindle Newsstand, we’ll ensure we add them for Architecture Monthly. This month we added France, but we anticipate it will take some time for the new service to move into additional markets.
What countries are included in the Amazon.com list where the issues can be downloaded? Andorra, Australia, Austria, Belgium, Brazil, Canada, Gibraltar, Guernsey, India, Ireland, Isle of Man, Japan, Jersey, Liechtenstein, Luxembourg, Mexico, Monaco, Netherlands, New Zealand, San Marino, Spain, Switzerland, Vatican City
♪ Used to have a little now I have a lot I’m still, I’m still Jenny from the block chain ♪
For all that has been written about Bitcoin and its ilk, it is curious that the focus is almost solely on what the cryptocurrencies are supposed to be. Technologists wax lyrical about the potential for blockchains to change almost every aspect of our lives. Libertarians and paleoconservatives ache for the return to “sound money” that can’t be conjured up at the whim of a bureaucrat. Mainstream economists wag their fingers, proclaiming that a proper currency can’t be deflationary, that it must maintain a particular velocity, or that the government must be able to nip crises of confidence in the bud. And so on.
Much of this may be true, but the proponents of cryptocurrencies should recognize that an appeal to consequences is not a guarantee of good results. The critics, on the other hand, would be best served to remember that they are drawing far-reaching conclusions about the effects of modern monetary policies based on a very short and tumultuous period in history.
In this post, my goal is to ditch most of the dogma, talk a bit about the origins of money – and then see how “crypto” fits the bill.
1. The prehistory of currencies
The emergence of money is usually explained in a very straightforward way. You know the story: a farmer raised a pig, a cobbler made a shoe. The cobbler needed to feed his family while the farmer wanted to keep his feet warm – and so they met to exchange the goods on mutually beneficial terms. But as the tale goes, the barter system had a fatal flaw: sometimes, a farmer wanted a cooking pot, a potter wanted a knife, and a blacksmith wanted a pair of pants. To facilitate increasingly complex, multi-step exchanges without requiring dozens of people to meet face to face, we came up with an abstract way to represent value – a shiny coin guaranteed to be accepted by every tradesman.
It is a nice parable, but it probably isn’t very true. It seems far more plausible that early societies relied on the concept of debt long before the advent of currencies: an informal tally or a formal ledger would be used to keep track of who owes what to whom. The concept of debt, closely associated with one’s trustworthiness and standing in the community, would have enabled a wide range of economic activities: debts could be paid back over time, transferred, renegotiated, or forgotten – all without having to engage in spot barter or to mint a single coin. In fact, such non-monetary, trust-based, reciprocal economies are still common in closely-knit communities: among families, neighbors, coworkers, or friends.
In such a setting, primitive currencies probably emerged simply as a consequence of having a system of prices: a cow being worth a particular number of chickens, a chicken being worth a particular number of beaver pelts, and so forth. Formalizing such relationships by settling on a single, widely-known unit of account – say, one chicken – would make it more convenient to transfer, combine, or split debts; or to settle them in alternative goods.
Contrary to popular belief, for communal ledgers, the unit of account probably did not have to be particularly desirable, durable, or easy to carry; it was simply an accounting tool. And indeed, we sometimes run into fairly unusual units of account even in modern times: for example, cigarettes can be the basis of a bustling prison economy even when most inmates don’t smoke and there are not that many packs to go around.
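The bookkeeping described above needs nothing more than a shared unit of account. A toy sketch of such a communal ledger — debts denominated in chickens, recorded and netted without a single coin changing hands — might look like this (the names and API are illustrative, not drawn from any historical source):

```python
from collections import defaultdict

class Ledger:
    """A communal debt ledger with a single unit of account ("chickens")."""

    def __init__(self):
        # (debtor, creditor) -> chickens owed
        self.debts = defaultdict(int)

    def record(self, debtor: str, creditor: str, chickens: int) -> None:
        self.debts[(debtor, creditor)] += chickens

    def balance(self, person: str) -> int:
        """Net standing: what the community owes this person, minus what they owe."""
        owed_to = sum(v for (d, c), v in self.debts.items() if c == person)
        owes = sum(v for (d, c), v in self.debts.items() if d == person)
        return owed_to - owes

ledger = Ledger()
ledger.record("farmer", "cobbler", 2)  # shoes bought on credit
ledger.record("cobbler", "potter", 1)  # a cooking pot, likewise
print(ledger.balance("cobbler"))       # -> 1
```

Notice that multi-step exchanges work fine here; what the ledger can’t survive is a counterparty outside the community who simply refuses to honor it — which is where the next section picks up.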
2. The age of commodity money
In the end, the development of coinage might have had relatively little to do with communal trade — and far more with the desire to exchange goods with strangers. When dealing with an unfamiliar or hostile tribe, the concept of a chicken-denominated ledger does not hold up: the other side might be disinclined to honor its obligations — and get away with it, too. To settle such problematic trades, we needed a “spot” medium of exchange that would be easy to carry and authenticate, had a well-defined value, and a near-universal appeal. Throughout much of recorded history, precious metals — predominantly gold and silver — proved to fit the bill.
In the most basic sense, such commodities could be seen as a tool to reconcile debts across societal boundaries, without necessarily replacing any local units of account. An obligation, denominated in some local currency, would be created on buyer’s side in order to procure the metal for the trade. The proceeds of the completed transaction would in turn allow the seller to settle their own local obligations that arose from having to source the traded goods. In other words, our wondrous chicken-denominated ledgers could coexist peacefully with gold – and when commodity coinage finally took hold, it’s likely that in everyday trade, precious metals served more as a useful abstraction than a precise store of value. A “silver chicken” of sorts.
Still, the emergence of commodity money had one interesting side effect: it decoupled the unit of debt – a “claim on the society”, in a sense – from any moral judgment about its origin. A piece of silver would buy the same amount of food, whether earned through hard labor or won in a drunken bet. This disconnect remains a central theme in many of the debates about social justice and unfairly earned wealth.
3. The State enters the game
If there is one advantage of chicken ledgers over precious metals, it’s that all chickens look and cluck roughly the same – something that can’t be said of every nugget of silver or gold. To cope with this problem, we needed to shape raw commodities into pieces of a more predictable shape and weight; a trusted party could then stamp them with a mark to indicate the value and the quality of the coin.
At first, the task of standardizing coinage rested with private parties – but the responsibility was soon assumed by the State. The advantages of this transition seemed clear: a single, widely-accepted and easily-recognizable currency could be now used to settle virtually all private and official debts.
Alas, in what deserves the dubious distinction of being one of the earliest examples of monetary tomfoolery, some States succumbed to the temptation of fiddling with the coinage to accomplish anything from feeding the poor to waging wars. In particular, it would be common to stamp coins with the same face value but a progressively lower content of silver and gold. Perhaps surprisingly, the strategy worked remarkably well; at least in times of peace, most people cared about the value stamped on the coin, not its precise composition or weight.
And so, over time, representative money was born: sooner or later, most States opted to mint coins from nearly-worthless metals, or print banknotes on paper and cloth. This radically new currency was accompanied with a simple pledge: the State offered to redeem it at any time for its nominal value in gold.
Of course, the promise was largely illusory: the State did not have enough gold to honor all the promises it had made. Still, as long as people had faith in their rulers and the redemption requests stayed low, the fundamental mechanics of this new representative currency remained roughly the same as before – and in some ways, were an improvement in that they lessened the insatiable demand for a rare commodity. Just as importantly, the new money still enabled international trade – using the underlying gold exchange rate as a reference point.
4. Fractional reserve banking and fiat money
For much of recorded history, banking was an exceptionally dull affair, not much different from running a communal chicken ledger of old. But then, something truly marvelous happened in the 17th century: around that time, many European countries witnessed the emergence of fractional-reserve banks.
These private ventures operated according to a simple scheme: they accepted people’s coin for safekeeping, promising to pay a premium on every deposit made. To meet these obligations and to make a profit, the banks then used the pooled deposits to make high-interest loans to other folks. The financiers figured out that under normal circumstances and when operating at a sufficient scale, they needed only a very modest reserve – well under 10% of all deposited money – to be able to service the usual volume and size of withdrawals requested by their customers. The rest could be loaned out.
The very curious consequence of fractional-reserve banking was that it pulled new money out of thin air. The funds were simultaneously accounted for in the statements shown to the depositor, evidently available for withdrawal or transfer at any time; and given to third-party borrowers, who could spend them on just about anything. Heck, the borrowers could deposit the proceeds in another bank, creating even more money along the way! Whatever they did, the sum of all funds in the monetary system now appeared much higher than the value of all coins and banknotes issued by the government – let alone the amount of gold sitting in any vault.
Of course, no new money was being created in any physical sense: all that banks were doing was engaging in a bit of creative accounting — the sort that would probably land you in jail if you attempted it today in any other comparably vital field of enterprise. If too many depositors were to ask for their money back, or if too many loans were to go bad, the banking system would fold. Fortunes would evaporate in a puff of accounting smoke, and with the disappearance of vast quantities of quasi-fictitious (“broad”) money, the wealth of the entire nation would shrink.
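The loan-redeposit loop described above converges to a well-known limit: with a reserve ratio r, an initial deposit can balloon into roughly deposit / r of broad money. A few lines of arithmetic make the multiplier visible (the 10% reserve ratio is the same illustrative figure used above, not a claim about any particular banking system):

```python
def broad_money(initial_deposit: float, reserve_ratio: float, rounds: int) -> float:
    """Total money visible in bank statements after `rounds` of loan-and-redeposit."""
    total = 0.0
    deposit = initial_deposit
    for _ in range(rounds):
        total += deposit                  # counted in someone's account statement
        deposit *= (1 - reserve_ratio)    # loanable share, redeposited at another bank
    return total

# $100 of coin, 10% reserves: broad money approaches $1,000
print(round(broad_money(100.0, 0.10, 50)))   # -> 995
```

The same arithmetic read backwards is the bank run: every dollar of withdrawn coin forces roughly 1 / r dollars of broad money out of existence.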
In the early 20th century, the world kept witnessing just that; a series of bank runs and economic contractions forced the governments around the globe to act. At that stage, outlawing fractional-reserve banking was no longer politically or economically tenable; a simpler alternative was to let go of gold and move to fiat money – a currency implemented as an abstract social construct, with no predefined connection to the physical realm. A new breed of economists saw the role of the government not in trying to peg the value of money to an inflexible commodity, but in manipulating its supply to smooth out economic hiccups or to stimulate growth.
(Contrary to popular belief, such manipulation is usually not done by printing new banknotes; more sophisticated methods, such as lowering reserve requirements for bank deposits or enticing banks to invest their deposits into government-issued securities, are the preferred route.)
The obvious peril of fiat money is that in the long haul, its value is determined strictly by people’s willingness to accept a piece of paper in exchange for their trouble; that willingness, in turn, is conditioned solely on their belief that the same piece of paper would buy them something nice a week, a month, or a year from now. It follows that a simple crisis of confidence could make a currency nearly worthless overnight. A prolonged period of hyperinflation and subsequent austerity in Germany and Austria was one of the precipitating factors that led to World War II. In more recent times, dramatic episodes of hyperinflation plagued the fiat currencies of Israel (1984), Mexico (1988), Poland (1990), Yugoslavia (1994), Bulgaria (1996), Turkey (2002), Zimbabwe (2009), Venezuela (2016), and several other nations around the globe.
For the United States, the switch to fiat money came relatively late, in 1971. To stop the dollar from plunging like a rock, the Nixon administration employed a clever trick: they ordered the freeze of wages and prices for the 90 days that immediately followed the move. People went on about their lives and paid the usual for eggs or milk — and by the time the freeze ended, they were accustomed to the idea that the “new”, free-floating dollar was worth about the same as the old, gold-backed one. A robust economy and favorable geopolitics did the rest, and so far, the American adventure with fiat currency has been rather uneventful — perhaps except for the fact that the price of gold itself skyrocketed from $35 per troy ounce in 1971 to $850 in 1980 (or, from $210 to $2,500 in today’s dollars).
Well, one thing did change: now better positioned to freely tamper with the supply of money, the regulators in accord with the bankers adopted a policy of creating it at a rate that slightly outstripped the organic growth in economic activity. They did this to induce a small, steady degree of inflation, believing that doing so would discourage people from hoarding cash and force them to reinvest it for the betterment of the society. Some critics like to point out that such a policy functions as a “backdoor” tax on savings that happens to align with the regulators’ less noble interests; either way, in the US and most other developed nations, the purchasing power of any money kept under a mattress will drop at a rate of somewhere between 2% and 10% a year.
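That 2–10% range compounds faster than intuition suggests. A two-line calculation shows what steady inflation does to mattress money over a decade: at 2% a year, $100 retains about $82 of purchasing power; at 10%, under $39.

```python
def purchasing_power(cash: float, annual_inflation: float, years: int) -> float:
    """Real value of idle cash after `years` of steady inflation."""
    return cash / (1 + annual_inflation) ** years

for rate in (0.02, 0.10):
    print(f"{rate:.0%} inflation: ${purchasing_power(100, rate, 10):.2f} after 10 years")
```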
5. So what’s up with Bitcoin?
Well… countless tomes have been written about the nature and the optimal characteristics of government-issued fiat currencies. Some heterodox economists, notably including Murray Rothbard, have also explored the topic of privately-issued, decentralized, commodity-backed currencies. But Bitcoin is a wholly different animal.
In essence, BTC is a global, decentralized fiat currency: it has no (recoverable) intrinsic value, no central authority to issue it or define its exchange rate, and it has no anchoring to any historical reference point – a combination that until recently seemed nonsensical and escaped any serious scrutiny. It does the unthinkable by employing three clever tricks:
It allows anyone to create new coins, but only by solving brute-force computational challenges that get more difficult as time goes by,
It prevents unauthorized transfer of coins by employing public key cryptography to sign off transactions, with only the authorized holder of a coin knowing the correct key,
It prevents double-spending by using a distributed public ledger (“blockchain”), recording the chain of custody for coins in a tamper-proof way.
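The first of those tricks — minting by brute-force search — can be illustrated in a few lines. This is a toy in the spirit of Bitcoin’s proof-of-work, not the real protocol (Bitcoin uses double SHA-256 over a binary block header and a numeric target): find a nonce whose hash starts with a given number of zero hex digits; each extra digit makes the search roughly 16 times harder, which is how coin creation gets progressively more expensive.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Brute-force a nonce so that sha256(block_data + nonce) has `difficulty` leading zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("send 1 coin to Alice", 4)
print(nonce, hashlib.sha256(f"send 1 coin to Alice{nonce}".encode()).hexdigest()[:12])
```

Verifying a solution takes one hash; finding one takes, on average, 16^difficulty attempts — the asymmetry that makes the ledger expensive to rewrite.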
The blockchain is often described as the most important feature of Bitcoin, but in some ways, its importance is overstated. The idea of a currency that does not rely on a centralized transaction clearinghouse is what helped propel the platform into the limelight – mostly because of its novelty and the perception that it is less vulnerable to government meddling (although the government is still free to track down, tax, fine, or arrest any participants). On the flip side, the everyday mechanics of BTC would not be fundamentally different if all the transactions had to go through Bitcoin Bank, LLC.
A more striking feature of the new currency is the incentive structure surrounding the creation of new coins. The underlying design democratized the creation of new coins early on: all you had to do was leave your computer running for a while to acquire a number of tokens. The tokens had no practical value, but obtaining them involved no substantial expense or risk. Just as importantly, because the difficulty of the puzzles would only increase over time, the hope was that if Bitcoin caught on, latecomers would find it easier to purchase BTC on a secondary market than mine their own — paying with a more established currency at a mutually beneficial exchange rate.
The persistent publicity surrounding Bitcoin and other cryptocurrencies did the rest – and today, with the growing scarcity of coins and the rapidly increasing demand, the price of a single token hovers somewhere south of $15,000.
6. So… is it bad money?
Predicting is hard – especially the future. In some sense, a coin that represents a cryptographic proof of wasted CPU cycles is no better or worse than a currency that relies on cotton decorated with pictures of dead presidents. It is true that Bitcoin suffers from many implementation problems – long transaction processing times, high fees, frequent security breaches of major exchanges – but in principle, such problems can be overcome.
That said, currencies live and die by the lasting willingness of others to accept them in exchange for services or goods – and in that sense, the jury is still out. The use of Bitcoin to settle bona fide purchases is negligible, both in absolute terms and in function of the overall volume of transactions. In fact, because of the technical challenges and limited practical utility, some companies that embraced the currency early on are now backing out.
When the value of an asset is derived almost entirely from its appeal as an ever-appreciating investment vehicle, the situation has all the telltale signs of a speculative bubble. But that does not prove that the asset is destined to collapse, or that a collapse would be its end. Still, the built-in deflationary mechanism of Bitcoin – the increasing difficulty of producing new coins – is probably both a blessing and a curse.
It’s going to go one way or the other; and when it’s all said and done, we’re going to celebrate the people who made the right guess. Because the future is actually pretty darn easy to predict — in retrospect.
We’re growing at a pretty rapid clip, and as we add more customers, we need people to help keep all of our hard drives spinning. Along with support, the other department that grows linearly with the number of customers that join us is the operations team, and they’ve just added a new member to their team, Rich! He joins us as a Network Systems Administrator! Let’s take a moment to learn more about Rich, shall we?
What is your Backblaze Title? Network Systems Administrator
Where are you originally from? The Upper Peninsula of Michigan. Da UP, eh!
What attracted you to Backblaze? The fact that it is a small tech company packed with highly intelligent people and a place where I can also be friends with my peers. I am also huge on cloud storage and backing up your past!
What do you expect to learn while being at Backblaze? I look forward to expanding my Networking skills and System Administration skills while helping build the best Cloud Storage and Backup Company there is!
Where else have you worked? I first started working in Data Centers at Viawest. I was previously an Infrastructure Engineer at Twitter and a Production Engineer at Groupon.
Where did you go to school? I started at Finlandia University in Northern Michigan, carried on to Northwest Florida State, and graduated with my A.S. from North Lake College in Dallas, TX. I then completed my B.S. Degree online at WGU.
What’s your dream job? Sr. Network Engineer
Favorite place you’ve traveled? I have traveled around a bit in my life. I really liked Dublin, Ireland, but I have to say my favorite has to be Puerto Vallarta, Mexico! Which is actually where I am getting married in 2019!
Favorite hobby? Water is my life. I like to wakeboard and wakesurf. I also enjoy biking, hunting, fishing, camping, and anything that has to do with the great outdoors!
Of what achievement are you most proud? I’m proud of moving up in my career as quickly as I have been. I am also very proud of being able to wakesurf behind a boat without a rope! Lol!
Star Trek or Star Wars? Star Trek! I grew up on it!
Coke or Pepsi? H2O 😀
Favorite food? Mexican Food and Pizza!
Why do you like certain things? Hmm…. because certain things make other certain things particularly certain!
Anything else you’d like to tell us? Nope 😀
Who can say no to high quality H2O? Welcome to the team Rich!
In today’s guest post, Bruce Tulloch, CEO and Managing Director of BitScope Designs, discusses the uses of cluster computing with the Raspberry Pi, and the recent pilot of the Los Alamos National Laboratory 3000-Pi cluster built with the BitScope Blade.
High-performance computing and Raspberry Pi are not normally uttered in the same breath, but Los Alamos National Laboratory is building a Raspberry Pi cluster with 3000 cores as a pilot before scaling up to 40 000 cores or more next year.
The short answer to the obvious question (why build a cluster of Raspberry Pis at all?) is that it enables Los Alamos National Laboratory (LANL) to conduct exascale computing R&D.
The Pi cluster breadboard
Exascale refers to computing systems at least 50 times faster than the most powerful supercomputers in use today. The problem faced by LANL and similar labs building these things is one of scale. To get the required performance, you need a lot of nodes, and to make it work, you need a lot of R&D.
However, there’s a catch-22: how do you write the operating systems, network stacks, and launch and boot systems for such large computers without having one on which to test it all? Use an existing supercomputer? No — the existing large clusters are fully booked 24/7 doing science, they cost millions of dollars per year to run, and they may not have the architecture you need for your next-generation machine anyway. Older machines retired from science may be available, but at this scale they cost far too much to use and are usually very hard to maintain.
The Los Alamos solution? Build a “model supercomputer” with Raspberry Pi!
Think of it as a “cluster development breadboard”.
The idea is to design, develop, debug, and test new network architectures and systems software on the “breadboard”, but at a scale equivalent to the production machines you’re currently building. Raspberry Pi may be a small computer, but it can run most of the system software stacks that production machines use, and the ratios of its CPU speed, local memory, and network bandwidth scale proportionately to the big machines, much like an architect’s model does when building a new house. To learn more about the project, see the news conference and this interview with insideHPC at SC17.
Traditional Raspberry Pi clusters
Like most people, we love a good cluster! People have been building them with Raspberry Pi since the beginning, because it’s inexpensive, educational, and fun. They’ve been built with the original Pi, Pi 2, Pi 3, and even the Pi Zero, but none of these clusters have proven to be particularly practical.
That’s not stopped them being useful though! I saw quite a few Raspberry Pi clusters at the conference last week.
One tiny one that caught my eye was from the people at openio.io, who used a small Raspberry Pi Zero W cluster to demonstrate their scalable software-defined object storage platform, which on big machines is used to manage petabytes of data, but which is so lightweight that it runs just fine on this:
There was another appealing example at the ARM booth, where the Berkeley Labs’ singularity container platform was demonstrated running very effectively on a small cluster built with Raspberry Pi 3s.
My show favourite was from the Edinburgh Parallel Computing Center (EPCC): Nick Brown used a cluster of Pi 3s to explain supercomputers to kids with an engaging interactive application. The idea was that visitors to the stand design an aircraft wing, simulate it across the cluster, and work out whether an aircraft that uses the new wing could fly from Edinburgh to New York on a full tank of fuel. Mine made it, fortunately!
Next-generation Raspberry Pi clusters
We’ve been building small-scale industrial-strength Raspberry Pi clusters for a while now with BitScope Blade.
When Los Alamos National Laboratory approached us via HPC provider SICORP with a request to build a cluster comprising many thousands of nodes, we considered all the options very carefully. It needed to be dense, reliable, low-power, and easy to configure and to build. It did not need to “do science”, but it did need to work in almost every other way as a full-scale HPC cluster would.
Some people argue Compute Module 3 is the ideal cluster building block. It’s very small and just as powerful as Raspberry Pi 3, so one could, in theory, pack a lot of them into a very small space. However, there are very good reasons no one has ever successfully done this. For a start, you need to build your own network fabric and I/O, and cooling the CM3s, especially when densely packed in a cluster, is tricky given their tiny size. There’s very little room for heatsinks, and the tiny PCBs dissipate very little excess heat.
Instead, we saw the potential for Raspberry Pi 3 itself to be used to build “industrial-strength clusters” with BitScope Blade. It works best when the Pis are properly mounted, powered reliably, and cooled effectively. It’s important to avoid using micro SD cards and to connect the nodes using wired networks. It has the added benefit of coming with lots of “free” USB I/O, and the Pi 3 PCB, when mounted with the correct air-flow, is a remarkably good heatsink.
When Gordon announced netboot support, we became convinced the Raspberry Pi 3 was the ideal candidate when used with standard switches. We’d been making smaller clusters for a while, but netboot made larger ones practical. Assembling them all into compact units that fit into existing racks with multiple 10 Gb uplinks is the solution that meets LANL’s needs. This is a 60-node cluster pack with a pair of managed switches by Ubiquiti in testing in the BitScope Lab:
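For readers curious how Pi 3 netboot is actually switched on, the approach documented by the Raspberry Pi Foundation is to set a one-time-programmable boot-mode bit on each node, then serve the boot files over proxy DHCP and TFTP from a boot server. The dnsmasq fragment below is only a sketch of such a server; the subnet and paths are assumptions you would adapt to your own network:

```
# /boot/config.txt on each Pi 3 node -- boot once with this line,
# which permanently sets the OTP USB/network boot-mode bit
program_usb_boot_mode=1

# /etc/dnsmasq.conf on the boot server (sketch; adjust to your network)
port=0                         # disable DNS; DHCP/TFTP only
dhcp-range=10.0.0.255,proxy    # proxy DHCP on the cluster subnet (assumed)
log-dhcp
enable-tftp
tftp-root=/srv/tftpboot        # holds bootcode.bin, start.elf, kernel, etc.
pxe-service=0,"Raspberry Pi Boot"
```

Each node’s boot files live under the TFTP root, keyed by the Pi’s serial number, which is how a single server can feed dozens of diskless nodes.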
Two of these packs, built with Blade Quattro, and one smaller one comprising 30 nodes, built with Blade Duo, are the components of the Cluster Module we exhibited at the show. Five of these modules are going into Los Alamos National Laboratory for their pilot as I write this.
It’s not only research clusters like this for which Raspberry Pi is well suited. You can build very reliable local cloud computing and data centre solutions for research, education, and even some industrial applications. You’re not going to get much heavy-duty science, big data analytics, AI, or serious number crunching done on one of these, but it is quite amazing to see just how useful Raspberry Pi clusters can be for other purposes, whether it’s software-defined networks, lightweight MaaS, SaaS, PaaS, or FaaS solutions, distributed storage, edge computing, industrial IoT, and of course, education in all things cluster and parallel computing. For one live example, check out Mythic Beasts’ educational compute cloud, built with Raspberry Pi 3.
Ever since the launch of the first Raspberry Pi back in 2012, one thing that has been critical to us is to make our products easy to buy in as many countries as possible.
Buying a Raspberry Pi is certainly much simpler nowadays than it was when we were just starting out. Nevertheless, we want to go even further, and so today we are introducing an Approved Reseller programme. With this programme, we aim to recognise those resellers that represent Raspberry Pi products well, and make purchasing them easy for their customers.
The Raspberry Pi Approved Reseller programme
We’re launching the programme in eleven countries today: the UK, Ireland, France, Spain, Portugal, Italy, the Netherlands, Belgium, Luxembourg, Greece and South Africa. Over the next few weeks, you will see us expand it to at least 50 countries.
We will link to the Approved Resellers’ websites directly from our Products page via the “Buy now” button. For customers who want to buy for business applications we have also added a “Buy for business” button. After clicking it, you will be able to select your country from a drop down menu. Doing so will link you directly to the local websites of our two licensed partners, Premier Farnell and Electrocomponents.
Our newest Raspberry Pi Zero resellers
On top of this we are also adding 6 new Raspberry Pi Zero resellers, giving 13 countries direct access to the Raspberry Pi Zero for the first time. We are particularly excited that these countries include Brazil and India, since they both have proved difficult to supply in the past.
As Backblaze continues to grow, and as we go down the path of sharing our stories, we found ourselves in need of someone who could wrangle our content calendar, write blog posts, and come up with interesting ideas that we could share with our readers and fans. We put out the call, and found Roderick! As you’ll read below, he has an incredibly interesting history, and we’re thrilled to have his perspective join our marketing team! Let’s learn a bit more about Roderick, shall we?
What is your Backblaze Title? Content Director
Where are you originally from? I was born in Southern California, but have lived a lot of different places, including Alaska, Washington, Oregon, Texas, New Mexico, Austria, and Italy.
What attracted you to Backblaze? I met Gleb a number of years ago at the Failcon Conference in San Francisco. I spoke with him and was impressed with him and his description of the company. We connected on LinkedIn after the conference and I ultimately saw his post for this position about a month ago.
What do you expect to learn while being at Backblaze? I hope to learn about Backblaze’s customers and dive deep into the latest in cloud storage and other technologies. I also hope to get to know my fellow employees.
Where else have you worked? I’ve worked for Microsoft, Adobe, Autodesk, and a few startups. I’ve also consulted for Apple, HP, Stanford, the White House, and startups in the U.S. and abroad. I mentored at incubators in Silicon Valley, including IndieBio and Founders Space. I used to own vineyards and a food education and event center in the Napa Valley with my former wife, and worked in a number of restaurants, hotels, and wineries. Recently, I taught part-time at the Culinary Institute of America at Greystone in the Napa Valley. I’ve been a partner in a restaurant and currently am a partner in a mozzarella di bufala company in Marin County where we have about 50 water buffalo that are amazing animals. They are named after famous rock and roll vocalists. Our most active studs now are Sting and Van Morrison. I think singing “a fantabulous night to make romance ‘neath the cover of October skies” works for Van.
Where did you go to school? I studied at Reed College, U.C. Berkeley, U.C. Davis, and the Università per Stranieri di Perugia in Italy. I put myself through college so was in and out of school a number of times to make money. Some of the jobs I held to earn money for college were cook, waiter, dishwasher, bartender, courier, teacher, bookstore clerk, head of hotel maintenance, bookkeeper, lifeguard, journalist, and commercial salmon fisherman in Alaska.
What’s your dream job? I think my dream would be having a job that would continually allow me to learn new things and meet new challenges. I love to learn, travel, and be surprised by things I don’t know.
I love animals and sometimes think I should have become a veterinarian.
Favorite place you’ve traveled? I lived and studied in Italy, and would have to say the Umbria region of Italy is perhaps my favorite place. I also worked in my father’s home country of Austria, which is incredibly beautiful.
Favorite hobby? I love foreign languages, and have studied Italian, French, German, and a few others. I am a big fan of literature and theatre and read widely and have attended theatre productions all over the world. That was my motivation to learn other languages—so I could enjoy literature and theatre in the languages they were written in. I started scuba diving when I was very young because I wanted to be Jacques-Yves Cousteau and explore the oceans. I also sail, motorcycle, ski, bicycle, hike, play music, and hope to finish my pilot’s license someday.
Coke or Pepsi? Red Burgundy
Favorite food? Both my parents are chefs, so I was exposed to a lot of great food growing up. I would have to give more than one answer to that question: fresh baked bread and bouillabaisse. Oh, and white truffles.
Not sure we’ll be able to stock our cupboards with Red Burgundy, but we’ll see what our office admin can do! Welcome to the team!
There are two opposing models of how the Internet has changed protest movements. The first is that the Internet has made protesters mightier than ever. This comes from the successful revolutions in Tunisia (2010-11), Egypt (2011), and Ukraine (2013). The second is that it has made them more ineffectual: the ease of action without commitment, derided as “slacktivism” or “clicktivism,” can result in movements like Occupy petering out in the US without any obvious effects. Of course, the reality is more nuanced, and Zeynep Tufekci teases that out in her new book Twitter and Tear Gas.
Tufekci is a rare interdisciplinary figure. As a sociologist, programmer, and ethnographer, she studies how technology shapes society and drives social change. She has a dual appointment in both the School of Information Science and the Department of Sociology at the University of North Carolina at Chapel Hill, and is a Faculty Associate at the Berkman Klein Center for Internet and Society at Harvard University. Her regular New York Times column on the social impacts of technology is a must-read.
Modern Internet-fueled protest movements are the subjects of Twitter and Tear Gas. As an observer, writer, and participant, Tufekci examines how modern protest movements have been changed by the Internet — and what that means for protests going forward. Her book combines her own ethnographic research and her usual deft analysis, with the research of others and some big data analysis from social media outlets. The result is a book that is both insightful and entertaining, and whose lessons are much broader than the book’s central topic.
“The Power and Fragility of Networked Protest” is the book’s subtitle. The power of the Internet as a tool for protest is obvious: it gives people newfound abilities to quickly organize and scale. But, according to Tufekci, it’s a mistake to judge modern protests using the same criteria we used to judge pre-Internet protests. The 1963 March on Washington might have culminated in hundreds of thousands of people listening to Martin Luther King Jr. deliver his “I Have a Dream” speech, but it was the culmination of a multi-year protest effort and the result of six months of careful planning made possible by that sustained effort. The 2011 protests in Cairo came together in mere days because they could be loosely coordinated on Facebook and Twitter.
That’s the power. Tufekci describes the fragility by analogy. Nepalese Sherpas assist Mt. Everest climbers by carrying supplies, laying out ropes and ladders, and so on. This means that people with limited training and experience can make the ascent, which is no less dangerous, sometimes with disastrous results. Says Tufekci: “The Internet similarly allows networked movements to grow dramatically and rapidly, but without prior building of formal or informal organizational and other collective capacities that could prepare them for the inevitable challenges they will face and give them the ability to respond to what comes next.” That makes them less able to respond to government counters, change their tactics — a phenomenon Tufekci calls “tactical freeze” — make movement-wide decisions, and survive over the long haul.
Tufekci isn’t arguing that modern protests are necessarily less effective, but that they’re different. Effective movements need to understand these differences, and leverage these new advantages while minimizing the disadvantages.
To that end, she develops a taxonomy for talking about social movements. Protests are an example of a “signal” that corresponds to one of several underlying “capacities.” There’s narrative capacity: the ability to change the conversation, as Black Lives Matter did with police violence and Occupy did with wealth inequality. There’s disruptive capacity: the ability to stop business as usual. An early Internet example is the 1999 WTO protests in Seattle. And finally, there’s electoral or institutional capacity: the ability to vote, lobby, fund raise, and so on. Because of various “affordances” of modern Internet technologies, particularly social media, the same signal — a protest of a given size — reflects different underlying capacities.
This taxonomy also informs government reactions to protest movements. Smart responses target attention as a resource. The Chinese government responded to the 2014 protesters in Hong Kong by not engaging with them at all, denying them camera-phone videos that would go viral and attract the world’s attention. Instead, they pulled their police back and waited for the movement to die from lack of attention.
If this all sounds dry and academic, it’s not. Twitter and Tear Gas is infused with a richness of detail stemming from her personal participation in the 2013 Gezi Park protests in Turkey, as well as personal on-the-ground interviews with protesters throughout the Middle East — particularly Egypt and her native Turkey — Zapatistas in Mexico, WTO protesters in Seattle, Occupy participants worldwide, and others. Tufekci writes with a warmth and respect for the humans that are part of these powerful social movements, gently intertwining her own story with the stories of others, big data, and theory. She is adept at writing for a general audience, and, despite being published by the intimidating Yale University Press, her book is more mass-market than academic. What rigor is there is presented in a way that carries readers along rather than distracting.
The synthesist in me wishes Tufekci would take some additional steps, taking the trends she describes outside of the narrow world of political protest and applying them more broadly to social change. Her taxonomy is an important contribution to the more-general discussion of how the Internet affects society. Furthermore, her insights on the networked public sphere have applications for understanding technology-driven social change in general. These are hard conversations for society to have. We largely prefer to allow technology to blindly steer society or — in some ways worse — leave it to unfettered for-profit corporations. When you’re reading Twitter and Tear Gas, keep current and near-term future technological issues such as ubiquitous surveillance, algorithmic discrimination, and automation and employment in mind. You’ll come away with new insights.
Tufekci twice quotes historian Melvin Kranzberg from 1985: “Technology is neither good nor bad; nor is it neutral.” This foreshadows her central message. For better or worse, the technologies that power the networked public sphere have changed the nature of political protest as well as government reactions to and suppressions of such protest.
I have long characterized our technological future as a battle between the quick and the strong. The quick — dissidents, hackers, criminals, marginalized groups — are the first to make use of a new technology to magnify their power. The strong are slower, but have more raw power to magnify. So while protesters are the first to use Facebook to organize, the governments eventually figure out how to use Facebook to track protesters. It’s still an open question who will gain the upper hand in the long term, but Tufekci’s book helps us understand the dynamics at work.
In case you missed it: in yesterday’s post, we released our Raspberry Jam Guidebook, a new Jam branding pack and some more resources to help people set up their own Raspberry Pi community events. Today I’m sharing some insights from Jams I’ve attended recently.
Preston Raspberry Jam
The Preston Jam is one of the most long-established Jams, and it recently ran its 58th event. It has achieved this by running like clockwork: on the first Monday evening of every month, without fail, the Jam takes place. A few months ago I decided to drop in to surprise the organiser, Alan O’Donohoe. The Jam is held at the Media Innovation Studio at the University of Central Lancashire. The format is quite informal, and it’s very welcoming to newcomers. The first half of the event allows people to mingle, and beginners can get support from more seasoned makers. I noticed a number of parents who’d brought their children along to find out more about the Pi and what can be done with it. It’s a great way to find out for real what people use their Pis for, and to get pointers on how to set up and where to start.
About halfway through the evening, the organisers gather everyone round to watch a few short presentations. At the Jam I attended, most of these talks were from children, which was fantastic to see: Josh gave a demo in which he connected his Raspberry Pi to an Amazon Echo using the Alexa API, Cerys talked about her Jam in Staffordshire, and Elise told everyone about the workshops she ran at MozFest. All their talks were really well presented. The Preston Jam has done very well to keep going for so long and so consistently, and to provide such great opportunities and support for young people like Josh, Cerys and Elise to develop their digital making abilities (and presentation skills). Their next event is on Monday 1 May.
Manchester Raspberry Jam and CoderDojo
I set up the Manchester Jam back in 2012, around the same time that the Preston one started. Back then, you could only buy one Pi at a time, and only a handful of people in the area owned one. We ran a fairly small event at the local tech community space, MadLab, adopting the format of similar events I’d been to, which was very hands-on and project-based – people brought along their Pis and worked on their own builds. I ran the Jam for a year before moving to Cambridge to work for the Foundation, and I asked one of the regular attendees, Jack, if he’d run it in future. I hadn’t been back until last month, when Clare and I decided to visit.
The Jam is now held at The Shed, a digital innovation space at Manchester Metropolitan University, thanks to Darren Dancey, a computer science lecturer who claims he taught me everything I know (this claim is yet to be peer-reviewed). Jack, Darren, and Raspberry Pi Foundation co-founder and Trustee Pete Lomas put on an excellent event. They have a room for workshops, and a space for people to work on their own projects. It was wonderful to see some of the attendees from the early days still going along every month, as well as lots of new faces. Some of Darren’s students ran a Minecraft Pi workshop for beginners, and I ran one using traffic lights with GPIO Zero and guizero.
The next day, we went along to Manchester CoderDojo, a monthly event for young people learning to code and make things. The Dojo is held at The Sharp Project, and thanks to the broad range of skills of the volunteers, they provide a range of different activities: Raspberry Pi, Minecraft, LittleBits, Code Club Scratch projects, video editing, game making and lots more.
The Cambridge Raspberry Jam is a big event that runs two or three times a year, with quite a different format to the smaller monthly Jams. They have a lecture theatre for talks, a space for workshops, lots of show-and-tell, and even a collection of retailers selling Pis and accessories. It’s a very social event, and always great fun to attend.
The organisers, Mike and Tim, who wrote the foreword for the Guidebook, also run Pi Wars: the annual Raspberry Pi robotics competition. Clare and I went along to this year’s event, where we got to see teams from all over the country (and even one from New Mexico, brought by one of our Certified Educators from Picademy USA, Kerry Bruce) take part in a whole host of robotic challenges. A few of the teams I spoke to have been working on their robots at their local Jams throughout the year. If you’re interested in taking part next year, you can get a team together now and start to make a plan for your 2018 robot! Keep an eye on camjam.me and piwars.org for announcements.
Ely Cathedral has surprisingly good straight line speed for a cathedral. Great job Ely Makers! #PiWars
Raspberry Jam @ Pi Towers
As well as working on supporting other Jams, I’ve also been running my own for the last few months. Held at our own offices in Cambridge, Raspberry Jam @ Pi Towers is a monthly event for people of all ages. We run workshops, show-and-tell and other practical activities. If you’re in the area, our next event is on Saturday 13 May.
In 2013 and 2014, Alan O’Donohoe organised the Raspberry Jamboree, which took place in Manchester to mark the first and second Raspberry Pi birthdays – and it’s coming back next month, this time organised by Claire Dodd Wicher and Les Pounder. It’s primarily an unconference, so the talks are given by the attendees and arranged on the day, which is a great way to allow anyone to participate. There will also be workshops and practical sessions, so don’t miss out! Unless, like me, you’re going to the new Norwich Jam instead…
Start a Jam near you
If there’s no Jam where you live, you can start your own! Download a copy of the brand new Raspberry Jam Guidebook for tips on how to get started. It’s not as hard as you’d think! And we’re on hand if you need any help.
Visiting Jams and hearing from Jam organisers are great ways for us to find out how we can best support our wonderful community. If you run a Jam and you’d like to tell us about what you do, or share your success stories, please don’t hesitate to get in touch. Email me at [email protected], and we’ll try to feature your stories on the blog in future.
Backblaze will be celebrating its ten year anniversary this month. As I was reflecting on our path to get here, I thought some of the issues we encountered along the way are universal to most startups. With that in mind, I’ll write a series of blog posts focused on the entrepreneurial journey. This post is the first and focuses on the birth of Backblaze. I hope you stick around and enjoy the Backblaze story along the way.
What’s Your Problem?
Entrepreneurs build things to solve problems – their own or someone else’s. That problem may be a lack of something they wish existed or something broken they want to fix. Here’s the problem that kicked off Backblaze and how it got noticed:
Brian Wilson, now co-founder and CTO of Backblaze, had been doing tech support for friends and family, as many of us did. One day he got a panicked call from one of those friends, Lise.
Lise: “You’ve got to help me! My computer crashed!”
Brian: “No problem – we’ll get you a new laptop; where’s your backup?”
Lise: “Look, what I don’t need now is a lecture! What I need is for you to get my data back!”
Brian was religious about backing up data and had been for years. He burned his data onto a CD and a DVD, diversifying the media types he used. During the process, Brian periodically read some files from each of the discs to test his backups. Finally, Brian put one disc in his closet and mailed another to his brother in New Mexico to have it offsite. Brian did this every week!
Brian was obviously a lot more obsessive than most of us.
Lise, however, had the opposite problem. She had no backup. And she wasn’t alone.
Whose Problem Is It?
A serious pain-point for one person may turn out to be a serious pain-point for millions.
At this point, it would have been easy just to say, “Well that sucks” or blame Lise. “User error” and “they just don’t get it” are common refrains in tech. But blaming the user doesn’t solve the problem.
Brian started talking to people and asking, “Who doesn’t back up?” He also talked with me and some of the others that are now Backblaze co-founders, and we asked the same question to others.
It turned out that most people didn’t back up their computers. Lise wasn’t the anomaly; Brian was. And that was a problem.
Over the previous decade, everything had gone digital. Photos, movies, financials, taxes, everything. A single crashed hard drive could cause you to lose everything. And drives would indeed crash. Over time everything would be digital, and society as a whole would permanently lose vast amounts of information. Big problem.
Surveying the Landscape
There’s a well-known adage that “Having no competition may mean you have no market.” The corollary I’d add is that “Having competition doesn’t mean the market is full.”
Weren’t There Backup Solutions?
Yes. Plenty. In fact, we joked that we were thirty years too late to the problem.
“Solutions Exist” does not mean “Problem Solved.” Even though many backup solutions were available, most people did not back up their data.
What Were the Current Solutions?
At first glance, it seems clear we’d be competing with other backup services. But when I asked people “How do you back up your data today?”, here were the answers I heard most frequently:
Copy ‘My Documents’ directory to an external drive before going on vacation
Copy files to a USB key
Send important files to Gmail
And “Do I need to back up?” (I’ll talk about this one in another post.)
Sometimes people would mention a particular backup app or service, but this was rare.
What Was Wrong With the Current Solutions?
Existing backup systems had various issues. They would not back up all of the users’ data, for example. They would only back up periodically and thus didn’t have current data. Most solutions were not off-site, so fire, theft or another catastrophe could still wipe out data. Some weren’t automatic, which left more room for neglect and user error.
“Solutions Exist” does not mean “Problem Solved.”
In fairness, some backup products and services had already solved some of these issues. But few people used those products. I talked with a lot of people and asked, “Why don’t you use some backup software/service?”
The most common answer was, “I tried it…and it was too hard and too expensive.” We’d learn a lot more about what “hard” and “expensive” meant along the way.
Finding and Testing Solutions
Focus is critical for execution, but when brainstorming solutions, go broad.
We considered a variety of approaches to help people back up their files.
Peer-to-Peer Backup: This was the original idea. Two people would install our backup software, which would send each person’s data to the other’s computer. This idea had a lot going for it: the data would be off-site, it would work with existing hardware, and it was mildly viral.
Local Drive Backup: The backup software would send data to a USB hard drive. Manually copying files to an external drive was most people’s idea of backing up. However, no good software existed at the time to make this easy. (Time Machine for the Mac hadn’t launched yet.)
Backup To Online Services: Weirder and more novel, this idea stemmed from noticing that online services provided free storage: Flickr for photos; Google Docs for documents and spreadsheets; YouTube for movies; and so on. We considered writing software that would back up each file type to the service that supported it and back up the rest to Gmail.
Backup To Our Online Storage: We’d create a service that backed up data to the cloud. It may seem obvious now, but backing up to the cloud was just one of a variety of possibilities at the time. Also, initially, we didn’t mean ‘our’ storage. We assumed we would use S3 or some other storage provider.
The goal was to come up with a solution that was easy.
We put each solution we came up with through its paces. The goal was to come up with a solution that was easy: Easy for people to use. Easy to understand.
Peer-to-peer backup? First, we’d have to explain what it is (no small task) and then get buy-in from the user to host a backup on their machine. That meant having enough space on each computer, and both needed to be online at the same time. After our initial excitement with the idea, we came to the conclusion that there were too many opportunities for things to go wrong. Verdict: Not easy.
Local drive backup? Not off-site, and it required the purchase of a hard drive. If the drive broke or wasn’t connected, no backup occurred. A useful solution, but again, too many opportunities for things to go wrong. Verdict: Not easy.
Back up to online services? Users needed accounts at each, and none of the services supported all file types, so your data ended up scattered all over the place. Verdict: Not easy.
Back up to our online storage? The backup would be current, kept off-site, and updated automatically. It was easy for people to use, and easy to understand. Verdict: Easy!
Getting To the Solution
Don’t brainstorm forever. Problems don’t get solved on ideas alone.
We decided to back up to our online storage! It met many of the key goals. We started building.
We built a backup software installer, a way to pick files and folders to back up, and the underlying engine that copies the files to remote storage. We tried to make it comfortable by minimizing clicks and questions.
This approach seemed easy enough to use, at least for us, but it turned out not to be for our target users.
We thought about the original answer we heard: “I tried it…and it was too hard and too expensive.”
“Too hard” is not enough information. What was too hard before? Were the icons too small? The text too long? A critical feature missing? Were there too many features to wade through? Or something else altogether?
Dig deeper into users’ actual needs
We reached out to a lot of friends, family, and co-workers and held some low-key pizza and beer focus groups. Those folks walked us through their backup experience. While there were a lot of difficult areas, the most complicated part was setting up what would be backed up.
“I had to get all the files and folders on my computer organized; then I could set up the backup.”
That’s like cleaning the garage. Sounds like a good idea, but life conspires to get in the way, and it doesn’t happen.
We had to solve that or users would never think of our service as ‘easy.’
Takeaway: Dig deeper into users’ actual needs.
Trying to remove the need to “clean the garage,” we asked folks what they wanted to be backed up. They told us they wanted their photos, movies, music, documents, and everything important.
We listened and tried making it easier. We focused our second attempt at a backup solution by pre-selecting everything ‘important.’ We selected the documents folder and then went one step further by finding all the photo, movies, music, and other common file types on the computer. Now users didn’t have to select files and folders – we would do it for them!
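As a rough illustration of this second attempt (not Backblaze’s actual code), the “pre-select everything important” approach amounts to a recursive scan matching common file extensions — the extension list here is invented for the example:

```python
import os

# Hypothetical sketch of the "pre-select important files" approach --
# not Backblaze's actual code. The extension list is invented for
# illustration.
IMPORTANT_EXTENSIONS = {".jpg", ".jpeg", ".gif", ".png", ".psd",
                        ".mov", ".mp4", ".mp3", ".doc", ".pdf"}

def find_important_files(root):
    """Walk `root` and yield files whose extension marks them 'important'."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1].lower() in IMPORTANT_EXTENSIONS:
                yield os.path.join(dirpath, name)
```

The catch, of course, is that any fixed extension list is a guess about what the user considers important.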
More pizza and beer user testing had people ask, “But how do I know that my photos are being backed up?”
We told them, “we’re searching your whole computer for photos.”
“But my photos are in this weird format: .jpg, are those included? .gif? .psd?”
We learned that the backup process felt nebulous to users since they wouldn’t know what exactly would be selected. Users would always feel uncomfortable – and uncomfortable isn’t ‘easy.’
Takeaway: No, really, keep digging deeper into users’ actual needs. Identify their real problem, not the solution they propose.
We took a step back and asked, “What do we know?”
We want all of our “important” files backed up, but it can be hard for us to identify what files those are. Having us guess makes us uncomfortable. So, forget the tech. What experience would be the right one?
Our answer was that the computer would just magically be backed up to the cloud.
Then one of our co-founders Tim wondered, “what if we didn’t ask any questions and just backed up everything?”
At first, we all looked at him askance. Back up everything? That was a lot of data. How would that be possible? But we came back to, “Is this the right answer? Yes. So let’s see if we can make it work.”
So we flipped the entire backup approach on its head.
We didn’t ask users, “What do you want to have backed up?” We asked, “What do you NOT want to be backed up?” If you didn’t know, we’d back up all your data. It took away the scary “pick your files” question and made people comfortable that all their necessary data was being backed up.
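A minimal sketch of the flipped, exclude-only selection logic — the excluded paths and extensions are illustrative placeholders, not Backblaze’s real defaults:

```python
import os

# Hypothetical sketch of the flipped selection logic: back up everything
# by default and honor only an explicit exclusion list. The excluded
# paths and extensions are illustrative, not Backblaze's real defaults.
EXCLUDED_DIRS = {"/tmp", "/var/cache"}
EXCLUDED_EXTENSIONS = {".tmp", ".swap"}

def should_back_up(path):
    """Default to True; skip only paths the exclusion rules opt out."""
    for excluded in EXCLUDED_DIRS:
        if path == excluded or path.startswith(excluded + os.sep):
            return False
    if os.path.splitext(path)[1].lower() in EXCLUDED_EXTENSIONS:
        return False
    return True
```

Defaulting to “yes” is what removed the scary decision: the user never has to enumerate what matters, only (optionally) what doesn’t.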
We ran that experience by users, and their surprised response was, “Really, that’s it?” Hallelujah.
Takeaway: Keep digging deeper. Don’t let the tech get in the way of understanding the real problem.
Pricing isn’t a side-note – it’s part of the product. Understand how customers will perceive your pricing.
We had developed a solution that was easy to use and easy to understand. But could we make it easy to afford? How much do we charge?
We would be storing a lot of data for each customer. The more data they needed to store, the more it would cost us. We planned to put the data on S3, which charged $0.15/GB/month. So it would seem logical to follow that same pricing model.
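The per-GB arithmetic is easy to check with the figures quoted in the post:

```python
# Checking the pass-through pricing math from the post: S3 charged
# $0.15/GB/month at the time.
S3_PRICE_PER_GB_MONTH = 0.15

def monthly_cost_usd(gigabytes):
    """Cost of storing `gigabytes` for one month at pass-through S3 pricing."""
    return gigabytes * S3_PRICE_PER_GB_MONTH

# Even a modest 100 GB of data would cost $15/month at pass-through
# pricing -- already triple the flat $5/month users later called a
# "no-brainer".
```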
People thought of the value of the service rather than an amount of storage.
People had no idea how much data they had on their hard drive and certainly not how much of it needed to be backed up. Worse, they could be off by 1000x if they weren’t sure about the difference between megabytes and gigabytes, as some were.
We had to solve that too, or users would never think of our service as ‘easy.’
I asked everyone I could find: “If we were to provide you a service that would automatically back up all of the data on your computer over the internet, what would that be worth to you?”
What I heard back was a bell-curve:
A small number of people said, “$0. It should be free. Everything on the net is free!”
A small number of people said, “$50 – $100/month. That’s incredibly valuable!”
But by far the majority said, “Hmm. If it were $5/month, that’d be a no-brainer.”
A few interesting takeaways:
Everyone assumed it would be a monthly charge even though I didn’t ask, “What would you pay per month?”
No one said, “I’d pay $x/GB/month,” so people thought of the value of the service rather than an amount of storage.
There may have been opportunities to offer a free service and attempt to monetize it in other ways or to charge $50 – $100/month/user, but these were the small markets.
At $5/month, there was a significant slice of the population that was excited to use it.
Conclusion On the Solution
Over and over again we heard, “I tried backing up, but it was too hard and too expensive.”
After really understanding what was complicated, we finally got our real solution: An unlimited online backup service that would back up all your data automatically and charge just $5/month.
Easy to use, easy to understand, and easy to afford. Easy in the ways that mattered to the people using the service.
Looking backward, things often seem obvious. But we learned a lot along the way:
Having competition doesn’t mean the market is full. Just because solutions exist doesn’t mean the problem is solved.
Don’t brainstorm forever. Problems don’t get solved on ideas alone. Brainstorm options, but don’t get stuck in the brainstorming phase.
Dig deeper into users’ actual needs. Then keep digging. Don’t let your knowledge of tech get in the way of your understanding the user. And be willing to shift course as you learn more.
Pricing isn’t a side-note. It’s part of the product. Understand how customers will perceive your pricing.
Just because we knew the right solution didn’t mean that it was possible. I’ll talk about that, along with how to launch, getting early traction, and more in future posts. What other questions do you have? Leave them in the comments.
Multi-talented maker Giorgio Sancristoforo has used a Raspberry Pi and Sense HAT to create Tableau, a generative music album. It’s an innovative idea: the music constantly evolves as it reacts to environmental stimuli like atmospheric pressure, humidity, and temperature.
“There is no doubt that, as music is removed by the phonograph record from the realm of live production and from the imperative of artistic activity and becomes petrified, it absorbs into itself, in this process of petrification, the very life that would otherwise vanish.”
Creating generative music
“I’ve been dreaming about using portable microcomputers to create a generative music album,” explains Giorgio. “Now my dream is finally a reality: this is my first portable generative LP (PGLP)”. Tableau uses both a Raspberry Pi 2 and a Sense HAT: the HAT provides the data for the album’s musical evolution via its range of onboard sensors.
Photo credit: Giorgio Sancristoforo
The Sense HAT was originally designed for use aboard the International Space Station (ISS) as part of the ongoing Astro Pi challenge. It has, however, become a staple within the Raspberry Pi maker community. This is partly thanks to the myriad of possibilities offered by its five onboard sensors, five-button joystick, and 8 × 8 LED matrix.
Photo credit: Giorgio Sancristoforo
The final release of Tableau consists of a limited edition of fifty PGLPs: each is set up to begin playing as soon as power is connected, and the music will continue to evolve indefinitely. “Instead of being reproduced as on a CD or in an MP3 file, the music is spontaneously generated and arranged while you are listening to it,” Giorgio explains on his website. “It never sounds the same. Tableau creates an almost endless number of mixes of the LP (4 × 12 factorial). Each time you will listen, the music will be different, and it will keep on evolving until you switch the power off.”
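The mix count Giorgio quotes is easy to verify:

```python
import math

# Verifying the "4 x 12 factorial" mix count quoted for Tableau:
mixes = 4 * math.factorial(12)
print(mixes)  # 1916006400 -- roughly 1.9 billion possible mixes
```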
Photo credit: Giorgio Sancristoforo
Experiment with the Sense HAT
What really interests us is how the sound of Tableau might alter in different locations. Would it sound different in Cambridge as opposed to the deserts of Mexico? What about Antarctica versus the ISS?
If Giorgio’s project has piqued your interest, why not try using our free data logging resource for the Sense HAT? You can use it to collect information from the HAT’s onboard sensors and create your own projects. How about collecting data over a year, and transforming this into your own works of art?
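For a flavor of how environmental readings might steer generative music, here is an illustrative sketch (not Giorgio’s actual code). The mapping ranges are invented; on a real Pi, the three inputs would come from the Sense HAT’s get_pressure(), get_humidity(), and get_temperature() methods:

```python
# Illustrative sketch (not Giorgio's actual code) of mapping environmental
# readings onto musical parameters. The input and output ranges are
# invented; on a real Pi the three readings would come from the Sense
# HAT's get_pressure(), get_humidity(), and get_temperature() methods.

def map_range(value, in_low, in_high, out_low, out_high):
    """Linearly map a clamped sensor reading onto a musical parameter."""
    value = max(in_low, min(in_high, value))
    scale = (value - in_low) / (in_high - in_low)
    return out_low + scale * (out_high - out_low)

def music_parameters(pressure_mbar, humidity_pct, temperature_c):
    """Turn raw environment readings into tempo/pitch/volume choices."""
    return {
        "tempo_bpm": round(map_range(pressure_mbar, 950, 1050, 60, 140)),
        "pitch_semitones": round(map_range(temperature_c, -10, 40, -12, 12)),
        "volume": round(map_range(humidity_pct, 0, 100, 0.2, 1.0), 2),
    }
```

With a mapping like this, the same composition played in Cambridge and in the Mexican desert really would sound different.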
Even if you don’t have access to the Sense HAT, you can experience it via the Sense HAT desktop emulator. This is a great solution if you want to work on Sense HAT-based projects in the classroom, as it reduces the amount of hardware you need.
If you’ve already built a project using the Sense HAT, make sure to share it in the comments below. We would love to see what you have been making!
On September 1st we asked you to predict when Backblaze would reach our 20 Billion Files Restored mark. We had a ton of entrants!! Over two thousand people hazarded a guess. It is our pleasure to announce that we’ve finally hit that milestone, and along with it, we’re announcing the winners of our contest.
Before we reveal the date and time that we crossed that threshold, we first want to point out that the closest guess was only off by 23 minutes. Second closest? 57 minutes. That’s kind of remarkable. The 10 winners all pinpointed the exact date, and the furthest winner was only four hours and six minutes off the mark. Very impressive!
Congratulations to our winners:
Lance – Bismarck, North Dakota
Bartosz – Warsaw, Poland
Jeremy – Evergreen, Colorado
Justin – Los Angeles, California
Andy – London, UK
Jeffrey – Round Rock, Texas
Jose – Merida, Mexico
Maria – New York, New York
Rizwan – Surrey, UK
Max – Howard Beach, New York
11/20/2016 – 10:57 AM
That was the exact time when we restored the 20,000,000,000th file. That’s lots of memories, documents, and projects saved. The number of files per Backblaze restore varies depending on the restore method used. The most common use case for our restores is when folks forget one or two files and do small restores of just a folder or two.
Restore Fun Facts For a Typical Month:
Where restores are created:
96.3% of restores are done on the web
3.7% are done via our mobile apps
Of all restores:
97.8% are ZIP restores
1.7% are USB HD restores
0.5% are USB Flash Drive restores
The average size of restores:
ZIP restores (web & mobile): 25 GB
USB HD – 1.1 TB (1,100 GB)
USB Flash – 63 GB
Based on the amount of data in ZIP file restores, the size ranges (in GB) break down as: 1 – 10, 10 – 25, 25 – 50, 50 – 75, 75 – 100, 100 – 200, 200 – 300, 300 – 400, and 400 – 500.
Backblaze was started with the goal of preventing data loss. Even though we’re nearing 300 Petabytes of data stored, we consider our 20 Billion Files Restored the benchmark that we’re most proud of because it validates our ability to help our customers out when they need us most. Thank you for being loyal Backblaze fans, and while we always hope that folks won’t need to create restores, we’ll be here for you if you do!
An Obama commission has published a report on how to “Enhance Cybersecurity”. It’s promoted as having been written by neutral, bipartisan, technical experts. Instead, it’s almost entirely dominated by special interests and the Democrat politics of the outgoing administration.
In this post, I’m going through a random list of some of the 53 “action items” proposed by the document. I show how they are policy issues, not technical issues. Indeed, much of the time the technical details are warped to conform to special interests.
IoT passwords The recommendations include such things as Action Item 2.1.4:
Initial best practices should include requirements to mandate that IoT devices be rendered unusable until users first change default usernames and passwords.
This recommendation for changing default passwords is repeated many times. It comes from the way the Mirai worm exploits devices by using hardcoded/default passwords.
But this is a misunderstanding of how these devices work. Take, for example, the infamous Xiongmai camera. It has user accounts on the web server to control the camera. If the user forgets the password, the camera can be reset to factory defaults by pressing a button on the outside of the camera.
But here’s the deal with security cameras. They are placed at remote sites miles away, up on the second story where people can’t mess with them. In order to reset them, you need to put a ladder in your truck and drive 30 minutes out to the site, then climb the ladder (an inherently dangerous activity). Therefore, Xiongmai provides a RESET.EXE utility for remotely resetting them. That utility happens to connect via Telnet using a hardcoded password.
The above report misunderstands what’s going on here. It sees Telnet and a hardcoded password, and makes assumptions. Some people assume that this is the normal user account — it’s not; it’s unrelated to the user accounts on the web server portion of the device. Requiring the user to change the password on the web service would have no effect on the Telnet service. Other people assume the Telnet service is accidental, and that good security hygiene would remove it. Instead, it’s an intended feature of the product, used to remotely reset the device. Fixing the “password” issue as described in the above recommendations would simply mean the manufacturer would create a different, custom backdoor that hackers would eventually reverse engineer, creating a MiraiV2 botnet. Instead of banning backdoors, security guides need to come up with a standard for remote reset.
That characterization of Mirai as an IoT botnet is wrong. Mirai is a botnet of security cameras. Security cameras are fundamentally different from IoT devices like toasters and fridges because they are often exposed to the public Internet. To stream video on your phone from your security camera, you need a port open on the Internet. Non-camera IoT devices, however, are overwhelmingly protected by a firewall, with no exposure to the public Internet. While you can create a botnet of Internet cameras, you cannot create a botnet of Internet toasters.
The point I’m trying to demonstrate here is that the above report was written by policy folks with little grasp of the technical details of what’s going on. They use Mirai to justify several of their “Action Items”, none of which actually apply to the technical details of Mirai. It has little to do with IoT, passwords, or hygiene.
Action Item 1.2.1: The President should create, through executive order, the National Cybersecurity Private–Public Program (NCP3) as a forum for addressing cybersecurity issues through a high-level, joint public–private collaboration.
We’ve had public-private partnerships to secure cyberspace for over 20 years, such as the FBI InfraGard partnership. President Clinton had a plan in 1998 to create a public-private partnership to address cyber vulnerabilities. President Bush declared public-private partnerships the “cornerstone” of his 2003 plan to secure cyberspace.
Here we are 20 years later, and this document is full of new, naive proposals for public-private partnerships. There’s no analysis of why they have failed in the past, or a discussion of which ones have succeeded.
The many calls for public-private programs reflect the left-wing nature of this supposed “bipartisan” document, which sees government as a paternalistic entity that can help. The right-wing doesn’t believe the government provides any value in these partnerships. In my 20 years of experience with government public-private partnerships in cybersecurity, I’ve found them to be a time waster at best and, at worst, a way to coerce “voluntary measures” out of companies that hurt the public’s interest.
Build a wall and make China pay for it
Action Item 1.3.1: The next Administration should require that all Internet-based federal government services provided directly to citizens require the use of appropriately strong authentication.
This would cost at least $100 per person, for 300 million people, or $30 billion. In other words, it’ll cost more than Trump’s wall with Mexico.
Hardware tokens are cheap. Blizzard (a popular gaming company) must deal with widespread account hacking from “gold sellers”, and provides second factor authentication to its gamers for $6 each. But that ignores the enormous support costs involved. How does a person prove their identity to the government in order to get such a token? To replace a lost token? When old tokens break? What happens if somebody’s token is stolen?
And that’s the best case scenario. Other options, like using cellphones as a second factor, are non-starters.
This is actually not a bad recommendation, as far as government services are involved, but it ignores the costs and difficulties involved.
But then the recommendations go on to suggest this for private sector as well:
Specifically, private-sector organizations, including top online retailers, large health insurers, social media companies, and major financial institutions, should use strong authentication solutions as the default for major online applications.
No, no, no. There is no reason for a “top online retailer” to know your identity. I lie about my identity. Amazon.com thinks my name is “Edward Williams”, for example.
They get worse with:
Action Item 1.3.3: The government should serve as a source to validate identity attributes to address online identity challenges.
In other words, they are advocating a cyber-dystopic police-state wet-dream where the government controls everyone’s identity. We already see how this fails with Facebook’s “real name” policy, where everyone from political activists in other countries to LGBTQ in this country get harassed for revealing their real names.
Anonymity and pseudonymity are precious rights on the Internet that we now enjoy — rights endangered by the radical policies in this document. This document frequently claims to promote security “while protecting privacy”. But the government doesn’t protect privacy — much of what we want from cybersecurity is to protect our privacy from government intrusion. This is nothing new, you’ve heard this privacy debate before. What I’m trying to show here is that the one-side view of privacy in this document demonstrates how it’s dominated by special interests.
Action Item 1.4.2: All federal agencies should be required to use the Cybersecurity Framework.
The “Cybersecurity Framework” is a bunch of nonsense that would require another long blogpost to debunk. It requires months of training and years of experience to understand. It contains things like “DE.CM-4: Malicious code is detected”, as if that’s a thing organizations are able to do.
All the while it ignores the most common cyber attacks (SQL/web injections, phishing, password reuse, DDoS). It’s a typical example where organizations spend enormous amounts of money following process while getting no closer to solving what the processes are attempting to solve. Federal agencies using the Cybersecurity Framework are no safer from my pentests than those who don’t use it.
It gets even crazier:
Action Item 1.5.1: The National Institute of Standards and Technology (NIST) should expand its support of SMBs in using the Cybersecurity Framework and should assess its cost-effectiveness specifically for SMBs.
Small businesses can’t even afford to read the “Cybersecurity Framework”. Simply reading the doc and trying to understand it would exceed their entire IT/computer budget for the year. It would take a high-priced consultant earning $500/hour to tell them that “DE.CM-4: Malicious code is detected” means “buy antivirus and keep it up to date”.
Software liability is a hoax invented by the Chinese to make our IoT less competitive
Action Item 2.1.3: The Department of Justice should lead an interagency study with the Departments of Commerce and Homeland Security and work with the Federal Trade Commission, the Consumer Product Safety Commission, and interested private sector parties to assess the current state of the law with regard to liability for harm caused by faulty IoT devices and provide recommendations within 180 days.
For over a decade, leftists in the cybersecurity industry have been pushing the concept of “software liability”. Every time there is a major new development in hacking, such as the worms around 2003, they come out with documents explaining why there’s a “market failure” and that we need liability to punish companies to fix the problem. Then the problem is fixed, without software liability, and the leftists wait for some new development to push the theory yet again.
It’s especially absurd for the IoT marketspace. The harm, as they imagine, is DDoS. But the majority of devices in Mirai were sold by non-US companies to non-US customers. There’s no way US regulations can stop that.
What US regulations will stop is IoT innovation in the United States. Regulations are so burdensome, and liability lawsuits so punishing, that they will kill all innovation within the United States. If you want to get rich with a clever IoT Kickstarter project, forget about it: your entire development budget will go to cybersecurity. The only companies that will be able to afford to ship IoT products in the United States will be large industrial concerns like GE that can afford the overhead of regulation/liability.
Liability is a left-wing policy issue, not one supported by technical analysis. Software liability has proven to be immaterial in any past problem and current proponents are distorting the IoT market to promote it now.
Action Item 4.1.1: The next President should initiate a national cybersecurity workforce program to train 100,000 new cybersecurity practitioners by 2020.
The problem in our industry isn’t the lack of “cybersecurity practitioners”, but the overabundance of “insecurity practitioners”.
Take “SQL injection” as an example. It’s been the most common way hackers break into websites for 15 years. It happens because programmers, those building web-apps, blindly paste input into SQL queries. They do that because they’ve been trained to do it that way. All the textbooks on how to build web-apps teach them this. All the examples show them this.
So you have government programs on one hand pushing tech education, teaching kids to build web-apps with SQL injection. Then you propose to train a second group of people to fix the broken stuff the first group produced.
The solution to SQL/website injections is not more practitioners, but stopping programmers from creating the problems in the first place. The solution to phishing is to use the tools already built into Windows and networks that sysadmins use, not adding new products/practitioners. These are the two most common problems, and they happen not because of a lack of cybersecurity practitioners, but because the lack of cybersecurity as part of normal IT/computers.
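To make the SQL injection point concrete, here is a minimal sketch using an in-memory SQLite database (the table and rows are invented for the example), contrasting the textbook anti-pattern with the parameterized fix:

```python
import sqlite3

# A minimal demonstration of the SQL injection pattern described above,
# using an in-memory SQLite database (table and rows invented for the
# example).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user_unsafe(name):
    # The textbook anti-pattern: pasting input straight into the query.
    # Input like "' OR '1'='1" rewrites the query's logic entirely.
    query = "SELECT name FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # The fix: a parameterized query, so the driver treats the input
    # strictly as data rather than as SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()
```

Calling find_user_unsafe with the input `' OR '1'='1` returns every row in the table, while find_user_safe returns nothing for the same malicious input — the kind of habit textbooks should be teaching from day one.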
I point this out to demonstrate yet again that the document was written by policy people with little or no technical understanding of the problem.
Action Item 3.1.1: To improve consumers’ purchasing decisions, an independent organization should develop the equivalent of a cybersecurity “nutritional label” for technology products and services—ideally linked to a rating system of understandable, impartial, third-party assessment that consumers will intuitively trust and understand.
This can’t be done. Grab some IoT devices, like my thermostat, my car, or a Xiongmai security camera used in the Mirai botnet. These devices are so complex that no “nutritional label” can be made from them.
One of the things you’d like to know is all the software dependencies, so that if there’s a bug in OpenSSL, for example, then you know your device is vulnerable. Unfortunately, that requires a nutritional label with 10,000 items on it.
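As a small illustration of how quickly dependency lists grow, this sketch enumerates the packages installed in an ordinary Python environment; a device “nutritional label” would face the same enumeration problem across all of its firmware components, at far larger scale:

```python
# A small illustration of the dependency-enumeration problem: even a
# stock Python environment carries a long list of installed packages,
# each one a potential OpenSSL-style liability that a "nutritional
# label" would have to track.
from importlib import metadata

installed = sorted({dist.metadata["Name"]
                    for dist in metadata.distributions()
                    if dist.metadata["Name"]})
print(len(installed), "installed packages")
```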
Or, one thing you’d want to know is that the device has no backdoor passwords. But that would miss the Xiongmai devices. The web service has no backdoor passwords. If you caught the Telnet backdoor password and removed it, then you’d miss the special secret backdoor that hackers would later reverse engineer.
This is a policy position chasing a non-existent technical issue, pushed by Peiter Zatko, who has gotten hundreds of thousands of dollars in government grants to push the issue. It’s his way of getting rich and has nothing to do with sound policy.
Cyberczars and ambassadors Various recommendations call for the appointment of various CISOs, Assistant to the President for Cybersecurity, and an Ambassador for Cybersecurity. But nowhere does it mention these should be technical posts. This is like appointing a Surgeon General who is not a doctor.
Government’s problems with cybersecurity stem from the way technical knowledge is so disrespected. The current cyberczar prides himself on his lack of technical knowledge, because that helps him see the bigger picture.
Ironically, many of the other Action Items are about training cybersecurity practitioners, employees, and managers. None of this can happen as long as leadership is clueless. Technical details matter, as I show above with the Mirai botnet. Subtlety and nuance in technical details can call for opposite policy responses.
Conclusion This document is promoted as being written by technical experts. However, nothing in the document is neutral technical expertise. Instead, it’s almost entirely a policy document dominated by special interests and left-wing politics. In many places it makes recommendations to the incoming Republican president. His response should be to round-file it immediately.
I only chose a few items, as this blogpost is long enough as it is. I could pick almost any of the 53 Action Items to demonstrate how they are policy- and special-interest-driven rather than reflecting technical expertise.
The Backblaze datacenter team continues to expand, and we have a new hire in Joe! Joe’s joining us as a Datacenter Site Manager and will be learning the ropes with our core datacenter team in Sacramento! He hails from a local alt-weekly newspaper and enjoys cycling, but let’s learn a bit more about Joe, shall we?
What is your Backblaze Title? Datacenter Site Manager.
Where are you originally from? I’m from Vacaville, CA.
What attracted you to Backblaze? Aaron McCormack attracted me to Backblaze using shiny red pods as bait.
What do you expect to learn while being at Backblaze? This is the first tech company I’ve worked for so I expect to learn a lot. From what I can tell from Larry Wilke, I’m going to be learning a lot about juggling.
Where else have you worked? I’ve worked at UC Davis computer shop as the technician lead and on the Sacramento News & Review Ops team.
What’s your dream job? My childhood dream was to be a marine biologist.
Favorite place you’ve traveled? Mazatlan, Mexico.
Favorite hobby? Cycling, making mead, and outdoorsy stuff.
Star Trek or Star Wars? Stargate.
Favorite food? Chimichangas.
Homemade mead, Stargate, and chimichangas sounds like a wonderful way to spend the evening! All Backblaze jobs involve some amount of juggling, but we believe in you. Welcome aboard Joe!
I dislike commenting on politics. I think it’s difficult to contribute any novel thought – and in today’s hyper-polarized world, stating an unpopular or half-baked opinion is a recipe for losing friends or worse. Still, with many of my colleagues expressing horror and disbelief over what happened on Tuesday night, I reluctantly decided to jot down my thoughts.
I think that in trying to explain away the meteoric rise of Mr. Trump, many of the mainstream commentators have focused on two phenomena. Firstly, they singled out the emergence of “filter bubbles” – a mechanism that allows people to reinforce their own biases and shields them from opposing views. Secondly, they implicated the dark undercurrents of racism, misogynism, or xenophobia that still permeate some corners of our society. From that ugly place, the connection to Mr. Trump’s foul-mouthed populism was not hard to make; his despicable bragging about women aside, to his foes, even an accidental hand gesture or an inane 4chan frog meme was proof enough. Once we crossed this line, the election was no longer about economic policy, the environment, or the like; it was an existential battle for equality and inclusiveness against the forces of evil that lurk in our midst. Not a day went by without a comparison between Mr. Trump and Adolf Hitler in the press. As for the moderate voters, the pundits had an explanation, too: the right-wing filter bubble must have clouded their judgment and created a false sense of equivalency between a horrid, conspiracy-peddling madman and our cozy, liberal status quo.
Now, before I offer my take, let me be clear that I do not wish to dismiss the legitimate concerns about the overtones of Mr. Trump’s campaign. Nor do I desire to downplay the scale of discrimination and hatred that the societies around the world are still grappling with, or the potential that the new administration could make it worse. But I found the aforementioned explanation of Mr. Trump’s unexpected victory to be unsatisfying in many ways. Ultimately, we all live in bubbles and we all have biases; in that regard, not much sets CNN apart from Fox News, Vox from National Review, or The Huffington Post from Breitbart. The reason why most of us would trust one and despise the other is that we instinctively recognize our own biases as more benign. After all, in the progressive world, we are fighting for an inclusive society that gives all people a fair chance to succeed. As for the other side? They seem like a bizarre, cartoonishly evil coalition of dimwits, racists, homophobes, and the ultra-rich. We even have serious scientific studies to back that up; their authors breathlessly proclaim that the conservative brain is inferior to the progressive brain. Unlike the conservatives, we believe in science, so we hit the “like” button and retweet the news.
But here’s the thing: I know quite a few conservatives, many of whom have probably voted for Mr. Trump – and they are about as smart, as informed, and as compassionate as my progressive friends. I think that the disconnect between the worldviews stems from something else: if you are a well-off person in a coastal city, you know people who are immigrants or who belong to other minorities, making you acutely attuned to their plight; but you may lack the same, deeply personal connection to – say – the situation of the lower middle class in the Midwest. You might have seen surprising charts or read a touching story in Mother Jones a few years back, but it’s hard to think of them as individuals; they are more of a socioeconomic obstacle, a problem to be solved. The same goes for our understanding of immigration or globalization: these phenomena make our high-tech hubs more prosperous and more open; the externalities of our policies, if any, are just an abstract price that somebody else ought to bear for doing what’s morally right. And so, when Mr. Trump promises to temporarily ban travel from Muslim countries linked to terrorism or anti-American sentiments, we (rightly) gasp in disbelief; but when Mr. Obama paints an insulting caricature of rural voters as simpletons who “cling to guns or religion or antipathy to people who aren’t like them”, we smile and praise him for his wit, not understanding how the other side could be so offended by the truth. Similarly, when Mrs. Clinton chuckles while saying “we are going to put a lot of coal miners out of business” to a cheering crowd, the scene does not strike us as thoughtless, offensive, or in poor taste. Maybe we will read a story about the miners in Mother Jones some day?
Of course, liberals take pride in caring for the common folk, but I suspect that their leaders’ attempts to reach out to the underprivileged workers in the “flyover states” often come across as ham-fisted and insincere. The establishment schools the voters about the inevitability of globalization, as if it were some cosmic imperative; they are told that to reject the premise would not just be wrong – but that it’d be a product of a diseased, nativist mind. They hear that the factories simply had to go to China or Mexico, and the goods just have to come back duty-free – all so that our complex, interconnected world can be a happier place. The workers are promised entitlements, but it stands to reason that they want dignity and hope for their children, not a lifetime on food stamps. The idle, academic debates about automation, post-scarcity societies, and Universal Basic Income probably come across as far-fetched and self-congratulatory, too.
The discourse is poisoned by cognitive biases in many other ways. The liberal media keeps writing about the unaccountable right-wing oligarchs who bankroll the conservative movement and supposedly poison people’s minds – but they offer nothing but praise when progressive causes are being bankrolled by Mr. Soros or Mr. Bloomberg. They claim that the conservatives represent “post-truth” politics – but their fact-checkers shoot down conservative claims over fairly inconsequential mistakes, while giving their favored politicians a pass on half-true platitudes about immigration, gun control, crime, or the sources of inequality. Mr. Obama sneers at the conservative bias of Fox News, but has no concern about the striking tilt to the left in academia or in the mainstream press. The Economist finds it appropriate to refer to Trump supporters as “trumpkins” in print – but it would be unthinkable for them to refer to the fans of Mrs. Clinton using any sort of mocking term. The pundits ponder the bold artistic statement made by the nude statues of the Republican nominee – but they would be disgusted if a conservative sculptor portrayed the Democratic counterpart in a similarly unflattering light. The commentators on MSNBC read into every violent incident at Trump rallies – but when a random group of BLM protesters starts chanting about killing police officers, we all agree it would not be fair to cast the entire movement in a negative light.
Most progressives are either oblivious to these biases, or dismiss them as a harmless casualty of fighting the good fight. Perhaps so – and it is not my intent to imply equivalency between the causes of the left and of the right. But in the end, I suspect that the liberal echo chamber contributed to the election of Mr. Trump far more than anything that ever transpired on the right. It marginalized and excluded legitimate but alien socioeconomic concerns from the mainstream political discourse, binning them with truly bigoted and unintelligent speech – and leaving the “flyover underclass” no option other than to revolt. And it wasn’t just a revolt of the awful fringes. On the right, we had Mr. Trump – a clumsy outsider who eschews many of the core tenets of the conservative platform, and who convincingly represents neither the neoconservative establishment of the Bush era nor the Bible-thumping religious right of the Tea Party. On the left, we had Mr. Sanders – an unaccomplished Senator who offered simplistic but moving slogans, who painted the accumulation of wealth as the source of our ills, and who promised to mold the United States into an idyllic version of the social democracies of Europe – supposedly governed by the workers, and not by the exploitative elites.
I think that people rallied behind Mr. Sanders and Mr. Trump not because they particularly loved the candidates or took all their promises seriously – but because they had no other credible herald for their cause. When the mainstream media derided their rebellion and the left simply laughed it off, it only served as a battle cry. When tens of millions of Trump supporters were labeled as xenophobic and sexist deplorables who deserved no place in politics, it only pushed more moderates toward the fringe. Suddenly, rational people could see themselves voting for a politically inexperienced and brash billionaire – a guy who talks about cutting taxes for the rich, who wants to cozy up to Russia, and whose VP pick previously wasn’t so sure about LGBT rights. I think it all happened not because of Mr. Trump’s character traits or thoughtful political positions, and not because half of the country hates women and minorities. He won because he was the only one to promise to “drain the swamp” – and to promise hope, not handouts, to the lower middle class.
There is a risk that this election will prove to be a step back for civil rights, or that Mr. Trump’s bold but completely untested economic policies will leave the world worse off; while not certain, it pains me to even contemplate this possibility. When we see injustice, we should fight tooth and nail. But for now, I am not swayed by the preemptively apocalyptic narrative on the left. Perhaps naively, I have faith in the benevolence of our compatriots and the strength of the institutions of – as cheesy as it sounds – one of the great nations of the world.
aws-maintenance-lambda – A lambda function to send alerts (to Slack) on AWS maintenance events. While the email from AWS includes only the instance id, the alert will include the Name of the instance and owner (team or individual on Slack) from the appropriate tags. It is an open source project licensed under the Apache License, Version 2.0.
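To illustrate the enrichment step such a function performs, here is a minimal sketch. The function name, tag keys ("Name", "Owner"), and message layout below are assumptions for illustration, not the project's actual code:

```python
# Sketch of enriching an AWS maintenance notice with instance tags.
# The tag keys and message format here are illustrative assumptions.

def build_slack_alert(instance_id, event_description, tags):
    """Return a Slack message text that names the instance and its owner,
    falling back to the raw instance id when tags are missing."""
    name = tags.get("Name", instance_id)
    owner = tags.get("Owner", "unassigned")
    return (f"AWS maintenance scheduled for *{name}* ({instance_id})\n"
            f"Owner: {owner}\n"
            f"Details: {event_description}")

# In a real Lambda, the tags would come from EC2, e.g. via boto3:
#   ec2 = boto3.client("ec2")
#   resp = ec2.describe_instances(InstanceIds=[instance_id])
#   instance = resp["Reservations"][0]["Instances"][0]
#   tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
# and the resulting text would be POSTed to a Slack incoming-webhook URL.
```

The point of the lookup is simply that a message like "maintenance for web-1, owner @platform-team" is actionable, while the bare instance id in the AWS email is not.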
lambda-cd is a proof-of-concept implementation of continuous delivery for Lambda functions.
cloud-search-query is an ORM-like wrapper for building AWS CloudSearch structured queries.
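To show the kind of output an ORM-like builder targets, here is a hypothetical sketch; the class and method names are invented for illustration and are not cloud-search-query's actual API. CloudSearch's structured syntax uses prefix operators such as `(and ...)`, `(term ...)`, and `(range ...)`:

```python
# Hypothetical sketch of an ORM-like builder for CloudSearch structured
# queries; class and method names are illustrative, not the real library.

class StructuredQuery:
    def __init__(self):
        self._clauses = []

    def term(self, field, value):
        # Exact-match clause: (term field=... 'value')
        self._clauses.append(f"(term field={field} '{value}')")
        return self

    def between(self, field, lo, hi):
        # Inclusive range clause: (range field=... [lo,hi])
        self._clauses.append(f"(range field={field} [{lo},{hi}])")
        return self

    def build(self):
        # A single clause stands alone; multiple clauses are AND-ed.
        if len(self._clauses) == 1:
            return self._clauses[0]
        return "(and " + " ".join(self._clauses) + ")"
```

The chained calls compose clauses in Python, and `build()` emits the structured-query string that would be passed to CloudSearch's search endpoint.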
New Customer Success Stories
Apposphere – Using AWS and bitfusion.io from the AWS Marketplace, Apposphere can scale 50 to 60 percent month-over-month while keeping customer satisfaction high. Based in Austin, Texas, the Apposphere mobile app delivers real-time leads from social media channels.
CADFEM – CADFEM uses AWS to make complex simulation software more accessible to smaller engineering firms, helping them compete with much larger ones. The firm specializes in simulation software and services for the engineering industry.
Mambu – Using AWS, Mambu helped one of its customers launch the United Kingdom’s first cloud-based bank, and the company is now on track for tenfold growth, giving it a competitive edge in the fast-growing fintech sector. Mambu is an all-in-one SaaS banking platform for managing credit and deposit products quickly, simply, and affordably.
Okta – Okta uses AWS to get new services into production in days instead of weeks. Okta creates products that use identity information to grant people access to applications on multiple devices at any time, while still enforcing strong security protections.
PayPlug – PayPlug is a startup created in 2013 that developed an online payment solution. It differentiates itself by the simplicity of its services and its ease of integration on e-commerce websites.
Rent-A-Center – Rent-A-Center is a leading renter of furniture, appliances, and electronics to customers in the United States, Canada, Puerto Rico, and Mexico. Rent-A-Center uses AWS to manage its new e-commerce website, scale to support a 1,000 percent spike in site traffic, and enable a DevOps approach.
UK Ministry of Justice – By going all in on the AWS Cloud, the UK Ministry of Justice (MoJ) can use technology to enhance the effectiveness and fairness of the services it provides to British citizens. The MoJ is a ministerial department of the UK government. MoJ had its own on-premises data center, but lacked the ability to change and adapt rapidly to the needs of its citizens. As it created more digital services, MoJ turned to AWS to automate, consolidate, and deliver constituent services.
There’s another leak of NSA hacking tools and data from the Shadow Brokers. This one includes a list of hacked sites.
According to analyses from researchers here and here, Monday’s dump contains 352 distinct IP addresses and 306 domain names that purportedly have been hacked by the NSA. The timestamps included in the leak indicate that the servers were targeted between August 22, 2000 and August 18, 2010. The addresses include 32 .edu domains and nine .gov domains. In all, the targets were located in 49 countries, with the top 10 being China, Japan, Korea, Spain, Germany, India, Taiwan, Mexico, Italy, and Russia. Vitali Kremez, a senior intelligence analyst at security firm Flashpoint, also provides useful analysis here.
The dump also includes various other pieces of data. Chief among them are configuration settings for an as-yet unknown toolkit used to hack servers running Unix operating systems. If valid, the list could be used by various organizations to uncover a decade’s worth of attacks that until recently were closely guarded secrets. According to this spreadsheet, the servers were mostly running Solaris, an operating system from Sun Microsystems that was widely used in the early 2000s. Linux and FreeBSD are also shown.
The data is old, but you can see if you’ve been hacked.
Honestly, I am surprised by this release. I thought that the original Shadow Brokers dump was everything. Now that we know they held things back, there could easily be more releases.
EDITED TO ADD (11/6): More on the NSA targets. Note that the Hague-based Organization for the Prohibition of Chemical Weapons is on the list, hacked in 2000.