Tag Archives: photography

How Much Photo & Video Data Do You Have Stored?

Post Syndicated from Jim Goldstein original https://www.backblaze.com/blog/how-much-photo-video-data-do-you-have-stored/

How Much Photo and Video Data Do You Have?

Backblaze’s Director of Marketing Operations, Jim, is not just a marketing wizard; he’s also worked as a professional photographer and run marketing for a gear rental business. He knows a lot of photographers. We thought that our readers would be interested in the results of an informal poll he recently conducted among his media friends about the amount of media data they store. You’re invited to contribute to the poll as well!

— Editor

I asked my circle of professional and amateur photographer friends how much digital media data they have stored. It was a quick survey, and not in any way scientific, but it did show the range of data use by photographers and videographers.

Jim's media data storage poll

I received 64 responses. The answers ranged from less than 5 TB (17 users) to 2 petabytes (1 user). The most popular response was 10-19 TB (18 users). Here are the results.

Digital media storage poll results

Jim's digital media storage poll results

How Much Digital Media Do You Have Stored?

I wondered if the results would be similar if I expanded our survey to a wider audience.

The poll below replicates what I asked of my circle of professional and amateur photographer and videographer friends. The poll results will be updated in real time. I ask that you respond only once.

Backblaze is interested in the results, as they will help us write blog articles that are useful to our readership and offer cloud services suited to the needs of our users. Please feel free to ask questions in the comments about cloud backup and storage, and about our products, Backblaze Backup and Backblaze B2 Cloud Storage.

I’m anxious to see the results.

Our Poll — Please Vote!

How much photo/video data do you have in total (TB)?

Thanks for participating in the poll. If you’d like to provide more details about the data you store and how you do it, we’d love to hear from you in the comments.

The post How Much Photo & Video Data Do You Have Stored? appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Stories of Camera and Data Catastrophes

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/stories-of-camera-and-data-catastrophes/

Salt water damaged camera

This is the third post in a series of post exchanges with our friends at Lensrentals.com, a popular online site for renting photography, videography, and lighting equipment. Seeing as how Halloween is just a few days away, we thought it appropriate to offer some scary tales of camera and data catastrophes. Enjoy.

Note: You can read all of Lensrentals’ posts on our blog. Find all of our posts on the Lensrentals blog.

— Editor

Stories of Camera and Data Catastrophes
by Zach Sutton, Editor-in-chief, Lensrentals.com

As one of the largest photo and video gear rental companies in the world, Lensrentals.com ships out thousands of pieces of gear each day. It would be unrealistic to expect all of it to come back in the same condition it left in. More often than not, the damage is the result of things being dropped, but now and then some pretty interesting things happen to the gear we rent out.

We have an incredible customer base, and when this kind of damage happens, they’re more than happy to pay the necessary repair fees. Stuff happens, mistakes are made, and we have a full-service repair center to keep the costs low. And while we have insurance policies for accidental damage such as drops, dings, and other mishaps, they don’t cover neglect, which accounts for the stories we’re going to share with you below. Let’s take a look at some of our more exciting camera and data catastrophe stories.

Camera Data Catastrophes

Data catastrophes happen more often than anything else, but they aren’t exactly the most exciting stories we’ve collected over the years. The stories are usually similar: someone rents a memory card or SSD from us, uses it, then sends it back without pulling the footage off of it. When we receive gear back into our warehouse, we inspect and format all the media. If you realize your mistake and call or email us before that happens, we can usually put a hold on the media and ship it back to you to pull the data off of it. If we’ve already formatted the media, we will attempt a recovery using software such as TestDisk and PhotoRec and let you know whether we had any success. We then give you the option of renting the media again so it can be shipped back to you to pull the files.

The Salty Sony A7sII

A common issue we run into — and have addressed a number of times on our blog — is the dubious term “weather resistant.” Equipment marketers love the phrase, but it doesn’t provide the protection people tend to assume from the name.

One example of that was last year, when we received a nonfunctioning Sony a7sII back from the California coast and had to disassemble it to determine what was wrong. Upon opening the camera, it was quite apparent that it had been submerged in salt water. Water isn’t good for electronics, but the real killer is impurities such as salt. Salt builds up on components, conducts electricity, and will fry a circuit in no time once power is applied. So, once we saw the salt corrosion, we knew that the camera was irreparable. Still, we disassembled it, if only to show others what salt water can do to your electronics. You can read more about this and see the full breakdown in our post, About Getting Your Camera Wet… Teardown of a Salty Sony A7sII.

Sony A7sII disassembled into parts

Sony A7sII salt water damage

The Color Run Cleanup

Color runs are 5K running events that happen all over the world. If you haven’t seen one, participants and spectators toss colorful powders throughout the run, so that by the time the runners reach the finish line, they’re covered head to toe in colorful powder. These events sound like a lot of fun, and one would naturally want to photograph the spectacle, but any camera gear used at one will definitely require a deep cleaning.

Color run damage to camera lens

Color run damage to camera

We’ve asked our clients multiple times not to take our cameras to color runs, but each year we get another system back that is covered in pink, green, and blue dust. The dust used for these events is incredibly fine, so it gets into every nook and cranny within the camera body and lenses. This requires the gear to be completely disassembled, cleaned, and reassembled. We have two photos in this post of the results of a color run, but you can view more in the post we wrote about color runs back in 2013, How to Ruin Your (or Our) Gear in 5 Minutes (Without Water).

The Eclipse That Killed Cameras

About a year ago, we had the incredible phenomenon of a total solar eclipse here in the United States. It was the first total solar eclipse to occur in the continental United States since 1979, making it a pretty exciting moment for all of us, though we braced ourselves for the damage it would do to cameras.

Eclipse camera lens damage

For weeks leading up to the event, we sent out fliers with our rentals encouraging people not only to wear eye protection, but also to protect their camera lenses with high-density ND filters. Despite that, in the days following the eclipse, we had gear coming back to us with melted aperture blades and holes burned into sensors.

Eclipse camera damage

Eclipse camera shutter damage

As one would expect, it’s not a good idea to point your camera directly at the sun, especially for long periods of time. Most of the damage from the eclipse was caused by people who had set up their camera and lens on a tripod pointed at the sun while waiting for totality. This prolonged exposure builds up a lot of heat and will eventually start burning through apertures, shutters, sensors, and anything else in its way. Not only do we recommend ND filters for the front of your lens, but also black cards to block light from entering the camera until it’s go time for the total eclipse. You can read about the whole experience in our blog post on the topic, Rental Camera Gear Destroyed by the Solar Eclipse of 2017.

Damage from Burning Man

While we have countless stories of gear being destroyed, we figured it’d be best to leave you with just one more. Burning Man is an annual event that takes place in the deserts of Nevada. Touted as an art installation and experience, it sees tens of thousands of people spend a few days living in the remote desert with fellow Burners to create and participate in a wide range of activities. And where there is desert, there are always sand, dust, and dust storms.

Burning Man camera damage

Burning Man dust damage

One might think that sand is the biggest nuisance for camera gear at Burning Man, but it’s actually the fine dust that the wind picks up. One of the more interesting phenomena at Burning Man is the dust storm. Dust storms occur with little warning, kicking up the fine dust buried within the sand, which can quickly damage your electronics, your skin, and your lungs. Because it is so fine, it easily finds its way into your cameras and lenses.

Burning Man damage to Nikon camera

While Burning Man doesn’t always totally destroy gear, it does result in a lot of cleaning and disassembling of gear after the event. This takes time and patience and costs the customer money. While there are stories of people who bring camera gear to Burning Man wrapped in nothing more than plastic and gaffer tape, we don’t recommend that for good gear. It’s best to just leave your camera at home, or buy an old camera for cheap to document the week. To see more of what can happen to gear at Burning Man, you can read our blog post on the topic, Please, Don’t Take Our Photography and Video Gear to Burning Man.

Those are just a few stories of the data and camera catastrophes that we’ve experienced over the years. We hope they serve as a warning to anyone considering putting their gear through the experiences above, and sway them against it. If you have some of your own stories of data or gear catastrophes, feel free to share them below in the comments.

— Zach Sutton, Editor-in-chief, Lensrentals.com

The post Stories of Camera and Data Catastrophes appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

SelfieBot: taking and printing photos with a smile

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/selfiebot-sophy-wong-raspberry-pi-camera/

Does your camera giggle and smile as it takes your photo? Does your camera spit out your image from a thermal printer? No? Well, Sophy Wong’s SelfieBot does!

Raspberry Pi SelfieBot: Selfie Camera with a Personality

SelfieBot is a project Kim and I originally made for our booth at Seattle Mini Maker Faire 2017. Now, you can build your own! A full tutorial for SelfieBot is up on the Adafruit Learning System at https://learn.adafruit.com/raspberry-pi-selfie-bot/. This was our first Raspberry Pi project, and is an experiment in DIY AI.

Pasties, projects, and plans

Last year, I built a Raspberry Pi photobooth for a friend’s wedding, complete with a thermal printer for instant printouts and a Twitter feed to keep those unable to attend the event in the loop. I called the project PastyCam, because I built it into the papier-mâché body of a Cornish pasty, and I planned on creating a tutorial blog post for the build. But I obviously haven’t. And I think it’s time, a year later, to admit defeat.

A photo of the Cornish Pasty photo booth Alex created for a wedding in Cornwall - SelfieBot Raspberry Pi Camera

The wedding was in Cornwall, so the Cornish pasty totally makes sense, alright?

But lucky for us, Sophy Wong has gifted us all with SelfieBot.

Sophy Wong

If you subscribe to HackSpace magazine, you’ll recognise Sophy from issue 4, where she adorned the cover, complete with glowing fingernails. And if you’re like me, you instantly wanted to be her as soon as you saw that image.

SelfieBot Raspberry Pi Camera

Makers should also know Sophy from her impressive contributions to the maker community, including her tutorials for Adafruit, her YouTube channel, and most recently her work with Mythbusters Jr.

sophy wong on Twitter

Filming for #MythbustersJr is wrapped, and I’m heading home to Seattle. What an incredible summer filled with amazing people. I’m so inspired by every single person, crew and cast, on this show, and I’ll miss you all until our paths cross again someday 😊

SelfieBot at MakerFaire

I saw SelfieBot in passing at Maker Faire Bay Area earlier this year. Yet somehow I managed to not introduce myself to Sophy and have a play with her Pi-powered creation. So a few weeks back at World Maker Faire New York, I accosted Sophy as soon as I could, and we bonded by swapping business cards and Pimoroni pins.

Creating SelfieBot

SelfieBot is more than just a printing photo booth. It giggles, it talks, it reacts to movement. It’s the robot version of that friend of yours who’s always taking photos. Always. All the time, Amy. It’s all the time! *ahem*

SelfieBot Raspberry Pi Camera

SelfieBot consists of a Raspberry Pi 2, a Pi Camera Module, a 5″ screen, an accelerometer, a mini thermal printer, and more, including 3D-printed and laser-cut parts.
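To give a sense of scale, the photo-taking core of a build like this needs very little code. Here’s a minimal sketch using the standard picamera Python library — this isn’t Sophy’s actual code (her full build, including the printer and accelerometer handling, is in the Adafruit tutorial), and the file path is just a placeholder:

```python
from time import sleep
from picamera import PiCamera

camera = PiCamera()
camera.resolution = (640, 480)   # mini thermal printers are low resolution anyway

def take_selfie(path="/home/pi/selfie.jpg"):
    """Show a preview so the subject can pose, then capture a still."""
    camera.start_preview()
    sleep(3)                     # countdown / let the exposure settle
    camera.capture(path)
    camera.stop_preview()
    return path

# Sending the saved JPEG to the mini thermal printer is a separate step,
# handled by the printer library used in the Adafruit tutorial.
take_selfie()
```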

sophy wong on Twitter

Getting SelfieBot ready for Maker Faire Bay Area next weekend! Super excited to be talking on Sunday with @kpimmel – come see us and meet SelfieBot!

If you want to build your own SelfieBot — and obviously you do — then you can find a complete breakdown of the build process, including info on all parts you’ll need, files for 3D printing, and so, so many wonderfully informative photographs, on the Adafruit Learning System!

The post SelfieBot: taking and printing photos with a smile appeared first on Raspberry Pi.

Securely Managing Your Digital Media (SD, CF, SSD, and Beyond)

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/securely-managing-your-digital-media-sd-cf-ssd-and-beyond/

3 rows of 3 memory cards

This is the second in our post exchange series with our friends Zach Sutton and Ryan Hill at Lensrentals.com, who have an online site for renting photography, videography, and lighting equipment. You can read our post from last month on their blog, 3-2-1 Backup Best Practices using Cloud Archiving, and all posts on our blog in this series at Lensrentals post series.

— Editor

Managing digital media securely is crucial for all photographers and videographers. At Lensrentals.com, we take media security very seriously, with dozens of rented memory cards, hard drives, and other data devices returned to our facility every day. All of our media is inspected after each and every rental. Most of the cards returned to us in rental shipments have not been properly reformatted and erased, so it’s part of our usual service to clear all the data from returned media to keep each client’s identity and digital property secure.

We’ve gotten pretty good at the routine of managing data and formatting storage devices for our clients while making sure our media has a long life and remains free from corruption. Before we get too involved in our process of securing digital media, we should first talk fundamentals.

The Difference Between Erasing and Reformatting Digital Media

When you insert a card into a camera, you’re usually given two options: erase the card or format the card. There is an important distinction between the two. Erasing images from a card does just that — erases them. That’s it. It designates the area the prior data occupied on the card as available to write over and reports to you that the data has been removed.

The term erase is a bit misleading here. The underlying data, the 1s and 0s recorded on the media, are still there. What really happens is that the drive’s address table is changed to show that the space the previous file occupied is available for new data.

This is the reason that simply erasing a file does not securely remove it. Data recovery software can be used to recover that old data as long as it hasn’t been overwritten with new data.

Formatting goes further. When you format a drive or memory card, all of the files are erased (even files you’ve designated as “protected”), and a fresh file system is usually written as well. This is a more effective method for removing all the data on the drive, since the space previously divided up for specific files gets a brand new structure, unencumbered by whatever files were previously stored. Be aware, however, that it can still be possible to retrieve older data even after a format. Whether that can happen depends on the formatting method and whether new data has overwritten what was previously stored.

To make sure that the older data cannot be recovered, a secure erase goes further still. Rather than simply marking the old data as available to be overwritten, a secure erase writes random 1s and 0s across the disk so the old data is no longer recoverable. This takes longer and is more taxing on the card, because data is actually being overwritten rather than simply marked as removed.
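To make the idea concrete, here’s a rough Python sketch that overwrites a file’s contents with random bytes before deleting it. It’s purely illustrative: real secure erasure should be done with a dedicated tool or the camera’s own secure-format option, and flash media with wear levelling may still retain remapped copies of old blocks.

```python
import os

def secure_erase_file(path, passes=1):
    """Overwrite a file with random bytes, then delete it (illustrative only)."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                chunk = min(remaining, 1024 * 1024)
                f.write(os.urandom(chunk))   # random 1s and 0s over the old data
                remaining -= chunk
            f.flush()
            os.fsync(f.fileno())             # force the overwrite out to the device
    os.remove(path)
```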

Always Format a Card for the Camera You’re Going to Be Using

If you’ve ever tried to use the same memory card on cameras of different makes without formatting it, you may have seen problems with how the data files are displayed. Each camera system handles its file structure a little differently.

For this reason, it’s advisable to format the card for the specific camera you’re using. If you don’t, you risk corrupting the data on the card.

Our Process For Securing Data

Our inspection process for recording media varies a little depending on what kind of card we’re inspecting. For standardized media like SD cards or CompactFlash cards, we simply use a card reader to format the card to exFAT. This is done in Disk Utility on the Apple MacBooks that we issue to each of our video technicians. We use exFAT specifically because it’s recognized by just about every device. Since these cards are used in a wide variety of cameras, recorders, and accessories, and we have no way of knowing at the point of inspection which device they’ll be used with, we have to choose a format that any camera can recognize. While our customers may still have to format a card in camera for file structure purposes, the card will at least always arrive formatted in a way that the camera can read.
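If you wanted to script the same exFAT formatting step on a Mac, it can be done by calling diskutil. Here’s a hedged sketch (this isn’t Lensrentals’ actual tooling, and the disk identifier is a placeholder — always confirm it with diskutil list first, because the command erases the whole disk):

```python
import subprocess

def format_card_exfat(disk_identifier, label="CAMERA_CARD"):
    """Erase a card and create a single exFAT volume on macOS.

    disk_identifier is something like "disk4" -- check `diskutil list`
    first, because this destroys everything on the target disk.
    """
    subprocess.run(
        ["diskutil", "eraseDisk", "ExFAT", label, disk_identifier],
        check=True,
    )

# Example (identifier and label are hypothetical):
# format_card_exfat("disk4", label="SDCARD")
```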

Sony SxS media
For proprietary media — things like REDMAGs, SxS, and other cards that we know will only be used in a particular camera — we use cameras to do the formatting. While the exFAT system would technically work, a camera-specific erase and format process saves the customer a step and allows us to more regularly double-check the media ports on our cameras. In fact, we actually format these cards twice at inspection. First, the Technician erases the card to clear out any customer footage that may have been left on it. Next, they record a new clip to the card, around 30 seconds, just to make sure everything is working as it’s supposed to. Finally, they format the card again, erasing the test footage before sending it to the shelf where it awaits use by another customer.

REDMAG Red Mini-Mag

You’ll notice that at no point in this process do we do a full secure erase. This is both to save time and to prevent unnecessary wear and tear on the cards. About 75% of the media we get back from orders still has footage on it, so we don’t get the impression that many of our customers are overly concerned with keeping their footage private once they’re done shooting. However, if you are one of the 25% who have a personal or professional interest in keeping your footage secure after shooting, we’d recommend that you securely erase the media before returning rented memory cards and drives. Or, if you’d rather we handle it, just send an email or note with your return order requesting that we perform a secure erase rather than simply formatting the cards, and we’ll be happy to oblige.

Managing your digital media securely can be easy if done right. Data management and backing up files, on the other hand, can be more involved and require more planning. If you have any questions on that topic, be sure to check out our recent blog post on proper data backup.

— Zach Sutton and Ryan Hill, lensrentals.com

The post Securely Managing Your Digital Media (SD, CF, SSD, and Beyond) appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Protecting Your Data From Camera to Archive

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/protecting-your-data-from-camera-to-archive/

Camera data getting backed up to Backblaze B2 cloud

Lensrentals.com is a highly respected company that rents photography and videography equipment. We’re a fan of their blog and asked Zach Sutton and Ryan Hill of Lensrentals to contribute something for our audience. We also contributed a post to their blog that was posted today: 3-2-1 Backup Best Practices using Cloud Archiving.

Enjoy!

— Editor

At Lensrentals.com we field a lot of support calls, and unfortunately one of the most common categories is data catastrophes.

The first of the frequent calls is from someone who thought they had transferred over their footage or photos before returning their rental, and discovered later that they were missing some images or footage. If we haven’t already inspected those cards, it’s usually not a problem to send them back so the customer can collect their data. But if our techs have inspected the memory cards, then there isn’t much we can do. Our team at Lensrentals.com performs a full and secure reformatting of the cards to keep each customer’s data safe from the next renter. Once that footage is gone, it’s gone forever. This is never a fun conversation to have.

The second scenario is when a customer calls to tell us that they did manage to transfer all the footage over, but one or more of the clips or images were corrupted in the transfer process. Typically, people don’t discover this until after they’ve sent back the memory cards, and after we’ve already formatted the original media. This is another tough phone call to have. On occasion, data corruption happens in camera, but more often than not, the file gets corrupted during the transfer from the media to the computer or hard drive.

These kinds of problems aren’t entirely avoidable and are inherent risks of working with digital media. However, as with all risks, you can take proper steps to ensure that your data is safe. And if a problem does arise, there are techniques you can use to work around it.

We’ve summarized our best suggestions for protecting your data from camera to archive in the following sections. We hope you find them useful.

How to Protect Your Digital Assets

Before Your Shoot

The first and most obvious step to take to keep your data safe is to use reliable media. We recommend using cards from brands you trust, such as SanDisk, Lexar, or ProGrade Digital (a company that took the reins from Lexar). For hard drives, SanDisk, Samsung, Western Digital, and Intel are all considered incredibly reliable. These brands may be more expensive than bargain brands, but they have proven time and time again to be more dependable. The few extra dollars spent on reliable media can save you thousands in the long run and will help ensure that your data stays safe and free of corruption.

One of the most important things you should do before any shoot is format your memory card in the camera. Formatting in camera is a great way to minimize file corruption, because it keeps the card’s file structure conforming to that camera manufacturer’s specifications, and it should be done before every shoot. Equally important, if the camera gives you the option of a complete or secure format, take it over the quicker alternatives. In the same vein, take the time to research whether your camera needs to unmount or “eject” the media before you physically remove it. While this applies more to video recording systems, like those found on the RED camera platform and the Odyssey 7Q, it’s always worth checking to avoid corrupting the data. More often than not, preventable data corruption happens when users turn off the camera system before the media has been unmounted.

Finally, if you’re shooting for the entire day, make sure you have enough media on hand so that you don’t need to back up and reformat cards throughout the shoot. While it’s possible to pull footage off a card, reformat it, and use it again the same day, that’s not something you want to be doing in the hectic environment of a shoot day; it’s best to have extra media on hand. We’ve all deleted a file we didn’t mean to, and the best way to avoid that mistake is to not have to delete or manage files while shooting. Play it safe, and only reformat when you have the time and a clear head to do so.

During Your Shoot

Many modern camera systems give you the option of dual-recording to two different card slots. If your camera offers this, we cannot recommend it enough. Doubling the media you’re recording onto means a failure in one memory card won’t cost you the shoot. The added cost may be a hard sell, but it’s negligible compared to all the money spent on lights, cameras, actors, and lousy pizza for the day. Additionally, develop a system that works for you and keeps everything as organized as possible. Spent media shouldn’t be stored in the same place as unused media, and your file structure should stay consistent throughout the entire shoot. A proper file structure not only saves time but ensures that none of the footage goes missing after the shoot, lost in some random folder.

Camera memory cards

One of the most critical jobs on set is that of the DIT (Digital Imaging Technician) for video, or the DT (Digital Technician) for photography. Essentially, these positions are responsible for keeping the data archived and organized on set, logging metadata, and handling the other technical tasks involved in keeping a shoot organized. It may not be cost effective to have a DIT/DT on every shoot, but if the budget allows for it, I highly recommend you hire one to take on these responsibilities. Having someone on set who is solely responsible for safely backing up and organizing footage lets the rest of the crew focus on their own jobs and helps ensure nothing goes wrong. When they’re not transferring and archiving data, DITs/DTs also log metadata, color correct footage, and help with other preliminary editing processes. Even if the budget doesn’t allow for this position to be filled, try to find someone who can handle these processes exclusively while on set. You don’t want your camera operator to also be in charge of backing up and organizing footage if you can help it.

Ingest Software

If there is one piece of information we’d like videographers and photographers to take away from this article, it is this: file-moving or “offloading” software is worth the investment and should be used every time you shoot anything. For those unfamiliar with it, offload software is any application designed to make it easier to back up footage from one location to another. In short, to avoid accidents or data corruption, it’s always best to have your media on a MINIMUM of two different devices. The easiest way to do this is to simply dump the media onto two separate hard drives and keep those drives stored separately. Ideally (if the budget allows), you’ll also keep all of your data on the original camera media for the day, so you have multiple copies stored in various locations. Other options are available and recommended where possible, such as RAID arrays or copying the data to a cloud service such as Backblaze B2. Offloading software automates exactly this process, verifying all the data as it’s transferred.

There are a few different recommendations I give for offloading software, at different price points and with unique features. At the highest end of video production, you’ll often see DITs using a piece of software called Silverstack, which offers color grading functionality, LTO tape support, and basic editing tools for creating dailies. At $600 per year, it is the most expensive in this field and is probably overkill for most users. My recommendation is a tool called ShotPut Pro. At $129, ShotPut Pro offers all the tools you’d need to build a great archiving process, while sacrificing some of the color editing tools. It can simultaneously copy and transfer files to multiple locations, build PDF reports, and verify all transfers. If you’re looking for something even cheaper, there are additional options such as Offload and Hedge. They’re both available for $99 and give you all the tools you’d need within their simple interfaces.

When it comes to photos, the two most obvious choices are Adobe Lightroom and Capture One Pro. While both are known more for editing, they also have a lot of archiving functions built into their ingest systems, allowing you to offload cards to multiple locations and make copies on the fly.

workstation with video camera and RAID NAS

When it comes to video, the most crucial feature any of these apps should have is an option called “checksum verification.” This subject can get complicated, but all you really need to know is that larger files are more likely to be corrupted during transfers and copies, and checksum verification confirms that each copy is identical to the original, down to the individual byte. It is by far the most reliable and effective way to make sure that entire volumes of data are copied without corruption or loss. Whichever application you choose, make sure checksum verification is available, and make it part of your workflow every time you copy video files. Checksum verification is also available in some photo ingest software; corruption is less common with smaller files and generally less of an issue, but if it’s available, use it.
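To see what checksum verification is doing under the hood, here’s a minimal Python sketch of the offload-and-verify idea: copy a file to one or more destinations, then compare a hash of each copy against the source. The hash choice and the example paths are just placeholders, not what any particular offload tool uses.

```python
import hashlib
import shutil
from pathlib import Path

def file_checksum(path, algo="sha256", chunk_size=1024 * 1024):
    """Hash a file in chunks so large video files never need to fit in memory."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def offload_with_verification(source, destinations):
    """Copy `source` into each destination directory and verify every copy byte for byte."""
    source = Path(source)
    original = file_checksum(source)
    for dest_dir in destinations:
        dest = Path(dest_dir) / source.name
        shutil.copy2(source, dest)
        if file_checksum(dest) != original:
            raise IOError(f"Checksum mismatch on {dest} -- copy is corrupt")
    return original

# Example (paths are hypothetical):
# offload_with_verification("/Volumes/CARD/clip001.mov",
#                           ["/Volumes/WorkDrive/job042", "/Volumes/BackupDrive/job042"])
```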

Post-Production

Once you’ve completed your shoot and all of your data is safely transferred over to external drives, it’s time to look at how you can store your information long term. Different people approach archiving in different ways, because none of us have an identical workflow. There is no single correct way to archive your photos and videos, but there are a few rules that you’ll want to follow.

The first rule is the most obvious. You’ll want to make sure your media is stored on multiple drives. That way, if one of your drives dies on you, you still have a backup version of the work ready to go. The second rule of thumb is that you’ll want to store these backups in different locations. This can be extremely important if there is a fire in your office, or you’re a victim of a robbery. The most obvious way to do this is to back up or archive into a cloud service such as Backblaze B2. In my production experience I’ve seen multiple production houses implement a system where they store their backup hard drives in a safety deposit box at their bank. The final rule of thumb is especially important when you’re working with significant amounts of data, and that is to keep a working drive separate from an archive drive. The reason for this is an obvious one: all hard drives have a life expectancy, and you can prolong that by minimizing drive use. Having a working drive separate from your archive drives means that your archive drives will have fewer hours on them, thereby extending their practical life.
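If cloud archiving is part of how you keep an offsite copy, uploading an archive folder to Backblaze B2 can be scripted. Here’s a minimal sketch using the b2sdk Python library, assuming its current v2 API; the bucket name, key ID, and application key are placeholders you’d supply yourself, and real archive jobs would add retries and logging.

```python
from pathlib import Path
from b2sdk.v2 import InMemoryAccountInfo, B2Api

def archive_folder_to_b2(folder, bucket_name, key_id, app_key):
    """Upload every file in a local job folder to a B2 bucket, prefixed by the folder name."""
    b2_api = B2Api(InMemoryAccountInfo())
    b2_api.authorize_account("production", key_id, app_key)
    bucket = b2_api.get_bucket_by_name(bucket_name)
    folder = Path(folder)
    for path in folder.rglob("*"):
        if path.is_file():
            remote_name = f"{folder.name}/{path.relative_to(folder)}"
            bucket.upload_local_file(local_file=str(path), file_name=remote_name)

# Example (all values hypothetical):
# archive_folder_to_b2("/Volumes/Archive/job042", "my-archive-bucket", "KEY_ID", "APP_KEY")
```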

Ryan Hill’s Workflow

To help visualize what we discussed above, I’ll lay out my personal workflow for you. Please keep in mind that I’m mainly a one-man band, so my workflow is based on me handling everything. I also work in a wide variety of mediums, so nothing here is video- or camera-specific: my video projects, photo projects, and graphics projects are all organized the same way. I won’t bore you with the details of my file structure, except to say that everything in my root folder is organized by job number, followed by sub-folders with the data sorted into categories. I keep track of which jobs are which in a Google spreadsheet that pairs the job numbers with descriptions and client information. All of this information is secured within my Google account, but it’s also accessible from anywhere if I need it.

With archiving, my system is pretty simple. I’ve got a four-drive RAID array in my office that gets updated every time I work on a new project. The array is set to RAID 1+0, which means I could lose a drive in each mirrored pair (up to two of the four drives) and still recover the data. Usually, I’ll put 1TB drives in each bay and fill them as I work on projects. Once they’re full, I label them with the corresponding job numbers and store them in a plastic case on my bookshelf. By no means am I suggesting that my system is perfect, but for me it’s incredibly adaptable to the variety of projects I work on. If I were robbed, or if my house caught fire, I’d still have all of my work archived in the cloud, giving me a second level of security.

Finally, to round out my backup setup, I also keep a two-bay Thunderbolt hard drive dock on my desk as my working drive system. Solid state drives (SSDs) and the Thunderbolt connection give me the speed and reliability I need from a drive that I work from and render outputs to. For now, there is a single 960GB SSD in the first bay, with the option to add a second drive if I need more storage. I start work by transferring the job folder from my archive to the working drive, do whatever I need to do to the files, then replace the old job folder on my archive with the updated one at the end of the day. This way, if I have a drive failure, the worst I’ll lose is a day’s worth of work. For video projects or anything that involves a lot of data, I usually keep copies of all my source files on both my working and archive drives, and just replace the Adobe Premiere project file as I go. Again, this is just the system that works for me, and I recommend you develop one that fits your own workflow while keeping your data safe.

The Takeaway

The critical point you should take away is that these sorts of strategies are things you should be thinking about at every step of your production. How does your camera or codec choice affect your media needs? How are you going to ensure safe data backup in the field? How are you going to work with all of this footage in post-production in a way that’s both secure and efficient? Answering all of these questions ahead of time will keep your media safe and your clients happy.

— Zach Sutton and Ryan Hill, lensrentals.com

The post Protecting Your Data From Camera to Archive appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Whimsical builds and messing things up

Post Syndicated from Helen Lynn original https://www.raspberrypi.org/blog/whimsical-builds-and-messing-things-up/

Today is the early May bank holiday in England and Wales, a public holiday, and while this blog rarely rests, the Pi Towers team does. So, while we take a day with our families, our friends, and/or our favourite pastimes, I thought I’d point you at a couple of features from HackSpace magazine, our monthly magazine for makers.

To my mind, they go quite well with a deckchair in the garden, the buzz of a lawnmower a few houses down, and a view of the weeds I ought to have dealt with by now, but I’m sure you’ll find your own ambience.

Make anything with pencils – HackSpace magazine

If you want a unique piece of jewellery to show your love for pencils, follow Peter Brown’s lead. Peter glued twelve pencils together in two rows of six. He then measured the size of his finger and drilled a hole between the glued pencils using a drill bit.

First off, pencils. It hadn’t occurred to me that you could make super useful stuff like a miniature crossbow and a catapult out of pencils. Not only can you do this, you can probably go ahead and do it right now: all you need is a handful of pencils, some rubber bands, some drawing pins, and a bulldog clip (or, as you might prefer, some push pins and a binder clip). The sentence that really leaps out at me here is “To keep a handful of boys aged three to eleven occupied during a family trip, Marie decided to build mini crossbows to help their target practice.” The internet hasn’t helped me find out much about Marie, but I am in awe of her.

If you haven’t wandered off to make a stationery arsenal by now, read Lucy Rogers‘ reflections on making a right mess of things. I hope you do, because I think it’d be great if more people coped better with the fact that we all, unavoidably, fail. You probably won’t really get anywhere without a few goes where you just completely muck it all up.

A ceramic mug, broken into several pieces on the floor

Never mind. We can always line a plant pot with them.
“In Pieces” by dusk-photography / CC BY

This is true of everything. Wet lab work and gardening and coding and parenting. And everything. You can share your heroic failures in the comments, if you like, as well as any historic weaponry you have fashioned from the contents of your desk tidy.

The post Whimsical builds and messing things up appeared first on Raspberry Pi.

Community profile: Dave Akerman

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/community-profile-dave-akerman/

This column is from The MagPi issue 61. You can download a PDF of the full issue for free, or subscribe to receive the print edition through your letterbox or the digital edition on your tablet. All proceeds from the print and digital editions help the Raspberry Pi Foundation achieve our charitable goals.

The pinned tweet on Dave Akerman’s Twitter account shows a table displaying the various components needed for a high-altitude balloon (HAB) flight: batteries, leads, a camera and Raspberry Pi, plus an unusually themed payload. The caption reads “The Queen, The Duke of York, and my TARDIS”, and sums up Dave’s maker career in a heartbeat.

David Akerman on Twitter

The Queen, The Duke of York, and my TARDIS 🙂 #UKHAS #RaspberryPi

Though writing software for industrial automation pays the bills, the majority of Dave’s time is spent in the world of high-altitude ballooning and the ever-growing community that encompasses it. And, while he makes some money sending business-themed balloons to near space for the likes of Aardman Animations, Confused.com, and the BBC, Dave is best known in the Raspberry Pi community for his use of the small computer in every payload, and his work as a tutor alongside the Foundation’s staff at Skycademy events.

Dave Akerman The MagPi Raspberry Pi Community Profile

Dave continues to help others while breaking records and having a good time exploring the atmosphere.

Dave has dedicated many hours and many, many more miles to assist with the Foundation’s Skycademy programme, helping to explore high-altitude ballooning with educators from across the UK. Using a Raspberry Pi and various other pieces of lightweight tech, Dave and Foundation staff member James Robinson explored the incorporation of high-altitude ballooning into education. Through Skycademy, educators were able to learn new skills and take them to the classroom, setting off their own balloons with their students, and recording the results on Raspberry Pis.

Dave Akerman The MagPi Raspberry Pi Community Profile

Dave’s most recent flight broke a new record. On 13 August 2017, his HAB payload was able to send back the highest images taken by any amateur flight.

But education isn’t the only reason for Dave’s involvement in the HAB community. As with anyone passionate about a specific hobby, Dave strives to break records. His most recent record-breaking flight took place on 13 August 2017, when Dave’s Raspberry Pi Zero HAB sent home the highest images taken by any amateur high-altitude balloon launch, from 43,014 metres. No other HAB balloon has provided images from such an altitude, and the lightweight nature of the Pi Zero definitely helped, as Dave went on to mention on Twitter a few days later.

Dave Akerman The MagPi Raspberry Pi Community Profile

Dave is recognised as being the first person to incorporate a Raspberry Pi into a HAB payload, and continues to break records with the help of the little green board. More recently, he’s been able to lighten the load by using the Raspberry Pi Zero.

When the first Pi made its way to near space, Dave tore the computer apart in order to meet the weight restriction. The Pi in the Sky board was created to add the extra features needed for the flight. Since then, the HAT has experienced a few changes.

Dave Akerman The MagPi Raspberry Pi Community Profile

The Pi in the Sky board, created specifically for HAB flights.

Dave first fell in love with high-altitude ballooning after coming across the hobby in a video shared on a photographic forum. With a lifelong interest in space thanks to watching the Moon landings as a boy, plus a talent for electronics and photography, it seems a natural progression for him. Throw in his coding skills from learning to program on a Teletype and it’s no wonder he was ready and eager to take to the skies, so to speak, and capture the curvature of the Earth. What was so great about using the Raspberry Pi was the instant gratification he got from receiving images in real time as they were taken during the flight. While other devices could control a camera and store captured images for later retrieval, thanks to the Pi Dave was able to transmit the files back down to Earth and check the progress of his balloon while attempting to break records with a flight.

Dave Akerman The MagPi Raspberry Pi Community Profile Morph

One of the many commercial flights Dave has organised featured the classic children’s TV character Morph, a creation of the Aardman Animations studio known for Wallace and Gromit. Morph took to the sky twice in his mission to reach near space, and finally succeeded in 2016.

High-altitude ballooning isn’t the only part of Dave’s life that incorporates a Raspberry Pi. Having “lost count” of how many Pis he has running tasks, Dave has also created radio receivers for APRS (ham radio data), ADS-B (aircraft tracking), and OGN (gliders), along with a time-lapse camera in his garden, and he keeps a few more Pis around for tinkering.

The post Community profile: Dave Akerman appeared first on Raspberry Pi.

Build a solar-powered nature camera for your garden

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/solar-powered-nature-camera/

Spring has sprung, and with it, sleepy-eyed wildlife is beginning to roam our gardens and local woodlands. So why not follow hackster.io maker reichley’s tutorial and build your own solar-powered squirrelhouse nature cam?

Raspberry Pi- and solar-powered nature camera

Inspiration

“I live half a mile above sea level and am SURROUNDED by animals…bears, foxes, turkeys, deer, squirrels, birds”, reichley explains in his tutorial. “Spring has arrived, and there are LOADS of squirrels running around. I was in the building mood and, being a nerd, wished to combine a common woodworking project with the connectivity and observability provided by single-board computers (and their camera add-ons).”

Building a tiny home

reichley started by sketching out a design for the house to determine where the various components would fit.

Raspberry Pi- and solar-powered nature camera

Since he’s a fan of autonomy and renewable energy, he decided to run the project’s Raspberry Pi Zero W on solar power. To do so, he revised the design to include the necessary tech, scaling the roof to fit the panels.

Raspberry Pi- and solar-powered squirrel cam

To keep the project running 24/7, reichley had to figure out the overall power consumption of both the Zero W and the Raspberry Pi Camera Module, factoring in the constant WiFi connection and the sunshine hours in his garden.
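The sizing arithmetic is straightforward back-of-the-envelope work. Here’s a sketch with illustrative numbers only — the real current draw of a Zero W plus camera streaming over WiFi, and the usable sun hours in your garden, are figures you’d measure yourself:

```python
# Rough solar/battery sizing for a 24/7 Pi Zero W camera.
# All figures are illustrative assumptions, not measured values.

pi_current_a = 0.18            # average draw of Zero W + camera + WiFi, in amps (assumed)
system_voltage = 5.0           # volts supplied to the Pi
hours_per_day = 24

daily_wh = pi_current_a * system_voltage * hours_per_day   # ~21.6 Wh per day
sun_hours = 4                  # usable full-sun hours per day in the garden (assumed)
derating = 0.7                 # margin for clouds, panel angle, charge losses (assumed)

panel_watts_needed = daily_wh / (sun_hours * derating)
battery_wh_for_one_dark_day = daily_wh / 0.8   # assume ~80% usable depth of discharge

print(f"Energy per day: {daily_wh:.1f} Wh")
print(f"Panel size needed: {panel_watts_needed:.1f} W")
print(f"Battery for one sunless day: {battery_wh_for_one_dark_day:.1f} Wh")
```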

Raspberry Pi- and solar-powered nature camera

He used a LiPo SHIM to bump up the power to the required 5V for the Zero. Moreover, he added a BH1750 lux sensor to shut off the LiPo SHIM, and thus the Pi, whenever it’s too dark for decent video.
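For the light-sensing side, a sketch of how a Pi could read a BH1750 over I2C and shut itself down below a lux threshold might look like the following. This is not reichley’s actual code: his build may gate the LiPo SHIM’s enable line in hardware instead, and the I2C address, threshold, and shutdown call here are assumptions.

```python
import subprocess
import time

import smbus  # from the python-smbus package

BH1750_ADDR = 0x23              # default I2C address (0x5C if the ADDR pin is pulled high)
ONE_TIME_HIGH_RES_MODE = 0x20
DARK_THRESHOLD_LUX = 10         # assumed cut-off for "too dark for decent video"

bus = smbus.SMBus(1)            # I2C bus 1 on modern Raspberry Pis

def read_lux():
    """Trigger a one-time high-resolution measurement and convert it to lux."""
    data = bus.read_i2c_block_data(BH1750_ADDR, ONE_TIME_HIGH_RES_MODE, 2)
    return ((data[0] << 8) | data[1]) / 1.2

while True:
    if read_lux() < DARK_THRESHOLD_LUX:
        # Too dark to record anything useful: shut the Pi down cleanly.
        subprocess.call(["sudo", "shutdown", "-h", "now"])
    time.sleep(60)
```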

Raspberry Pi- and solar-powered nature camera

To control the project, he used Calin Crisan’s motionEyeOS video surveillance operating system for single-board computers.

Build your own nature camera

To build your own version, follow reichley’s tutorial, in which you can also find links to all the necessary code and components. You can also check out our free tutorial for building an infrared bird box using the Raspberry Pi NoIR Camera Module. As Eben said in our YouTube live Q&A last week, we really like nature cameras here at Pi Towers, and we’d love to see yours. So if you have any live-stream links or photography from your Raspberry Pi–powered nature cam, please share them with us!

The post Build a solar-powered nature camera for your garden appeared first on Raspberry Pi.

Fstoppers Uploaded a Brilliant Hoax ‘Anti-Piracy’ Tutorial to The Pirate Bay

Post Syndicated from Andy original https://torrentfreak.com/fstoppers-uploaded-a-brilliant-hoax-anti-piracy-tutorial-to-the-pirate-bay-180307/

Fstoppers is an online community that produces extremely high-quality photography tutorials. One of its most popular series is called Photographing the World, which sees photographer Elia Locardi travel to exotic locations to demonstrate landscape and cityscape photography.

These tutorials sell for almost $300, with two or three versions in a pack selling for up to $700. Of course, like any other media they get pirated, so when Fstoppers were ready to release Photographing the World 3, they released it themselves on torrent sites a few days before retail.

Well, that’s what they wanted the world to believe.

“I think it’s fair to say that we’ve all downloaded ‘something’ illegally in the past. Whether it’s an MP3 years ago or a movie or a TV show, and occasionally you download something and it turns out it was kinda like a Rick Roll,” says Locardi.

“So we kept talking and we thought it would be a good idea to create this dummy lesson or shadow tutorial that was actually a fake and then seed it on BitTorrent.”

Where Fstoppers normally go to beautiful and exotic international locations, for their fake they decided to go to an Olive Garden in Charleston, South Carolina. Yet despite the clear change of location, they wanted people to believe the tutorial was legitimate.

“We wanted to ride this constant line of ‘Is this for real? Could this possibly be real? Is Elia [Locardi] joking right now? I don’t think he’s joking, he’s being totally serious’,” says Lee Morris, one of the co-owners of Fstoppers.

People really have to watch the tutorial to see what a fantastic job Fstoppers did in achieving that goal. For anyone unfamiliar with their work, the tutorial is initially hard to spot as a fake and even for veterans the level of ambiguity is really impressive.

However, when the tutorial heads back to the studio, where the post-processing lesson gets underway, there can be no doubt that something is amiss.

Things start off normally with serious teaching, then over time, the tutorial gets more and more ridiculous. Then, when the camera cuts away to show Locardi forming a ‘mask’ on an Olive Garden image, there can be no confusion.

That’s a cool mask….wait..

In order to get the tutorial out to the world, the site created its own torrent. They had never done anything like it before, so they got some associates to upload the huge 25GB+ package to The Pirate Bay and had their friends seed it. Then, in order to get past the more savvy users on the site, they had other people come in and give the torrent good (but fake) reviews.

The fake torrent on The Pirate Bay (as of yesterday)

Screenshots provided by Fstoppers taken months ago reveal hundreds of downloaders. And, according to Morris, the fake became the most-downloaded Photographing the World 3 torrent online, meaning that the “majority of downloaders” got the comedy version.

Also of interest is the feedback Fstoppers got following their special release. Emails flooded in from pirates, some of whom were confused while others were upset at the ‘quality’ of the tutorial.

“The whole time we were thinking: ‘This isn’t even on the market yet! You guys are totally stealing this and emailing us and complaining about it,’” says Fstoppers co-owner Patrick Hall.

While the tutorial itself is brilliant, Fstoppers points to a certain hypocrisy within its target audience of photographers, who themselves have to put up with a lot of online piracy of their work. Yet, clearly, many are happy to pirate the work of other photographers in order to make their own art better.

All that being said, the exercise is certainly an interesting one, and the creativity behind the hoax puts it head and shoulders above more aggressive anti-piracy campaigns. However, when TF tracked down the torrent on The Pirate Bay last evening, its popularity had nosedived.

While it was initially downloaded by a lot of eager photographers, probably encouraged by the fake comments placed on the site by Fstoppers, the torrent is now only being shared by less than 10 people. As usual, the Pirate Bay users appear to have caught on, flagging the torrent as a fake. The moderators, it seems, have also deleted the fake comments.

While most people won’t want to download a 25GB torrent to see what Fstoppers came up with, the site has uploaded the fake tutorial to YouTube. It’s best viewed alongside their other work, which is sensational, but people should get a good idea by watching the explanation below.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Happy birthday to us!

Post Syndicated from Eben Upton original https://www.raspberrypi.org/blog/happy-birthday-2018/

The eagle-eyed among you may have noticed that today is 28 February, which is as close as you’re going to get to our sixth birthday, given that we launched on a leap day. For the last three years, we’ve launched products on or around our birthday: Raspberry Pi 2 in 2015; Raspberry Pi 3 in 2016; and Raspberry Pi Zero W in 2017. But today is a snow day here at Pi Towers, so rather than launching something, we’re taking a photo tour of the last six years of Raspberry Pi products before we don our party hats for the Raspberry Jam Big Birthday Weekend this Saturday and Sunday.

Prehistory

Before there was Raspberry Pi, there was the Broadcom BCM2763 ‘micro DB’, designed, as it happens, by our very own Roger Thornton. This was the first thing we demoed as a Raspberry Pi in May 2011, shown here running an ARMv6 build of Ubuntu 9.04.

BCM2763 micro DB

Ubuntu on Raspberry Pi, 2011-style

A few months later, along came the first batch of 50 “alpha boards”, designed for us by Broadcom. I used to have a spreadsheet that told me where in the world each one of these lived. These are the first “real” Raspberry Pis, built around the BCM2835 application processor and LAN9512 USB hub and Ethernet adapter; remarkably, a software image taken from the download page today will still run on them.

Raspberry Pi alpha board, top view

Raspberry Pi alpha board

We shot some great demos with this board, including this video of Quake III:

Raspberry Pi – Quake 3 demo

A little something for the weekend: here’s Eben showing the Raspberry Pi running Quake 3, and chatting a bit about the performance of the board. Thanks to Rob Bishop and Dave Emett for getting the demo running.

Pete spent the second half of 2011 turning the alpha board into a shippable product, and just before Christmas we produced the first 20 “beta boards”, 10 of which were sold at auction, raising over £10,000 for the Foundation.

The beginnings of a Bramble

Beta boards on parade

Here’s Dom, demoing both the board and his excellent taste in movie trailers:

Raspberry Pi Beta Board Bring up

See http://www.raspberrypi.org/ for more details, FAQ and forum.

Launch

Rather to Pete’s surprise, I took his beta board design (with a manually-added polygon in the Gerbers taking the place of Paul Grant’s infamous red wire), and ordered 2000 units from Egoman in China. After a few hiccups, units started to arrive in Cambridge, and on 29 February 2012, Raspberry Pi went on sale for the first time via our partners element14 and RS Components.

Pallet of pis

The first 2000 Raspberry Pis

Unboxing continues

The first Raspberry Pi from the first box from the first pallet

We took over 100,000 orders on the first day: something of a shock for an organisation that had imagined, in its wildest dreams, that it might see lifetime sales of 10,000 units. Some people who ordered that day had to wait until the summer to finally receive their units.

Evolution

Even as we struggled to catch up with demand, we were working on ways to improve the design. We quickly replaced the USB polyfuses in the top right-hand corner of the board with zero-ohm links to reduce IR drop. If you have a board with polyfuses, it’s a real limited edition; even more so if it also has Hynix memory. Pete’s “rev 2” design made this change permanent, tweaked the GPIO pin-out, and added one much-requested feature: mounting holes.

Revision 1 versus revision 2

If you look carefully, you’ll notice something else about the revision 2 board: it’s made in the UK. 2012 marked the start of our relationship with the Sony UK Technology Centre in Pencoed, South Wales. In the five years since, they’ve built every product we offer, including more than 12 million “big” Raspberry Pis and more than one million Zeros.

Celebrating 500,000 Welsh units, back when that seemed like a lot

Economies of scale, and the decline in the price of SDRAM, allowed us to double the memory capacity of the Model B to 512MB in the autumn of 2012. And as supply of Model B finally caught up with demand, we were able to launch the Model A, delivering on our original promise of a $25 computer.

A UK-built Raspberry Pi Model A

In 2014, James took all the lessons we’d learned from two-and-a-bit years in the market, and designed the Model B+, and its baby brother the Model A+. The Model B+ established the form factor for all our future products, with a 40-pin extended GPIO connector, four USB ports, and four mounting holes.

The Raspberry Pi 1 Model B+ — entering the era of proper product photography with a bang.

New toys

While James was working on the Model B+, Broadcom was busy behind the scenes developing a follow-on to the BCM2835 application processor. BCM2836 samples arrived in Cambridge at 18:00 one evening in April 2014 (chips never arrive at 09:00 — it’s always early evening, usually just before a public holiday), and within a few hours Dom had Raspbian, and the usual set of VideoCore multimedia demos, up and running.

We launched Raspberry Pi 2 at the start of 2015, pairing BCM2836 with 1GB of memory. With a quad-core Arm Cortex-A7 clocked at 900MHz, we’d increased performance sixfold, and memory fourfold, in just three years.

Nobody mention the xenon death flash.

And of course, while James was working on Raspberry Pi 2, Broadcom was developing BCM2837, with a quad-core 64-bit Arm Cortex-A53 clocked at 1.2GHz. Raspberry Pi 3 launched barely a year after Raspberry Pi 2, providing a further doubling of performance and, for the first time, wireless LAN and Bluetooth.

All our recent products are just the same board shot from different angles

Zero to hero

Where the PC industry has historically used Moore’s Law to “fill up” a given price point with more performance each year, the original Raspberry Pi used Moore’s law to deliver early-2000s PC performance at a lower price. But with Raspberry Pi 2 and 3, we’d gone back to filling up our original $35 price point. After the launch of Raspberry Pi 2, we started to wonder whether we could pull the same trick again, taking the original Raspberry Pi platform to a radically lower price point.

The result was Raspberry Pi Zero. Priced at just $5, with a 1GHz BCM2835 and 512MB of RAM, it was cheap enough to bundle on the front of The MagPi, making us the first computer magazine to give away a computer as a cover gift.

Cheap thrills

MagPi issue 40 in all its glory

We followed up with the $10 Raspberry Pi Zero W, launched exactly a year ago. This adds the wireless LAN and Bluetooth functionality from Raspberry Pi 3, using a rather improbable-looking PCB antenna designed by our buddies at Proant in Sweden.

Up to our old tricks again

Other things

Of course, this isn’t all. There has been a veritable blizzard of point releases; RAM changes; Chinese red units; promotional blue units; Brazilian blue-ish units; not to mention two Camera Modules, in two flavours each; a touchscreen; the Sense HAT (now aboard the ISS); three compute modules; and cases for the Raspberry Pi 3 and the Zero (the former just won a Design Effectiveness Award from the DBA). And on top of that, we publish three magazines (The MagPi, Hello World, and HackSpace magazine) and a whole host of Project Books and Essentials Guides.

Chinese Raspberry Pi 1 Model B

RS Components limited-edition blue Raspberry Pi 1 Model B

Brazilian-market Raspberry Pi 3 Model B

Visible-light Camera Module v2

Learning about injection moulding the hard way

250 pages of content each month, every month

Essential reading

Forward the Foundation

Why does all this matter? Because we’re providing everyone, everywhere, with the chance to own a general-purpose programmable computer for the price of a cup of coffee; because we’re giving people access to tools to let them learn new skills, build businesses, and bring their ideas to life; and because when you buy a Raspberry Pi product, every penny of profit goes to support the Raspberry Pi Foundation in its mission to change the face of computing education.

We’ve had an amazing six years, and they’ve been amazing in large part because of the community that’s grown up alongside us. This weekend, more than 150 Raspberry Jams will take place around the world, comprising the Raspberry Jam Big Birthday Weekend.

Raspberry Pi Big Birthday Weekend 2018. GIF with confetti and bopping JAM balloons

If you want to know more about the Raspberry Pi community, go ahead and find your nearest Jam on our interactive map — maybe we’ll see you there.

The post Happy birthday to us! appeared first on Raspberry Pi.

AWS Hot Startups for February 2018: Canva, Figma, InVision

Post Syndicated from Tina Barr original https://aws.amazon.com/blogs/aws/aws-hot-startups-for-february-2018-canva-figma-invision/

Note to readers! Starting next month, we will be publishing our monthly Hot Startups blog post on the AWS Startup Blog. Please come check us out.

As visual communication—whether through social media channels like Instagram or white space-heavy product pages—becomes a central part of everyone’s life, accessible design platforms and tools become more and more important in the world of tech. This trend is why we have chosen to spotlight three design-related startups—namely Canva, Figma, and InVision—as our hot startups for the month of February. Please read on to learn more about these design-savvy companies and be sure to check out our full post here.

Canva (Sydney, Australia)

For a long time, creating designs required expensive software, extensive studying, and time spent waiting for feedback from clients or colleagues. With Canva, a graphic design tool that makes creating designs much simpler and accessible, users have the opportunity to design anything and publish anywhere. The platform—which integrates professional design elements, including stock photography, graphic elements, and fonts for users to build designs either entirely from scratch or from thousands of free templates—is available on desktop, iOS, and Android, making it possible to spin up an invitation, poster, or graphic on a smartphone at any time.

To learn more about Canva, read our full interview with CEO Melanie Perkins here.

Figma (San Francisco, CA)

Figma is a cloud-based design platform that empowers designers to communicate and collaborate more effectively. Using recent advancements in WebGL, Figma offers a design tool that doesn’t require users to install any software or special operating systems. It also allows multiple people to work in a file at the same time—a crucial feature.

As the need for new design talent increases, the industry will need plenty of junior designers to keep up with the demand. Figma is prepared to help students by offering their platform for free. Through this, they “hope to give young designers the resources necessary to kick-start their education and eventually, their careers.”

For more about Figma, check out our full interview with CEO Dylan Field here.

InVision (New York, NY)

Founded in 2011 with the goal of helping improve every digital experience in the world, digital product design platform InVision helps users create a streamlined and scalable product design process, build and iterate on prototypes, and collaborate across organizations. The company, which raised a $100 million Series E last November, bringing its total funding to $235 million, currently powers the digital product design process at more than 80 percent of the Fortune 100 and at brands like Airbnb, HBO, Netflix, and Uber.

Learn more about InVision here.

Be sure to check out our full post on the AWS Startups blog!

-Tina

Playboy Brands Boing Boing a “Clickbait” Site With No Fair Use Defense

Post Syndicated from Andy original https://torrentfreak.com/playboy-brands-boing-boing-a-clickbait-site-with-no-fair-use-defense-180126/

In late 2017, Boing Boing co-editor Xeni Jardin posted an article in which she linked to an archive containing every Playboy centerfold image to date.

“Kind of amazing to see how our standards of hotness, and the art of commercial erotic photography, have changed over time,” Jardin noted.

While Boing Boing had nothing to do with the compilation, uploading, or storing of the Imgur-based archive, Playboy took exception to the popular blog linking to the album.

Noting that Jardin had referred to the archive uploader as a “wonderful person”, the adult publication responded with a lawsuit (pdf), claiming that Boing Boing had commercially exploited its copyrighted images.

Last week, with assistance from the Electronic Frontier Foundation, Boing Boing parent company Happy Mutants filed a motion to dismiss in which it defended its right to comment on and link to copyrighted content without that constituting infringement.

“This lawsuit is frankly mystifying. Playboy’s theory of liability seems to be that it is illegal to link to material posted by others on the web — an act performed daily by hundreds of millions of users of Facebook and Twitter, and by journalists like the ones in Playboy’s crosshairs here,” the company wrote.

EFF Senior Staff Attorney Daniel Nazer weighed in too, arguing that since Boing Boing’s reporting and commenting is protected by copyright’s fair use doctrine, the “deeply flawed” lawsuit should be dismissed.

Now, just a week later, Playboy has fired back. Opposing Happy Mutants’ request for the Court to dismiss the case, the company cites the now-famous Perfect 10 v. Amazon/Google case from 2007, which tried to prevent Google from facilitating access to infringing images.

Playboy highlights the court’s finding that Google could have been held contributorily liable – if it had knowledge that Perfect 10 images were available using its search engine, could have taken simple measures to prevent further damage, but failed to do so.

Turning to Boing Boing’s conduct, Playboy says that the company knew it was linking to infringing content, could have taken steps to prevent that, but failed to do so. It then launches an attack on the site itself, offering disparaging comments concerning its activities and business model.

“This is an important case. At issue is whether clickbait sites like Happy Mutants’ Boing Boing weblog — a site designed to attract viewers and encourage them to click on links in order to generate advertising revenue — can knowingly find, promote, and profit from infringing content with impunity,” Playboy writes.

“Clickbait sites like Boing Boing are not known for creating original content. Rather, their business model is based on ‘collecting’ interesting content created by others. As such, they effectively profit off the work of others without actually creating anything original themselves.”

Playboy notes that while sites like Boing Boing are within their rights to leverage works created by others, courts in the US and overseas have ruled that knowingly linking to infringing content is unacceptable.

Even given these conditions, Playboy argues, Happy Mutants and the EFF now want the Court to dismiss the case so that sites are free to “not only encourage, facilitate, and induce infringement, but to profit from those harmful activities.”

Claiming that Boing Boing’s only reason for linking to the infringing album was to “monetize the web traffic that over fifty years of Playboy photographs would generate”, Playboy insists that the site and parent company Happy Mutants was properly charged with copyright infringement.

Playboy also dismisses Boing Boing’s argument that a link to infringing content cannot result in liability due to the link having both infringing and substantial non-infringing uses.

First citing the Betamax case, which found that Betamax maker Sony could not be held liable for infringement because its video recorders had substantial non-infringing uses, Playboy counters with the Grokster decision, which held that the distributor of a product could be liable for infringement if there was an intent to encourage or support infringement.

“In this case, Happy Mutants’ offending link — which does nothing more than support infringing content — is good for nothing but promoting infringement and there is no legitimate public interest in its unlicensed availability,” Playboy notes.

In its motion to dismiss, Happy Mutants also argued that unless Playboy could identify users who “in fact downloaded — rather than simply viewing — the material in question,” the case should be dismissed. However, Playboy rejects the argument, claiming it is based on an erroneous interpretation of the law.

Citing the Grokster decision once more, the adult publisher notes that the Supreme Court found that someone infringes contributorily when they intentionally induce or encourage direct infringement.

“The argument that contributory infringement only lies where the defendant’s actions result in further infringement ignores the ‘or’ and collapses ‘inducing’ and ‘encouraging’ into one thing when they are two distinct things,” Playboy writes.

As for Boing Boing’s four classic fair use arguments, the publisher describes these as “extremely weak” and proceeds to hit them one by one.

In respect of the purpose and character of the use, Playboy discounts Boing Boing’s position that the aim of its post was to show “how our standards of hotness, and the art of commercial erotic photography, have changed over time.” The publisher argues that this is the exact same purpose as Playboy magazine itself, while highlighting its publication Playboy: The Complete Centerfolds, 1953-2016.

Moving on to the second factor of fair use – the nature of the copyrighted work – Playboy notes that an entire album of artwork is involved, rather than just a single image.

On the third factor, concerning the amount and substantiality of the original work used, Playboy argues that in order to publish an opinion on how “standards of hotness” had developed over time, there was no need to link to all of the pictures in the archive.

“Had only representative images from each decade, or perhaps even each year, been taken, this would be a very different case — but Happy Mutants cannot dispute that it knew it was linking to an illegal library of ‘Every Playboy Playmate Centerfold Ever’ since that is what it titled its blog post,” Playboy notes.

Finally, when considering the effect of the use upon the potential market for or value of the copyrighted work, Playboy says its archive of images continues to be monetized and Boing Boing’s use of infringing images jeopardizes that.

“Given that people are generally not going to pay for what is freely available, it is disingenuous of Happy Mutants to claim that promoting the free availability of infringing archives of Playboy’s work for viewing and downloading is not going to have an adverse effect on the value or market of that work,” the publisher adds.

While it appears the parties agree on very little, there is agreement on one key aspect of the case – its wider importance.

On the one hand, Playboy insists that a finding in its favor will ensure that people can’t commercially exploit infringing content with impunity. On the other, Boing Boing believes that the health of the entire Internet is at stake.

“The world can’t afford a judgment against us in this case — it would end the web as we know it, threatening everyone who publishes online, from us five weirdos in our basements to multimillion-dollar, globe-spanning publishing empires like Playboy,” the company concludes.

Playboy’s opposition to Happy Mutants’ motion to dismiss can be found here (pdf)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Digital making for new parents

Post Syndicated from Carrie Anne Philbin original https://www.raspberrypi.org/blog/digital-making-for-new-parents/

Solving problems that are meaningful to us is at the core of our approach to teaching and learning about technology here at the Raspberry Pi Foundation. Over the last eight months, I’ve noticed that the types of digital making projects that motivate and engage me have changed (can’t think why). Always looking for ways to save money and automate my life and the lives of my loved ones, I’ve been thinking a lot about how digital making projects could be the new best friend of any new parent.

A baby, oblivious to the amount its parents have spent on stuff they never knew existed last year.
Image: sweet baby by MRef photography / CC BY-ND 2.0

Baby Monitor

I never knew how much equipment one small child needs until very recently. I also had no idea of the range of technology that is on offer to support you as a new parent to ensure the perfect environment outside of the womb. Baby monitors are at the top of this list. There are lots of Raspberry Pi baby monitor projects with a range of sensing functionality already in existence, and we’ve blogged about some of them before. They’re a great example of how an understanding of technology can open up a range of solutions that won’t break the bank. I’m looking forward to using all the capabilities of the Raspberry Pi to keep an eye on baby.
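
As a taste of how little code the camera side of a monitor needs, here’s a rough sketch using the picamera library. The folder path and the 30-second interval are just placeholders for illustration; you’d sync or serve the stills however suits you.

from time import sleep
from picamera import PiCamera

camera = PiCamera(resolution=(1280, 720))
camera.start_preview()
sleep(2)  # give the sensor a couple of seconds to adjust to the light

# Save a numbered still every 30 seconds into a made-up folder you could
# sync to another device or serve over the network
for filename in camera.capture_continuous('/home/pi/nursery/img{counter:03d}.jpg'):
    sleep(30)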

Baby name generator

Another surprising discovery was just how difficult it is to name a human being. Surprising because I can give a name to an inanimate object in less than three seconds, and come up with nicknames for colleagues in less than a day. When it comes to my own offspring, though, I draw a blank. The only solution: write a Python program to randomly generate names based on some parameters!

import names
from guizero import App, ButtonGroup, Text, PushButton, TextBox

def get_name():
    # Generate a random first name for each gender option
    boyname = names.get_first_name(gender='male')
    girlname = names.get_first_name(gender='female')
    othername = names.get_first_name()

    # Combine the chosen first name with the surname typed into the text box
    if babygender.get() == "male":
        name.set(str(boyname) + " " + str(babylastname.get()))
    elif babygender.get() == "female":
        name.set(str(girlname) + " " + str(babylastname.get()))
    else:
        name.set(str(othername) + " " + str(babylastname.get()))

app = App("Baby name generator")
surname_label = Text(app, "What is your surname?")
babylastname = TextBox(app, width=50)
babygender = ButtonGroup(app, options=[["boy", "male"], ["girl", "female"], ["all", "all"]], selected="male", horizontal=True)
intro = Text(app, "Your baby name could be")
name = Text(app, "")
button = PushButton(app, get_name, text="Generate me a name")

app.display()

Thanks to the names and guizero Python libraries, it is super simple to create, and it resolves any possible parent-to-be naming disputes in mere minutes.

Food, Poo, or Love?

I love data. Not just in Star Trek, but also more generally. Collecting and analysing data to understand my sleep patterns, my eating habits, how much exercise I do, and how much time I spend watching YouTube videos consumes much of my time. So of course I want to know lots about the little person we’ve made, long before he can use language to tell us himself.

I’m told that most newborns’ needs are quite simple: they want food, they want to be changed, or they just want some cuddles. I’m certain it’s more complicated than this, but it’s a good starting point for a data set, so stick with me here. I also wondered whether there might be a correlation between the amplitude of the cry and the type of need the baby has. A bit of an imprecise indicator, maybe, but fun to start to think about.

This build’s success is mostly thanks to Pimoroni’s Rainbow HAT, which, conveniently, has three capacitive touch buttons to record the newborn’s need, four fourteen-segment displays to display the words “FOOD”, “POO”, and “LOVE” when a button is pressed, and seven multicoloured LEDs to indicate the ferociousness of the baby’s cry in glorious technicolour. With the addition of a microphone, the ‘Food, Poo, Love Machine’ was born. Here it is in action:

Food Poo Love – Raspberry Pi Baby Monitor Project

Food Poo Love – The Raspberry Pi baby monitor project that allows you to track data on your newborn baby.
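
If you fancy building something similar, a rough sketch of the Rainbow HAT side might look like the following. It assumes Pimoroni’s rainbowhat library (its touch, display, and rainbow interfaces), and get_cry_amplitude() is a made-up placeholder for whichever microphone reading you use, since the HAT itself has no mic.

import signal
import rainbowhat as rh

def get_cry_amplitude():
    # Made-up placeholder: return a 0.0-1.0 loudness reading from your own
    # microphone code (the Rainbow HAT has no microphone of its own)
    return 0.5

def record(need):
    # Show the need on the four 14-segment displays
    rh.display.clear()
    rh.display.print_str(need)
    rh.display.show()
    # Light up to seven LEDs to show how loud the crying was
    rh.rainbow.clear()
    for pixel in range(int(get_cry_amplitude() * 7)):
        rh.rainbow.set_pixel(pixel, 255, 0, 0)
    rh.rainbow.show()

@rh.touch.A.press()
def food(channel):
    record("FOOD")

@rh.touch.B.press()
def poo(channel):
    record("POO")

@rh.touch.C.press()
def love(channel):
    record("LOVE")

signal.pause()  # wait forever for button presses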

Automatic Baby mobile

Another project that I’ve not had time to hack on, but that I think would be really awesome, is to automate a baby cot mobile. Imagine this one moving to the Star Trek theme music:

Image courtesy of Gisele Blaker Designs (check out her cool shop!)

Pretty awesome.
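
If I ever find the time, the motor side needn’t be complicated either. Here’s a rough sketch using the gpiozero library, assuming a small DC motor on a driver board wired to GPIO 17 and 18 (pin numbers chosen purely for illustration), spinning the mobile gently for a minute and then resting.

from time import sleep
from gpiozero import Motor

# Made-up wiring: a small DC motor on a driver board, driven from GPIO 17 and 18
mobile = Motor(forward=17, backward=18)

while True:
    mobile.forward(speed=0.3)   # a gentle spin...
    sleep(60)                   # ...for a minute
    mobile.stop()
    sleep(300)                  # then rest for five minutes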

If you’ve got any more ideas for baby projects, do let me know. I’ll have a few months of nothing to do… right?

The post Digital making for new parents appeared first on Raspberry Pi.

Implementing Dynamic ETL Pipelines Using AWS Step Functions

Post Syndicated from Tara Van Unen original https://aws.amazon.com/blogs/compute/implementing-dynamic-etl-pipelines-using-aws-step-functions/

This post contributed by:
Wangechi Dole, AWS Solutions Architect
Milan Krasnansky, ING, Digital Solutions Developer, SGK
Rian Mookencherry, Director – Product Innovation, SGK

Data processing and transformation is a common use case you see in our customer case studies and success stories. Often, customers deal with complex data from a variety of sources that needs to be transformed and customized through a series of steps to make it useful to different systems and stakeholders. This can be difficult due to the ever-increasing volume, velocity, and variety of data. Today, data management challenges cannot be solved with traditional databases.

Workflow automation helps you build solutions that are repeatable, scalable, and reliable. You can use AWS Step Functions for this. A great example is how SGK used Step Functions to automate the ETL processes for their client. With Step Functions, SGK has been able to automate changes within the data management system, substantially reducing the time required for data processing.

In this post, SGK shares the details of how they used Step Functions to build a robust data processing system based on highly configurable business transformation rules for ETL processes.

SGK: Building dynamic ETL pipelines

SGK is a subsidiary of Matthews International Corporation, a diversified organization focusing on brand solutions and industrial technologies. SGK’s Global Content Creation Studio network creates compelling content and solutions that connect brands and products to consumers through multiple assets including photography, video, and copywriting.

We were recently contracted to build a sophisticated and scalable data management system for one of our clients. We chose to build the solution on AWS to leverage advanced, managed services that help to improve the speed and agility of development.

The data management system served two main functions:

  1. Ingesting a large amount of complex data to facilitate both reporting and product funding decisions for the client’s global marketing and supply chain organizations.
  2. Processing the data through normalization and applying complex algorithms and data transformations. The system goal was to provide information in the relevant context—such as strategic marketing, supply chain, product planning, etc. —to the end consumer through automated data feeds or updates to existing ETL systems.

We were faced with several challenges:

  • Output data that needed to be refreshed at least twice a day to provide fresh datasets to both local and global markets. That constant data refresh posed several challenges, especially around data management and replication across multiple databases.
  • The complexity of reporting business rules that needed to be updated on a constant basis.
  • Data that could not be processed as contiguous blocks of typical time-series data. The measurement of the data was done across seasons (that is, combination of dates), which often resulted with up to three overlapping seasons at any given time.
  • Input data that came from 10+ different data sources. Each data source ranged from 1–20K rows with as many as 85 columns per input source.

These challenges meant that our small dev team invested significant time in frequent configuration changes to the system and in data integrity verification to make sure everything was operating properly. Maintaining this system proved to be a daunting task, and that’s when we turned to Step Functions—along with other AWS services—to automate our ETL processes.

Solution overview

Our solution included the following AWS services:

  • AWS Step Functions: Before Step Functions was available, we were using multiple Lambda functions for this use case and running into memory limit issues. With Step Functions, we can execute steps in parallel in a cost-efficient manner, without running into memory limitations.
  • AWS Lambda: The Step Functions state machine uses Lambda functions to implement the Task states. Our Lambda functions are implemented in Java 8.
  • Amazon DynamoDB provides us with an easy and flexible way to manage business rules. We specify our rules as Keys. These are key-value pairs stored in a DynamoDB table.
  • Amazon RDS: Our ETL pipelines consume source data from our RDS MySQL database.
  • Amazon Redshift: We use Amazon Redshift for reporting purposes because it integrates with our BI tools. Currently we are using Tableau for reporting which integrates well with Amazon Redshift.
  • Amazon S3: We store our raw input files and intermediate results in S3 buckets.
  • Amazon CloudWatch Events: Our users expect results at a specific time. We use CloudWatch Events to trigger Step Functions on an automated schedule.

Solution architecture

This solution uses a declarative approach to defining business transformation rules that are applied by the underlying Step Functions state machine as data moves from RDS to Amazon Redshift. An S3 bucket is used to store intermediate results. A CloudWatch Event rule triggers the Step Functions state machine on a schedule. The following diagram illustrates our architecture:

Here are more details for the above diagram:

  1. A rule in CloudWatch Events triggers the state machine execution on an automated schedule.
  2. The state machine invokes the first Lambda function.
  3. The Lambda function deletes all existing records in Amazon Redshift. Depending on the dataset, the Lambda function can create a new table in Amazon Redshift to hold the data.
  4. The same Lambda function then retrieves Keys from a DynamoDB table. Keys represent specific marketing campaigns or seasons and map to specific records in RDS.
  5. The state machine executes the second Lambda function using the Keys from DynamoDB.
  6. The second Lambda function retrieves the referenced dataset from RDS. The records retrieved represent the entire dataset needed for a specific marketing campaign.
  7. The second Lambda function executes in parallel for each Key retrieved from DynamoDB and stores the output in CSV format temporarily in S3.
  8. Finally, the Lambda function uploads the data into Amazon Redshift.

To understand the above data processing workflow, take a closer look at the Step Functions state machine for this example.

We walk you through the state machine in more detail in the following sections.

Walkthrough

To get started, you need to:

  • Create a schedule in CloudWatch Events
  • Specify conditions for RDS data extracts
  • Create Amazon Redshift input files
  • Load data into Amazon Redshift

Step 1: Create a schedule in CloudWatch Events
Create rules in CloudWatch Events to trigger the Step Functions state machine on an automated schedule. The following is an example cron expression to automate your schedule:
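
The expression itself appears as a screenshot in the original post. Purely as an illustration (the rule name is made up, and boto3 is not part of the original walkthrough), a schedule matching the times described below could be created like this:

import boto3

events = boto3.client("events")

# Made-up rule name; the schedule fires at 03:00 and 14:00 UTC every day
events.put_rule(
    Name="etl-state-machine-schedule",
    ScheduleExpression="cron(0 3,14 * * ? *)",
    State="ENABLED",
)
# A follow-up put_targets call would point the rule at the state machine's ARN.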

In this example, the cron expression invokes the Step Functions state machine at 3:00am and 2:00pm (UTC) every day.

Step 2: Specify conditions for RDS data extracts
We use DynamoDB to store Keys that determine which rows of data to extract from our RDS MySQL database. An example Key is MCS2017, which stands for Marketing Campaign Spring 2017. Each campaign has a specific start and end date, and the corresponding dataset is stored in RDS MySQL. A record in RDS contains about 600 columns, and each Key can represent up to 20K records.

A given day can have multiple campaigns with different start and end dates running simultaneously. In the following example DynamoDB item, three campaigns are specified for the given date.
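
The item itself is shown as a screenshot in the original post, but its shape is simple enough to sketch. Assuming a table keyed on the run date with a list of campaign Keys (the table name, attribute names, and sample values here are illustrative, not the real schema), reading it with boto3 might look like this:

import boto3

# Table and attribute names here are illustrative, not the real schema
table = boto3.resource("dynamodb").Table("CampaignKeys")

# An item might look like:
#   {"RunDate": "2017-06-01", "Keys": ["MCS2017", "MCF2017", "SCS2017"]}
item = table.get_item(Key={"RunDate": "2017-06-01"}).get("Item", {})
campaign_keys = item.get("Keys", [])  # the Keys the state machine fans out over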

The state machine example shown above uses Keys 31, 32, and 33 in the first ChoiceState and Keys 21 and 22 in the second ChoiceState. These keys represent marketing campaigns for a given day. For example, on Monday, there are only two campaigns requested. The ChoiceState with Keys 21 and 22 is executed. If three campaigns are requested on Tuesday, for example, then ChoiceState with Keys 31, 32, and 33 is executed. MCS2017 can be represented by Key 21 and Key 33 on Monday and Tuesday, respectively. This approach gives us the flexibility to add or remove campaigns dynamically.

Step 3: Create Amazon Redshift input files
When the state machine begins execution, the first Lambda function is invoked as the resource for FirstState, represented in the Step Functions state machine as follows:

"Comment": ” AWS Amazon States Language.", 
  "StartAt": "FirstState",
 
"States": { 
  "FirstState": {
   
"Type": "Task",
   
"Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Start",
    "Next": "ChoiceState" 
  } 

As described in the solution architecture, the purpose of this Lambda function is to delete existing data in Amazon Redshift and retrieve keys from DynamoDB. In our use case, we found that deleting existing records was more efficient and less time-consuming than finding the delta and updating existing records. On average, an Amazon Redshift table can contain about 36 million cells, which translates to roughly 65K records. The following is the code snippet for the first Lambda function in Java 8:

public class LambdaFunctionHandler implements RequestHandler<Map<String, Object>, Map<String, String>> {
    Map<String, String> keys = new HashMap<>();

    public Map<String, String> handleRequest(Map<String, Object> input, Context context) {
        Properties config = getConfig();
        // 1. Clean the Amazon Redshift table
        new RedshiftDataService(config).cleaningTable();
        // 2. Read the current keys from DynamoDB
        List<String> keyList = new DynamoDBDataService(config).getCurrentKeys();
        for (int i = 0; i < keyList.size(); i++) {
            keys.put("key" + (i + 1), keyList.get(i));
        }
        keys.put("keyT", String.valueOf(keyList.size()));
        // 3. Return the key values and the key count
        return keys;
    }
}

The following JSON represents ChoiceState.

"ChoiceState": {
   "Type" : "Choice",
   "Choices": [ 
   {

      "Variable": "$.keyT",
     "StringEquals": "3",
     "Next": "CurrentThreeKeys" 
   }, 
   {

     "Variable": "$.keyT",
    "StringEquals": "2",
    "Next": "CurrentTwooKeys" 
   } 
 ], 
 "Default": "DefaultState"
}

The variable $.keyT represents the number of keys retrieved from DynamoDB. This variable determines which of the parallel branches should be executed. At the time of publication, Step Functions does not support dynamic parallel state. Therefore, choices under ChoiceState are manually created and assigned hardcoded StringEquals values. These values represent the number of parallel executions for the second Lambda function.

For example, if $.keyT equals 3, the second Lambda function is executed three times in parallel with keys, $key1, $key2 and $key3 retrieved from DynamoDB. Similarly, if $.keyT equals two, the second Lambda function is executed twice in parallel.  The following JSON represents this parallel execution:

"CurrentThreeKeys": { 
  "Type": "Parallel",
  "Next": "NextState",
  "Branches": [ 
  {

     "StartAt": “key31",
    "States": { 
       “key31": {

          "Type": "Task",
        "InputPath": "$.key1",
        "Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Execution",
        "End": true 
       } 
    } 
  }, 
  {

     "StartAt": “key32",
    "States": { 
     “key32": {

        "Type": "Task",
       "InputPath": "$.key2",
         "Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Execution",
       "End": true 
      } 
     } 
   }, 
   {

      "StartAt": “key33",
       "States": { 
          “key33": {

                "Type": "Task",
             "InputPath": "$.key3",
             "Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Execution",
           "End": true 
       } 
     } 
    } 
  ] 
} 

Step 4: Load data into Amazon Redshift
The second Lambda function in the state machine extracts records from RDS associated with the keys retrieved from DynamoDB. It processes the data and then loads it into an Amazon Redshift table. The following is a code snippet for the second Lambda function in Java 8.

public class LambdaFunctionHandler implements RequestHandler<String, String> {
    public static String key = null;

    public String handleRequest(String input, Context context) {
        key = input;
        // 1. Get basic configuration for the next classes and the S3 client
        Properties config = getConfig();
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        // 2. Export query results from RDS into the S3 bucket
        new RdsDataService(config).exportDataToS3(s3, key);
        // 3. Import query results from the S3 bucket into Amazon Redshift
        new RedshiftDataService(config).importDataFromS3(s3, key);
        System.out.println(input);
        return "SUCCESS";
    }
}

After the data is loaded into Amazon Redshift, end users can visualize it using their preferred business intelligence tools.

Lessons learned

  • At the time of publication, the 1.5 GB memory hard limit for Lambda functions was inadequate for processing our complex workload. Step Functions gave us the flexibility to chunk our large datasets and process them in parallel, saving on costs and time.
  • In our previous implementation, we assigned each key a dedicated Lambda function along with CloudWatch rules for schedule automation. This approach proved to be inefficient and quickly became an operational burden. Previously, we processed each key sequentially, with each key adding about five minutes to the overall processing time. For example, processing three keys meant that the total processing time was three times longer. With Step Functions, the entire state machine executes in about five minutes.
  • Using DynamoDB with Step Functions gave us the flexibility to manage keys efficiently. In our previous implementations, keys were hardcoded in Lambda functions, which became difficult to manage due to frequent updates. DynamoDB is a great way to store dynamic data that changes frequently, and it works perfectly with our serverless architectures.

Conclusion

With Step Functions, we were able to fully automate the frequent configuration updates to our datasets, resulting in significant cost savings, a reduced risk of data errors due to system downtime, and more time for us to focus on new product development rather than support-related issues. We hope you have found this information useful and that it serves as a jump-start for building your own ETL processes on AWS with managed AWS services.

For more information about how Step Functions makes it easy to coordinate the components of distributed applications and microservices in any workflow, see the use case examples and then build your first state machine in under five minutes in the Step Functions console.

If you have questions or suggestions, please comment below.

What’s the Best Solution for Managing Digital Photos and Videos?

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/discovering-best-solution-for-photo-video-backup/

Digital Asset Management (DAM)

If you have spent any time, as we have, talking to photographers and videographers about how they back up and archive their digital photos and videos, then you know that there’s no one answer or solution that users have discovered to meet their needs.

Based on what we’ve heard, visual media artists are still searching for the best combination of software, hardware, and cloud storage to preserve their media, and to be able to search, retrieve, and reuse that media as easily as possible.

Yes, there are a number of solutions out there, and some users have created combinations of hardware, software, and services to meet their needs, but we have met few who claim to be satisfied with their solution for digital asset management (DAM), or expect that they will be using the same solution in just a year or two.

We’d like to open a dialog with professionals and serious amateurs to learn more about what you’re doing, what you’d like to do, and how Backblaze might fit into that solution.

We have a bit of cred in this field, as we currently have hundreds of petabytes of digital media files in our data centers from users of Backblaze Backup and Backblaze B2 Cloud Storage. We want to make our cloud services as useful as possible for photographers and videographers.

Tell Us Both Your Current Solution and Your Dream Solution

To get started, we’d love to hear from you about how you’re managing your photos and videos. Whether you’re an amateur or a professional, your experiences are valuable and will help us understand how to provide the best cloud component of a digital asset management solution.

Here are some questions to consider:

  • Are you using direct-attached drives, NAS (Network-Attached Storage), or offline storage for your media?
  • Do you use the cloud for media you’re actively working on?
  • Do you back up or archive to the cloud?
  • Do you have a catalog or record of the media you’ve archived that you can use to search and retrieve it?
  • What’s different about how you work in the field (or traveling) versus how you work in a studio (or at home)?
  • What software and/or hardware currently works for you?
  • What’s the biggest impediment to working in the way you’d really like to?
  • How could the cloud work better for you?

Please Contribute Your Ideas

To contribute, please answer the following two questions in the comments below or send an email to [email protected]. Please comment or email your response by December 22, 2017.

  1. How are you currently backing up your digital photos, video files, and/or file libraries/catalogs? Do you have a backup system that uses attached drives, a local network, the cloud, or offline storage media? Does it work well for you?
  2. Imagine your ideal digital asset backup setup. What would it look like? Don’t be constrained by current products, technologies, brands, or solutions. Invent a technology or product if you wish. Describe an ideal system that would work the way you want it to.

We know you have opinions about managing photos and videos. Bring them on!

We’re soliciting answers far and wide from amateurs and experts, weekend video makers and well-known professional photographers. We have a few amateur and professional photographers and videographers here at Backblaze, and they are contributing their comments, as well.

Once we have gathered all the responses, we’ll write a post on what we learned about how people are currently working and what they would do if anything were possible. Look for that post after the beginning of the year.

Don’t Miss Future Posts on Media Management

We don’t want you to miss our future posts on photography, videography, and digital asset management. To receive email notices of blog updates (and no spam, we promise), enter your email address above using the Join button at the top of the page.

Come Back on Thursday for our Photography Post (and a Special Giveaway, too)

This coming Thursday we’ll have a blog post about the different ways that photographers and videographers are currently managing their digital media assets.

Plus, you’ll have the chance to win a valuable hardware/software combination for digital media management that I am sure you will appreciate. (You’ll have to wait until Thursday to find out what the prize is, but it has a total value of over $700.)

Past Posts on Photography, Videography, and Digital Asset Management

We’ve written a number of blog posts about photos, videos, and managing digital assets. We’ve posted links to some of them below.

Four Tips To Help Photographers and Videographers Get The Most From B2

Four Tips To Help Photographers and Videographers Get The Most From B2

How to Back Up Your Mac’s Photos Library

How to Back Up Your Mac’s Photos Library

How To Back Up Your Flickr Library

How To Back Up Your Flickr Library

Getting Video Archives Out of Your Closet

Getting Video Archives Out of Your Closet

B2 Cloud Storage Roundup

B2 Cloud Storage Roundup

Backing Up Photos While Traveling

Backing up photos while traveling – feedback

Should I Use an External Drive for Backup?

Should I use an external drive for backup?

How to Connect your Synology NAS to B2

How to Connect your Synology NAS to B2

The post What’s the Best Solution for Managing Digital Photos and Videos? appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

The Cost of Cloud Storage

Post Syndicated from Tim Nufire original https://www.backblaze.com/blog/cost-of-cloud-storage/

the cost of the cloud as a percentage of revenue

This week, we’re celebrating the one year anniversary of the launch of Backblaze B2 Cloud Storage. Today’s post is focused on giving you a peek behind the curtain about the costs of providing cloud storage. Why? Over the last 10 years, the most common question we get is still “how do you do it?” In this multi-billion dollar, global industry exhibiting exponential growth, none of the other major players seem to be willing to discuss the underlying costs. By exposing a chunk of the Backblaze financials, we hope to provide a better understanding of what it costs to run “the cloud,” and continue our tradition of sharing information for the betterment of the larger community.

Context
Backblaze built one of the industry’s largest cloud storage systems and we’re proud of that accomplishment. We bootstrapped the business and funded our growth through a combination of our own business operations and just $5.3M in equity financing ($2.8M of which was invested into the business – the other $2.5M was a tender offer to shareholders). To do this, we had to build our storage system efficiently and run as a real, self-sustaining, business. After over a decade in the data storage business, we have developed a deep understanding of cloud storage economics.

Definitions
I promise we’ll get into the costs of cloud storage soon, but some quick definitions first:

    Revenue: Money we collect from customers.
    Cost of Goods Sold (“COGS”): The costs associated with providing the service.
    Operating Expenses (“OpEx”): The costs associated with developing and selling the service.
    Income/Loss: What is left after subtracting COGS and OpEx from Revenue.

I’m going to focus today’s discussion on the Cost of Goods Sold (“COGS”): What goes into it, how it breaks down, and what percent of revenue it makes up. Backblaze is a roughly break-even business with COGS accounting for 47% of our revenue and the remaining 53% spent on our Operating Expenses (“OpEx”) like developing new features, marketing, sales, office rent, and other administrative costs that are required for us to be a functional company.

This post’s focus on COGS should let us answer the commonly asked question of “how do you provide cloud storage for such a low cost?”

Breaking Down Cloud COGS

Providing a cloud storage service requires the following components (COGS and OpEX – below we break out COGS):
cloud infrastructure costs as a percentage of revenue

  • Hardware: 23% of Revenue
  • Backblaze stores data on hard drives. Those hard drives are “wrapped” with servers so they can connect to the public Internet and store data. We’ve discussed our approach to how this works with our Vaults and Storage Pods. Our infrastructure is purpose built for data storage. That is, we thought about how data storage ought to work, and then built it from the ground up. Other companies may use different storage media like Flash, SSD, or even tape. But it all serves the same function of being the thing that data actually is stored on. For today, we’ll think of all this as “hardware.”

    We buy storage hardware that, on average, will last 5 years (60 months) before needing to be replaced. To account for hardware costs in a way that can be compared to our monthly expenses, we amortize them and recognize 1/60th of the purchase price each month.

    Storage Pods and hard drives are not the only hardware in our environment. We also have to buy the cabinets and rails that hold the servers, core servers that manage accounts/billing/etc., switches, routers, power strips, cables, and more. (Our post on bringing up a data center goes into some of this detail.) However, Storage Pods and the drives inside them make up about 90% of all the hardware cost.

  • Data Center (Space & Power): 8% of Revenue
  • “The cloud” is a great marketing term and one that has caught on for our industry. That said, all “clouds” store data on something physical like hard drives. Those hard drives (and servers) are actual, tangible things that take up actual space on earth, not in the clouds.

    At Backblaze, we lease space in colocation facilities which offer a secure, temperature controlled, reliable home for our equipment. Other companies build their own data centers. It’s the classic rent vs buy decision; but it always ends with hardware in racks in a data center.

    Hardware also needs power to function. Not everyone realizes it, but electricity is a significant cost of running cloud storage. In fact, some data center space is billed simply as a function of an electricity bill.

    Every hard drive storing data adds incremental space and power need. This is a cost that scales with storage growth.

    I also want to make a comment on taxes. We pay sales and property tax on hardware, and it is amortized as part of the hardware section above. However, it’s valuable to think about taxes when considering the data center since the location of the hardware actually drives the amount of taxes on the hardware that gets placed inside of it.

  • People: 7% of Revenue
  • Running a data center requires humans to make sure things go smoothly. The more data we store, the more human hands we need in the data center. All drives will fail eventually. When they fail, “stuff” needs to happen to get a replacement drive physically mounted inside the data center and filled with the customer data (all customer data is redundantly stored across multiple drives). The individuals that are associated specifically with managing the data center operations are included in COGS since, as you deploy more hard drives and servers, you need more of these people.

    Customer Support is the other group of people that are part of COGS. As customers use our services, questions invariably arise. To service our customers and get questions answered expediently, we staff customer support from our headquarters in San Mateo, CA. They do an amazing job! Staffing models, internally, are a function of the number of customers and the rate of acquiring new customers.

  • Bandwidth: 3% of Revenue
  • We have over 350 PB of customer data being stored across our data centers. The bulk of that has been uploaded by customers over the Internet (the other option, our Fireball service, is 6 months old and is seeing great adoption). Uploading data over the Internet requires bandwidth – basically, an Internet connection similar to the one running to your home or office. But, for a data center, instead of contracting with Time Warner or Comcast, we go “upstream.” Effectively, we’re buying wholesale.

    Understanding how that dynamic plays out with your customer base is a significant driver of how a cloud provider sets its pricing. Being in business for a decade has explicit advantages here. Because we understand our customer behavior, and have reached a certain scale, we are able to buy bandwidth in sufficient bulk to offer the industry’s best download pricing at $0.02 / Gigabyte (compared to $0.05 from Amazon, Google, and Microsoft).

    Why does optimizing download bandwidth charges matter for customers of a data storage business? Because it has a direct relationship to you being able to retrieve and use your data, which is important.

  • Other Fees: 6% of Revenue
  • We have grouped the remaining costs inside of “Other Fees.” This includes fees we pay to our payment processor as well as the costs of running our Restore Return Refund program.

    A payment processor is required for businesses like ours that need to accept credit cards securely over the Internet. The bulk of the money we pay to the payment processor is actually passed through to pay the credit card companies like AmEx, Visa, and Mastercard.

    The Restore Return Refund program is a unique program for our consumer and business backup business. Customers can download any and all of their files directly from our website. We also offer customers the ability to order a hard drive with some or all of their data on it, we then FedEx it to the customer wherever in the world she is. If the customer chooses, she can return the drive to us for a full refund. Customers love the program, but it does cost Backblaze money. We choose to subsidize the cost associated with this service in an effort to provide the best customer experience we can.

The Big Picture

At the beginning of the post, I mentioned that Backblaze is, effectively, a break even business. The reality is that our products drive a profitable business but those profits are invested back into the business to fund product development and growth. That means growing our team as the size and complexity of the business expands; it also means being fortunate enough to have the cash on hand to fund “reserves” of extra hardware, bandwidth, data center space, etc. In our first few years as a bootstrapped business, having sufficient buffer was a challenge. Having weathered that storm, we are particularly proud of being in a financial place where we can afford to make things a bit more predictable.

All this adds up to answer the question of how Backblaze has managed to carve out its slice of the cloud market – a market that is a key focus for some of the largest companies of our time. We have innovated a novel, purpose built storage infrastructure with our Vaults and Pods. That infrastructure allows us to keep costs very, very low. Low costs enable us to offer the world’s most affordable, reliable cloud storage.

Does reliable, affordable storage matter? For a company like Vintage Aerial, it enables them to digitize 50 years’ worth of aerial photography of rural America and share that national treasure with the world. Having the best download pricing in the storage industry means Austin City Limits, a PBS show out of Austin, can digitize and preserve over 550 concerts.

We think offering purpose built, affordable storage is important. It empowers our customers to monetize existing assets, make sure data is backed up (and not lost), and focus on their core business because we can handle their data storage needs.

The post The Cost of Cloud Storage appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

MagPi 59: the Raspberry Pi PC Challenge

Post Syndicated from Lucy Hattersley original https://www.raspberrypi.org/blog/magpi-59/

Hey everyone, Lucy here! I’m standing in for Rob this month to introduce The MagPi 59, the latest edition of the official Raspberry Pi magazine.

The MagPi 59

Ever wondered whether a Pi could truly replace your home computer? Looking for inspiration for a Pi-powered project you can make and use in the sunshine? Interested in winning a Raspberry Pi that’s a true collector’s item?

Then we’ve got you covered in Issue 59, out in stores today!

The MagPi 59

Shiny and new

The Raspberry Pi PC challenge

This month’s feature is fascinating! We set the legendary Rob Zwetsloot a challenge: use no other computer but a Raspberry Pi for a week, and let us know how it goes – for science!

Is there anything you can’t do with a $35 computer? To find out, you just have to read the magazine.

12 summer projects

We’re bringing together some of the greatest outdoor projects for the Raspberry Pi in this MagPi issue. From a high-altitude balloon, to aerial photography, to bike computers and motorised skateboards, there are plenty of bright ideas in The MagPi 59.

12 Summer Projects in The MagPi 59

Maybe your Pi will ripen in the sun?

The best of the rest in The MagPi 59

We’ve got a fantastic collection of community projects this month. Ingmar Stapel shows off Big Rob, his SatNav-guided robot, while Eric Page demonstrates his Dog Treat Dispenser. There are also interesting tutorials on building a GPS tracker, controlling a Raspberry Pi with an Android app and Bluetooth, and building an electronic wind chime with magnetometers.

You can even enter our give-away of 10 ultra-rare ‘Raspberry Pi 3 plus official case’ kits signed by none other than Eben Upton, co-creator of the Raspberry Pi. Win one and be the envy of the entire Raspberry Pi community!

Electronic Wind Chimes - MagPi 59

MAGNETS!

You can find The MagPi 59 in the UK right now, at WHSmith, Sainsbury’s, Asda, and Tesco. Copies will be arriving in US stores including Barnes & Noble and Micro Center very soon. You can also get a copy online from our store or via our Android or iOS app. And don’t forget: there’s always the free PDF as well.

Get reading, get making, and enjoy the new issue!

Rob isn’t here to add his signature Picard GIF, but we’ve sorted it for him. He loves a good pun, so he does! – Janina & Alex

The post MagPi 59: the Raspberry Pi PC Challenge appeared first on Raspberry Pi.

[$] ProofMode: a camera app for verifiable photography

Post Syndicated from corbet original https://lwn.net/Articles/726142/rss

The default apps on a mobile platform like Android are familiar targets for replacement, especially for developers concerned about security. But while messaging and voice apps (which can be replaced by Signal and Ostel, for instance) may be the best known examples, the non-profit Guardian Project has taken up the cause of improving the security features of the camera app. Its latest such project is ProofMode, an app to let users take photos and videos that can be verified as authentic by third parties.