Tag Archives: photography

3-2-1 Backup Best Practices Using the Cloud

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/3-2-1-backup-best-practices-using-the-cloud/

Archive 3-2-1

Whether you’re a hobbyist or a professional photographer or videographer, employing a 3-2-1 backup strategy for your valuable photos and videos is critical. A good backup strategy can protect you from accidental or incidental data loss and make sure your working or archived files are available when you need them.

Most photographers and videographers are aware of the necessity to back up their data, but like a lot of things that are good for us, like eating kale and exercising regularly, putting good habits into practice can be challenging. Maybe you’re currently using the cloud as part of your backup or archive strategy, or perhaps you’re still juggling hard disk drives between your workstations, a storage closet, and an offsite location.

If you’re not yet using the cloud, or are still getting familiar with the cloud for data backup and archiving, I’d like to go over some ways in which the cloud can make managing your data easier and provide you with a number of benefits you might not currently enjoy.

Let’s first do a quick review of 3-2-1 backup strategy.

The 3-2-1 Backup Strategy

A 3-2-1 strategy means having at least three total copies of your data, two of which are local but on different media, and at least one copy that is offsite.

A Typical 3-2-1 Scenario

Let’s use landscape.cr2 as an example file for this scenario. Landscape.cr2 lives on your primary computer. That’s one copy of the data file. You also have an external hard drive or Network-Attached Storage (NAS) that you use for backing up your computer. Your backup program runs on a regular schedule, or whenever a file is added to your system, and backs up landscape.cr2 to your external drive(s). That’s a second copy on a different device or medium. In addition to that external hard drive, you also have an online backup solution that makes another copy of your data. The backup program continuously scans your computer and uploads your data to a data center (aka the cloud). Landscape.cr2 is included in this upload, and that becomes the third copy of your data.
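
To make the scenario concrete, here is a minimal sketch of those three copies in code. The paths and the upload_to_cloud stub are illustrative assumptions, not any particular product's API; a real backup client does all of this for you on a schedule:

    import shutil
    from pathlib import Path

    original = Path.home() / "Photos" / "landscape.cr2"   # copy 1: primary computer
    external = Path("/Volumes/BackupDrive/Photos")        # copy 2: external drive or NAS

    def upload_to_cloud(path: Path) -> None:
        # Placeholder: in practice your backup client (or a cloud SDK)
        # sends the file to a data center, creating the offsite copy.
        print(f"uploading {path.name} to the cloud...")   # copy 3: offsite

    external.mkdir(parents=True, exist_ok=True)
    shutil.copy2(original, external / original.name)      # copy2 preserves timestamps
    upload_to_cloud(original)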

Why Two Onsite Copies and One Offsite Copy?

Whichever kind of computer you are using, an onsite backup is a simple way of having quick access to your data should anything happen to your computer. If your laptop or desktop’s hard drive crashes, and you have been regularly backing up to an external hard drive or NAS, you can quickly get the majority of your data back (or use the external drive on another computer while yours gets fixed or replaced). If you use an automatic backup program, the exposure for data loss is fairly minimal.

Synology NAS and cloud backup symbol

Having an onsite backup is a great start, but having an offsite backup is a key component to completing a backup strategy. Onsite backups are easy to set up, but unfortunately having a backup near the device that it’s backing up (for example, having a desktop PC or Mac and an external hard drive on the same desk), means that both of those copies of your data are susceptible to loss in case of fire, theft, water damage, or other unforeseen occurrences.

Backblaze data center

Most often, if the two devices you have as your local copies are close together, they'll both be affected if the unfortunate should happen. A continuously updated copy of your data that's not in the same physical location as the other two is paramount in protecting your files. Even the United States Government recommends this approach: in a 2012 publication for US-CERT (United States Computer Emergency Readiness Team) titled Data Backup Options, Carnegie Mellon recommended the 3-2-1 method.

The Cloud as Part of 3-2-1

a storage vault in the middle of a cloud

The cloud can make fulfilling the 3-2-1 strategy much easier. And, with recent advances in technology and cost competition, the cloud brings other advantages:

Broadband speed and coverage — Broadband bandwidth has increased and is more widely available, while the reach of cellular data service has made many remote locations accessible. It's possible to upload data to the cloud from home, from the office, and even while traveling in remote locations. For example, the summit of Mt. Everest now has mobile network service.

Competitive cost — Competition in cloud storage has made for competitive pricing and a range of services. The cloud is more affordable than ever.

Advantages of Adding the Cloud to 3-2-1

If you’re already using 3-2-1, then you’ve made a great start in keeping your data safe. If you’re not yet using the cloud as part of your backup strategy, then you might consider the following advantages of adding it to your data security plans.

Convenience
The offsite copy of your data required by 3-2-1 can be anywhere that's geographically separated from your primary location. That can be convenient for some, such as a photographer friend who leaves a backup hard disk at his mother's house during their regular Sunday dinner. It's not so easy for others, who have to transport or ship disks to another location to fulfill the separate-location requirement. The cloud handles this without any extra effort.

Durability
Cloud data centers are designed to protect data against outages, service interruptions, hardware failures, and natural disasters. Backblaze claims 99.999999999% (11 9s) annual durability for its customers’ data.

Sharing & Collaboration
Having data in the cloud can make sharing much easier. Users can control who has access and to what data. Backblaze Backup and B2 Cloud Storage support sharing links that can be sent to anyone who needs permanent or temporary access to stored data. This is ideal if you’re working with second shooters on a project or relaying final deliverables to a client.

Data Ingest/Seeding
As digital resolutions increase, media files grow larger and larger. Forty-five megapixel images and 8K digital videos can quickly fill up any storage media and put demands on the time and bandwidth required to transfer data. Some cloud services provide seeding services that enable physical transfer of data directly to the cloud. An example is the Backblaze B2 Fireball, which is a 70 TB hard disk array with 1 Gb/s connectivity that enables the customer to load and ship data securely to Backblaze's data centers.

Challenges of the Cloud

For some, there are real challenges using the cloud for backing up or archiving data, especially when they have a lot of data, as many photographers and videographers do. As services expand and new technologies are adopted, transfer speeds will continue to increase and should help overcome that hurdle.

Data center racks

In the meantime, here are some tips for meeting these challenges:

  • Schedule your data uploads for off hours when the network load is light and the transfers won’t impede other data traffic.
  • Leverage multi-threaded uploads to improve transfer speed (see the sketch after this list).
  • Take advantage of data ingest options to seed data to the cloud. It’s definitely faster and can even be more economical compared to other data transfer options.
  • Be patient. Once you get your initial files uploaded or seeded to the cloud, it becomes much easier to upload incremental updates. In the near future we will see 5G mobile networks and higher broadband speeds that will make data transfers even faster.
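
As an illustration of the multi-threaded upload tip above, here is a minimal sketch using Python's standard library. The upload_file function is a stand-in for whatever call your backup tool or cloud SDK actually provides; everything else is ordinary concurrent.futures usage:

    from concurrent.futures import ThreadPoolExecutor, as_completed
    from pathlib import Path

    def upload_file(path: Path) -> str:
        # Stand-in for a real upload call (an assumption, not a specific API).
        # Network I/O releases the GIL, so threads genuinely overlap transfers.
        return path.name

    files = list(Path("to_upload").glob("*.cr2"))

    # Several uploads in flight at once can saturate a connection
    # that a single stream never fills.
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(upload_file, f) for f in files]
        for done in as_completed(futures):
            print(f"finished {done.result()}")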

Are You Using the Cloud to Best Advantage?

Backups are great for your active projects, but how do you handle your archives? We recently wrote about the difference between backing up and archiving, and knowing the difference will improve your data management strategy.

Many photographers and videographers are using a backup or even a sync solution for their data when archiving is the approach that better suits their needs. Briefly, a data backup is for recovery from hardware failure or recent data corruption or loss, and an archive is for space management and long term retention. If you’re using a data backup or sync service to store data that you wish to keep permanently or long-term, you’re trying to fit a round peg into a square hole.

What’s the Best Use for Backup?

  • Working files currently being edited, or in a live project.
  • Documents, correspondence, application settings, and other transient system information.

What’s the Best Use for Archive?

  • Finished projects for which you wish to retain all or just the primary data files used.
  • Photos and videos that you might use again at some time in the future.
  • Media that has value to your business for possible future sales.

Making the Most of the Cloud

If you're following a 3-2-1 backup strategy that includes the cloud, you'll be ahead of 90% of your peers. The cloud is becoming more useful and more economical every day. Add in its security, the collaboration it enables with clients and peers, and its proven durability, and the cloud is an unbeatable choice for upping your game in data backup and archiving.

You can read more posts in this series written in conjunction with Lensrentals.com on photography and videography.

•  •  •

Note: This post originally appeared on Lensrentals.com on September 18, 2018.

The post 3-2-1 Backup Best Practices Using the Cloud appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Instaframe: image recognition meets Instagram

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/instaframe-image-recognition-meets-instagram/

Bringing the digital photo frame into an even more modern age than the modern age it already resides in, Sean Tracey uses image recognition and social media to update his mother on the day-to-day happenings of her grandkids.

Sharing social media content

“Like every grandmother, my mum dotes on her grandchildren (the daughter and son of my sister, Grace and Freddie),” Sean explains in his tutorial for the project, “but they don’t live nearby, so she doesn’t get to see them as much as she might like.”

Sean tells of his mother’s lack of interest in social media platforms (they’re too complex), and of the anxiety he feels whenever she picks up his phone to catch up on the latest images of Grace and Freddie.

So I thought: “I know! Why don’t I make my mum a picture frame that filters my Instagram feed to show only pictures of my niece and nephew!”

Genius!

Image recognition and Instagram

Sean’s Instaframe project uses a Watson Visual Recognition model to recognise photos of his niece and nephew posted to his Instagram account, all via a Chrome extension. Then, via a series of smaller functions, these images are saved to a folder and displayed on a screen connected to a Raspberry Pi 3B+.

Sean has written up a full rundown of the build process on his website.

Photos and Pi

Do you like photos and Raspberry Pi? Then check out these other photo-focused Pi projects that we’re sure you’ll love (because they’re awesome) and will want to make yourself (because they’re awesome).

FlipFrame

FlipFrame, the rotating picture frame, rotates according to the orientation of the image on display.

FlipFrame

Upstagram

This tiny homage to the house from Up! takes bird’s-eye view photographs of Paris and uploads them to Instagram as it goes.

Pi-powered DSLR shutter

Adrian Bevan hacked his Raspberry Pi to act as a motion-activated shutter remote for his digital SLR — aka NatureBytes on steroids.

The post Instaframe: image recognition meets Instagram appeared first on Raspberry Pi.

Five Best Practices to Securely Preserve Your Video, Photo, and Other Data

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/five-best-practices-to-securely-preserve-your-video-photo-and-other-data/

computer and camera overlooking a lake

Whether you’re working with video, photo, audio, or other data, preserving the security of your data has to be at the top of your priority list. Data security might sound like a challenging proposition, but by following just a handful of guidelines it becomes a straightforward and easily accomplished task.

We’d like to share what we consider best practices for maintaining the safety of your data. For both seasoned pros and those just getting started with digital media, these best practices are important to implement and revisit regularly. We believe that by following these practices — independently of which specific data storage software, service, or device you use — you will ensure that all your media and other data are kept secure to the greatest extent possible.

The Five Best Practices to Keep Your Digital Media Safe

1 — Keep Multiple Copies of Your Media Files

Everyone by now is likely familiar with the 3-2-1 strategy for maintaining multiple copies of your data (video, photos, digital asset management catalogs, etc.). Following a 3-2-1 strategy simply means that you should always have at least three copies of your active data, two of which are local, and at least one that is in another location.

a tech standing looking at a pod full of hard drives in a data center
Choose a reliable storage provider

Mind you, this is for active data, that is, files and other data that you are currently working on and want to have backed up in case of accident, theft, or hardware failure. Once you’re finished working with your data, you should consider archiving your data, which we’ve also written about on our blog.

2 — Use Trustworthy Vendors

There are times when you can legitimately cut corners to save money, and there are times when you shouldn't. When it comes to your digital media and services, you want to go with the best. That means using topnotch memory cards, HDDs and SSDs, software, and cloud services.

For hardware devices and software, it’s always helpful to read reviews or talk with others using the devices to find out how well they work. For hard drive reliability, our Drive Stats blog posts can be informative and are a unique source of information in the data storage industry.

For cloud storage, you want a vendor with a strong track record of reliability and cost stability. You don’t want to use a cloud service or other SaaS vendor that has a history of making it difficult or expensive to access or download your data from their service. A topnotch service vendor will be transparent in their business practices, inform you when there are any outages in their service or maintenance windows, and try as hard as possible to make things right if problems occur.

3 — Always Use Encryption (The Strongest Available)

Encrypting your data provides a number of benefits. It protects your data no matter where it is stored, and also when it is being moved — potentially the most vulnerable exposure your data will have.

Encrypted data can’t be altered or corrupted without the changes being detected, which provides another advantage. Encryption also enables you to meet requirements for privacy and security compliance and to keep up with changing rules and regulations.

Encryption comes in different flavors. You should always select the strongest encryption available, and make sure that any passwords or multi-factor authentication you use are strong and unique for each application.
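
As one concrete example, the sketch below encrypts a file before it ever leaves your machine, using the widely used Python cryptography package (our choice for illustration, not a tool named in this post). Whatever library you use, the key point stands: store the key separately from the data and guard it like your strongest password:

    from cryptography.fernet import Fernet  # third-party: pip install cryptography

    # Generate a key once and keep it somewhere safe (NOT next to the data);
    # anyone holding this key can decrypt your files.
    key = Fernet.generate_key()
    fernet = Fernet(key)

    with open("landscape.cr2", "rb") as f:
        encrypted = fernet.encrypt(f.read())   # authenticated encryption

    with open("landscape.cr2.enc", "wb") as f:
        f.write(encrypted)                     # this version is safe to upload

    # Decryption raises an error if the ciphertext was altered or corrupted,
    # which is the tamper-detection benefit described above.
    restored = fernet.decrypt(encrypted)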

4 — Automate Whenever Possible

Don’t rely on your memory or personal discipline alone to remember to regularly back up your data. While we always start with the best of intentions, we are busy and we often let things slide (much like resolving to exercise regularly). It’s better to have a regular schedule that you commit to, and best if the backups happen automatically. Many backup and archive apps let you specify when backups, incremental backups, or snapshots occur. You usually can set how many copies of your data to keep, and whether backups are triggered by the date and time or when data changes.
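
If your backup app can't schedule itself, even a small script plus your operating system's scheduler takes memory out of the equation. Here is a hedged sketch; the daily interval and archive paths are assumptions for illustration, and cron or Task Scheduler would be a more robust trigger than the loop shown:

    import shutil
    import time
    from datetime import datetime

    def backup_once() -> None:
        # Stand-in for your real backup step: copy changed files,
        # run your backup client, or trigger a snapshot.
        stamp = datetime.now().strftime("%Y-%m-%d")
        shutil.make_archive(f"/Volumes/BackupDrive/daily-{stamp}", "zip", "Photos")

    ONE_DAY = 24 * 60 * 60

    while True:
        backup_once()
        time.sleep(ONE_DAY)   # runs unattended; no discipline required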

Automating your backups and archives means that you won’t forget to back up and results in a greater likelihood that your data will not only be recoverable after an accident or hardware failure, but up to date. You’ll be glad for the reduced stress and worry in your life, as well.

5 — Be Mindful of Security in Your Workflow

Nobody wants to worry about security all the time, but if it’s ignored, sooner or later that inattention will catch up with you. The best way to both increase the security of your data and reduce stress in your life is to have a plan and implement it.

At its simplest, the concept of security mindfulness means that you should be conscious of how you handle your data during all stages of your workflow. Being mindful shouldn’t require you to overthink, stress or worry, but just to be aware of the possible outcomes of your decisions about how you’re handling your data.

If you follow the first four practices in this list, then this fifth concept should flow naturally from them. You’ve taken the right steps to a long term plan for maintaining your data securely.

Data Security Can Be Both Simple and Effective

The best security practices are the ones that are easy to follow consistently. If you pay attention to the five best practices we’ve outlined here, then you’re well on your way to secure data and peace of mind.

•  •  •

Note:  This post originally appeared on Lensrentals.com on September 18, 2018.

The post Five Best Practices to Securely Preserve Your Video, Photo, and Other Data appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Stereoscopic photography with StereoPi and a Raspberry Pi

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/stereoscopic-photography-stereopi-raspberry-pi/

StereoPi allows users to attach two Camera Modules to their Raspberry Pi Compute Module — it's a great tool for building stereoscopic cameras, 360º monitors, and virtual reality rigs.

StereoPi draft 1


My love for stereoscopic photography goes way back

My great-uncle Eric was a keen stereoscopic photographer and member of The Stereoscopic Society. Every memory I have of visiting him includes looking at his latest stereo creations through a pair of gorgeously antique-looking, wooden viewers. And I’ve since inherited the beautiful mahogany viewing cabinet that used to stand in his dining room.

It looks like this, but fancier

Stereoscopic photography has always fascinated me. Two images that seem identical suddenly become, as if by magic, a three-dimensional wonder. As a child, I couldn’t make sense of it. And even now, while I do understand how it actually works, it remains magical in my mind — like fairies at the bottom of the garden. Or magnets.

So it’s no wonder that I was instantly taken with StereoPi when I stumbled across its crowdfunding campaign on Twitter. Having wanted to make a Pi-based stereoscopic camera ever since I joined the organisation, but not knowing how best to go about it, I thought this new board seemed ideal for me.

The StereoPi board

Despite its name, StereoPi is more than just a stereoscopic camera board. How to attach two Camera Modules to a Raspberry Pi is a question people frequently ask us, and for all sorts of projects, from home security systems to robots, cameras, and VR.

Slim and standard editions of the StereoPi

The board attaches to any version of the Raspberry Pi Compute Module, including the newly released CM3+, and you can use it in conjunction with Raspbian to control it via the Python module picamera.
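
As a taste of what that looks like, here is a short sketch using picamera's stereoscopic support, which is available when two cameras are attached to a Compute Module; the stereo_mode option captures both views in a single side-by-side frame. Treat it as a starting point to adapt, not a tested StereoPi recipe:

    from picamera import PiCamera  # Raspbian's Python camera module

    # On a Compute Module with two cameras attached, picamera can
    # combine both views into one side-by-side frame.
    camera = PiCamera(stereo_mode='side-by-side', resolution=(1280, 480))
    camera.capture('stereo.jpg')   # left and right views in a single image
    camera.close()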

StereoPi stereoscopic livestream over 4G

StereoPi stereoscopic livestream over 4G. Project site: http://StereoPi.com

When it comes to what you can do with StereoPi, the possibilities are almost endless: mount two wide-angle lenses for 360º recording, build a VR rig to test out virtual reality games, or, as I plan to do, build a stereoscopic camera!

It’s on Crowd Supply now!

StereoPi is currently available to back on Crowd Supply, and purchase options start from $69. At 69% funded with 30 days still to go, we have faith that the StereoPi project will reach its goal and make its way into the world of impressive Raspberry Pi add-ons.

The post Stereoscopic photography with StereoPi and a Raspberry Pi appeared first on Raspberry Pi.

How Much Photo & Video Data Do You Have Stored?

Post Syndicated from Jim Goldstein original https://www.backblaze.com/blog/how-much-photo-video-data-do-you-have-stored/

How Much Photo and Video Data Do You Have?

Backblaze's Director of Marketing Operations, Jim, is not just a marketing wizard; he's worked as a professional photographer and run marketing for a gear rental business. He knows a lot of photographers. We thought that our readers would be interested in the results of an informal poll he recently conducted among his media friends about the amount of media data they store. You're invited to contribute to the poll, as well!

— Editor

I asked my circle of professional and amateur photographer friends how much digital media data they have stored. It was a quick survey, and not in any way scientific, but it did show the range of data use by photographers and videographers.

Jim's media data storage poll

I received 64 responses. The answers ranged from less than 5 TB (17 users) to 2 petabytes (1 user). The most popular response was 10-19 TB (18 users). Here are the results.

Jim's digital media storage poll results

How Much Digital Media Do You Have Stored?

I wondered if the results would be similar if I expanded our survey to a wider audience.

The poll below replicates what I asked of my circle of professional and non-professional photographer and videographer friends. The poll results will be updated in real time. I ask that you respond only once.

Backblaze is interested in the results because they will help us write blog articles that are useful to our readership and offer cloud services suited to the needs of our users. Please feel free to ask questions in the comments about cloud backup and storage, and about our products Backblaze Backup and Backblaze B2 Cloud Storage.

I’m anxious to see the results.

Our Poll — Please Vote!

How much photo/video data do you have in total (TB)?

Thanks for participating in the poll. If you’d like to provide more details about the data you store and how you do it, we’d love to hear from you in the comments.

The post How Much Photo & Video Data Do You Have Stored? appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Stories of Camera and Data Catastrophes

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/stories-of-camera-and-data-catastrophes/

Salt water damaged camera

This is the third post in a series of post exchanges with our friends at Lensrentals.com, a popular online site for renting photography, videography, and lighting equipment. Seeing as how Halloween is just a few days away, we thought it appropriate to offer some scary tales of camera and data catastrophes. Enjoy.

Note: You can read all of Lensrentals’ posts on our blog. Find all of our posts on the Lensrentals blog.

— Editor

Stories of Camera and Data Catastrophes
by Zach Sutton, Editor-in-chief, Lensrentals.com

As one of the largest photo and video gear rental companies in the world, Lensrentals.com ships out thousands of pieces of gear each day. It would be unrealistic to expect all of our gear to return to us in the same condition it was in when we rented it out. More often than not, the damage is the result of things being dropped, but now and then some pretty interesting things happen to the gear we rent out.

We have an incredible customer base, and when this kind of damage happens, they're more than happy to pay the necessary repair fees. Stuff happens, mistakes are made, and we have a full-service repair center to keep the costs low. And while we have insurance policies for accidental damage such as drops, dings, and other mishaps, they don't cover neglect, which accounts for the stories we're going to share with you below. Let's take a look at some of our more exciting camera and data catastrophe stories.

Camera Data Catastrophes

Data catastrophes happen more often than anything else, but they aren't exactly the most exciting stories we've collected over the years. The stories are usually similar. Someone rents a memory card or SSD from us, uses the card/SSD, then sends it back without pulling the footage off of it. When we receive gear back into our warehouse, we inspect and format all the media. If you realize your mistake and call or email us before that happens, we can usually put a hold on the media and ship it back to you to pull the data off of it. If we've already formatted the media, we will perform a recovery on the data using software such as TestDisk and PhotoRec, and let you know if we had any success. We then give you the option of renting the media again so it can be shipped to you and you can pull the files.

The Salty Sony A7sII

A common issue we run into — and have addressed a number of times on our blog — is the dubious term "weather resistant." Equipment marketers love the term, but it doesn't guarantee the protection people tend to assume it does.

One example of that was last year, when we received a nonfunctioning Sony a7sII back from the California coast, and had to disassemble it to determine what was wrong. Once we opened the camera, it was quite apparent that it had been submerged in salt water. Water isn't good for electronics, but the real killer is impurities, such as salt. Salt builds up on electronics, is a conductor of electricity, and will fry electronics in no time when power is applied. So, once we saw the salt corrosion, we knew that the camera was irreparable. Still, we disassembled it for no other reason than to provide evidence to others of what salt water can do to your electronics. You can read more about this and see the full breakdown in our post, About Getting Your Camera Wet… Teardown of a Salty Sony A7sII.

Sony A7sII disassembled into parts

Sony A7sII salt water damage

The Color Run Cleanup

Color runs are 5K running events that happen all over the world. If you haven’t seen one, participants and spectators toss colorful powders throughout the run, so that by the time the runners reach the finish line, they’re covered head to toe in colorful powder. This event sounds like a lot of fun, and one would naturally want to take photos of the spectacle, but any camera gear used for the event will definitely require a deep cleaning.

Color run damage to camera lens

Color run damage to camera

We’ve asked our clients multiple times not to take our cameras to color runs, but each year we get another system back that is covered in pink, green, and blue dust. The dust used for these events is incredibly fine, making it easy to get into every nook and cranny within the camera body and lenses. This requires the gear to be completely disassembled, cleaned, and reassembled. We have two photos in this post of the results of a color run, but you can view more on the post we did about Color runs back in 2013, How to Ruin Your (or Our) Gear in 5 Minutes (Without Water).

The Eclipse That Killed Cameras

About a year ago, we had the incredible phenomenon of a total solar eclipse here in the United States. It was the first total solar eclipse to occur in the continental United States since 1979, so it was a pretty exciting moment for all of us, though we braced ourselves for the damage it would do to cameras.

Eclipse camera lens damage

For weeks leading up to the event, we sent out fliers with our rentals that encouraged people to not only wear eye protection, but to protect their camera lenses with high-density ND filters. Despite that, in the days following the eclipse, we had gear coming back to us with aperture blades melted and holes burnt into sensors.

Eclipse camera damage

Eclipse camera shutter damage

As one would expect, it’s not a good idea to point your camera directly at the sun, especially for long periods of time. Most of the damage done from the eclipse was caused by people who had set up their camera and lens on a tripod pointing at the sun while waiting for the eclipse. This prolonged exposure causes a lot of heat to build up and will eventually start burning through apertures, shutters, sensors and anything else in its way. Not only do we recommend ND filters for the front of your lens, but also black cards to stop light from entering the camera until it’s go time for the total eclipse. You can read about the whole experience in our blog post on the topic, Rental Camera Gear Destroyed by the Solar Eclipse of 2017.

Damage from Burning Man

While we have countless stories of gear being destroyed, we figured it'd be best to just leave you with this one. Burning Man is an annual event that takes place in the deserts of Nevada. Touted as an art installation and experience, it draws tens of thousands of people who spend a few days living in the remote desert with fellow Burners, creating and participating in a wide range of activities. And where there is a desert, there are always sand, dust, and dust storms.

Burning Man camera damage

Burning Man dust damage

One might think that sand is the biggest nuisance for camera gear at Burning Man, but it's actually the fine dust that the wind picks up. One of the more interesting phenomena during Burning Man is the dust storm. Dust storms occur with little warning, kicking up the fine dust buried within the sand, which can quickly damage your electronics, your skin, and your lungs. Because it is so fine, the dust easily enters your cameras and lenses.

Burning Man damage to Nikon camera

While Burning Man doesn’t always totally destroy gear, it does result in a lot of cleaning and disassembling of gear after the event. This takes time and patience and costs the customer money. While there are stories of people who bring camera gear to Burning Man wrapped in nothing more than plastic and gaffer tape, we don’t recommend that for good gear. It’s best to just leave your camera at home, or buy an old camera for cheap to document the week. To see more of what can happen to gear at Burning Man, you can read our blog post on the topic, Please, Don’t Take Our Photography and Video Gear to Burning Man.

Those are just a few stories of some of the data and camera catastrophes that we’ve experienced over the years. We hope this serves as a warning to those who might be considering putting their gear through some of the experiences above and hopefully sway them against it. If you have some of your own stories on data or gear catastrophes, feel free to share them below in the comments.

— Zach Sutton, Editor-in-chief, Lensrentals.com

The post Stories of Camera and Data Catastrophes appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

SelfieBot: taking and printing photos with a smile

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/selfiebot-sophy-wong-raspberry-pi-camera/

Does your camera giggle and smile as it takes your photo? Does your camera spit out your image from a thermal printer? No? Well, Sophy Wong’s SelfieBot does!

Raspberry Pi SelfieBot: Selfie Camera with a Personality

SelfieBot is a project Kim and I originally made for our booth at Seattle Mini Maker Faire 2017. Now, you can build your own! A full tutorial for SelfieBot is up on the Adafruit Learning System at https://learn.adafruit.com/raspberry-pi-selfie-bot/. This was our first Raspberry Pi project, and is an experiment in DIY AI.

Pasties, projects, and plans

Last year, I built a Raspberry Pi photobooth for a friend's wedding, complete with a thermal printer for instant printouts, and a Twitter feed to keep those unable to attend the event in the loop. I called the project PastyCam, because I built it into the papier-mâché body of a Cornish pasty, and I planned on creating a tutorial blog post for the build. But I obviously haven't. And I think it's time, a year later, to admit defeat.

A photo of the Cornish Pasty photo booth Alex created for a wedding in Cornwall

The wedding was in Cornwall, so the Cornish pasty totally makes sense, alright?

But lucky for us, Sophy Wong has gifted us all with SelfieBot.

Sophy Wong

If you subscribe to HackSpace magazine, you’ll recognise Sophy from issue 4, where she adorned the cover, complete with glowing fingernails. And if you’re like me, you instantly wanted to be her as soon as you saw that image.

SelfieBot Raspberry Pi Camera

Makers should also know Sophy from her impressive contributions to the maker community, including her tutorials for Adafruit, her YouTube channel, and most recently her work with Mythbusters Jr.

sophy wong on Twitter

Filming for #MythbustersJr is wrapped, and I’m heading home to Seattle. What an incredible summer filled with amazing people. I’m so inspired by every single person, crew and cast, on this show, and I’ll miss you all until our paths cross again someday 😊

SelfieBot at Maker Faire

I saw SelfieBot in passing at Maker Faire Bay Area earlier this year. Yet somehow I managed to not introduce myself to Sophy and have a play with her Pi-powered creation. So a few weeks back at World Maker Faire New York, I accosted Sophy as soon as I could, and we bonded by swapping business cards and Pimoroni pins.

Creating SelfieBot

SelfieBot is more than just a printing photo booth. It giggles, it talks, it reacts to movement. It’s the robot version of that friend of yours who’s always taking photos. Always. All the time, Amy. It’s all the time! *ahem*

SelfieBot Raspberry Pi Camera

SelfieBot consists of a Raspberry Pi 2, a Pi Camera Module, a 5″ screen, an accelerometer, a mini thermal printer, and more, including 3D-printed and laser-cut parts.

sophy wong on Twitter

Getting SelfieBot ready for Maker Faire Bay Area next weekend! Super excited to be talking on Sunday with @kpimmel – come see us and meet SelfieBot!

If you want to build your own SelfieBot — and obviously you do — then you can find a complete breakdown of the build process, including info on all parts you’ll need, files for 3D printing, and so, so many wonderfully informative photographs, on the Adafruit Learning System!

The post SelfieBot: taking and printing photos with a smile appeared first on Raspberry Pi.

Securely Managing Your Digital Media (SD, CF, SSD, and Beyond)

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/securely-managing-your-digital-media-sd-cf-ssd-and-beyond/

3 rows of 3 memory cards

This is the second in our post exchange series with our friends Zach Sutton and Ryan Hill at Lensrentals.com, who have an online site for renting photography, videography, and lighting equipment. You can read our post from last month on their blog, 3-2-1 Backup Best Practices using Cloud Archiving, and all posts on our blog in this series at Lensrentals post series.

— Editor

Managing digital media securely is crucial for all photographers and videographers. At Lensrentals.com, we take media security very seriously, with dozens of rented memory cards, hard drives, and other data devices returned to our facility every day. All of our media is inspected after each and every rental. Most of the cards returned to us in rental shipments are not properly reformatted and erased, so it's part of our usual service to clear all the data from returned media to keep each client's identity and digital property secure.

We’ve gotten pretty good at the routine of managing data and formatting storage devices for our clients while making sure our media has a long life and remains free from corruption. Before we get too involved in our process of securing digital media, we should first talk fundamentals.

The Difference Between Erasing and Reformatting Digital Media

When you insert a card in the camera, you're likely given two options: erase the card or format the card. There is an important distinction between the two. Erasing images from a card does just that — erases them. That's it. It designates the area the prior data occupied on the card as available to write over and confirms to you that the data has been removed.

The term erase is a bit misleading here. The underlying data, the 1s and 0s that are recorded on the media, are still there. What really happens is that the drive's address table is changed to show that the space the previous file occupied is available for new data.

This is the reason that simply erasing a file does not securely remove it. Data recovery software can be used to recover that old data as long as it hasn’t been overwritten with new data.

Formatting goes further. When you format a drive or memory card, all of the files are erased (even files you've designated as "protected") and a fresh file system is usually written as well. This is a more effective method for removing all the data on the drive, since the space previously divided up for specific files gets a brand new structure unencumbered by whatever files were previously stored. Be aware, however, that it's possible to retrieve older data even after a format. Whether that can happen depends on the formatting method and whether new data has overwritten what was previously stored.

To make sure that the older data cannot be recovered, a secure erase goes further still. Rather than simply marking the old space as available to be overwritten, a secure erase writes a random pattern of 1s and 0s over the disk to make sure the old data is no longer retrievable. This takes longer and is more taxing on the card, because data is being overwritten rather than simply marked as removed.
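
For a sense of what a secure erase boils down to, here is a minimal sketch that overwrites a single file with random bytes before deleting it. One honest caveat: on flash media (SD cards, SSDs), wear leveling can remap blocks behind the scenes, so a device-level secure-erase utility is more trustworthy than any file-level overwrite:

    import os

    def secure_delete(path: str, passes: int = 1) -> None:
        """Overwrite a file with random data, then remove it."""
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(os.urandom(size))   # random 1s and 0s over the old data
                f.flush()
                os.fsync(f.fileno())        # push the overwrite to the device
        os.remove(path)

    secure_delete("old_footage.mov")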

Always Format a Card for the Camera You’re Going to Be Using

If you’ve ever tried to use the same memory card on cameras of different makes without formatting it, you may have seen problems with how the data files are displayed. Each camera system handles its file structure a little differently.

For this reason it’s advisable to format the card for the specific camera you’re using. If this is not done, there is a risk of corrupting data on the card.

Our Process For Securing Data

Our inspection process for recording media varies a little depending on what kind of card we're inspecting. For standardized media like SD cards or compact flash cards, we simply use a card reader to format the card to exFAT. This is done in Disk Utility on the Apple MacBooks that we issue to each of our Video Technicians. We use exFAT specifically because it's recognizable by just about every device. Since these cards are used in a wide variety of different cameras, recorders, and accessories, and we have no way of knowing at the point of inspection what device they'll be used with, we have to choose a format that will allow any camera to recognize the card. While our customer may still have to format a card in a camera for file structure purposes, the card will at least always come formatted in a way that the camera can recognize.

Sony SxS media
For proprietary media — things like REDMAGs, SxS, and other cards that we know will only be used in a particular camera — we use cameras to do the formatting. While the exFAT system would technically work, a camera-specific erase and format process saves the customer a step and allows us to more regularly double-check the media ports on our cameras. In fact, we actually format these cards twice at inspection. First, the Technician erases the card to clear out any customer footage that may have been left on it. Next, they record a new clip to the card, around 30 seconds, just to make sure everything is working as it’s supposed to. Finally, they format the card again, erasing the test footage before sending it to the shelf where it awaits use by another customer.

REDMAG Red Mini-Mag

You'll notice that at no point in this process do we do a full secure erase. This is both to save time and to prevent unnecessary wear and tear on the cards. About 75% of the media we get back from orders still has footage on it, so we don't get the impression that many of our customers are overly concerned with keeping their footage private once they're done shooting. However, if you are one of the 25% who have a personal or professional interest in keeping your footage secure after shooting, we'd recommend that you securely erase the media before returning rented memory cards and drives. Or, if you'd rather we handle it, just send an email or note with your return order requesting that we perform a secure erase rather than simply formatting the cards, and we'll be happy to oblige.

Managing your digital media securely can be easy if done right. Data management and backing up files, on the other hand, can be more involved and require more planning. If you have any questions on that topic, be sure to check out our recent blog post on proper data backup.

— Zach Sutton and Ryan Hill, lensrentals.com

The post Securely Managing Your Digital Media (SD, CF, SSD, and Beyond) appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Protecting Your Data From Camera to Archive

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/protecting-your-data-from-camera-to-archive/

Camera data getting backed up to Backblaze B2 cloud

Lensrentals.com is a highly respected company that rents photography and videography equipment. We’re a fan of their blog and asked Zach Sutton and Ryan Hill of Lensrentals to contribute something for our audience. We also contributed a post to their blog that was posted today: 3-2-1 Backup Best Practices using Cloud Archiving.

Enjoy!

— Editor

At Lensrentals.com we get a number of support calls, and unfortunately some of the most common ones involve data catastrophes.

The first of the frequent calls is from someone who thought they transferred over their footage or photos before returning their rental and discovered later that they were missing some images or footage. If we haven't already gone through an inspection of those cards, it's usually not a problem to send the cards back to them so they can collect their data. But if our techs have inspected the memory cards, then there isn't much we can do. Our team at Lensrentals.com performs a full and secure reformatting of the cards to keep each customer's data safe from the next renter. Once that footage is gone, it is unrecoverable and gone forever. This is never a fun conversation to have.

The second scenario is when a customer calls to tell us that they did manage to transfer all the footage over, but one or more of the clips or images were corrupted in the transferring process. Typically, people don’t discover this until after they’ve sent back the memory cards, and after we’ve already formatted the original media. This is another tough phone call to have. On occasion, data corruption happens in camera, but more often than not, the file gets corrupted during the transfer from the media to the computer or hard drive.

These kinds of problems aren’t entirely avoidable and are inherent risks users take when working with digital media. However, as with all risks, you can take proper steps to assure that your data is safe. If a problem arises, there are techniques you can use to work around it.

We’ve summarized our best suggestions for protecting your data from camera to archive in the following sections. We hope you find them useful.

How to Protect Your Digital Assets

Before Your Shoot

The first and most obvious step toward keeping your data safe is to use reliable media. For us, we recommend using cards from brands you trust, such as SanDisk, Lexar, or ProGrade Digital (a company that took the reins from Lexar). For hard drives, SanDisk, Samsung, Western Digital, and Intel are all considered incredibly reliable. These brands may be more expensive than bargain brands but have been proven time and time again to be more reliable. The few extra dollars spent on reliable media will potentially save you thousands in the long run and will assure that your data is safe and free of corruption.

One of the most important things you should do before any shoot is format your memory card in the camera. Formatting in camera is a great way to minimize file corruption as it keeps the card's file structure conforming to that camera manufacturer's specifications, and it should be done every time before every shoot. Equally important, if the camera gives you an option to do a complete or secure format, take that option over the other low-level formatting options available. In the same vein, it's essential to also take the time to research whether your camera needs to unmount or "eject" the media before you remove it physically. While this applies more to video recording systems, like those found on the RED camera platform and the Odyssey 7Q, it's always worth checking to avoid any corruption of the data. More often than not, preventable data corruption happens when users turn off the camera system before the media has been unmounted.

Finally, if you're shooting for the entire day, make sure you have enough media on hand to cover it, so that you do not need to back up and reformat cards throughout the shoot. While it's possible to take footage off of a card, reformat it, and use it again the same day, that is not something you'd want to be doing in the hectic environment of a shoot day — it's best to have extra media on hand. We've all deleted a file we didn't mean to, so it's best to avoid that mistake by not having to delete or manage files while shooting. Play it safe, and only reformat when you have the time and a clear head to do so.

During Your Shoot

On many modern camera systems, you have the option of dual-recording using two different card slots. If your camera offers this option, we cannot recommend it enough. Doubling the media you're recording onto means the failure of one memory card won't cost you the footage. While the added cost may be a hard sell, it's negligible when compared to all the money spent on lights, cameras, actors, and lousy pizza for the day. Additionally, develop a system that works for you and keeps everything as organized as possible. Spent media shouldn't be kept in the same location as unused media, and your file structure should be consistent throughout the entire shoot. A proper file structure not only saves time but assures that none of the footage goes missing after the shoot, lost in some random folder.

Camera memory cards

One of the most critical jobs on set is the work of a DIT (Digital Imaging Technician) for video, or a DT (Digital Technician) for photography. Essentially, the responsibilities of these positions are to keep the data archived and organized on a set, along with metadata logging and other technical tasks involved in keeping a shoot organized. While it may not be cost effective to have a DIT/DT on every shoot, if the budget allows for it, I highly recommend you hire one to take on these responsibilities. Having someone on set who is solely responsible for safely backing up and organizing footage helps keep the rest of the crew focused on their obligations and assures nothing goes wrong. When they're not transferring and archiving data, DIT/DTs also log metadata, color-correct footage, and help with other preliminary editing processes. Even if the budget doesn't allow for this position to be filled, work to find someone who can solely handle these processes while on set. You don't want your camera operator to be in charge of also backing up and organizing footage if you can help it.

Ingest Software

If there is one piece of information we'd like videographers and photographers to take away from this article, it is this: file-moving or 'offloading' software is worth the investment and should be used every time you shoot anything. For those who are unfamiliar with offload software, it's any application designed to make it easier for you to back up footage from one location to another, shoot after shoot. In short, to avoid accidents or data corruption, it's always best to have your media on a MINIMUM of two different devices. The easiest way to do this is to simply dump media onto two separate hard drives, and keep those drives stored separately. Ideally (if the budget allows), you'll also keep all of your data on the original media for the day, making sure you have multiple copies stored in various locations. Many other options are available and recommended if possible, such as RAID arrays or copying the data over to a cloud service such as Backblaze B2. Offloading software automates exactly this process, verifying all the data as it's transferred.

There are a few different recommendations I give for offloading software, all at different price points and with unique features. At the highest end of video production, you'll often see DITs using a piece of software called Silverstack, which offers color grading functionality, LTO tape support, and basic editing tools for creating daily edits. At a $600 annual price, it is the most expensive in this field and is probably overkill for most users. My own recommendation is a tool called ShotPut Pro. At $129, ShotPut Pro offers all the tools you'd need to build a great archiving process while sacrificing some of the color editing tools. ShotPut Pro can simultaneously copy and transfer files to multiple locations, build PDF reports, and verify all transfers. If you're looking for something even cheaper, there are additional options such as Offload and Hedge. They're both available for $99 each and give you all the tools you'd need within their simple interfaces.

When it comes to photo, the two most obvious choices are Adobe Lightroom and Capture One Pro. While both tools are known more for their editing tools, they also have a lot of archiving functions built into their ingest systems, allowing you to unload cards to multiple locations and make copies on the fly.

workstation with video camera and RAID NAS

When it comes to video, the most crucial feature all of these apps should have is an option called "checksum verification." This subject can get complicated, but all you really need to know is that larger files are more likely to be corrupted when transferring and copying, and checksum verification confirms that the copy is identical to the original version down to the individual byte. It is by far the most reliable and effective way to ensure that entire volumes of data are copied without corruption or loss. Whichever application you choose, make sure checksum verification is an available feature and part of your workflow every time you're copying video files. Checksum verification is also available in select photo ingesting software, though corruption happens less often with smaller files and is generally less of an issue. Still, if possible, use it.
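
To show what checksum verification boils down to, here is a sketch that copies a clip to two destinations and confirms each copy matches the original byte for byte using SHA-256. Offload tools wrap this same idea (often with faster hashes such as xxHash or MD5) in a friendlier interface; the paths here are hypothetical:

    import hashlib
    import shutil
    from pathlib import Path

    def sha256(path: Path) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1024 * 1024), b""):  # 1 MB chunks
                h.update(chunk)
        return h.hexdigest()

    source = Path("/Volumes/CARD/CLIP0001.MOV")
    destinations = [Path("/Volumes/DriveA"), Path("/Volumes/DriveB")]

    expected = sha256(source)
    for dest in destinations:
        target = dest / source.name
        shutil.copy2(source, target)
        if sha256(target) != expected:        # verify, don't assume
            raise RuntimeError(f"corrupt copy at {target}")
    print("both copies verified against the source")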

Post-Production

Once you've completed your shoot and all of your data is safely transferred over to external drives, it's time to look at how you can store your information long term. Different people approach archiving in different ways because none of us have an identical workflow. There is no single correct way to archive your photos and videos, but there are a few rules that you'll want to implement.

The first rule is the most obvious. You’ll want to make sure your media is stored on multiple drives. That way, if one of your drives dies on you, you still have a backup version of the work ready to go. The second rule of thumb is that you’ll want to store these backups in different locations. This can be extremely important if there is a fire in your office, or you’re a victim of a robbery. The most obvious way to do this is to back up or archive into a cloud service such as Backblaze B2. In my production experience I’ve seen multiple production houses implement a system where they store their backup hard drives in a safety deposit box at their bank. The final rule of thumb is especially important when you’re working with significant amounts of data, and that is to keep a working drive separate from an archive drive. The reason for this is an obvious one: all hard drives have a life expectancy, and you can prolong that by minimizing drive use. Having a working drive separate from your archive drives means that your archive drives will have fewer hours on them, thereby extending their practical life.

Ryan Hill’s Workflow

To help visualize what we discussed above, I'll lay out my personal workflow for you. Please keep in mind that I'm mainly a one-man band, so my workflow is based on me handling everything. I also work with a large variety of mediums, so nothing I'm doing is video or camera specific: all of my video projects, photo projects, and graphic projects are organized the same way. I won't bore you with details of my file structure, except to say that everything in my root folder is organized by job number, followed by sub-folders with the data classified into categories. I keep track of which jobs are which in a Google Spreadsheet that pairs job numbers with descriptions and client information. All of this information is secured within my Google account, which also lets me access it from anywhere if needed.

With archiving, my system is pretty simple. I've got a 4-drive RAID array in my office that gets updated every time I'm working on a new project. The array is set to RAID 1+0, which means I could lose two of the four hard drives (one from each mirrored pair) and still be able to recover the data. Usually, I'll put 1 TB drives in each bay, fill them as I work on projects, and replace them when they're full. Once they're full, I label them with the corresponding job numbers and store them in a plastic case on my bookshelf. By no means am I suggesting that mine is a perfect system, but for me, it's incredibly adaptable to the various projects I work on. If I were robbed, or if my house caught fire, I'd still have all of my work archived on a cloud system, giving me a second level of security.

Finally, to finish up my backup solution, I also keep a two-bay Thunderbolt hard drive dock on my desk as my working drive system. Solid state drives (SSDs) and the Thunderbolt connection give me the speed and reliability that I need from a drive that I'll be working from and rendering outputs off of. For now, there is a single 960 GB SSD in the first bay, with the option to extend to the second bay if I need additional storage. I start work by transferring the job file from my archive to the working drive, do whatever I need to do to the files, then replace the old job folder on my archive with the updated one at the end of the day. This way, if I were to have a drive failure, the worst I'd lose is a day's worth of work. For video projects or anything that takes a lot of data, I usually keep copies of all my source files on both my working and archive drives, and just replace the Adobe Premiere project file as I go. Again, this is just the system that works for me, and I recommend you develop one that suits your workflow while keeping your data safe.

The Takeaway

The critical point you should take away is that these sorts of strategies are things you should be thinking about at every step of your production. How does your camera or codec choice affect your media needs? How are you going to ensure safe data backup in the field? How are you going to work with all of this footage in post-production in a way that’s both secure and efficient? Answering all of these questions ahead of time will keep your media safe and your clients happy.

— Zach Sutton and Ryan Hill, lensrentals.com

The post Protecting Your Data From Camera to Archive appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Whimsical builds and messing things up

Post Syndicated from Helen Lynn original https://www.raspberrypi.org/blog/whimsical-builds-and-messing-things-up/

Today is the early May bank holiday in England and Wales, a public holiday, and while this blog rarely rests, the Pi Towers team does. So, while we take a day with our families, our friends, and/or our favourite pastimes, I thought I’d point you at a couple of features from HackSpace magazine, our monthly magazine for makers.

To my mind, they go quite well with a deckchair in the garden, the buzz of a lawnmower a few houses down, and a view of the weeds I ought to have dealt with by now, but I’m sure you’ll find your own ambience.

Make anything with pencils – HackSpace magazine

If you want a unique piece of jewellery to show your love for pencils, follow Peter Brown’s lead. Peter glued twelve pencils together in two rows of six. He then measured the size of his finger and drilled a hole between the glued pencils using a drill bit.

First off, pencils. It hadn’t occurred to me that you could make super useful stuff like a miniature crossbow and a catapult out of pencils. Not only can you do this, you can probably go ahead and do it right now: all you need is a handful of pencils, some rubber bands, some drawing pins, and a bulldog clip (or, as you might prefer, some push pins and a binder clip). The sentence that really leaps out at me here is “To keep a handful of boys aged three to eleven occupied during a family trip, Marie decided to build mini crossbows to help their target practice.” The internet hasn’t helped me find out much about Marie, but I am in awe of her.

If you haven’t wandered off to make a stationery arsenal by now, read Lucy Rogers’ reflections on making a right mess of things. I hope you read it, because I think it’d be great if more people coped better with the fact that we all, unavoidably, fail. You probably won’t really get anywhere without a few goes where you just completely muck it all up.

A ceramic mug, broken into several pieces on the floor

Never mind. We can always line a plant pot with them.
“In Pieces” by dusk-photography / CC BY

This is true of everything. Wet lab work and gardening and coding and parenting. You can share your heroic failures in the comments, if you like, as well as any historic weaponry you have fashioned from the contents of your desk tidy.

The post Whimsical builds and messing things up appeared first on Raspberry Pi.

Community profile: Dave Akerman

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/community-profile-dave-akerman/

This column is from The MagPi issue 61. You can download a PDF of the full issue for free, or subscribe to receive the print edition through your letterbox or the digital edition on your tablet. All proceeds from the print and digital editions help the Raspberry Pi Foundation achieve our charitable goals.

The pinned tweet on Dave Akerman’s Twitter account shows a table displaying the various components needed for a high-altitude balloon (HAB) flight. Batteries, leads, a camera and Raspberry Pi, plus an unusually themed payload. The caption reads ‘The Queen, The Duke of York, and my TARDIS’, and sums up Dave’s maker career in a heartbeat.

David Akerman on Twitter

The Queen, The Duke of York, and my TARDIS 🙂 #UKHAS #RaspberryPi

Though writing software for industrial automation pays the bills, the majority of Dave’s time is spent in the world of high-altitude ballooning and the ever-growing community that encompasses it. And, while he makes some money sending business-themed balloons to near space for the likes of Aardman Animations, Confused.com, and the BBC, Dave is best known in the Raspberry Pi community for his use of the small computer in every payload, and his work as a tutor alongside the Foundation’s staff at Skycademy events.

Dave Akerman The MagPi Raspberry Pi Community Profile

Dave continues to help others while breaking records and having a good time exploring the atmosphere.

Dave has dedicated many hours and many, many more miles to assist with the Foundation’s Skycademy programme, helping to explore high-altitude ballooning with educators from across the UK. Using a Raspberry Pi and various other pieces of lightweight tech, Dave and Foundation staff member James Robinson explored the incorporation of high-altitude ballooning into education. Through Skycademy, educators were able to learn new skills and take them to the classroom, setting off their own balloons with their students, and recording the results on Raspberry Pis.

Dave Akerman The MagPi Raspberry Pi Community Profile

Dave’s most recent flight set a new record. On 13 August 2017, his HAB payload sent back the highest images taken by any amateur flight.

But education isn’t the only reason for Dave’s involvement in the HAB community. As with anyone passionate about a specific hobby, Dave strives to break records. His most recent record-breaking flight took place on 13 August 2017, when his Raspberry Pi Zero HAB sent home the highest images taken by any amateur high-altitude balloon launch: from 43,014 metres. No other HAB balloon has provided images from such an altitude, and the lightweight nature of the Pi Zero definitely helped, as Dave went on to mention on Twitter a few days later.

Dave Akerman The MagPi Raspberry Pi Community Profile

Dave is recognised as being the first person to incorporate a Raspberry Pi into a HAB payload, and continues to break records with the help of the little green board. More recently, he’s been able to lighten the load by using the Raspberry Pi Zero.

When the first Pi made its way to near space, Dave tore the computer apart in order to meet the weight restriction. The Pi in the Sky board was created to add the extra features needed for the flight. Since then, the HAT has experienced a few changes.

Dave Akerman The MagPi Raspberry Pi Community Profile

The Pi in the Sky board, created specifically for HAB flights.

Dave first fell in love with high-altitude ballooning after coming across the hobby in a video shared on a photographic forum. With a lifelong interest in space thanks to watching the Moon landings as a boy, plus a talent for electronics and photography, it seemed a natural progression for him. Throw in the coding skills he picked up learning to program on a Teletype, and it’s no wonder he was ready and eager to take to the skies, so to speak, and capture the curvature of the Earth. What was so great about using the Raspberry Pi was the instant gratification of receiving images in real time as they were taken during the flight. While other devices could control a camera and store captured images for later retrieval, thanks to the Pi, Dave was able to transmit the files back down to Earth and check the progress of his balloon while attempting to break records.

Dave Akerman The MagPi Raspberry Pi Community Profile Morph

One of the many commercial flights Dave has organised featured the classic children’s TV character Morph, a creation of the Aardman Animations studio known for Wallace and Gromit. Morph took to the sky twice in his mission to reach near space, and finally succeeded in 2016.

High-altitude ballooning isn’t the only part of Dave’s life that incorporates a Raspberry Pi. Having “lost count” of how many Pis he has running tasks, Dave has also created radio receivers for APRS (ham radio data), ADS-B (aircraft tracking), and OGN (gliders), along with a time-lapse camera in his garden, and he has a few more Pis set aside for tinkering.

The post Community profile: Dave Akerman appeared first on Raspberry Pi.

Build a solar-powered nature camera for your garden

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/solar-powered-nature-camera/

Spring has sprung, and with it, sleepy-eyed wildlife is beginning to roam our gardens and local woodlands. So why not follow hackster.io maker reichley’s tutorial and build your own solar-powered squirrelhouse nature cam?

Raspberry Pi- and solar-powered nature camera

Inspiration

“I live half a mile above sea level and am SURROUNDED by animals…bears, foxes, turkeys, deer, squirrels, birds”, reichley explains in his tutorial. “Spring has arrived, and there are LOADS of squirrels running around. I was in the building mood and, being a nerd, wished to combine a common woodworking project with the connectivity and observability provided by single-board computers (and their camera add-ons).”

Building a tiny home

reichley started by sketching out a design for the house to determine where the various components would fit.

Raspberry Pi- and solar-powered nature camera

Since he’s a fan of autonomy and renewable energy, he decided to run the project’s Raspberry Pi Zero W on solar power. To do so, he revised the design to include the necessary tech, scaling the roof to fit the panels.

Raspberry Pi- and solar-powered squirrel cam

To keep the project running 24/7, reichley had to figure out the overall power consumption of both the Zero W and the Raspberry Pi Camera Module, factoring in the constant WiFi connection and the sunshine hours in his garden.
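
For a sense of what that calculation looks like, here is a back-of-envelope version in Python; every figure in it is an assumption for the sake of illustration, not a measurement from reichley’s build:

avg_draw_watts = 1.2                    # assumed average draw of a Zero W + Camera Module with WiFi on
daily_energy_wh = avg_draw_watts * 24   # energy needed per day: 28.8 Wh

sun_hours = 4.0                         # assumed usable sunshine hours per day
system_efficiency = 0.7                 # assumed charging and conversion losses
panel_watts = daily_energy_wh / (sun_hours * system_efficiency)
print(round(panel_watts, 1), "W of solar panel needed")   # ~10.3 W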

Raspberry Pi- and solar-powered nature camera

He used a LiPo SHIM to bump up the power to the required 5V for the Zero. Moreover, he added a BH1750 lux sensor to shut off the LiPo SHIM, and thus the Pi, whenever it’s too dark for decent video.
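
If you are curious how a BH1750 reading works in practice, here is a minimal sketch using the smbus library; the light threshold and the shutdown action are assumptions for illustration, not reichley’s actual code:

import time
import smbus

BH1750_ADDR = 0x23          # the sensor's default I2C address
ONE_TIME_HIGH_RES = 0x20    # one-shot, high-resolution measurement mode

bus = smbus.SMBus(1)        # I2C bus 1 on recent Raspberry Pi models

def read_lux():
    # Trigger a measurement and read the two-byte result
    data = bus.read_i2c_block_data(BH1750_ADDR, ONE_TIME_HIGH_RES, 2)
    return (data[0] << 8 | data[1]) / 1.2   # convert the raw count to lux

while True:
    if read_lux() < 10:     # assumed "too dark for decent video" threshold
        print("Too dark - this is where you'd signal the power circuit")
    time.sleep(60)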

Raspberry Pi- and solar-powered nature camera

To control the project, he used Calin Crisan’s motionEyeOS video surveillance operating system for single-board computers.

Build your own nature camera

To build your own version, follow reichley’s tutorial, in which you can also find links to all the necessary code and components. You can also check out our free tutorial for building an infrared bird box using the Raspberry Pi NoIR Camera Module. As Eben said in our YouTube live Q&A last week, we really like nature cameras here at Pi Towers, and we’d love to see yours. So if you have any live-stream links or photography from your Raspberry Pi–powered nature cam, please share them with us!

The post Build a solar-powered nature camera for your garden appeared first on Raspberry Pi.

Fstoppers Uploaded a Brilliant Hoax ‘Anti-Piracy’ Tutorial to The Pirate Bay

Post Syndicated from Andy original https://torrentfreak.com/fstoppers-uploaded-a-brilliant-hoax-anti-piracy-tutorial-to-the-pirate-bay-180307/

Fstoppers is an online community that produces extremely high-quality photographic tutorials. One of its most popular series is called Photographing the World which sees photographer Elia Locardi travel to exotic locations to demonstrate landscape and cityscape photography.

These tutorials sell for almost $300, with two or three versions in a pack selling for up to $700. Of course, like any other media, they get pirated, so when Fstoppers were ready to release Photographing the World 3, they released it themselves on torrent sites a few days before retail.

Well, that’s what they wanted the world to believe.

“I think it’s fair to say that we’ve all downloaded ‘something’ illegally in the past. Whether it’s an MP3 years ago or a movie or a TV show, and occasionally you download something and it turns out it was kinda like a Rick Roll,” says Locardi.

“So we kept talking and we thought it would be a good idea to create this dummy lesson or shadow tutorial that was actually a fake and then seed it on BitTorrent.”

Where Fstoppers normally go to beautiful and exotic international locations, for their fake they decided to go to an Olive Garden in Charleston, South Carolina. Yet despite the clear change of location, they wanted people to believe the tutorial was legitimate.

“We wanted to ride this constant line of ‘Is this for real? Could this possibly be real? Is Elia [Locardi] joking right now? I don’t think he’s joking, he’s being totally serious’,” says Lee Morris, one of the co-owners of Fstoppers.

People really have to watch the tutorial to see what a fantastic job Fstoppers did in achieving that goal. For anyone unfamiliar with their work, the tutorial is initially hard to spot as a fake and even for veterans the level of ambiguity is really impressive.

However, when the tutorial heads back to the studio, where the post-processing lesson gets underway, there can be no doubt that something is amiss.

Things start off normally with serious teaching, then over time, the tutorial gets more and more ridiculous. Then, when the camera cuts away to show Locardi forming a ‘mask’ on an Olive Garden image, there can be no confusion.

That’s a cool mask… wait…

In order to get the tutorial out to the world, the site created its own torrent. They had never done anything like it before, so they got some associates to upload the huge 25GB+ package to The Pirate Bay and have their friends seed it. Then, in order to get past more savvy users on the site, they had other people come in and give the torrent good (but fake) reviews.

The fake torrent on The Pirate Bay (as of yesterday)

Screenshots provided by Fstoppers taken months ago reveal hundreds of downloaders. And, according to Morris, the fake became the most-downloaded Photographing the World 3 torrent online, meaning that the “majority of downloaders” got the comedy version.

Also of interest is the feedback Fstoppers got following their special release. Emails flooded in from pirates, some of whom were confused while others were upset at the ‘quality’ of the tutorial.

“The whole time we were thinking: ‘This isn’t even on the market yet! You guys are totally stealing this and emailing us and complaining about it,’” says Fstoppers co-owner Patrick Hall.

While the tutorial itself is brilliant, Fstoppers points to a certain hypocrisy within its target audience of photographers, who themselves have to put up with a lot of online piracy of their work. Yet, clearly, many are happy to pirate the work of other photographers in order to make their own art better.

All that being said, the exercise is certainly an interesting one, and the creativity behind the hoax puts it head and shoulders above more aggressive anti-piracy campaigns. However, when TF tracked down the torrent on The Pirate Bay last evening, its popularity had nosedived.

While it was initially downloaded by a lot of eager photographers, probably encouraged by the fake comments placed on the site by Fstoppers, the torrent is now being shared by fewer than 10 people. As usual, The Pirate Bay’s users appear to have caught on, flagging the torrent as a fake. The moderators, it seems, have also deleted the fake comments.

While most people won’t want to download a 25GB torrent to see what Fstoppers came up with, the site has uploaded the fake tutorial to YouTube. It’s best viewed alongside their other work, which is sensational, but people should get a good idea by watching the explanation below.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Happy birthday to us!

Post Syndicated from Eben Upton original https://www.raspberrypi.org/blog/happy-birthday-2018/

The eagle-eyed among you may have noticed that today is 28 February, which is as close as you’re going to get to our sixth birthday, given that we launched on a leap day. For the last three years, we’ve launched products on or around our birthday: Raspberry Pi 2 in 2015; Raspberry Pi 3 in 2016; and Raspberry Pi Zero W in 2017. But today is a snow day here at Pi Towers, so rather than launching something, we’re taking a photo tour of the last six years of Raspberry Pi products before we don our party hats for the Raspberry Jam Big Birthday Weekend this Saturday and Sunday.

Prehistory

Before there was Raspberry Pi, there was the Broadcom BCM2763 ‘micro DB’, designed, as it happens, by our very own Roger Thornton. This was the first thing we demoed as a Raspberry Pi in May 2011, shown here running an ARMv6 build of Ubuntu 9.04.

BCM2763 micro DB

Ubuntu on Raspberry Pi, 2011-style

A few months later, along came the first batch of 50 “alpha boards”, designed for us by Broadcom. I used to have a spreadsheet that told me where in the world each one of these lived. These are the first “real” Raspberry Pis, built around the BCM2835 application processor and LAN9512 USB hub and Ethernet adapter; remarkably, a software image taken from the download page today will still run on them.

Raspberry Pi alpha board, top view

Raspberry Pi alpha board

We shot some great demos with this board, including this video of Quake III:

Raspberry Pi – Quake 3 demo

A little something for the weekend: here’s Eben showing the Raspberry Pi running Quake 3, and chatting a bit about the performance of the board. Thanks to Rob Bishop and Dave Emett for getting the demo running.

Pete spent the second half of 2011 turning the alpha board into a shippable product, and just before Christmas we produced the first 20 “beta boards”, 10 of which were sold at auction, raising over £10,000 for the Foundation.

The beginnings of a Bramble

Beta boards on parade

Here’s Dom, demoing both the board and his excellent taste in movie trailers:

Raspberry Pi Beta Board Bring up

See http://www.raspberrypi.org/ for more details, FAQ and forum.

Launch

Rather to Pete’s surprise, I took his beta board design (with a manually-added polygon in the Gerbers taking the place of Paul Grant’s infamous red wire), and ordered 2000 units from Egoman in China. After a few hiccups, units started to arrive in Cambridge, and on 29 February 2012, Raspberry Pi went on sale for the first time via our partners element14 and RS Components.

Pallet of pis

The first 2000 Raspberry Pis

Unboxing continues

The first Raspberry Pi from the first box from the first pallet

We took over 100,000 orders on the first day: something of a shock for an organisation that had imagined in its wildest dreams that it might see lifetime sales of 10,000 units. Some people who ordered that day had to wait until the summer to finally receive their units.

Evolution

Even as we struggled to catch up with demand, we were working on ways to improve the design. We quickly replaced the USB polyfuses in the top right-hand corner of the board with zero-ohm links to reduce IR drop. If you have a board with polyfuses, it’s a real limited edition; even more so if it also has Hynix memory. Pete’s “rev 2” design made this change permanent, tweaked the GPIO pin-out, and added one much-requested feature: mounting holes.

Revision 1 versus revision 2

If you look carefully, you’ll notice something else about the revision 2 board: it’s made in the UK. 2012 marked the start of our relationship with the Sony UK Technology Centre in Pencoed, South Wales. In the five years since, they’ve built every product we offer, including more than 12 million “big” Raspberry Pis and more than one million Zeros.

Celebrating 500,000 Welsh units, back when that seemed like a lot

Economies of scale, and the decline in the price of SDRAM, allowed us to double the memory capacity of the Model B to 512MB in the autumn of 2012. And as supply of Model B finally caught up with demand, we were able to launch the Model A, delivering on our original promise of a $25 computer.

A UK-built Raspberry Pi Model A

In 2014, James took all the lessons we’d learned from two-and-a-bit years in the market, and designed the Model B+, and its baby brother the Model A+. The Model B+ established the form factor for all our future products, with a 40-pin extended GPIO connector, four USB ports, and four mounting holes.

The Raspberry Pi 1 Model B+ — entering the era of proper product photography with a bang.

New toys

While James was working on the Model B+, Broadcom was busy behind the scenes developing a follow-on to the BCM2835 application processor. BCM2836 samples arrived in Cambridge at 18:00 one evening in April 2014 (chips never arrive at 09:00 — it’s always early evening, usually just before a public holiday), and within a few hours Dom had Raspbian, and the usual set of VideoCore multimedia demos, up and running.

We launched Raspberry Pi 2 at the start of 2015, pairing BCM2836 with 1GB of memory. With a quad-core Arm Cortex-A7 clocked at 900MHz, we’d increased performance sixfold, and memory fourfold, in just three years.

Nobody mention the xenon death flash.

And of course, while James was working on Raspberry Pi 2, Broadcom was developing BCM2837, with a quad-core 64-bit Arm Cortex-A53 clocked at 1.2GHz. Raspberry Pi 3 launched barely a year after Raspberry Pi 2, providing a further doubling of performance and, for the first time, wireless LAN and Bluetooth.

All our recent products are just the same board shot from different angles

Zero to hero

Where the PC industry has historically used Moore’s Law to “fill up” a given price point with more performance each year, the original Raspberry Pi used Moore’s law to deliver early-2000s PC performance at a lower price. But with Raspberry Pi 2 and 3, we’d gone back to filling up our original $35 price point. After the launch of Raspberry Pi 2, we started to wonder whether we could pull the same trick again, taking the original Raspberry Pi platform to a radically lower price point.

The result was Raspberry Pi Zero. Priced at just $5, with a 1GHz BCM2835 and 512MB of RAM, it was cheap enough to bundle on the front of The MagPi, making us the first computer magazine to give away a computer as a cover gift.

Cheap thrills

MagPi issue 40 in all its glory

We followed up with the $10 Raspberry Pi Zero W, launched exactly a year ago. This adds the wireless LAN and Bluetooth functionality from Raspberry Pi 3, using a rather improbable-looking PCB antenna designed by our buddies at Proant in Sweden.

Up to our old tricks again

Other things

Of course, this isn’t all. There has been a veritable blizzard of point releases; RAM changes; Chinese red units; promotional blue units; Brazilian blue-ish units; not to mention two Camera Modules, in two flavours each; a touchscreen; the Sense HAT (now aboard the ISS); three compute modules; and cases for the Raspberry Pi 3 and the Zero (the former just won a Design Effectiveness Award from the DBA). And on top of that, we publish three magazines (The MagPi, Hello World, and HackSpace magazine) and a whole host of Project Books and Essentials Guides.

Chinese Raspberry Pi 1 Model B

RS Components limited-edition blue Raspberry Pi 1 Model B

Brazilian-market Raspberry Pi 3 Model B

Visible-light Camera Module v2

Learning about injection moulding the hard way

250 pages of content each month, every month

Essential reading

Forward the Foundation

Why does all this matter? Because we’re providing everyone, everywhere, with the chance to own a general-purpose programmable computer for the price of a cup of coffee; because we’re giving people access to tools to let them learn new skills, build businesses, and bring their ideas to life; and because when you buy a Raspberry Pi product, every penny of profit goes to support the Raspberry Pi Foundation in its mission to change the face of computing education.

We’ve had an amazing six years, and they’ve been amazing in large part because of the community that’s grown up alongside us. This weekend, more than 150 Raspberry Jams will take place around the world, comprising the Raspberry Jam Big Birthday Weekend.

Raspberry Pi Big Birthday Weekend 2018. GIF with confetti and bopping JAM balloons

If you want to know more about the Raspberry Pi community, go ahead and find your nearest Jam on our interactive map — maybe we’ll see you there.

The post Happy birthday to us! appeared first on Raspberry Pi.

AWS Hot Startups for February 2018: Canva, Figma, InVision

Post Syndicated from Tina Barr original https://aws.amazon.com/blogs/aws/aws-hot-startups-for-february-2018-canva-figma-invision/

Note to readers! Starting next month, we will be publishing our monthly Hot Startups blog post on the AWS Startup Blog. Please come check us out.

As visual communication—whether through social media channels like Instagram or white space-heavy product pages—becomes a central part of everyone’s life, accessible design platforms and tools become more and more important in the world of tech. This trend is why we have chosen to spotlight three design-related startups—namely Canva, Figma, and InVision—as our hot startups for the month of February. Please read on to learn more about these design-savvy companies and be sure to check out our full post here.

Canva (Sydney, Australia)

For a long time, creating designs required expensive software, extensive studying, and time spent waiting for feedback from clients or colleagues. With Canva, a graphic design tool that makes creating designs much simpler and accessible, users have the opportunity to design anything and publish anywhere. The platform—which integrates professional design elements, including stock photography, graphic elements, and fonts for users to build designs either entirely from scratch or from thousands of free templates—is available on desktop, iOS, and Android, making it possible to spin up an invitation, poster, or graphic on a smartphone at any time.

To learn more about Canva, read our full interview with CEO Melanie Perkins here.

Figma (San Francisco, CA)

Figma is a cloud-based design platform that empowers designers to communicate and collaborate more effectively. Using recent advancements in WebGL, Figma offers a design tool that doesn’t require users to install any software or special operating systems. It also allows multiple people to work in a file at the same time—a crucial feature.

As the need for new design talent increases, the industry will need plenty of junior designers to keep up with the demand. Figma is prepared to help students by offering their platform for free. Through this, they “hope to give young designers the resources necessary to kick-start their education and eventually, their careers.”

For more about Figma, check out our full interview with CEO Dylan Field here.

InVision (New York, NY)

Founded in 2011 with the goal of helping improve every digital experience in the world, digital product design platform InVision helps users create a streamlined and scalable product design process, build and iterate on prototypes, and collaborate across organizations. The company, which raised a $100 million Series E last November to bring its total funding to $235 million, currently powers the digital product design process at more than 80 percent of the Fortune 100 and at brands like Airbnb, HBO, Netflix, and Uber.

Learn more about InVision here.

Be sure to check out our full post on the AWS Startups blog!

-Tina

Playboy Brands Boing Boing a “Clickbait” Site With No Fair Use Defense

Post Syndicated from Andy original https://torrentfreak.com/playboy-brands-boing-boing-a-clickbait-site-with-no-fair-use-defense-180126/

Late 2017, Boing Boing co-editor Xeni Jardin posted an article in which she linked to an archive containing every Playboy centerfold image to date.

“Kind of amazing to see how our standards of hotness, and the art of commercial erotic photography, have changed over time,” Jardin noted.

While Boing Boing had nothing to do with the compilation, uploading, or storing of the Imgur-based archive, Playboy took exception to the popular blog linking to the album.

Noting that Jardin had referred to the archive uploader as a “wonderful person”, the adult publication responded with a lawsuit (pdf), claiming that Boing Boing had commercially exploited its copyrighted images.

Last week, with assistance from the Electronic Frontier Foundation, Boing Boing parent company Happy Mutants filed a motion to dismiss in which it defended its right to comment on and link to copyrighted content without that constituting infringement.

“This lawsuit is frankly mystifying. Playboy’s theory of liability seems to be that it is illegal to link to material posted by others on the web — an act performed daily by hundreds of millions of users of Facebook and Twitter, and by journalists like the ones in Playboy’s crosshairs here,” the company wrote.

EFF Senior Staff Attorney Daniel Nazer weighed in too, arguing that since Boing Boing’s reporting and commenting is protected by copyright’s fair use doctrine, the “deeply flawed” lawsuit should be dismissed.

Now, just a week later, Playboy has fired back. Opposing Happy Mutants’ request for the Court to dismiss the case, the company cites the now-famous Perfect 10 v. Amazon/Google case from 2007, which tried to prevent Google from facilitating access to infringing images.

Playboy highlights the court’s finding that Google could have been held contributorily liable – if it had knowledge that Perfect 10 images were available using its search engine, could have taken simple measures to prevent further damage, but failed to do so.

Turning to Boing Boing’s conduct, Playboy says that the company knew it was linking to infringing content, could have taken steps to prevent that, but failed to do so. It then launches an attack on the site itself, offering disparaging comments concerning its activities and business model.

“This is an important case. At issue is whether clickbait sites like Happy Mutants’ Boing Boing weblog — a site designed to attract viewers and encourage them to click on links in order to generate advertising revenue — can knowingly find, promote, and profit from infringing content with impunity,” Playboy writes.

“Clickbait sites like Boing Boing are not known for creating original content. Rather, their business model is based on ‘collecting’ interesting content created by others. As such, they effectively profit off the work of others without actually creating anything original themselves.”

Playboy notes that while sites like Boing Boing are within their rights to leverage works created by others, courts in the US and overseas have ruled that knowingly linking to infringing content is unacceptable.

Even given these conditions, Playboy argues, Happy Mutants and the EFF now want the Court to dismiss the case so that sites are free to “not only encourage, facilitate, and induce infringement, but to profit from those harmful activities.”

Claiming that Boing Boing’s only reason for linking to the infringing album was to “monetize the web traffic that over fifty years of Playboy photographs would generate”, Playboy insists that the site and parent company Happy Mutants was properly charged with copyright infringement.

Playboy also dismisses Boing Boing’s argument that a link to infringing content cannot result in liability due to the link having both infringing and substantial non-infringing uses.

First citing the Betamax case, which found that maker Sony could not be held liable for infringement because its video recorders had substantial non-infringing uses, Playboy counters with the Grokster decision, which held that a distributor of a product could be liable for infringement, if there was an intent to encourage or support infringement.

“In this case, Happy Mutants’ offending link — which does nothing more than support infringing content — is good for nothing but promoting infringement and there is no legitimate public interest in its unlicensed availability,” Playboy notes.

In its motion to dismiss, Happy Mutants also argued that unless Playboy could identify users who “in fact downloaded — rather than simply viewing — the material in question,” the case should be dismissed. However, Playboy rejects the argument, claiming it is based on an erroneous interpretation of the law.

Citing the Grokster decision once more, the adult publisher notes that the Supreme Court found that someone infringes contributorily when they intentionally induce or encourage direct infringement.

“The argument that contributory infringement only lies where the defendant’s actions result in further infringement ignores the ‘or’ and collapses ‘inducing’ and ‘encouraging’ into one thing when they are two distinct things,” Playboy writes.

As for Boing Boing’s four classic fair use arguments, the publisher describes these as “extremely weak” and proceeds to hit them one by one.

In respect of the purpose and character of the use, Playboy discounts Boing Boing’s position that the aim of its post was to show “how our standards of hotness, and the art of commercial erotic photography, have changed over time.” The publisher argues that this is the exact same purpose as Playboy magazine itself, while highlighting its own publication Playboy: The Complete Centerfolds, 1953-2016.

Moving on to the second factor of fair use – the nature of the copyrighted work – Playboy notes that an entire album of artwork is involved, rather than just a single image.

On the third factor, concerning the amount and substantiality of the original work used, Playboy argues that in order to publish an opinion on how “standards of hotness” had developed over time, there was no need to link to all of the pictures in the archive.

“Had only representative images from each decade, or perhaps even each year, been taken, this would be a very different case — but Happy Mutants cannot dispute that it knew it was linking to an illegal library of ‘Every Playboy Playmate Centerfold Ever’ since that is what it titled its blog post,” Playboy notes.

Finally, when considering the effect of the use upon the potential market for or value of the copyrighted work, Playboy says its archive of images continues to be monetized and that Boing Boing’s use of infringing images jeopardizes that.

“Given that people are generally not going to pay for what is freely available, it is disingenuous of Happy Mutants to claim that promoting the free availability of infringing archives of Playboy’s work for viewing and downloading is not going to have an adverse effect on the value or market of that work,” the publisher adds.

While it appears the parties agree on very little, there is agreement on one key aspect of the case – its wider importance.

On the one hand, Playboy insists that a finding in its favor will ensure that people can’t commercially exploit infringing content with impunity. On the other, Boing Boing believes that the health of the entire Internet is at stake.

“The world can’t afford a judgment against us in this case — it would end the web as we know it, threatening everyone who publishes online, from us five weirdos in our basements to multimillion-dollar, globe-spanning publishing empires like Playboy,” the company concludes.

Playboy’s opposition to Happy Mutants’ motion to dismiss can be found here (pdf)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Digital making for new parents

Post Syndicated from Carrie Anne Philbin original https://www.raspberrypi.org/blog/digital-making-for-new-parents/

Solving problems that are meaningful to us is at the core of our approach to teaching and learning about technology here at the Raspberry Pi Foundation. Over the last eight months, I’ve noticed that the types of digital making projects that motivate and engage me have changed (can’t think why). Always looking for ways to save money and automate my life and the lives of my loved ones, I’ve been thinking a lot about how digital making projects could be the new best friend of any new parent.

A baby, oblivious to the amount its parents have spent on stuff they never knew existed last year.
Image: sweet baby by MRef photography / CC BY-ND 2.0

Baby Monitor

I never knew how much equipment one small child needs until very recently. I also had no idea of the range of technology that is on offer to support you as a new parent to ensure the perfect environment outside of the womb. Baby monitors are at the top of this list. There are lots of Raspberry Pi baby monitor projects with a range of sensing functionality already in existence, and we’ve blogged about some of them before. They’re a great example of how an understanding of technology can open up a range of solutions that won’t break the bank. I’m looking forward to using all the capabilities of the Raspberry Pi to keep an eye on baby.

Baby name generator

Another surprising discovery was just how difficult it is to name a human being. Surprising because I can give a name to an inanimate object in less than three seconds, and come up with nicknames for colleagues in less than a day. When it comes to my own offspring, though, I draw a blank. The only solution: write a Python program to randomly generate names based on some parameters!

import names
from guizero import App, ButtonGroup, Text, PushButton, TextBox

def get_name():
    # Pick a random first name for each gender option
    boyname = names.get_first_name(gender='male')
    girlname = names.get_first_name(gender='female')
    othername = names.get_first_name()

    # Combine the chosen first name with the surname from the text box
    if babygender.get() == "male":
        name.set(str(boyname) + " " + str(babylastname.get()))
    elif babygender.get() == "female":
        name.set(str(girlname) + " " + str(babylastname.get()))
    else:
        name.set(str(othername) + " " + str(babylastname.get()))

app = App("Baby name generator")
surname_label = Text(app, "What is your surname?")
babylastname = TextBox(app, width=50)
babygender = ButtonGroup(app, options=[["boy", "male"], ["girl", "female"], ["all", "all"]], selected="male", horizontal=True)
intro = Text(app, "Your baby name could be")
name = Text(app, "")
button = PushButton(app, get_name, text="Generate me a name")

app.display()

Thanks to the names and guizero Python libraries, it’s super simple to create, resolving any possible parent-to-be naming disputes in mere minutes.

Food, Poo, or Love?

I love data. Not just in Star Trek, but also more generally. Collecting and analysing data to understand my sleep patterns, my eating habits, how much exercise I do, and how much time I spend watching YouTube videos consumes much of my time. So of course I want to know lots about the little person we’ve made, long before he can use language to tell us himself.

I’m told that most newborns’ needs are quite simple: they want food, they want to be changed, or they just want some cuddles. I’m certain it’s more complicated than this, but it’s a good starting point for a data set, so stick with me here. I also wondered whether there might be a correlation between the amplitude of the cry and the type of need the baby has. A bit of an imprecise indicator, maybe, but fun to start to think about.

This build’s success is mostly thanks to Pimoroni’s Rainbow HAT, which, conveniently, has three capacitive touch buttons to record the newborn’s need, four fourteen-segment displays to display the words “FOOD”, “POO”, and “LOVE” when a button is pressed, and seven multicoloured LEDs to indicate the ferociousness of the baby’s cry in glorious technicolour. With the addition of a microphone, the ‘Food, Poo, Love Machine’ was born. Here it is in action:

Food Poo Love – Raspberry Pi Baby Monitor Project

Food Poo Love – The Raspberry Pi baby monitor project that allows you to track data on your new born baby.
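
If you fancy building something similar, here is a minimal sketch of the button-and-display logic, assuming Pimoroni’s rainbowhat Python library; the get_cry_level() helper is a placeholder for whatever microphone reading you settle on, not part of the original build:

import signal
import rainbowhat as rh

def get_cry_level():
    # Placeholder: return 0-7 based on your microphone's amplitude reading
    return 4

def show_need(word, level):
    # Show the need on the four 14-segment displays
    rh.display.clear()
    rh.display.print_str(word)
    rh.display.show()
    # Light one LED per unit of cry ferociousness (0-7)
    rh.rainbow.clear()
    for pixel in range(level):
        rh.rainbow.set_pixel(pixel, 255, 0, 0)
    rh.rainbow.show()

@rh.touch.A.press()
def food(channel):
    show_need("FOOD", get_cry_level())

@rh.touch.B.press()
def poo(channel):
    show_need("POO", get_cry_level())

@rh.touch.C.press()
def love(channel):
    show_need("LOVE", get_cry_level())

signal.pause()   # keep the script running, waiting for touches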

Automatic Baby mobile

Another project that I’ve not had time to hack on, but that I think would be really awesome, is to automate a baby cot mobile. Imagine this one moving to the Star Trek theme music:

Image courtesy of Gisele Blaker Designs (check out her cool shop!)

Pretty awesome.

If you’ve got any more ideas for baby projects, do let me know. I’ll have a few months of nothing to do… right?

The post Digital making for new parents appeared first on Raspberry Pi.

Implementing Dynamic ETL Pipelines Using AWS Step Functions

Post Syndicated from Tara Van Unen original https://aws.amazon.com/blogs/compute/implementing-dynamic-etl-pipelines-using-aws-step-functions/

This post contributed by:
Wangechi Dole, AWS Solutions Architect
Milan Krasnansky, ING, Digital Solutions Developer, SGK
Rian Mookencherry, Director – Product Innovation, SGK

Data processing and transformation is a common use case you see in our customer case studies and success stories. Often, customers deal with complex data from a variety of sources that needs to be transformed and customized through a series of steps to make it useful to different systems and stakeholders. This can be difficult due to the ever-increasing volume, velocity, and variety of data. Increasingly, these data management challenges cannot be solved with traditional databases alone.

Workflow automation helps you build solutions that are repeatable, scalable, and reliable. You can use AWS Step Functions for this. A great example is how SGK used Step Functions to automate the ETL processes for their client. With Step Functions, SGK has been able to automate changes within the data management system, substantially reducing the time required for data processing.

In this post, SGK shares the details of how they used Step Functions to build a robust data processing system based on highly configurable business transformation rules for ETL processes.

SGK: Building dynamic ETL pipelines

SGK is a subsidiary of Matthews International Corporation, a diversified organization focusing on brand solutions and industrial technologies. SGK’s Global Content Creation Studio network creates compelling content and solutions that connect brands and products to consumers through multiple assets including photography, video, and copywriting.

We were recently contracted to build a sophisticated and scalable data management system for one of our clients. We chose to build the solution on AWS to leverage advanced, managed services that help to improve the speed and agility of development.

The data management system served two main functions:

  1. Ingesting a large amount of complex data to facilitate both reporting and product funding decisions for the client’s global marketing and supply chain organizations.
  2. Processing the data through normalization and applying complex algorithms and data transformations. The system goal was to provide information in the relevant context—such as strategic marketing, supply chain, product planning, etc.—to the end consumer through automated data feeds or updates to existing ETL systems.

We were faced with several challenges:

  • Output data that needed to be refreshed at least twice a day to provide fresh datasets to both local and global markets. That constant data refresh posed several challenges, especially around data management and replication across multiple databases.
  • The complexity of reporting business rules that needed to be updated on a constant basis.
  • Data that could not be processed as contiguous blocks of typical time-series data. The measurement of the data was done across seasons (that is, combinations of dates), which often resulted in up to three overlapping seasons at any given time.
  • Input data that came from 10+ different data sources. Each data source ranged from 1–20K rows with as many as 85 columns per input source.

These challenges meant that our small dev team invested significant time in frequent configuration changes to the system and in data integrity verification to make sure that everything was operating properly. Maintaining this system proved to be a daunting task, and that’s when we turned to Step Functions—along with other AWS services—to automate our ETL processes.

Solution overview

Our solution included the following AWS services:

  • AWS Step Functions: Before Step Functions was available, we were using multiple Lambda functions for this use case and running into memory limit issues. With Step Functions, we can execute steps in parallel, in a cost-efficient manner, without running into memory limitations.
  • AWS Lambda: The Step Functions state machine uses Lambda functions to implement the Task states. Our Lambda functions are implemented in Java 8.
  • Amazon DynamoDB provides us with an easy and flexible way to manage business rules. We specify our rules as Keys. These are key-value pairs stored in a DynamoDB table.
  • Amazon RDS: Our ETL pipelines consume source data from our RDS MySQL database.
  • Amazon Redshift: We use Amazon Redshift for reporting because it integrates with our BI tools. Currently we are using Tableau, which works well with Amazon Redshift.
  • Amazon S3: We store our raw input files and intermediate results in S3 buckets.
  • Amazon CloudWatch Events: Our users expect results at a specific time. We use CloudWatch Events to trigger Step Functions on an automated schedule.

Solution architecture

This solution uses a declarative approach to defining business transformation rules that are applied by the underlying Step Functions state machine as data moves from RDS to Amazon Redshift. An S3 bucket is used to store intermediate results. A CloudWatch Event rule triggers the Step Functions state machine on a schedule. The following diagram illustrates our architecture:

Here are more details for the above diagram:

  1. A rule in CloudWatch Events triggers the state machine execution on an automated schedule.
  2. The state machine invokes the first Lambda function.
  3. The Lambda function deletes all existing records in Amazon Redshift. Depending on the dataset, the Lambda function can create a new table in Amazon Redshift to hold the data.
  4. The same Lambda function then retrieves Keys from a DynamoDB table. Keys represent specific marketing campaigns or seasons and map to specific records in RDS.
  5. The state machine executes the second Lambda function using the Keys from DynamoDB.
  6. The second Lambda function retrieves the referenced dataset from RDS. The records retrieved represent the entire dataset needed for a specific marketing campaign.
  7. The second Lambda function executes in parallel for each Key retrieved from DynamoDB and stores the output in CSV format temporarily in S3.
  8. Finally, the Lambda function uploads the data into Amazon Redshift.

To understand the above data processing workflow, take a closer look at the Step Functions state machine for this example.

We walk you through the state machine in more detail in the following sections.

Walkthrough

To get started, you need to:

  • Create a schedule in CloudWatch Events
  • Specify conditions for RDS data extracts
  • Create Amazon Redshift input files
  • Load data into Amazon Redshift

Step 1: Create a schedule in CloudWatch Events
Create rules in CloudWatch Events to trigger the Step Functions state machine on an automated schedule. The following is an example cron expression to automate your schedule:
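
One expression that matches this schedule, written in CloudWatch Events’ six-field cron syntax, is shown here as an illustration (it is not necessarily the exact expression SGK used):

cron(0 3,14 * * ? *)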

In this example, the cron expression invokes the Step Functions state machine at 3:00am and 2:00pm (UTC) every day.

Step 2: Specify conditions for RDS data extracts
We use DynamoDB to store Keys that determine which rows of data to extract from our RDS MySQL database. An example Key is MCS2017, which stands for Marketing Campaign Spring 2017. Each campaign has a specific start and end date, and the corresponding dataset is stored in RDS MySQL. A record in RDS contains about 600 columns, and each Key can represent up to 20K records.

A given day can have multiple campaigns with different start and end dates running simultaneously. In the following example DynamoDB item, three campaigns are specified for the given date.
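
A hypothetical item for a single date might look like the following; the attribute names and the second and third campaign keys here are illustrative, not SGK’s actual schema:

{
  "Date": "2017-08-15",
  "Key1": "MCS2017",
  "Key2": "MCF2017",
  "Key3": "MCH2017"
}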

The state machine example shown above uses Keys 31, 32, and 33 in the first ChoiceState and Keys 21 and 22 in the second ChoiceState. These keys represent marketing campaigns for a given day. For example, on Monday, there are only two campaigns requested. The ChoiceState with Keys 21 and 22 is executed. If three campaigns are requested on Tuesday, for example, then ChoiceState with Keys 31, 32, and 33 is executed. MCS2017 can be represented by Key 21 and Key 33 on Monday and Tuesday, respectively. This approach gives us the flexibility to add or remove campaigns dynamically.

Step 3: Create Amazon Redshift input files
When the state machine begins execution, the first Lambda function is invoked as the resource for FirstState, represented in the Step Functions state machine as follows:

"Comment": ” AWS Amazon States Language.", 
  "StartAt": "FirstState",
 
"States": { 
  "FirstState": {
   
"Type": "Task",
   
"Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Start",
    "Next": "ChoiceState" 
  } 

As described in the solution architecture, the purpose of this Lambda function is to delete existing data in Amazon Redshift and retrieve keys from DynamoDB. In our use case, we found that deleting existing records was more efficient and less time-consuming than finding the delta and updating existing records. On average, an Amazon Redshift table can contain about 36 million cells, which translates to roughly 65K records. The following is the code snippet for the first Lambda function in Java 8:

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

public class LambdaFunctionHandler implements RequestHandler<Map<String, Object>, Map<String, String>> {
    Map<String, String> keys = new HashMap<>();

    public Map<String, String> handleRequest(Map<String, Object> input, Context context) {
        Properties config = getConfig();
        // 1. Clean the Amazon Redshift database
        new RedshiftDataService(config).cleaningTable();
        // 2. Read the current campaign keys from DynamoDB
        List<String> keyList = new DynamoDBDataService(config).getCurrentKeys();
        for (int i = 0; i < keyList.size(); i++) {
            keys.put("key" + (i + 1), keyList.get(i));
        }
        keys.put("keyT", String.valueOf(keyList.size()));
        // 3. Return the key values and the key count from the "for" loop
        return keys;
    }
}

The following JSON represents ChoiceState.

"ChoiceState": {
   "Type" : "Choice",
   "Choices": [ 
   {

      "Variable": "$.keyT",
     "StringEquals": "3",
     "Next": "CurrentThreeKeys" 
   }, 
   {

     "Variable": "$.keyT",
    "StringEquals": "2",
    "Next": "CurrentTwooKeys" 
   } 
 ], 
 "Default": "DefaultState"
}

The variable $.keyT represents the number of keys retrieved from DynamoDB. This variable determines which of the parallel branches should be executed. At the time of publication, Step Functions did not support dynamic parallel states. Therefore, choices under ChoiceState are manually created and assigned hardcoded StringEquals values. These values represent the number of parallel executions for the second Lambda function.

For example, if $.keyT equals 3, the second Lambda function is executed three times in parallel with keys, $key1, $key2 and $key3 retrieved from DynamoDB. Similarly, if $.keyT equals two, the second Lambda function is executed twice in parallel.  The following JSON represents this parallel execution:

"CurrentThreeKeys": { 
  "Type": "Parallel",
  "Next": "NextState",
  "Branches": [ 
  {

     "StartAt": “key31",
    "States": { 
       “key31": {

          "Type": "Task",
        "InputPath": "$.key1",
        "Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Execution",
        "End": true 
       } 
    } 
  }, 
  {

     "StartAt": “key32",
    "States": { 
     “key32": {

        "Type": "Task",
       "InputPath": "$.key2",
         "Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Execution",
       "End": true 
      } 
     } 
   }, 
   {

      "StartAt": “key33",
       "States": { 
          “key33": {

                "Type": "Task",
             "InputPath": "$.key3",
             "Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Execution",
           "End": true 
       } 
     } 
    } 
  ] 
} 

Step 4: Load data into Amazon Redshift
The second Lambda function in the state machine extracts records from RDS associated with the keys retrieved from DynamoDB. It processes the data, then loads it into an Amazon Redshift table. The following is the code snippet for the second Lambda function in Java 8:

import java.util.Properties;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class LambdaFunctionHandler implements RequestHandler<String, String> {
    public static String key = null;

    public String handleRequest(String input, Context context) {
        key = input;
        // 1. Get the basic configuration for the next classes, plus an S3 client
        Properties config = getConfig();
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        // 2. Export query results from RDS into the S3 bucket
        new RdsDataService(config).exportDataToS3(s3, key);
        // 3. Import query results from the S3 bucket into Amazon Redshift
        new RedshiftDataService(config).importDataFromS3(s3, key);
        System.out.println(input);
        return "SUCCESS";
    }
}

After the data is loaded into Amazon Redshift, end users can visualize it using their preferred business intelligence tools.

Lessons learned

  • At the time of publication, the 1.5 GB memory hard limit for Lambda functions was inadequate for processing our complex workload. Step Functions gave us the flexibility to chunk our large datasets and process them in parallel, saving on costs and time.
  • In our previous implementation, we assigned each key a dedicated Lambda function along with CloudWatch rules for schedule automation. This approach proved to be inefficient and quickly became an operational burden. Previously, we processed each key sequentially, with each key adding about five minutes to the overall processing time. For example, processing three keys meant that the total processing time was three times longer. With Step Functions, the entire state machine executes in about five minutes.
  • Using DynamoDB with Step Functions gave us the flexibility to manage keys efficiently. In our previous implementations, keys were hardcoded in Lambda functions, which became difficult to manage due to frequent updates. DynamoDB is a great way to store dynamic data that changes frequently, and it works perfectly with our serverless architectures.

Conclusion

With Step Functions, we were able to fully automate the frequent configuration updates to our dataset, resulting in significant cost savings, reduced risk of data errors caused by system downtime, and more time for us to focus on new product development rather than support issues. We hope that you have found the information useful and that it can serve as a jump-start to building your own ETL processes on AWS with managed AWS services.

For more information about how Step Functions makes it easy to coordinate the components of distributed applications and microservices in any workflow, see the use case examples and then build your first state machine in under five minutes in the Step Functions console.

If you have questions or suggestions, please comment below.