Students taking Design of Mechatronics at the Technical University of Denmark have created some seriously elegant and striking Raspberry Pi speakers. Their builds are part of a project asking them to “explore, design and build a 3D printed speaker, around readily available electronics and components”.
The students have been uploading their designs, incorporating Raspberry Pis and HiFiBerry HATs, to Thingiverse throughout April. The task is a collaboration with luxury brand Bang & Olufsen’s Create initiative, and the results wouldn’t look out of place in a high-end showroom; I’d happily take any of these home.
Søren Qvist’s wall-mounted kitchen sphere uses 3D-printed and laser-cut parts, along with the HiFiBerry HAT and B&O speakers to create a sleek-looking design.
Otto Ømann’s group have designed the Hex One – a work-in-progress wireless 360° speaker. A particular objective for their project is to create a speaker using as many 3D-printed parts as possible.
“The design is supposed to resemble that of a B&O speaker, and from a handful of categories we chose to create a portable and wearable speaker,” explain Gustav Larsen and his team.
Oliver Repholtz Behrens and team have housed a Raspberry Pi and HiFiBerry HAT inside this stylish AirPlay speaker. You can follow their design progress on their team blog.
Tue Thomsen’s six-person team Mechatastic have produced the B&O TILE. “The speaker consists of four 3D-printed cabinet and top parts, where the top should be covered by fabric,” they explain. “The speaker insides consists of laser-cut wood to hold the tweeter and driver and encase the Raspberry Pi.”
The team aimed to design a speaker that would be at home in a kitchen. With a removable upper casing allowing for a choice of colour, the TILE can be customised to fit particular tastes and colour schemes.
Build your own speakers with Raspberry Pis
Raspberry Pi’s onboard audio jack, along with third-party HATs such as the HiFiBerry and Pimoroni Speaker pHAT, make speaker design and fabrication with the Pi an interesting alternative to pre-made tech. These builds don’t tend to be technically complex, and they provide some lovely examples of tech-based projects that reflect makers’ own particular aesthetic style.
If you have access to a 3D printer or a laser cutter, perhaps at a nearby maker space, then those can be excellent resources, but fancy kit isn’t a requirement. Basic joinery and crafting with card or paper are just a couple of ways you can build things that are all your own, using familiar tools and materials. We think more people would enjoy getting hands-on with this sort of thing if they gave it a whirl, and we publish a free magazine to help.
Looking for a new project to build around the Raspberry Pi Zero, I came across the pHAT DAC from Pimoroni. This little add-on board adds audio playback capabilities to the Pi Zero. Because the pHAT uses the GPIO pins, the USB OTG port remains available for a wifi dongle.
This video by Frederick Vandenbosch is a great example of building AirPlay speakers using a Pi and HAT, and a quick search will find you lots more relevant tutorials and ideas.
Have you built your own? Share your speaker-based Pi builds with us in the comments.
Data that describe processes in a spatial context are everywhere in our day-to-day lives and they dominate big data problems. Map data, for instance, whether describing networks of roads or remote sensing data from satellites, get us where we need to go. Atmospheric data from simulations and sensors underlie our weather forecasts and climate models. Devices and sensors with GPS can provide a spatial context to nearly all mobile data.
In this post, we introduce the WIND toolkit, a huge (500 TB), open weather model dataset that’s available to the world on Amazon’s cloud services. We walk through how to access this data and some of the open-source software developed to make it easily accessible. Our solution considers a subset of geospatial data that exist on a grid (raster) and explores ways to provide access to large-scale raster data from weather models. The solution uses foundational AWS services and the Hierarchical Data Format (HDF), a widely adopted format for scientific data.
The approach developed here can be extended to any data that fit in an HDF5 file, which can describe sparse and dense vectors and matrices of arbitrary dimensions. This format is already popular within the physical sciences for both experimental and simulation data. We discuss solutions to gridded data storage for a massive dataset of public weather model outputs called the Wind Integration National Dataset (WIND) toolkit. We also highlight strategies that are general to other large geospatial data management problems.
Wind Integration National Dataset
As variable renewable power penetration levels increase in power systems worldwide, the importance of renewable integration studies to ensure continued economic and reliable operation of the power grid is also increasing. The WIND toolkit is the largest freely available grid integration dataset to date.
The WIND toolkit was developed by 3TIER (now part of Vaisala) under a subcontract to the National Renewable Energy Laboratory (NREL) to support studies on the integration of wind energy into the existing US grid. NREL is part of a network of national laboratories for the US Department of Energy, with a mission to advance the science and engineering of energy efficiency, sustainable transportation, and renewable power technologies.
The toolkit has been used by consultants, research groups, and universities worldwide to support grid integration studies. Less traditional uses also include resource assessments for wind plants (such as those powering Amazon data centers), and studying the effects of weather on California condor migrations in the Baja peninsula.
The diversity of applications highlights the value of accessible, open public data. Yet, there’s a catch: the dataset is huge. The WIND toolkit provides simulated atmospheric (weather) data at a two-km spatial resolution and five-minute temporal resolution at multiple heights for seven years. The entire dataset is half a petabyte (500 TB) in size and is stored in the NREL High Performance Computing data center in Golden, Colorado. Making this dataset publicly available easily and in a cost-effective manner is a major challenge.
As other laboratories and public institutions work to release their data to the world, they may face similar challenges to those that we experienced. Some prior, well-intentioned efforts to release huge datasets as-is have resulted in data resources that are technically available but fundamentally unusable. They may be stored in an unintuitive format or indexed and organized to support only a subset of potential uses. Downloading hundreds of terabytes of data is often impractical. Most users don’t have access to a big data cluster (or super computer) to slice and dice the data as they need after it’s downloaded.
We aim to provide a large amount of data (50 terabytes) to the public in a way that is efficient, scalable, and easy to use. In many cases, researchers can access these huge cloud-located datasets using the same software and algorithms they have developed for smaller datasets stored locally. Only the pieces of data they need for their individual analysis must be downloaded. To make this work in practice, we worked with the HDF Group and have built upon their forthcoming Highly Scalable Data Service (HSDS).
In the rest of this post, we discuss how the HSDS software was developed to use Amazon EC2 and Amazon S3 resources to provide convenient and scalable access to these huge geospatial datasets. We describe how the HSDS service has been put to work for the WIND Toolkit dataset and demonstrate how to access it using the h5pyd Python library and the REST API. We conclude with information about our ongoing work to release more ‘open’ datasets to the public using AWS services, and ways to improve and extend the HSDS with newer Amazon services like Amazon ECS and AWS Lambda.
Developing a scalable service for big geospatial data
The HDF5 file format and API have been used for many years and are an effective means of storing large scientific datasets. For example, NASA’s Earth Observing System (EOS) satellites collect more than 16 TB of data per day using HDF5.
With the rise of the cloud, there are new challenges and opportunities to rethink how HDF5 can be enhanced to work effectively as a component in a cloud-native architecture. For the HDF Group, working with NREL has been a great opportunity to put ideas into practice with a production-size dataset.
An HDF5 file consists of a directed graph of group and dataset objects. Datasets can be thought of as multidimensional arrays with support for user-defined metadata tags and compression. Typical operations on datasets would be reading or writing data to a regular subregion (a hyperslab) or reading and writing individual elements (a point selection). Also, group and dataset objects may each contain an arbitrary number of the user-defined metadata elements known as attributes.
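To make that data model concrete, here is a minimal h5py sketch; the file, group, and dataset names are invented for illustration:

import h5py
import numpy as np

# Create a file containing a group, a chunked and compressed dataset,
# and a user-defined attribute.
with h5py.File("example.h5", "w") as f:
    grp = f.create_group("simulation")
    dset = grp.create_dataset(
        "windspeed",
        shape=(8760, 100, 100),   # time x latitude x longitude
        chunks=(24, 10, 10),      # a regular partition of the array
        compression="gzip",
        dtype="f4",
    )
    dset.attrs["units"] = "m s-1"  # attribute: user-defined metadata

    # Write to a regular subregion (a hyperslab).
    dset[0:24, 0:10, 0:10] = np.random.rand(24, 10, 10).astype("f4")

# Read back a hyperslab and a point selection.
with h5py.File("example.h5", "r") as f:
    dset = f["simulation/windspeed"]
    day_at_point = dset[0:24, 5, 5]  # hyperslab: one day at one grid point
    single = dset[0, 0, 0]           # point selection: one element
    print(dset.attrs["units"], day_at_point.shape, single)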
Many people have used the HDF library in applications developed or ported to run on EC2 instances, but there are a number of constraints that often prove problematic:
The HDF5 library can’t read directly from HDF5 files stored as S3 objects. The entire file (often many GB in size) would need to be copied to local storage before the first byte can be read, and the instance must be configured with an appropriately sized EBS volume.
The HDF library only has access to the computational resources of the instance itself (as opposed to a cluster of instances), so many operations are bottlenecked by the library.
Any modifications to the HDF5 file would somehow have to be synchronized with changes that other instances have made to the same file before writing back to S3.
Using a pattern common to many offerings from AWS, the solution to these constraints is to develop a service framework around the HDF data model. Using this model, the HDF Group has created the Highly Scalable Data Service (HSDS) that provides all the functionality that traditionally was provided by the HDF5 library. By using the service, you don’t need to manage your own file volumes, but can just read and write whatever data you need.
Because the service manages the actual data persistence to a durable medium (S3, in this case), you don’t need to worry about disk management. Simply stream the data you need from the service as you need it. Secondly, putting the functionality behind a service allows some tricks to increase performance (described in more detail later). And lastly, HSDS allows any number of clients to access the data at the same time, enabling HDF5 to be used as a coordination mechanism for multiple readers and writers.
In designing the HSDS architecture, we gave much thought to how to achieve scalability of the HSDS service. For accessing HDF5 data, there are two different types of scaling to consider:
Multiple clients making many requests to the service
Single requests that require a significant amount of data processing
To deal with the first scaling challenge, as with most services, we considered how the service responds as the request rate increases. AWS provides some great tools that help in this regard:
Auto Scaling groups
Elastic Load Balancing load balancers
The ability of S3 to handle large aggregate throughput rates
By using a cluster of EC2 instances behind a load balancer, you can handle different client loads in a cost-effective manner.
The second scaling challenge concerns single requests that would take significant processing time with just one compute node. One example of this from the WIND toolkit would be extracting all the values in the seven-year time span for a given geographic point and dataset.
In HDF5, large datasets are typically stored as “chunks”; that is, a regular partition of the array. In HSDS, each chunk is stored as a binary object in S3. The sequential approach to retrieving the time series values would be for the service to read each chunk needed from S3, extract the needed elements, and go on to the next chunk. In this case, that would involve processing 2557 chunks, and would be quite slow.
Fortunately, with HSDS, you can speed this up quite a bit by exploiting the compute and I/O capabilities of the cluster. Upon receiving the request, the receiving node can use other nodes in the cluster to read different portions of the selection. With multiple nodes reading from S3 in parallel, performance improves as the cluster size increases.
The diagram below illustrates how this works in a simplified case of four chunks and four nodes.
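In code form, the same fan-out idea might look like the rough sketch below. This is an illustration, not the actual HSDS implementation (which coordinates across service nodes rather than threads); the bucket name and chunk key layout are assumptions made up for the example.

import concurrent.futures

import boto3
import numpy as np

s3 = boto3.client("s3")

def read_chunk(key, bucket="example-hsds-bucket"):  # hypothetical bucket
    """Fetch one chunk object from S3 and decode it as float32 values."""
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    return np.frombuffer(body, dtype="f4")

# One object per chunk along the time axis (hypothetical key layout).
chunk_keys = ["db/windspeed/chunk_%d" % i for i in range(2557)]

# Sequential: read each chunk in turn (slow).
# series = np.concatenate([read_chunk(k) for k in chunk_keys])

# Parallel: overlap the S3 reads across a pool of workers, the same
# idea the service applies across the nodes of the cluster.
with concurrent.futures.ThreadPoolExecutor(max_workers=40) as pool:
    parts = list(pool.map(read_chunk, chunk_keys))
series = np.concatenate(parts)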
This architecture has worked well in practice. In testing with the WIND toolkit and time series extraction, we observed a request latency of ~60 seconds using four nodes vs. ~5 seconds with 40 nodes. Performance roughly scales with the size of the cluster.
A planned enhancement to this is to use AWS Lambda for the worker processing. This enables 1000-way parallel reads at a reasonable cost, as you only pay for the milliseconds of CPU time used with AWS Lambda.
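A worker along these lines might look like the following sketch. Since this enhancement is only planned, the event fields, bucket name, and chunk layout below are assumptions for illustration, not the actual design.

import boto3
import numpy as np

s3 = boto3.client("s3")

def handler(event, context):
    """Hypothetical Lambda worker: read one chunk object from S3 and
    return the elements selected for this request."""
    obj = s3.get_object(Bucket=event["bucket"], Key=event["chunk_key"])
    chunk = np.frombuffer(obj["Body"].read(), dtype="f4")
    chunk = chunk.reshape(event["chunk_shape"])  # e.g. time x lat x lon
    y, x = event["point"]  # grid point requested by the caller
    return {"values": chunk[:, y, x].tolist()}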
Public access to atmospheric data using HSDS and AWS
An early challenge in releasing the WIND toolkit data was deciding how to subset the data for different use cases. In general, few researchers need access to the entire 0.5 PB of data, and a great deal of efficiency and cost reduction can be gained by creating purpose-built constituent datasets.
NREL grid integration researchers initially extracted a 2-TB subset by selecting 120,000 points where the wind resource seemed appropriate for development. They also kept only the data important for wind applications (100-m wind speed, converted to power) at the locations most interesting to those performing grid studies. To support the remaining users who needed more of the data, we down-sampled it to a 60-minute temporal resolution, keeping all the other variables and the spatial resolution intact. This reduced dataset is 50 TB in size and describes 30+ atmospheric variables for seven years at a 60-minute temporal resolution.
The WindViz browser-based Gridded Wind Toolkit Visualizer was created as an example implementation of the HSDS REST API in JavaScript. The visualizer is written in the style of ECMAScript 2016 using a modern development toolchain that includes webpack and Babel. The source code is available through our GitHub repository. The demo page is hosted via GitHub pages, and we use a cross-origin AJAX request to fetch data from the HSDS service running on the EC2 infrastructure. The visualizer can be used to explore the gridded wind toolkit data on a map. Achieve full spatial resolution by zooming in to a specific region.
Programmatic access is possible using the h5pyd Python library, a distributed analog to the widely used h5py library. Users interact with the datasets (variables) and slice the data from its (time x longitude x latitude) cube form as they see fit.
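For instance, extracting the full time series at one grid point might look like the sketch below. The domain path, endpoint URL, and dataset name here are placeholders rather than confirmed values; the example notebooks described next have the real ones.

import h5pyd

# Open the WIND toolkit domain on the HSDS service (domain path and
# endpoint URL are placeholders; see the example notebooks).
f = h5pyd.File("/nrel/wtk-us.h5", "r",
               endpoint="https://example-hsds-endpoint")

dset = f["windspeed_100m"]  # hypothetical dataset name:
                            # a (time x latitude x longitude) cube
print(dset.shape, dset.dtype)

# Slice the seven-year time series at a single grid point; the service
# reads only the chunks this selection touches.
timeseries = dset[:, 500, 1000]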
Examples and use cases are described in a set of Jupyter notebooks available on GitHub. Once you have a Jupyter notebook server running on your EC2 instance, create an SSH tunnel from your laptop:
$ ssh -L 8888:localhost:8888 <IP address of the EC2 server>
Now, you can browse to localhost:8888 using the correct token, and interact with the notebooks as if they were local. Within the directory, there are examples for accessing the HSDS API and plotting wind and weather data using matplotlib.
Controlling access and defraying costs
A final concern is rate limiting and access control. Although the HSDS service is scalable and relatively robust, we had a few practical concerns:
How can we protect from malicious or accidental use that may lead to high egress fees (for example, someone who attempts to repeatedly download the entire dataset from S3)?
How can we keep track of who is using the data both to document the value of the data resource and to justify the costs?
If costs become too high, can we charge for some or all API use to help cover the costs?
To approach these problems, we investigated using Amazon API Gateway and its simplified integration with the AWS Marketplace for SaaS monetization as well as third-party API proxies.
In the end, we chose to use API Umbrella due to its close involvement with http://data.gov. While AWS Marketplace is a compelling option for future datasets, the decision was made to keep this dataset entirely open, at least for now. As community use and associated costs grow, we’ll likely revisit Marketplace. Meanwhile, API Umbrella provides controls for rate limiting and API key registration out of the box and was simple to implement as a front-end proxy to HSDS. Those applications that may want to charge for API use can accomplish a similar strategy using Amazon API Gateway and AWS Marketplace.
Ongoing work and other resources
As NREL and other government research labs, municipalities, and organizations try to share data with the public, we expect many of you will face similar challenges to those we have tried to approach with the architecture described in this post. Providing large datasets is one challenge. Doing so in a way that is affordable and convenient for users is an entirely more difficult goal. Using AWS cloud-native services and the existing foundation of the HDF file format has allowed us to tackle that challenge in a meaningful way.
Dr. Caleb Phillips is a senior scientist with the Data Analysis and Visualization Group within the Computational Sciences Center at the National Renewable Energy Laboratory. Caleb comes from a background in computer science systems, applied statistics, computational modeling, and optimization. His work at NREL spans the breadth of renewable energy technologies and focuses on applying modern data science techniques to data problems at scale.
Dr. Caroline Draxl is a senior scientist at NREL. She supports the research and modeling activities of the US Department of Energy from mesoscale to wind plant scale. Caroline uses mesoscale models to research wind resources in various countries, and participates in on- and offshore boundary layer research and in the coupling of mesoscale flow features (kilometer scale) to the microscale (tens of meters). She holds an M.S. degree in Meteorology and Geophysics from the University of Innsbruck, Austria, and a PhD in Meteorology from the Technical University of Denmark.
John Readey has been a Senior Architect at The HDF Group since he joined in June 2014. His interests include web services related to HDF, applications that support the use of HDF, and data visualization. Before joining The HDF Group, John worked at Amazon.com from 2006 to 2014, where he developed service-based systems for eCommerce and AWS.
Jordan Perr-Sauer is an RPP intern with the Data Analysis and Visualization Group within the Computational Sciences Center at the National Renewable Energy Laboratory. Jordan hopes to use his professional background in software engineering and his academic training in applied mathematics to solve the challenging problems facing America and the world.
In September of last year, we launched our 2017/2018 Astro Pi challenge with our partners at the European Space Agency (ESA). Students from ESA member and associate countries had the chance to design science experiments and write code to be run on one of our two Raspberry Pis on the International Space Station (ISS).
Submissions for the Mission Space Lab challenge have just closed, and the results are in! Students had the opportunity to design an experiment for one of the following two themes:
Life in space: making use of Astro Pi Vis (Ed) in the European Columbus module to learn about the conditions inside the ISS.
Life on Earth: making use of Astro Pi IR (Izzy), which will be aimed towards the Earth through a window to learn about Earth from space.
ESA astronaut Alexander Gerst, speaking from the replica of the Columbus module at the European Astronaut Center in Cologne, has a message for all Mission Space Lab participants:
Flight status
We had a total of 212 Mission Space Lab entries from 22 countries. Of these, 114 fantastic projects have been given flight status, and the teams’ project code will run in space!
But they’re not winners yet. In April, the code will be sent to the ISS, and then the teams will receive back their experimental data. Next, to get deeper insight into the process of scientific endeavour, they will need to produce a final report analysing their findings. Winners will be chosen based on the merit of their final report, and the winning teams will get exclusive prizes. Check the list below to see if your team got flight status.
Belgium
Flight status achieved:
Team De Vesten, Campus De Vesten, Antwerpen
Ursa Major, CoderDojo Belgium, West-Vlaanderen
Special operations STEM, Sint-Claracollege, Antwerpen
Canada
Flight status achieved:
Let It Grow, Branksome Hall, Toronto
The Dark Side of Light, Branksome Hall, Toronto
Genie On The ISS, Branksome Hall, Toronto
Byte by PIthons, Youth Tech Education Society & Kid Code Jeunesse, Edmonton
The Broadviewnauts, Broadview, Ottawa
Czech Republic
Flight status achieved:
BLEK, Střední Odborná Škola Blatná, Strakonice
Denmark
Flight status achieved:
2y Infotek, Nærum Gymnasium, Nærum
Equation Quotation, Allerød Gymnasium, Lillerød
Team Weather Watchers, Allerød Gymnasium, Allerød
Space Gardners, Nærum Gymnasium, Nærum
Finland
Flight status achieved:
Team Aurora, Hyvinkään yhteiskoulun lukio, Hyvinkää
France
Flight status achieved:
INC2, Lycée Raoul Follereau, Bourgogne
Space Project SP4, Lycée Saint-Paul IV, Reunion Island
Dresseurs2Python, Collège Albert Camus, Essonne
Lazos, Lycée Aux Lazaristes, Rhone
The space nerds, Lycée Saint André Colmar, Alsace
Les Spationautes Valériquais, lycée de la Côte d’Albâtre, Normandie
Where did it land? #skypaca #skycademy @pacauk #RaspberryPi
High-altitude ballooning
Some of you may be familiar with Raspberry Pi being used as the flight computer, or tracker, of high-altitude balloon (HAB) payloads. For those who aren’t, high-altitude ballooning is a relatively simple activity (at least in principle) where a tracker is attached to a large weather balloon which is then released into the atmosphere. While the HAB ascends, the tracker takes pictures and data readings the whole time. Eventually (around 30km up) the balloon bursts, leaving the payload free to descend and be recovered. For a better explanation, I’m handing over to the students of UTC Oxfordshire:
On Tuesday 2nd May, students launched a Raspberry Pi computer 35,000 metres into the stratosphere as part of an Employer-Led project at UTC Oxfordshire, set by the Raspberry Pi Foundation. The project involved engineering, scientific and communication/publicity skills being developed to create the payload and code to interpret experiments set by the science team.
Skycademy
Over the past few years, we’ve seen schools and their students explore the possibilities that high-altitude ballooning offers, and back in 2015 and 2016 we ran Skycademy. The programme was simple enough: get a bunch of educators together in the same space, show them how to launch a balloon flight, and then send them back to their students to try and repeat what they’ve learned. Since the first Skycademy event, a number of participants have carried out launches, and we are extremely proud of each and every one of them.
The case of the vanishing PACA HAB
Not every launch has been a 100% success though. There are many things that can and do go wrong during HAB flights, and watching each launch from the comfort of our office can be a nerve-wracking experience. We had such an experience back in July 2017, during the launch performed by Skycademy graduate and Raspberry Pi Certified Educator Dave Hartley and his students from Portslade Aldridge Community Academy (PACA).
Dave and his team had been working on their payload for some time, and were awaiting suitable weather conditions. Early one Wednesday in July, everything aligned: they had a narrow window of good weather and so set their launch plan in motion. Soon they had assembled the payload in the school grounds and all was ready for the launch.
Just before 11:00, they’d completed their final checks and released their payload into the atmosphere. Over the course of 64 minutes, the HAB steadily rose to an altitude of 25,647 m, where it captured some amazing pictures before the balloon burst and a rapid descent began.
Soon after the payload began to descend, the team noticed something worrying: their predicted descent path took the payload dangerously far south — it was threatening to land in the sea. As the payload continued to lose altitude, their calculated results kept shifting, alternately predicting a landing on the ground or out to sea. Eventually it became clear that the payload would narrowly overshoot the land, and it finally landed about 2 km out to sea.
The path of the balloon
It’s not uncommon for a HAB payload to get lost. There are many ways this can happen, particularly in a narrow country with a prevailing easterly wind like the UK. Payloads can get lost at sea, land somewhere inaccessible, or simply run out of power before they are located and retrieved. So normally, this would be the end of the story for the PACA students — even if the team had had a speedboat to hand, their payload was surely lost for good.
A message from Denmark
However, this is not the end of our story! A couple of months later, I arrived at work and saw this tweet from a colleague:
Anyone lost a Raspberry Pi HAB? Someone found this one on a beach in south western Denmark yesterday #UKHAS https://t.co/7lBzFiemgr
Good Samaritan Henning Hansen had found a Raspberry Pi washed up on a remote beach in Denmark! While walking a stretch of coast to collect plastic debris for an environmental monitoring project, he came across something unusual near the shore at 55°04’53.0″N and 8°38’46.9″E.
This of course piqued my interest, and we began to investigate the image he had shared on Facebook.
Inspecting the photo closely, we noticed a small asset label — the kind of label that, over a year earlier, we’d stuck to each and every bit of Skycademy field kit. We excitedly claimed the kit on behalf of Dave and his students, and contacted Henning to arrange the recovery of the payload. He told us it must have been carried ashore with the tide some time between 21 and 27 September, and probably on 21 September, since that day had the highest tide over the period. This meant the payload must have spent over two months at sea!
From the photo we could tell that the Raspberry Pi had suffered significant corrosion, having been exposed to salt water for so long, and so we felt pessimistic about the chances that there would be any recoverable data on it. However, Henning said that he’d been able to read some files from the FAT partition of the SD card, so all hope was not lost.
After a few weeks and a number of complications around dispatch and delivery (thank you, Henning, for your infinite patience!), Helen collected the HAB from a local Post Office.
SUCCESS!
We set about trying to read the data from the SD card, and eventually became disheartened: despite several attempts, we were unable to read its contents.
In a last-ditch effort, we gave the SD card to Jonathan, one of our engineers, who initially laughed at the prospect of recovering any data from it. But ten minutes later, he returned with news of success!
Since then, we’ve been able to reunite the payload with the PACA launch team, and the students sent us the perfect message to end this story:
Programa de revendedor aprovado agora no Brasil — our Approved Reseller programme is live in Brazil, with Anatel-approved Raspberry Pis in a rather delicious shade of blue on sale from today.
Blue Raspberry is more than just the best Jolly Rancher flavour
The challenge
The difficulty in buying our products — and the lack of Anatel certification — have been consistent points of feedback from our many Brazilian customers and followers. In much the same way that electrical products in the USA must be FCC-approved in order to be produced or sold there, products sold in Brazil must be approved by Anatel. And so we’re pleased to tell you that the Raspberry Pi finally has this approval.
Blue Raspberry
Today we’re also announcing the appointment of our first Approved Reseller in Brazil: FilipeFlop will be able to sell Raspberry Pi 3 units across the country.
A big shout-out to the team at FilipeFlop that has worked so hard with us to ensure that we’re getting the product on sale in Brazil at the right price. (They also helped us understand the various local duties and taxes which need to be paid!)
Please note: the blue colouring of the Raspberry Pi 3 sold in Brazil is the only difference between it and the standard green model. People outside Brazil will not be able to purchase the blue variant from FilipeFlop.
More Raspberry Pi Approved Resellers
Since first announcing it back in August, we have further expanded our Approved Reseller programme by adding resellers for Austria, Canada, Cyprus, Czech Republic, Denmark, Estonia, Finland, Germany, Latvia, Lithuania, Norway, Poland, Slovakia, Sweden, Switzerland, and the US. All Approved Resellers are listed on our products page, and more will follow over the next few weeks!
Make and share
If you’re based in Brazil and you’re ordering the new, blue Raspberry Pi, make sure to share your projects with us on social media. We can’t wait to see what you get up to with them!
As Backblaze continues to grow, we need to keep our web experience on point, so we put out a call for creative folks who can help us make the Backblaze experience all that it can be. We found Carlo! He’s a frontend web developer who used to work at Sea World. Let’s learn a bit more about Carlo, shall we?
What is your Backblaze Title?
Senior Frontend Developer
Where are you originally from?
I grew up in San Diego, California.
What attracted you to Backblaze?
I am excited that frontend architecture is approaching parity with the rest of the web services software development ecosystem. Most of my experience has been full stack development, but I have recently started focusing on the front end. Backblaze shares my goal of having a first class user experience using frameworks like React.
What do you expect to learn while being at Backblaze?
I’m interested in building solutions that help customers visualize and work with their data intuitively and efficiently.
Where else have you worked?
GoPro, Sungevity, and Sea World.
What’s your dream job?
Hip Hop dressage choreographer.
Favorite place you’ve traveled?
The Arctic in Northern Finland, in a train in a boat sailing the gap between Germany and Denmark, and Vieques PR.
Favorite hobby?
Sketching, writing, and dressing up my hairless dogs.
Of what achievement are you most proud?
It’s either helping release a large SOA site, or orchestrating a Morrissey cover band flash mob #squadgoals. OK, maybe one of those things didn’t happen…
Star Trek or Star Wars?
Interstellar!
Favorite food?
Mexican food.
Coke or Pepsi?
Ginger beer.
Why do you like certain things?
Things that I like bring me joy a la Marie Kondo.
Anything else you’d like to tell us?
¯\_(ツ)_/¯
Wow, hip hop dressage choreographer — that is amazing. Welcome aboard, Carlo!
Last Friday I posted a little Lazyweb experiment, a hunt for information about a certain kind of lamp sold by a street dealer in Mexico City. A quick followup on the results:
Surprisingly many people responded, mostly by email and partly by blog comment. It appears I am not the only one looking for this specific type of lamp. Furthermore, a non-trivial set of Planet GNOME readers actually already owns one of these devices. Apparently counterfeit versions of this lamp are sold all around the world by street dealers and on markets.
The lamp seems to be a modified version of the “IQ Light”, a self-assembly lighting system made up of interlocking quadrilaterals. It is a Scandinavian design by Holger Strøm from 1973, nowadays exclusively distributed by Bald & Bang, Denmark. The lighting system has a very interesting web site of its own, which even includes a HOWTO for assembling these lamps. The Bald & Bang web site has a very stylish video which also shows how to assemble an IQ lamp.
While my Mexican specimen and the official design are very similar, they differ: the Mexican design looks – in a way – “tighter” and … better (at least in my humble opinion). For comparison, please have a look at the photo I took of the Mexican version, shown above, at the many photos returned by Google Images, or at the one from the IQ Light homepage. It appears as if the basic geometrical form used in the Mexican design is somewhat narrower than the official Danish one.
So, where can one buy one of these lamps? Fake and real ones are sold on eBay every now and then. The Museum Store of the New York MoMA sells the original version for a “super-cheap” $160. If you search with Google you’ll find many more offers like this one, but none of them are exactly cheap – for a bunch of thin plastic sheets. All these shops sell the Danish version of the design; no one was able to point me to a shop where the modified, “Mexican” version is sold.
Given the hefty price tag, and the fact that the fake Mexican version looks better than the original one, I will now build my own lamps based on the Mexican design. For that, I will disassemble my specimen (at least partially) and create a paper stencil of the basic plastic pattern. I hope to put this up for download as a .ps file some time next week, since many people asked for instructions for building these lamps. Presumably the original design is protected by copyright, so I will not publish a step-by-step guide on how to build your own fake version. But thankfully this is not even necessary, since the vendor has already published a HOWTO and a video online.
Thank you very much for your numerous responses!