Michael Portera’s trading card scanner uses LEGO bricks, servo motors, a Raspberry Pi, and a Camera Module to scan Magic: The Gathering cards and look up their prices online. This is a neat and easy-to-recreate project that you can adapt for whatever your (or your younger self’s) favourite trading cards are.
For those of you who aren’t this nerdy [Janina is 100% this nerdy – Ed.], Magic: The Gathering (or MTG for short) is a trading card game first launched in 1993. It’s based on a sprawling fantasy multiverse storyline, and is very heavy on mechanics — the current comprehensive rules fill 228 pages! You can imagine it as being a bit like Dungeons and Dragons, with less role-playing and more of a chess vibe. Unlike in chess, however, you can beat your MTG opponent in one turn with just the right combination of cards. If that’s your style of play, that is.
Scanning trading cards
So far, there are around 20,000 official MTG cards, and, as with other types of trading cards, some of them are worth a lot of money.
Michael is one of the many people who were keen MTG players in their youth. Here’s how he came up with his project idea:
I was really into trading cards as a kid. I recently came across a lot of Magic: The Gathering cards in a box and thought to myself — I wonder how many cards I have and how much they’re worth?! Logging and looking these up manually would take a while, so I decided to see if I could automate some of the process. Somehow, the process led to building a platform out of Lego and leveraging AWS S3 and Rekognition.
LEGO, servos and camera
To build the housing of the scanner, Michael used LEGO, stating “I’m not good at wood working, and I thought that it might be rough on the cards.” While he doesn’t provide a build plan for the housing, Michael used only bricks from the LEGO Medium Creative Brick Box he bought for the project. In addition, his tutorial includes a lot of pictures to guide you.
Servo motors spin plastic wheels to move single cards from a stack set into the scanner. Michael positioned a Raspberry Pi Camera Module so that it can take a picture of the title of each card as it is set before the lens. The length of the camera’s ribbon cable gave Michael a little difficulty, so he recommends getting an extension for it if you’re planning to recreate the build.
Optical character recognition and MTG card price API
On the software side, Michael wrote three scripts. One is a Python script to control the servos and take pictures. This, he says, “[records] about 20–25 cards a minute.”
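To make that concrete, here’s a minimal sketch of what such a feed-and-photograph loop could look like, using the gpiozero and picamera libraries; the GPIO pin, timings, and file names are hypothetical stand-ins rather than Michael’s actual code.

```python
from time import sleep

from gpiozero import Servo
from picamera import PiCamera

feeder = Servo(17)    # hypothetical GPIO pin driving the card-feed wheel
camera = PiCamera()

def scan_card(index):
    """Advance one card under the lens, then photograph its title."""
    feeder.max()      # spin the continuous-rotation servo to push a card forward
    sleep(1.5)        # hypothetical timing; tune for your own build
    feeder.detach()   # stop driving the servo once the card is in position
    sleep(1.0)
    camera.capture(f"card_{index:04d}.jpg")

for i in range(100):  # ~2.5 seconds per card puts you in the 20-25 cards/minute range
    scan_card(i)
```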
Another script identifies the cards and looks up their prices automatically. Michael tried out OpenCV and Tesseract for optical character recognition (OCR) first, before settling on AWS S3 and Rekognition for storing and processing images, respectively. You’ll need an AWS account to do this — Michael used the free tier, which he says allows him to process 5000 pictures per month.
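As a hedged sketch of that stage, assuming boto3, a placeholder bucket name, and the guess that the card title is the first line Rekognition detects, the OCR step might look like this:

```python
import boto3

s3 = boto3.client("s3")
rekognition = boto3.client("rekognition")

def read_card_title(image_path, bucket="my-card-scans"):  # placeholder bucket
    """Upload one scan to S3, then pull the card title out with Rekognition OCR."""
    s3.upload_file(image_path, bucket, image_path)
    response = rekognition.detect_text(
        Image={"S3Object": {"Bucket": bucket, "Name": image_path}}
    )
    # Card names are printed at the top of the card, so we assume the first
    # LINE-type detection is the title.
    lines = [d["DetectedText"] for d in response["TextDetections"]
             if d["Type"] == "LINE"]
    return lines[0] if lines else None
```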
A sizeable collection
Finally, the data that Rekognition sends back gets processed by another Python script that looks up the identified cards on the TCGplayer API to find their price.
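As a rough illustration of that final step, here is one way the lookup could be sketched with the requests library; note that the endpoint paths, parameters, and response fields below are assumptions for illustration, not verified details of the TCGplayer API.

```python
import requests

API = "https://api.tcgplayer.com"  # real host; the paths below are assumed

def price_for(card_name, token):
    """Resolve a card name to a product ID, then fetch its market price."""
    headers = {"Authorization": f"Bearer {token}"}
    # Assumed catalog-search endpoint.
    search = requests.get(f"{API}/catalog/products",
                          params={"productName": card_name}, headers=headers)
    results = search.json().get("results", [])
    if not results:
        return None
    product_id = results[0]["productId"]
    # Assumed pricing endpoint.
    prices = requests.get(f"{API}/pricing/product/{product_id}", headers=headers)
    return prices.json()["results"][0].get("marketPrice")
```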
Michael says he’s very satisfied with the accuracy of the project’s OCR. He found out that the 920 Magic: The Gathering cards he scanned are worth about $275 in total. He provides a full write-up plus code over on hackster.io.
And for my next trick…
You might be thinking what I’m thinking: the logical next step for this project is to turn it into a card sorter. Then you could input a list of the card deck you want to put together, and presto! The device picks out the right cards from your collection. Building a Commander deck just became a little easier!
What trading cards would you use this project with, and how would you extend it? Also, what’s your favourite commander? Let me know in the comments!
The survey of nearly forty Republican and Democratic campaign operatives, administered through November and December 2017, revealed that American political campaign staff — primarily working at the state and congressional levels — are not only unprepared for possible cyber attacks, but remain generally unconcerned about the threat. The survey sample was relatively small, but nevertheless the survey provides a first look at how campaign managers and staff are responding to the threat.
The overwhelming majority of those surveyed do not want to devote campaign resources to cybersecurity or to hire personnel to address cybersecurity issues. Even though campaign managers recognize there is a high probability that campaign and personal emails are at risk of being hacked, they are more concerned about fundraising and press coverage than they are about cybersecurity. Less than half of those surveyed said they had taken steps to make their data secure and most were unsure if they wanted to spend any money on this protection.
Security is never something we actually want. Security is something we need in order to avoid what we don’t want. It’s also more abstract, concerned with hypothetical future possibilities. Of course it’s lower on the priorities list than fundraising and press coverage. They’re more tangible, and they’re more immediate.
The Center for Democracy and Technology has a good summary of the current state of the DMCA’s chilling effects on security research.
To underline the nature of chilling effects on hacking and security research, CDT has worked to describe how tinkerers, hackers, and security researchers of all types both contribute to a baseline level of security in our digital environment and, in turn, are shaped themselves by this environment, most notably when things they do upset others and result in threats, potential lawsuits, and prosecution. We’ve published two reports (sponsored by the Hewlett Foundation and MacArthur Foundation) about needed reforms to the law and the myriad of ways that security research directly improves people’s lives. To get a more complete picture, we wanted to talk to security researchers themselves and gauge the forces that shape their work; essentially, we wanted to “take the pulse” of the security research community.
Today, we are releasing a third report in service of this effort: “Taking the Pulse of Hacking: A Risk Basis for Security Research.” We report findings after having interviewed a set of 20 security researchers and hackers — half academic and half non-academic — about what considerations they take into account when starting new projects or engaging in new work, as well as to what extent they or their colleagues have faced threats in the past that chilled their work. The results in our report show that a wide variety of constraints shape the work they do, from technical constraints to ethical boundaries to legal concerns, including the DMCA and especially the CFAA.
Note: I am a signatory on the letter supporting unrestricted security research.
In 2015, we announced Backblaze B2 Cloud Storage — the most affordable, high performance storage cloud on the planet. The decision to release B2 as a service was in direct response to customers asking us if they could use the same cloud storage infrastructure we use for our Computer Backup service. With B2, we entered a market in direct competition with Amazon S3, Google Cloud Services, and Microsoft Azure Storage. Today, we have over 500 petabytes of data from customers in over 150 countries. At $0.005 / GB / month for storage (1/4th of S3) and $0.01 / GB for downloads (1/5th of S3), it turns out there’s a healthy market for cloud storage that’s easy and affordable.
As B2 has grown, customers have wanted to use our cloud storage for a variety of use cases that require not only storage but compute. We’re happy to say that, through partnerships with Packet and ServerCentral, we’re announcing today that compute is now available for B2 customers.
Cloud Compute and Storage
Backblaze has directly connected B2 with the compute servers of Packet and ServerCentral, thereby allowing near-instant (< 10 ms) data transfers between services. Also, transferring data between B2 and both our compute partners is free.
Storing data in B2 and want to run an AI analysis on it? — There are no fees to move the data to our compute partners.
Generating data in an application? — Run the application with one of our partners and store it in B2.
Transfers are free and you’ll save more than 50% off of the equivalent set of services from AWS.
These partnerships enable B2 customers to use compute, give our compute partners’ customers access to cloud storage, and introduce new customers to industry-leading storage and compute – all with high performance, low latency, and low cost.
Is This a Big Deal? We Think So
Compute is one of the services our customers request most. Why? Because it unlocks a number of use cases for them. Let’s look at three popular examples:
Transcoding Media Files
B2 has earned wide adoption in the Media & Entertainment (“M&E”) industry. Our affordable storage and download pricing make B2 great for a wide variety of M&E use cases. But many M&E workflows require compute. Content syndicators, like American Public Television, need the ability to transcode files to meet localization and distribution management requirements.
There are a multitude of reasons transcoding is needed – thumbnail and proxy generation, for example, enable M&E professionals to work efficiently. Without compute, the act of transcoding files remains cumbersome. Either the files need to be brought down from the cloud, transcoded, and then pushed back up, or they must be kept locally until the project is complete. Both scenarios are inefficient.
Starting today, any content producer can spin up compute with one of our partners, pay by the hour for their transcode processing, and return the new media files to B2 for storage and distribution. The company saves money, moves faster, and ensures their files are safe and secure.
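As a minimal sketch of that round trip, assuming ffmpeg on the compute instance and the b2sdk Python library (credentials, bucket, and file names are placeholders):

```python
import subprocess

from b2sdk.v2 import B2Api, InMemoryAccountInfo

# Transcode locally on the compute partner's server; example ffmpeg settings.
subprocess.run(["ffmpeg", "-i", "master.mov", "-vf", "scale=1280:720",
                "-c:v", "libx264", "proxy_720p.mp4"], check=True)

# Push the new rendition back to B2 for storage and distribution.
info = InMemoryAccountInfo()
api = B2Api(info)
api.authorize_account("production", "KEY_ID", "APP_KEY")  # placeholder credentials
bucket = api.get_bucket_by_name("my-media-bucket")         # placeholder bucket
bucket.upload_local_file(local_file="proxy_720p.mp4",
                         file_name="proxies/proxy_720p.mp4")
```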
Disaster Recovery
Backblaze’s heritage is based on providing outstanding backup services. When you have incredibly affordable cloud storage, it ends up being a great destination for your backup data.
Most enterprises have virtual machines (“VMs”) running in their infrastructure and those VMs need to be backed up. In a disaster scenario, a business wants to know they can get back up and running quickly.
With all data stored in B2, a business can do exactly that. Simply restore your backed-up VM to one of our compute providers, and your business will be back online.
And since B2 places no restrictions, delays, or penalties on getting data out, that recovery is both fast and affordable.
Saving $74 Million (aka “The Dropbox Effect”)
Ten years ago, Backblaze decided that S3 was too costly a platform on which to build its cloud storage business. Instead, we created the Backblaze Storage Pod and our own cloud storage infrastructure. That decision enabled us to offer our customers storage at a previously unavailable price point and maintain those prices for over a decade. It also laid the foundation for Netflix Open Connect and Facebook Open Compute.
Dropbox recently migrated the majority of their cloud services off of AWS and onto Dropbox’s own infrastructure. By leaving AWS, Dropbox was able to build out their own data centers and still save over $74 million. They achieved those savings by avoiding the fees AWS charges for storing and downloading data – fees that, incidentally, are five times higher than Backblaze B2’s.
Dropbox could realize those savings because it has access to enough capital and expertise to build out its own infrastructure. For companies with that kind of scale and resources, it’s a great answer.
“Before this offering, the economics of the cloud would have made our business simply unviable.” — Gabriel Menegatti, SlicingDice
The question Backblaze and our compute partners pondered was: “How can we democratize the Dropbox effect for our storage and compute customers? How can we help customers do more and pay less?” The answer we came up with was to connect Backblaze’s B2 storage with strategic compute partners and remove any transfer fees between them. You may not save $74 million as Dropbox did, but you can choose the optimal providers for your use case and realize significant savings in the process.
This Sounds Good — Tell Me More About Your Partners
We’re very fortunate to be launching our compute program with two fantastic partners in Packet and ServerCentral. These partners allow us to offer a range of computing services.
Packet
We recommend Packet for customers that need on-demand, high performance, bare metal servers available by the hour. They also have robust offerings for private / customized deployments. Their offerings end up costing 50-75% of the equivalent offerings from EC2.
To get started with Packet and B2, visit our partner page on Packet.net.
ServerCentral
ServerCentral is the right partner for customers that have business and IT challenges that require more than “just” hardware. They specialize in fully managed, custom cloud solutions that solve complex business and IT challenges. ServerCentral also has expertise in managed network solutions to address global connectivity and content delivery.
To get started with ServerCentral and B2, visit our partner page on ServerCentral.com.
What’s Next?
We’re excited to find out. The combination of B2 and compute unlocks use cases that were previously impossible or at least unaffordable.
“The combination of performance and price offered by this partnership enables me to create an entirely new business line. Before this offering, the economics of the cloud would have made our business simply unviable,” noted Gabriel Menegatti, co-founder at SlicingDice, a serverless data warehousing service. “Knowing that transfers between compute and B2 are free means I don’t have to worry about my business being successful. And, with download pricing from B2 at just $0.01 / GB, I know I’m avoiding a 400% tax from AWS on data I retrieve.”
What can you do with B2 & compute? Please share your ideas with us in the comments. And, for those attending NAB 2018 in Las Vegas next week, please come by and say hello!
We may need Tor, “the onion router”, more than we ever imagined. Authoritarian states are blocking more and more web sites and snooping on their populations online—even routine tracking of our online activities can reveal information that can be used to undermine democracy. Thus, there was strong interest in the “State of the Onion” panel at the 2018 LibrePlanet conference, where four contributors to the Tor project presented a progress update covering the past few years.
Subscribers can read on for a report on the panel by guest author Andy Oram.
The Free Software Foundation (FSF) announced the winners of the 2017 Free Software Awards during LibrePlanet. “Public Lab is a community and non-profit organization with the goal of democratizing science to address environmental issues. Their community-created tools and techniques utilize free software and low-cost devices to enable people at any level of technical skill to investigate environmental concerns.” The organization received the Award for Projects of Social Benefit. Karen Sandler, the Executive Director of the Software Freedom Conservancy, received the Award for the Advancement of Free Software.
With over 500 petabytes of data under management, we need more people keeping the drives spinning in our data center. We’re constantly hiring Systems Administrators and Data Center Technicians, and here’s our latest hire! Let’s learn a bit more about Jacob, shall we?
What is your Backblaze Title? Data Center Technician
Where are you originally from? Ojai, CA
What attracted you to Backblaze? It’s a technical job at a company that believes in training its employees and treating them well.
What do you expect to learn while being at Backblaze? As much as I can.
Where else have you worked? I was a Team Lead at Target, I did some volunteer work with the Ventura County Medical Center, and I also worked at a motocross track.
Where did you go to school? Ventura Community College, then 1 semester at Sac State
What’s your dream job? Don’t really have one. Whatever can support my family and that I enjoy.
Favorite place you’ve traveled? Yosemite National Park for the touristy stuff, Bend Oregon for a good getaway place.
Favorite hobby? Gaming and music. It’s a tie.
Of what achievement are you most proud? Marrying my wife Masha.
Star Trek or Star Wars? Wars. 100%. I’m a major Star Wars geek.
Coke or Pepsi? Monster.
Favorite food? French fries.
Why do you like certain things? Because my brain tells me I like them.
Thank you for helping care for all of our customers’ data. Welcome to the data center team, Jacob!
Applying technology to healthcare data has the potential to produce many exciting and important outcomes. The analysis produced from healthcare data can empower clinicians to improve the health of individuals and populations by enabling them to make better decisions that enhance the care they provide.
The Observational Health Data Sciences and Informatics (OHDSI, pronounced “Odyssey”) program and community is working toward this goal by producing data standards and open-source solutions to store and analyze observational health data. Using the OHDSI tools, you can visualize the health of your entire population. You can build cohorts of patients, analyze incidence rates for various conditions, and estimate the effect of treatments on patients with certain conditions. You can also model health outcome predictions using machine learning algorithms.
One of the challenges often faced when working with big data tools is the expense of the infrastructure required to run them. Another challenge is the learning curve to implement and begin using these tools. Amazon Web Services has enabled us to address many of the classic IT challenges by making enterprise class infrastructure and technology available in an affordable, elastic, and automated way. This blog post demonstrates how to combine some of the OHDSI projects (Atlas, Achilles, WebAPI, and the OMOP Common Data Model) with AWS technologies. By doing so, you can quickly and inexpensively implement a health data science and informatics environment.
Shown following is just one example of the population health analysis that is possible with the OHDSI tools. This visualization shows the prevalence of various drugs within the given population of people. This information helps researchers and clinicians discover trends and make better informed decisions about patient health.
OHDSI application architecture on AWS
Before deploying an application on AWS that transmits, processes, or stores protected health information (PHI) or personally identifiable information (PII), address your organization’s compliance concerns. Make sure that you have worked with your internal compliance and legal team to ensure compliance with the laws and regulations that govern your organization. To understand how you can use AWS services as a part of your overall compliance program, see the AWS HIPAA Compliance whitepaper. With that said, we paid careful attention to the HIPAA control set during the design of this solution.
This blog post presents a complete OHDSI application environment, including a data warehouse with sample data.
Following, you can see a block diagram of how the OHDSI tools map to the services provided by AWS.
Atlas is the web application that researchers interact with to perform analysis. Atlas interacts with the underlying databases through a web services application named WebAPI. In this example, both Atlas and WebAPI are deployed and managed by AWS Elastic Beanstalk. Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications. Simply upload the Atlas and WebAPI code and Elastic Beanstalk automatically handles the deployment. It covers everything from capacity provisioning, load balancing, autoscaling, and high availability, to application health monitoring. Using a feature of Elastic Beanstalk called ebextensions, the Atlas and WebAPI servers are customized to use an encrypted storage volume for the middleware application logs.
Atlas stores the state of the various patient cohorts that are analyzed in a dedicated database separate from your observational health data. This database is provided by Amazon Aurora with PostgreSQL compatibility.
Amazon Aurora is a relational database built for the cloud that combines the performance and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups. It is configured for high availability and uses encryption at rest for the database and backups, and encryption in flight for the JDBC connections.
All of your observational health data is stored inside the OHDSI Observational Medical Outcomes Partnership Common Data Model (OMOP CDM). This model also stores useful vocabulary tables that help to translate values from various data sources (like EHR systems and claims data).
The OMOP CDM schema is deployed onto Amazon Redshift. Amazon Redshift is a fast, fully managed data warehouse that allows you to run complex analytic queries against petabytes of structured data. It uses sophisticated query optimization, columnar storage on high-performance local disks, and massively parallel query execution. You can also resize an Amazon Redshift cluster as your requirements for it change.
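To give a flavour of the kind of analysis this enables, here is a small illustrative query against the CDM, assuming psycopg2 and the standard OMOP table names (the cluster endpoint and credentials are placeholders); it computes the same sort of drug-prevalence figures shown in the visualization above.

```python
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
    port=5439, dbname="mycdm", user="master", password="PLACEHOLDER")

with conn, conn.cursor() as cur:
    # Count how many distinct persons have an exposure to each drug concept.
    cur.execute("""
        SELECT c.concept_name, COUNT(DISTINCT d.person_id) AS persons
        FROM drug_exposure d
        JOIN concept c ON c.concept_id = d.drug_concept_id
        GROUP BY c.concept_name
        ORDER BY persons DESC
        LIMIT 10;
    """)
    for name, persons in cur.fetchall():
        print(f"{name}: {persons}")
```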
The solution in this blog post automatically loads de-identified sample data of 1,000 people from the CMS 2008–2010 Data Entrepreneurs’ Synthetic Public Use File (DE-SynPUF). The data has helpful formatting from LTS Computing LLC. Vocabulary data from the OHDSI Athena project is also loaded into the OMOP CDM, and a results set is computed by OHDSI Achilles.
Following is a detailed technical diagram showing the configuration of the architecture to be deployed.
Deploying OHDSI on AWS
Everything just described is automatically deployed by using an AWS CloudFormation template. Using this template, you can quickly get started with the OHDSI project. The CloudFormation templates for this deployment as well as all of the supporting scripts and source code can be found in the AWS Labs GitHub repo.
From your AWS account, open the CloudFormation Management Console and choose Create Stack. From there, copy and paste the following URL in the Specify an Amazon S3 template URL box, and choose Next.
On the next screen, you provide a Stack Name (this can be anything you like) and a few other parameters for your OHDSI environment.
You use the DatabasePassword parameter to set the password for the master user account of the Amazon Redshift and Aurora databases.
You use the EBEndpoint name to generate a unique URL for Atlas to access the OHDSI environment. It is http://EBEndpoint.AWS-Region.elasticbeanstalk.com, where EBEndpoint.AWS-Region indicates the Elastic Beanstalk endpoint and AWS Region. You can configure this URL through Elastic Beanstalk if you want to change it in the future.
You use the KPair option to choose one of your existing Amazon EC2 key pairs to use with the instances that Elastic Beanstalk deploys. By doing this, you can gain administrative access to these instances in the future if you need to. If you don’t already have an Amazon EC2 key pair, you can generate one for free. You do this by going to the Key Pairs section of the EC2 console and choosing Generate Key Pair.
Finally, you use the UserIPRange parameter to specify a CIDR IP address range from which to access your OHDSI environment. By default, your OHDSI environment is accessible over the public internet. Use UserIPRange to limit access over the Internet to a single IP address or a range of IP addresses that represent users you want to have access. Through additional configuration, you can also make your OHDSI environment completely private and accessible only through a VPN or AWS Direct Connect private circuit.
When you’ve provided all Parameters, choose Next.
On the next screen, you can provide some other optional information like tags at your discretion, or just choose Next.
On the next screen, you can review what will be deployed. At the bottom of the screen, there is a check box for you to acknowledge that AWS CloudFormation might create IAM resources with custom names. This is correct; the template being deployed creates four custom roles that give permission for the AWS services involved to communicate with each other. Details of these permissions are inside the CloudFormation template referenced in the URL given in the first step. Check the box acknowledging this and choose Next.
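If you prefer to script the deployment instead of clicking through the console, the same stack can be created with boto3; in this sketch the template URL and every parameter value are placeholders, and the Capabilities argument mirrors the IAM acknowledgement check box just described.

```python
import boto3

cfn = boto3.client("cloudformation")
cfn.create_stack(
    StackName="ohdsi-environment",  # any name you like
    TemplateURL="https://s3.amazonaws.com/example-bucket/ohdsi.yaml",  # placeholder
    Parameters=[
        {"ParameterKey": "DatabasePassword", "ParameterValue": "ChangeMe123!"},
        {"ParameterKey": "EBEndpoint", "ParameterValue": "my-ohdsi"},
        {"ParameterKey": "KPair", "ParameterValue": "my-ec2-keypair"},
        {"ParameterKey": "UserIPRange", "ParameterValue": "203.0.113.0/24"},
    ],
    # The template creates custom IAM roles, so this acknowledgement is required.
    Capabilities=["CAPABILITY_NAMED_IAM"],
)
```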
You can watch as CloudFormation builds out your OHDSI architecture. A CloudFormation deployment is called a stack. The parent stack creates two child stacks, one containing the VPC and IAM roles and another created by Elastic Beanstalk with the Atlas and WebAPI servers. When all three stacks have reached the green CREATE_COMPLETE status, as shown in the screenshot following, then the OHDSI architecture has been deployed.
There is still some work going on behind the scenes, though. To watch the progress, browse to the Amazon Redshift section of your AWS Management Console and choose the Amazon Redshift cluster that was created for your OHDSI architecture. After you do so, you can observe the Loads and Queries tabs.
First, on the Loads tab, you can see the CMS De-SynPUF sample data and Athena vocabulary data being loaded into the OMOP Common Data Model. After you see the VOCABULARY table reach the COMPLETED status (as shown following), all of the sample and vocabulary data has been loaded.
After the data loads, the Achilles computation starts. On the Queries tab, you can watch Achilles running queries against your database to build out the Results schema. Achilles runs a large number of queries, and the entire process can take quite some time (about 20 minutes for the sample data we’ve loaded). Eventually, no new queries show up in the Queries tab, which shows that the Achilles computation is completed. The entire process from the time you executed the CloudFormation template until the Achilles computation is completed usually takes about an hour and 15 minutes.
At this point, you can browse to the Elastic Beanstalk section of the AWS Management Console. There, you can choose the OHDSI Application and Environment (green box) that was deployed by the CloudFormation template. At the top of the dashboard, as shown following, you see a link to a URL. This URL matches the name you provided in the EBEndpoint parameter of the CloudFormation template. Choose this URL, and you can start using Atlas to explore the CMS DE-SynPUF sample data!
Cost of deploying this environment
It used to be common to see healthcare data analytics environments deployed in an on-premises data center with expensive data warehouse appliances and virtualized environments. The cloud era has democratized the availability of the infrastructure required to do this type of data analysis, so that now it is within reach of even small organizations. This environment can expand to analyze petabyte-scale health data, and you only pay for what you need. See an estimated breakdown of the monthly cost components for this environment as deployed on the AWS Solution Calculator.
It’s also worth noting that this environment does not have to be run all of the time. If you are only performing analyses periodically, you can terminate the environment when you are finished and restore it from the database backups when you want to continue working. This would reduce the cost of operation even further.
Summary
Now that you have a fully functional OHDSI environment with sample data, you can use this to explore and learn the toolset and its capabilities. After learning with the sample data, you can begin gaining insights by analyzing your own organization’s health data. You can do this using an extract, transform, load (ETL) process from one or more of your health data sources.
James Wiggins is a senior healthcare solutions architect at AWS. He is passionate about using technology to help organizations positively impact world health. He also loves spending time with his wife and three children.
Blockchain, AI, big data, NoSQL, microservices, single page applications, cloud, SOA. What do these have in common? They have all been, or still are, hyped. At some point each was “the big thing” du jour. Everyone was investigating the possibility of using them, everyone was talking about them, there were meetups, conferences, articles on Hacker News and Reddit. There are more examples, of course (which JavaScript framework is it this month?) but I’ll focus my examples on those above.
Another thing they have in common is that they are useful. All of them have some pretty good applications that are definitely worth the time and investment.
Yet another thing they have in common is that they are far from universally applicable. I’ve argued that monoliths are often still the better approach and that microservices introduce too much complexity for the average project. Big Data is something very few organizations actually have; AI/machine learning can help a wide variety of problems, but it is just a tool in a toolbox, not the solution to all problems. Single page applications are great for, yeah, applications, but most websites are still websites, not feature-rich frontends – you don’t need an SPA for every type of website. NoSQL has solved niche issues, and issues of scale that few companies have had, but nothing beats a good old relational database for the typical project out there. “The cloud” is not always where you want your software to be; and SOA just means everything (ESBs, direct integrations, even microservices, according to some). And the blockchain – it seems to be having limited success beyond cryptocurrencies.
And finally, another trait many of them share is that the hype has settled down. Only yesterday I read an article about the “death of the microservices madness”. I don’t see nearly as many new NoSQL databases as a few years ago, some of the projects that have been popular have faded. SOA and “the cloud” are already “boring”, and we’ve realized we don’t actually have big data if it fits in an Excel spreadsheet. SPAs and AI are still high in popularity, but we are getting a good understanding as a community why and when they are useful.
But it seems that nuanced reality has never stopped us from hyping a particular technology or approach. And maybe that’s okay, if it gives a promising, though niche, technology the spotlight and lets it shine in the particular use cases where it fits.
But countless projects have and will suffer from our collective inability to filter through these hypes. I’d bet millions of developer hours have been wasted in trying to use the above technologies where they just didn’t fit. It’s like that scene from Idiocracy where a guy tries to fit a rectangular figure into a circular hole.
And the new one is “the blockchain”. I won’t repeat my rant, but in summary – it doesn’t solve many of the problems companies are trying to solve with it right now just because it’s cool. Or at least it doesn’t solve them better than existing solutions. Many pilots will be carried out, many hours will be wasted in figuring out why that thing doesn’t work. A few of those projects will be a good fit and will actually bring value.
Before reaching for it, ask: do you need to reach multi-party consensus for the data you store? Can all stakeholders support the infrastructure to run their node(s)? Do they have the staff to administer the node(s)? Do you need to execute distributed application code on the data? Won’t it be easier to just deploy RESTful APIs and integrate the parties through that? Do you need to store all the data, or just parts of it, to guarantee data integrity?
“If all you have is a hammer, everything looks like a nail,” as the famous saying goes. In the software industry we repeatedly find new and cool hammers and then try to hit as many nails as we can. But only a few of them are actual nails. The rest remain ugly, hard-to-support, “who was the idiot that wrote this” and “I wasn’t here when the decisions were made” types of projects.
I don’t have the illusion that we will calm down and skip the next hypes. Especially if adding the hyped word to your company’s name raises your stock price. But if there’s one thing I’d like people to ask themselves when choosing a technology stack, it is “do we really need that to solve our problems?”.
If the answer is really “yes”, then great, go ahead and deploy the multi-organization permissioned blockchain, or fork Ethereum, or whatever. If not, you can still do a project at home that you can safely abandon. And if you need some pilot project to figure out whether the new piece of technology would be beneficial – go ahead and try it. But have a baseline – the fact that it somehow worked doesn’t mean it’s better than old, tested models of doing the same thing.
Allow your robots to join in the fun this Christmas with a round of Channel 4’s Countdown. https://www.rosietheredrobot.com/2017/12/tea-minus-30.html
Rosie the Red Robot
First, a little bit of backstory. Challenged by his eldest daughter to build a robot, technology-loving Alan got to work building Rosie.
I became (unusually) determined. I wanted to show her what can be done… and the how can be learnt later. After all, there is nothing more exciting and encouraging than seeing technology come alive. Move. Groove. Quite literally.
Originally, Rosie had a Raspberry Pi 3 brain controlling ultrasonic sensors and motors via Python. From there, she has evolved into something much grander, and Alan has documented her upgrades on the Rosie the Red Robot blog. Using GPS trackers and a Raspberry Pi camera module, she became Rosie Patrol, a rolling, walking, interactive bot; then, with further upgrades, the Tea Minus 30 project came to be. Which brings us back to Countdown.
T(ea) minus 30
In case it hasn’t been a big part of your life up until now, Countdown is one of the longest-running television shows in history, and it occupies a special place in British culture. Contestants take turns to fill a board with nine randomly selected vowels and consonants, before battling the Countdown clock to find the longest word they can in the space of 30 seconds.
There’s a numbers round involving arithmetic, too – but for now, we’re going to focus on letters and words, because that’s where Rosie’s skills shine.
Using an online resource, Alan created a dataset of the ten thousand most common English words.
Many words, listed in order of commonness. Alan wrote a Python script to order them alphabetically and by length.
Next, Alan wrote a Python script to select nine letters at random, then search the word list to find all the words that could be spelled using only these letters. He used the randint function to select letters from a pre-loaded alphabet, and introduced a requirement to include at least two vowels among the nine letters.
Words that match the available letters are displayed on the screen.
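A minimal sketch of that letters round might look like the following, assuming the word list lives in a words.txt file with one word per line; Alan’s actual script may differ in its details.

```python
import random
from collections import Counter

VOWELS = "aeiou"
CONSONANTS = "bcdfghjklmnpqrstvwxyz"

def pick_letters(total=9, min_vowels=2):
    """Select nine letters with randint, requiring at least two vowels."""
    letters = []
    for i in range(total):
        pool = VOWELS if i < min_vowels else VOWELS + CONSONANTS
        letters.append(pool[random.randint(0, len(pool) - 1)])
    random.shuffle(letters)
    return letters

def can_spell(word, letters):
    """A word is playable if it needs no letter more often than it is available."""
    need, have = Counter(word), Counter(letters)
    return all(have[c] >= n for c, n in need.items())

with open("words.txt") as f:  # hypothetical path: the 10,000-word list
    words = [w.strip().lower() for w in f if w.strip()]

letters = pick_letters()
matches = sorted((w for w in words if can_spell(w, letters)), key=len, reverse=True)
print("Letters:", " ".join(letters).upper())
print("Longest finds:", matches[:5])
```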
Putting it all together
With the basic game-play working, it was time to bring the project to life. For this, Alan used Rosie’s camera module, along with optical character recognition (OCR) and text-to-speech capabilities.
Alan writes, “Here’s a very amateurish drawing to brainstorm our idea. Let’s call it a design as it makes it sound like we know what we’re doing.”
Alan’s script has Rosie take a photo of the TV screen during the Countdown letters round, then perform OCR using the Google Cloud Vision API to detect the nine letters contestants have to work with. Next, Rosie runs Alan’s code to check the letters against the ten-thousand-word dataset, converts text to speech with Python gTTS, and finally speaks her highest-scoring word via omxplayer.
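Pieced together from that description, the whole pipeline might be sketched as follows; the library calls are real, but the file names and the glue logic are our own illustration rather than Alan’s code.

```python
import os
from collections import Counter

from google.cloud import vision
from gtts import gTTS
from picamera import PiCamera

def find_best_word(letters, path="words.txt"):
    """Longest word spellable from the letters (same idea as the sketch above)."""
    have = Counter(letters)
    with open(path) as f:
        words = [w.strip().lower() for w in f if w.strip()]
    playable = [w for w in words
                if all(have[c] >= n for c, n in Counter(w).items())]
    return max(playable, key=len) if playable else ""

# 1. Photograph the TV screen during the letters round.
camera = PiCamera()
camera.capture("board.jpg")

# 2. OCR the photo with the Cloud Vision text-detection API.
client = vision.ImageAnnotatorClient()
with open("board.jpg", "rb") as f:
    image = vision.Image(content=f.read())
response = client.text_detection(image=image)
text = response.text_annotations[0].description if response.text_annotations else ""
letters = [c.lower() for c in text if c.isalpha()][:9]

# 3. Find the highest-scoring word, then speak it via gTTS and omxplayer.
best = find_best_word(letters)
if best:
    gTTS(best).save("answer.mp3")
    os.system("omxplayer answer.mp3")
```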
You can follow the adventures of Rosie the Red Robot on her blog, or follow her on Twitter. And if you’d like to build your own Rosie, Alan has provided code and tutorials for his projects too. Thanks, Alan!
This is the fifth article in a short series about Poland, Europe, and the United States. To explore the entire series, start here.
Perhaps not surprisingly, my previous blog post sparked several interesting discussions with my Polish friends who took a more decisive view of the social costs of firearm ownership, or who saw the Second Amendment as a barbaric construct with no place in today’s world. Their opinions reminded me of my own attitude some ten years ago; in this brief follow-up, I wanted to share several data points that convinced me to take a more measured stance.
Let’s start with the basics: most estimates place the number of guns in the United States at 300 to 350 million – roughly one firearm for every single resident. In Gallup polls, some 40-50% of all households report having a gun, frequently more than one. The demographics of firearm ownership are more uniform than stereotypes may imply; there is some variance across regions, political affiliations, and genders – but for the most part, it tends to fall within fairly narrow bands.
An overwhelming majority of gun owners cite personal safety as the leading motive for purchasing a firearm; hunting and recreation come a strong second. The defensive aspect of firearm ownership is of special note, because it can potentially provide a very compelling argument for protecting the right to bear arms even if it’s a socially unwelcome practice, or if it comes at an elevated cost to the nation as a whole.
The self-defense argument is sometimes dismissed as pure fantasy, with many eminent pundits citing one questionable statistic to support this view: the fairly low number of justifiable homicides in the country. Despite its strong appeal to ideologues, the metric does not stand up to scrutiny: all available data implies that most encounters where a gun is pulled by a would-be victim will not end with the assailant getting killed; it’s overwhelmingly more likely that the bad guy would hastily retreat, be detained at gunpoint, or suffer non-fatal injuries. In fact, even in the unlikely case that a firearm is actually discharged with the intent to kill or maim, somewhere around 70-80% of victims survive.
In reality, we have no single, elegant, and reliable source of data about the frequency with which firearms are used to deter threats; the results of scientific polls probably offer the most comprehensive view, but are open to interpretation and vary significantly depending on sampling methods and questions asked. That said, a recent meta-analysis from the Centers for Disease Control and Prevention provided some general bounds:
“Defensive use of guns by crime victims is a common occurrence, although the exact number remains disputed (Cook and Ludwig, 1996; Kleck, 2001a). Almost all national survey estimates indicate that defensive gun uses by victims are at least as common as offensive uses by criminals, with estimates of annual uses ranging from about 500,000 to more than 3 million.”
An earlier but probably similarly unbiased estimate from the US Department of Justice puts the number at approximately 1.5 million uses a year.
The CDC study also goes on to say:
“A different issue is whether defensive uses of guns, however numerous or rare they may be, are effective in preventing injury to the gun-wielding crime victim. Studies that directly assessed the effect of actual defensive uses of guns (i.e., incidents in which a gun was “used” by the crime victim in the sense of attacking or threatening an offender) have found consistently lower injury rates among gun-using crime victims compared with victims who used other self-protective strategies.”
An argument can be made that the availability of firearms translates to higher rates of violent crime, thus elevating the likelihood of encounters where a defensive firearm would be useful – feeding into an endless cycle of escalating violence. That said, such an effect does not seem to be particularly evident. For example, the United States comes out reasonably well in statistics related to assault, rape, and robbery; on these fronts, America looks less violent than the UK or a bunch of other OECD countries with low firearm ownership rates.
But there is an exception: one area where the United States clearly falls behind other highly developed nations is homicide. The per-capita figures are almost three times as high as in much of the European Union. And indeed, the bulk of intentional homicides – some 11 thousand deaths a year – trace back to firearms.
We tend to instinctively draw a connection to guns, but the origins of this tragic situation may be more elusive than they appear. For one, non-gun-related homicides happen in the US at a higher rate than in many other countries, too; Americans just seem to be generally more keen on killing each other than people in places such as Europe, Australia, or Canada. In addition, no convincing pattern emerges when comparing overall homicide rates across states with permissive and restrictive gun ownership laws. Some of the lowest per-capita homicide figures can be found in extremely gun-friendly states such as Idaho, Utah, or Vermont; whereas highly-regulated Washington D.C., Maryland, Illinois, and California all rank pretty high. There is, however, fairly strong correlation between gun and non-gun homicide rates across the country – suggesting that common factors such as population density, urban poverty, and drug-related gang activities play a far more significant role in violent crime than the ease of legally acquiring a firearm. It’s tragic but worth noting that a strikingly disproportionate percentage of homicides involves both victims and perpetrators that belong to socially disadvantaged and impoverished minorities. Another striking pattern is that up to about a half of all gun murders are related to or committed under the influence of illicit drugs.
Now, international comparisons show general correlation between gun ownership and some types of crime, but it’s difficult to draw solid conclusions from that: there are countless other ways to explain why crime rates may be low in the wealthy European states, and high in Venezuela, Mexico, Honduras, or South Africa; compensating for these factors is theoretically possible, but requires making far-fetched assumptions that are hopelessly vulnerable to researcher bias. Comparing European countries is easier, but yields inconclusive results: gun ownership in Poland is almost twenty times lower than in neighboring Germany and ten times lower than in the Czech Republic – but you certainly wouldn’t be able to tell that from national crime stats.
When it comes to gun control, one CDC study on the topic concluded:
“The Task Force found insufficient evidence to determine the effectiveness of any of the firearms laws or combinations of laws reviewed on violent outcomes.”
This does not imply that such approaches are necessarily ineffective; for example, it seems pretty reasonable to assume that well-designed background checks or modest waiting periods do save lives. Similarly, safe storage requirements would likely prevent dozens of child deaths every year, at the cost of rendering firearms less available for home defense. But for the hundreds of sometimes far-fetched gun control proposals introduced every year at the federal and state level, emotions often take the place of real data, poisoning the debate around gun laws and ultimately bringing little or no public benefit. The heated assault weapon debate is one such red herring: although modern semi-automatic rifles look sinister, they are far more common in movies than on the streets; in reality, rifles of all kinds account for only somewhere around 4% of firearm homicides, and AR-15s are only a tiny fraction of that – likely claiming about as many lives as hammers, ladders, or swimming pools. The efforts to close the “gun show loophole” seem fairly sensible on the surface, too, but are of similarly uncertain merit; instead of gun shows, criminals depend on friends, family, and the more than 200,000 guns that are stolen from their rightful owners every year. When breaking into a random home yields a 40-50% chance of scoring a firearm, it’s not hard to see why.
Another oddball example of simplistic legislative zeal is the attempt to mandate costly gun owner liability insurance, based on drawing an impassioned but flawed parallel between firearms and cars; what undermines this argument is that car accidents are commonplace, while gun handling mishaps – especially ones that injure others – are rare. We also have proposals to institute $100 ammunition purchase permits, to prohibit ammo sales over the Internet, or to impose a hefty per-bullet tax. Many critics feel that such laws seem to be geared not toward addressing any specific dangers, but toward making firearms more expensive and burdensome to own – slowly eroding the constitutional rights of the less wealthy folks. They also see hypocrisy in the common practice of making retired police officers and many high-ranking government officials exempt from said laws.
Regardless of the individual merits of these regulations, it’s certainly true that with countless pieces of sometimes obtuse and poorly written federal, state, and municipal statutes introduced every year, it’s increasingly easy for people to unintentionally run afoul of the rules. In California, the law as written today implies that any legal permanent resident in good standing can own a gun, but that only US citizens can transport it by car. Given that Californians are also generally barred from carrying firearms on foot in many populated areas, non-citizen residents are seemingly expected to teleport between the gun store, their home, and the shooting range. With many laws hastily drafted in the days after mass shootings and other tragedies, such gems are commonplace. The federal Gun-Free School Zones Act imposes special restrictions on gun ownership within 1,000 feet of a school and slaps harsh penalties on acts as minor as carrying a firearm in an unlocked container from one’s home to a car parked in the driveway. In many urban areas, a lot of people either live within such a school zone or can’t conceivably avoid it when going about their business; GFSZA violations are almost certainly common and are policed only selectively.
Meanwhile, with sharp declines in crime continuing for the past 20 years, public opinion is increasingly in favor of broad, reasonably policed gun ownership; for example, more than 70% of respondents to one Gallup poll are against the restrictive handgun bans of the sort attempted in Chicago, San Francisco, or Washington D.C.; and in a recent Rasmussen poll, only 22% say that they would feel safer in a neighborhood where people are not allowed to keep guns. In fact, responding to the media’s undue obsession with random acts of violence against law-abiding citizens, and worried about the historically very anti-gun views of the sitting president, Americans are buying more firearms than ever before. Even the National Rifle Association – a staunchly conservative organization vilified by gun control advocates and mainstream pundits – enjoys a pretty reasonable approval rating across many demographics: 58% overall and 78% in households with a gun.
And here’s the kicker: despite its reputation for being a political arm of firearm manufacturers, the NRA is funded largely through individual memberships, small-scale donations, and purchase round-ups; organizational donations add up to about 5% of their budget – and if you throw in advertising income, the total still stays under 15%. That makes it quite unlike most of the other large-scale lobbying groups that Democrats aren’t as keen on naming-and-shaming on the campaign trail. The NRA’s financial muscle is also frequently overstated; it doesn’t even make it onto the list of top 100 lobbyists in Washington – and gun control advocacy groups, backed by activist billionaires such as Michael Bloomberg, now frequently outspend the pro-gun crowd. Of course, it would be better for the association’s socially conservative and unnecessarily polarizing rhetoric – sometimes veering onto the topics of abortion or video games – to be offset by the voice of other, more liberal groups. But ironically, organizations such as American Civil Liberties Union – well-known for fearlessly defending controversial speech – prefer to avoid the Second Amendment; they do so not because the latter concept has lesser constitutional standing, but because supporting it would not sit well with their own, progressive support base.
America’s attitude toward guns is a choice, not a necessity. It is also true that gun violence is a devastating problem, and that the emotional horror and lasting social impact of incidents such as school shootings can’t possibly be captured in any cold, dry statistic alone. But there is also nuance and reason to the gun control debate that can be hard to see for newcomers from more firearm-averse parts of the world.
For the next article in the series, click here. Alternatively, if you prefer to keep reading about firearms, go here for an overview of the gun control debate in the US.