All posts by Stephen Cass

Get Your Interpreters Ready—This Year’s BASIC 10-Liner Competition Is Open For Entries

Post Syndicated from Stephen Cass original https://spectrum.ieee.org/tech-talk/computing/software/get-your-interpreters-readythis-years-basic-10-liner-competition-is-open-for-entries

Tired of slicing Python lists? Can’t face coding up another class in C++? Take a break and try your hand at writing a little BASIC—very little, in fact, because the goal here is to write a game in just 10 lines!

The annual BASIC 10-Liner competition is now taking entries for the 2021 contest. Run almost every year by Gunnar Kanold since 2011, the contest requires entrants to submit game programs written for any 8-bit home computer for which an emulator is available. Using the program to load a machine-code payload is forbidden.

Some of the entries in previous years have been nothing short of astounding, leveraging the abilities of classic computers like the TRS-80, Commodore 64, and Atari 800. Back in 2019, IEEE Spectrum interviewed Kanold about the contest, and he gave some tips if you’re thinking about throwing your hat into the ring:

While some contestants are expert programmers who have written “special development tools to perfect their 10 Liners,” the contest is still accessible to neophytes: “The barriers to participation are low. Ten lines [is] a manageable project.” … Kanold points to programming tutorials that can be found on YouTube or classic manuals [PDF] and textbooks that have been archived online. “But the best resource is the contest itself,” he says. “You can find the 10 Liners from the previous years on the home page, most of them with comprehensive descriptions. With these excellent breakdowns, you can learn how the code works.”

So take a look, download an emulator of your favorite machine (if you don’t already have a favorite, I’m partial to the BBC Micro myself) and start programming! This year’s deadline is 27 March, 2021.

How Is This a Good Idea: Car Dashboard Video Games

Post Syndicated from Stephen Cass original https://spectrum.ieee.org/cars-that-think/transportation/advanced-cars/hitagi-car-dashboard-videogames

Modern cars have become just another computer peripheral. Internally, a host of embedded processors handle tasks that range in complexity from rolling windows up and down to something close to autopilot. Externally, wireless smartphone connections allow a driver to monitor their car’s performance and send remote instructions to warm up the vehicle or unlock the doors. 

So why not lean into the computery nature of the modern car and use it for more than just driving around? Don’t take yourself so seriously: Use this machine, when safely parked, for having some fun. That appears to be the logic behind the notion of using the car dashboard screen—and often the steering wheel—as a platform and interface for video games.

These car-based games fall into three broad categories: Those that use a car’s center touchscreen for casual gaming recreation while the car is parked; those that co-opt the cars’ controls in some way, so that, for example, a virtual racing car can be controlled by a real steering wheel, also while parked; and finally, the as-yet-not-implemented idea that augmented reality and other games could be integrated into the active driving experience.

The first category falls into the domain of “harmless if somewhat pointless.” Looking at the games already on offer in Tesla cars, many of them are ports of classic Atari arcade titles such as Missile Command and Centipede, although some more recent games such as Cuphead, Fallout Shelter, and Beach Buggy Racing 2 have also been adapted. These are intended to allow a driver to kill some time while parked, at an EV charging station perhaps (a longer and more passive experience than popping into a gas station to fill a tank).

But anyone who is driving a Tesla already has a much better piece of gaming hardware on hand: their smartphone. Admittedly, a phone’s screen is much smaller than the Tesla’s display, but the phone offers a vastly larger selection of titles and can be played in the hands, rather than requiring the driver to extend their arms. I expect that the best use case for this category of game will prove to be in the least flashy games—chess and backgammon—where a driver and passenger might use the screen as a board to play against each other. Otherwise, it all feels like a feature for a feature’s sake, good for a flex while showing off a new car, but not much else.

The next category of games—those that use a parked car as a glorified controller—edges into more dubious territory. The fundamental issue is that cars are not just another peripheral. They are heavy machinery. The non-profit National Safety Council estimates that in the United States, 38,800 people lost their lives due to car crashes in 2019, with another 4.4 million suffering serious injury. While there are no good statistics for deaths caused by, say, accidents involving laser printers, I’m willing to bet it’s a much, much smaller number. Until autonomous driving technology is good enough to take humans out of the loop entirely—so-called Level 5 autonomy—drivers hold life and death in their hands every time they sit behind the wheel.

This may seem like a moot point if the games can only be played while the car is parked. But the same controls previously dedicated to the licensed operation of a car are now also used for consequence-free light entertainment. 

In user experience terms, having the same input produce different outputs depending on the system’s state is known as a mode-based, or modal, interface. Modal interfaces were popular in the days when interactive input was often limited to the channel of a single keyboard, but they have since fallen out of favor. This is in large part because users tend to make so-called mode errors: Their mental model of the system falls out of sync with its actual state—something known as mode confusion—and they issue commands that are inappropriate for the mode the system is actually in.
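To make the idea concrete, here is a minimal, purely hypothetical Python sketch of a modal interface (none of this is real vehicle or game code): the same input produces different outputs depending on a hidden mode, which is exactly the kind of setup that invites mode errors.

```python
# Illustrative sketch of a modal interface: one input, different outputs per mode.
# All names here are hypothetical; this is not real vehicle or game software.

class DashboardSystem:
    def __init__(self):
        self.mode = "driving"  # hidden state the user must keep track of

    def turn_wheel(self, degrees):
        if self.mode == "driving":
            return f"steering road wheels by {degrees} degrees"
        return f"steering virtual race car by {degrees} degrees"

dash = DashboardSystem()
dash.mode = "game"
# A user who believes the system is still in "driving" mode has just made a mode error:
print(dash.turn_wheel(30))
```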

Mode confusion is irritating if you, for example, accidentally delete a file while working at a computer. But it can be lethal when it happens elsewhere. For example, a number of aviation accidents have been caused by pilots making mode errors with controls in the cockpit.

Tesla Arcade does lock out some driving controls when playing a game: pressing the accelerator, for example, produces a warning notice. But turning the steering wheel to control a game car still also turns the wheels of the real car. It is easy to imagine a scenario where somebody parks their car with the wheels aligned straight ahead, then becomes immersed in a racing game while their partner runs an errand. Their partner returns and gets back in the passenger seat. The driver immediately turns off the game, starts the car and begins to back out—forgetting that the last thing they were doing in the game was taking a sharp curve. So instead of reversing straight back out, the wheels are set so that the car turns into another vehicle—or a pedestrian.

The next proposed category of in-car gaming, where the game is tied into the active driving experience, is even more worrisome. In May, Elon Musk mused on Twitter about the possibility of having something like the augmented reality hit Pokemon Go running on Teslas. Whether any such game is played using an in-car display or some kind of heads-up display, the issue is that it will inevitably add to the driver’s mental load, raising the likelihood of distraction at a critical moment.

There have been some very public incidents where drivers, lulled into a false sense of security by autonomous driving systems, have become so distracted as to be practically somnambulant: A pedestrian crossing the road was killed during a 2018 Uber autonomous driving road test in Arizona, and in 2016 a driver’s overreliance on his Tesla’s Autopilot system proved fatal when a truck failed to yield. But it doesn’t take the latest in AI technology to beguile drivers.

Safety experts have previously raised concerns about common in-vehicle information systems, which are intended to help drivers with navigation, provide audio entertainment, or support hands-free phone calls or text messaging. A 2017 report by the AAA Foundation for Traffic Safety warned that many such systems already place excessive visual or mental demands on drivers. The truth is that we are simply not as good at multitasking as we like to think we are. And every additional task we try to perform simultaneously—such as completing a game objective while also paying attention to traffic—pushes us closer to a condition that accident investigators refer to as task saturation.

As a person reaches task saturation, their situational awareness becomes more and more spotty, and they can end up simply ignoring critical warnings, or become incapable of completing all the things they need to do in time to avoid disaster. And given that every game designer’s objective is to grab and hold our attention, it’s not much of a stretch to imagine that actively playing a game while driving would make more mental demands than using, say, a satellite navigation display. 

So let’s keep the games where they belong—away from steering wheels and actual car dashboards.

Painless FPGA Programming

Post Syndicated from Stephen Cass original https://spectrum.ieee.org/geek-life/hands-on/painless-fpga-programming

Once priced out of the reach of enthusiasts, field-programmable gate arrays have become affordable in recent years, and have started popping up in all kinds of neat projects. By the very nature of their reconfigurable hardware, FPGAs can be used in many different ways, such as implementing novel CPUs. But from a maker’s point of view, a huge part of the attraction lies in the circuits’ capacity for handling lots of input/output signals.

Multiple inputs—say, from an array of microphones or other sensors—can be processed in perfect sync, and large numbers of outputs can be generated simultaneously without losing any accuracy in timing. I’ve looked on in envy at FPGA-based undertakings far beyond the capabilities of conventional microcontrollers, such as driving 9,000 smart LEDs. But I’ve been reluctant to dive into FPGAs myself because, in addition to price, there’s been another barrier to entry—learning how to program them. Traditionally, FPGAs are programmed using pro-level hardware-description languages such as Verilog or VHDL. But a new manufacturing partnership between SparkFun Electronics and Alchitry promised a gentle introduction on the programming side and the bonus of easy integration on the hardware side. So I decided to take the plunge and see if that promise could be fulfilled.

As a test project, I settled on building a kinetic sculpture, partially inspired by George Cutts’s motorized sculpture Sea Change, which I’d seen at the outdoor Storm King Art Center during my sole expedition beyond the borders of New York City in 2020. Sea Change uses two rotating elements with their motions locked together. What could I do with many more individually controlled moving pieces?

As the basis for my project, I bought an Alchitry Au Kit for US $130. This comes with the core Alchitry Au board, featuring a Xilinx Artix 7 FPGA with over 33,000 logic cells. The board also has eight built-in LEDs, 256 MB of RAM, and a whopping 102 I/O pins that operate at 3.3 volts, as well as a Qwiic connector, SparkFun’s open standard for a physical interface for the I²C protocol. Having this connector built right into the board means it’s easy to hook up microcontrollers or sensors from any manufacturer who uses the Qwiic interface.

On the software side, the kit uses Alchitry Labs, an IDE (integrated development environment) designed to run on top of Xilinx’s Vivado package, which creates the actual FPGA configuration file. For the record, Vivado runs only on Windows or Linux (although I had no problem running it on my Mac under Parallels Desktop), and some folks may not be thrilled that the free version of Vivado has a mandatory reporting function that sends back statistical details about your designs, such as the number of logic cells used.

Alchitry Labs is designed for people programming the Alchitry Au, as well as the cheaper but not quite as sophisticated Alchitry Cu board (which has an FPGA with only 7,680 logic cells and 79 I/O pins but is supported by an open-source tool chain). FPGA programming is done via the Lucid language, which provides some guardrails for novices and useful higher-level abstractions. Projects are prepopulated with the required definitions needed for accessing things like the reset button, onboard LEDs, or the serial interface. A suite of additional components can be added by clicking through a library that contains modules for things like SPI (serial peripheral interface) and I²C, pulse-width modulation, and HDMI encoding and decoding.

An ample set of tutorials is provided, although they could probably do with some tidying up. For example, the documentation I found for using the servo controller was written for Alchitry’s now-discontinued Mojo board, which uses a slightly different way of specifying I/O pins, but it wasn’t too hard to figure out. Many of the tutorials are built around using the I/O Element board that comes with the Au kit, and I was soon driving the Element’s four 7-segment displays to show the output of basic mathematical operations on numbers entered via banks of (teeny tiny) DIP switches.

While I’m still very much an FPGA newbie, the tutorials did get me through some of the big mental shifts required for programming hardware versus writing software. You’re not actually writing a program; you’re writing a specification for the tool chain to chew on, and you need to think much more in terms of combinatorial rather than sequential logic.

My sculpture consists of 18 servos arranged vertically, each attached to a horizontal arm with black and red tiles mounted at either end that can oscillate back and forth as desired. Why 18? Well, it’s possible to drive 16 servos with an external servo board connected to a regular microcontroller, and I wanted to push beyond that, but 17 seemed too petty a number. And having many more servos posed its own challenges. When each servo is first energized, it can easily draw 1 ampere before settling down to a few hundred milliamperes in actual operation. Even 18 A is a lot of current to handle at once: My bench supply maxes out at 10 A. I devised a quick-and-dirty power management kludge by dividing the servos into two strings of nine, and added manual switches to the 5-V power lines for each string, letting me energize them one at a time.

Fortunately, the servos will accept a 3.3-V control signal, so I could connect them ­directly to a protoboard included in the Au kit that plugs onto the FPGA board. The FPGA board controls the servos and in turn takes instructions via a Qwiic connector from a RedBoard Turbo microcontroller, following SparkFun’s outline of using I²C in this way for a clock project. An LCD Keypad shield is mounted on the RedBoard to provide a user interface for controlling servo patterns. The result is a sometimes mesmerizing sculpture, but even more important, I’m now confident enough with FPGAs to consider using them in more projects. Promise fulfilled.

The Intense Sound of 1,000 Analog Oscillators

Post Syndicated from Stephen Cass original https://spectrum.ieee.org/geek-life/profiles/the-intense-sound-of-1000-analog-oscillators

When the COVID-19 lockdown came to the United Kingdom, Sam Battle became a man on a mission. Battle has created many unusual electronic musical instruments under the sobriquet of Look Mum No Computer, including an organ made out of singing Furbys and a synthesizer-bicycle hybrid. As people isolated themselves, he embarked on his most ambitious project yet: the KiloDrone.

Many of Battle’s projects involve analog synthesizers, in which a tone is generated by a circuit oscillating at audio frequencies. Multiple oscillators allow a synth to generate multiple tones simultaneously, creating richer sounds. Which raises the question: Can you ever have too many oscillators?

The KiloDrone was built to answer that question. It’s a drone synth, which means it produces sustained sounds, rather than the characteristic attack and decay of, say, a piano. Typically, a drone synth has two to eight oscillators. The KiloDrone, as its name suggests, has 1,000.

The genesis of the KiloDrone came when Battle was “messing around with some transistors, and found a circuit that was lighting up an LED when it shouldn’t be,” he explains. Battle found that he’d rediscovered the reverse-avalanche oscillator, which requires just a capacitor, resistor, and transistor. He realized he could use it as an adjustable audio oscillator, and relied on one for a Red Bull sponsorship in which he built a light-controlled synthesizer out of a drink can in 2017. After that, while he was touring in Europe, a friend joked that he should build “a big box of them.”

“Whenever something captures my imagination, it sort of stays there until it’s done,” says Battle. So he built the 100-oscillator MegaDrone. “And it didn’t seem like that was enough,” says Battle, who quickly settled on 1,000 oscillators as his next target. But “I kept on procrastinating. Then at the beginning of lockdown I was like, ‘I’ve got a few months. I may as well pick up that project.’ ”

A 1,000-oscillator analog synth isn’t exactly portable. The KiloDrone is wall mounted, 2.3 meters high, and 4 meters wide. Each oscillator is connected to an LED as well as a tuning knob and volume control. The oscillators are grouped in banks of 10, with each bank equipped with a master tuning and level control. (You can buy one of the printed circuit boards used to make the banks for US $53 and build your own smaller, standalone drone.)

Despite its size, the KiloDrone draws just 1.2 amperes at 12 volts. Consequently, it dissipates little heat, which keeps the oscillators’ frequency stable. “A lot of people were very reserved about [the KiloDrone initially],” says Battle. “They were like, ‘Oh, by the time you [adjust] the last oscillator it’ll be out of tune.’”

So what does the KiloDrone sound like? Imagine the THX effect played before movies in cinemas—like that, only more so. You can listen to it in Look Mum No Computer’s videos, but Battle says they don’t fully capture the experience of hearing it live. He hopes that, when the pandemic ends, people can come to listen to it—and play it—in a planned Museum of Everything Else dedicated to DIY devices, especially analog instruments. “I like the tangibility of analog. I hate working on computers,” says Battle. “I just can’t stand looking at screens. I like standing up and moving around and making things in a physical world.”

This article appears in the October 2020 print issue as “Let a Thousand Analog Oscillators Sing.”

The Tech Behind The Mind-Reading Pangolin Dress Could Lead To Wireless—And Batteryless—Exoskeleton Control

Post Syndicated from Stephen Cass original https://spectrum.ieee.org/the-human-os/biomedical/bionics/the-tech-behind-a-mind-reading-dress-could-lead-to-wireless-batteryless-exoskeleton-control

An eye-catching dress that’s part art project, part cutting-edge tech, was presented today at the Ars Electronica Festival in Linz, Austria. The dress showcases an ultra-low-energy, high-resolution brain-computer interface that’s so sensitive that if the wearer thinks about moving a single finger, it can identify which one—without the need for an implant.

The dress is the result of a collaboration between researchers at Johannes Kepler University Linz (JKU), developers at medical engineering company G.tec, and fashiontech designer Anouk Wipprecht. Known as the Pangolin dress, it has 1,024 individual head-mounted electrodes in 64 groups of 16. These detect electrical signals coming from the brain. The data from these sensors is combined, analyzed, and converted into colors and motions displayed by 32 NeoPixel LEDs and 32 servo-driven scales, creating a whole-body visualization of neural activity.

The biggest technical advance is in the new custom chip connected to each electrode. Measuring 1.6 millimeters on a side, the single-channel chip integrates an amplifier, an analog-to-digital converter (ADC), and a digital signal processor—and it draws less than 5 microwatts of power. Because it uses so little power, the chip can be powered by a nearby basestation in the manner of a contactless RFID chip, and return data wirelessly as well (although for the Pangolin dress, wired data connections were used to allow the creators to focus on developing and debugging other system elements while COVID-19 restrictions limited access to facilities). Powering and communicating with the chip wirelessly would eliminate the need to tether a patient to a test system with wires, and even eliminate the bulk and weight of the batteries found in conventional wireless BCI systems. The biggest challenge was fitting into “the power budget for the sensor electronics,” says Harald Pretl, professor at the Institute for Integrated Circuits at JKU. “We had to create an amplifier design dedicated to this, an ADC dedicated to this, and also, based on ultrawideband, our own transmission protocol.” Another challenge was dealing with “fading and shadowing effects around the head,” adds Pretl.

But although the chip design was a custom build, it was deliberately made without recourse to exotic fabrication techniques: “We tried to focus on not using expensive technology there. We are using 180-nanometer [fabrication] technology, but we were able to get higher performance by using some advanced circuit design tricks,” explains JKU’s Thomas Faseth, who is the Pangolin dress’s project lead.

For the Pangolin dress, each group of electrodes and chips was mounted on a six-sided tile, and the wearer’s head—in this case, that of Megi Gin in Austria, who became as much a test participant as model—was covered in tiles, each hosting a custom chip and antenna.

The next step up the technology stack was provided by the Austrian commercial BCI developer and manufacturer G.tec, which brought its experience of analyzing neural signals to bear, integrating and interpreting the data coming from the tiles. For example, when you decide to move a muscle voluntarily, it sets off a localized pattern of activity in your motor cortex that can be detected and identified. “With 1,024 separate channels, you can get single finger resolution. This is something you normally cannot do with surface electrodes; you need implants,” says G.tec co-founder Christoph Guger. Normally, says Guger, surface electrode systems have 64 channels, which is only enough to distinguish, for example, whether the movement is intended for the right or left arm.

Because brains vary, identifying such fine-grained detail requires calibrating the system to the individual wearer, using machine learning to recognize the patterns associated with different motions. However, the system doesn’t actually need you to be able to move any given muscle—in fact, it learns more easily when the participant simply imagines performing a movement, because imagining a movement typically takes longer than performing it, producing a more sustained signal. This means that patients with paralyzed or missing limbs may be able to avail themselves of the technology, and Guger even speculates about the possibility of BCI-controlled exoskeletons.

The Pangolin dress determines whether the wearer is in one of a number of mental states, including stressed, neutral, and meditative, and expresses this through motion and light. The dress itself was created by designer Wipprecht, who previously wrote for IEEE Spectrum about her collaboration with G.tec in designing the Unicorn headset, intended to help therapists tune programs for children with ADHD.

The dress combines rigid and fabric elements, with the rigid elements 3D-printed using selective laser sintering in nine interlocking segments. The modular nature of the sensors sparked the idea of taking inspiration from the pangolin’s flexible, often lustrous keratin scales, says Wipprecht. For Wipprecht, part of the challenge was that brain signals can change much faster than a servo can change angle, so she decided to focus on reflecting the frequency of change instead: that is, when low-frequency brain waves predominate, the servos and lights move and pulse slowly. “So much of the interaction design in my pieces is about creating an elegant behavior that gives a good sense of what the data is doing. In this case, since we have a lot of data points through all the 1,024 channels in the dress, my interactions can become even more spot-on … A hectic state is reflected in quick jittery movements and white lights, a neutral state is reflected in neutral movements and blue lights, and a meditative state is where the dress will turn purple with smooth, flowing movements,” says Wipprecht.
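As a rough illustration of the kind of state-to-behavior mapping Wipprecht describes—the dress’s actual firmware and data formats aren’t published here, so every name and value below is hypothetical—a Python sketch might look like this:

```python
# Toy sketch of mapping an estimated mental state to servo and LED behavior,
# loosely following the mapping described in the article. All names and values
# here are hypothetical placeholders, not the dress's real code.

STATE_BEHAVIOR = {
    "hectic":     {"servo_speed": 1.0, "color": (255, 255, 255)},  # quick, jittery, white
    "neutral":    {"servo_speed": 0.4, "color": (0, 0, 255)},      # calm, blue
    "meditative": {"servo_speed": 0.1, "color": (128, 0, 128)},    # slow, flowing, purple
}

def drive_dress(state):
    behavior = STATE_BEHAVIOR.get(state, STATE_BEHAVIOR["neutral"])
    # In real hardware these values would be pushed to 32 servos and 32 NeoPixels.
    print(f"servo speed {behavior['servo_speed']:.1f}, LED color {behavior['color']}")

drive_dress("meditative")
```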

The Pangolin collaboration began late last year and continued despite the complication of COVID-19 restrictions. Although the JKU and G.tec teams were located in Austria, Wipprecht is in the hard-hit state of Florida. For the Austrian teams this meant severely restricted access to their labs, but they were able to send four sensor modules to Wipprecht as she worked on the dress design. The completed dress only arrived in Austria last month for final testing.

The Pangolin team hopes to create another iteration of the dress, this time using the sensors in a completely wireless mode. Discussions about commercializing the sensor technology are ongoing, but Guger estimates that within about two years it could be an off-the-shelf option for other researchers.

The AxiDraw MiniKit Is the Modern XY Plotter You Didn’t Know You Wanted


Post Syndicated from Stephen Cass original https://spectrum.ieee.org/geek-life/hands-on/the-axidraw-minikit-is-the-modern-xy-plotter-you-didnt-know-you-wanted

I’ll be honest: when the original AxiDraw came out from Evil Mad Scientist Laboratories in 2016, I thought it was cute but a curiosity. My thought was that if you were going to play around with reintroducing the venerable technology of the pen-and-paper X-Y plotter, why not do something really different, like the iBoardBot remote-controlled erasable miniature whiteboard? And the AxiDraw’s US $475 price tag felt steep for a spindly-looking contraption that required the user to manually position each sheet of paper, one at a time.

Boy, was I wrong. The AxiDraw took off—folks have embraced it for creating generative art, making microfluidic systems with wax, laser cutting, engraving of all types, testing computer mice, and much more. In short, it became a general-purpose platform for all sorts of experiments, and the reason it could was also the reason for that high price tag: the AxiDraw’s consistent precision. But there’s a way to get that precision and start experimenting for less money, albeit with a couple of trade-offs, with the $325 AxiDraw MiniKit, which came out late last year.

The first trade-off is the size: the MiniKit’s drawing area is 15 by 10 centimeters, while the standard AxiDraw can handle up to 30 by 21.8 cm. But the MiniKit’s smaller size may be a boon to some, fitting better on a desk or workbench, and easier to transport. The second trade-off is time, specifically yours, because the MiniKit is the first AxiDraw to come in build-it-yourself kit form.

As someone who is a lot more confident with circuits than cogs, assembling the MiniKit looked a little daunting at first. All kinds of wheels, minutely varied screws and washers, and various mystery components were crammed into the box. Fortunately, the instructions are some of the most thoughtfully detailed that I’ve ever seen, with many pictures and tips on identifying the M3x8 button-head screw versus the M4x8 torx tapping screw.

The MiniKit’s x-axis is formed by one long fixed rail. A movable chassis runs along this rail, and it supports the y-axis rail in turn, to which the penholder is affixed. Construction took a few hours: The trickiest parts were getting the toothed drive belt in position with adequate tension and adjusting the wheels that grip the MiniKit’s rails so that things can move freely but without any looseness.

The MiniKit uses two stepper motors to position the penholder, but rather than having the motors drive motion along the x and y axes directly, both motors are placed on opposite ends of the x-axis rail. The drive belt is looped through the movable chassis and along the y-axis in such a way that turning one motor while holding the other in place causes the penholder to move at a 45 degree angle to the rails—if the MiniKit were a compass, this would be along the NE to SW line. Turning the other motor, while holding the first one still, moves the penholder along the NW to SE line. This means the axis controlled by each motor is effectively rotated through 45 degrees, so the MiniKit’s microcontroller takes care of mapping incoming x-y coordinates to the appropriate motor signals.
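Here’s a short Python sketch of that 45-degree mapping. The sign conventions and steps-per-millimeter figure are assumptions for illustration, not the AxiDraw firmware’s actual values, but mixing x and y into the two motors is the core idea:

```python
# Sketch of the 45-degree coordinate mapping described above. The sign
# conventions and belt scaling are assumptions; this only illustrates the geometry.

def xy_to_motor_steps(x, y, steps_per_mm=80):
    # Each motor drives motion along a diagonal, so a desired x-y move is the
    # sum and difference of the two motor displacements.
    motor_a = (x + y) * steps_per_mm
    motor_b = (x - y) * steps_per_mm
    return motor_a, motor_b

print(xy_to_motor_steps(10, 10))  # -> (1600.0, 0.0): a pure diagonal move uses one motor
print(xy_to_motor_steps(10, 0))   # -> (800.0, 800.0): a pure x move needs both motors
```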

The MiniKit is primarily controlled from a desktop or laptop using an extension to Inkscape, an open source vector-drawing tool. The software is also required for the final assembly step: calibrating the servo that raises and lowers the penholder. However, many other software options for controlling AxiDraw plotters exist, and it’s pretty straightforward to write your own code using the Processing language, which is designed for creating computer-assisted and computer-generated art, or write a Python script.

Once I had the servo calibrated and pushed the penholder by hand to its home position in the upper left corner of a page, I fitted a roller-ball pen into the MiniKit—you have a choice of holding pens in a vertical or slanted position, the slant being especially suitable if you want to try, say, a fountain pen. The test graphic rendered perfectly, first time, scratching out a welcome message in a font that’s a believable imitation of cursive handwriting (or at least how we used to write cursive, before keyboards caused our penmanship to atrophy).

Evil Mad Scientist Laboratories also offers a handy free stand-alone utility program called StippleGen. You can feed an image into the utility, and it creates a version made up of monochromatic stippled dots of various sizes. The number of dots and their size range is controllable, and the program also tries to determine the shortest path between the dots, so as to minimize the amount of time that the AxiDraw spends moving around.
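StippleGen’s actual optimizer isn’t detailed here, but a greedy nearest-neighbor ordering—an illustrative stand-in, not StippleGen’s real algorithm—captures why dot ordering matters for plot time. A minimal Python sketch:

```python
import math

# Illustration of ordering stipple dots to reduce pen travel. This greedy
# nearest-neighbor pass is NOT StippleGen's algorithm, just the simplest demo.

def order_dots(dots):
    remaining = list(dots)
    path = [remaining.pop(0)]                      # start at the first dot
    while remaining:
        nxt = min(remaining, key=lambda d: math.dist(path[-1], d))
        remaining.remove(nxt)
        path.append(nxt)
    return path

def travel(path):
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

dots = [(0, 0), (90, 10), (5, 5), (80, 15), (10, 0)]
print(travel(dots), travel(order_dots(dots)))      # unordered vs. greedy-ordered travel
```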

The resulting file can then be imported into Inkscape and plotted. Warning though: This can take quite a while. For my first test I used a photo of some Irish passage graves, which needed about 5,000 dots to be recognizable: It took about 2 hours to complete drawing the image, taking over a second to draw each dot and move to the next. (I was able to shave quite a bit off that time by reducing how high the MiniKit lifted the pen between dots in later tests.) The stipple plots work best with faces, which tend to be discernible at lower dot counts than other photographic subjects, no doubt due to the specialized facial-recognition neural hardware in our brains.

In short, the AxiDraw MiniKit rewards experimentation—there are many things to try, but you don’t have to master them all before you get rewarding results. And if nothing else, it should make addressing envelopes easier!

Top Programming Languages 2020

Post Syndicated from Stephen Cass original https://spectrum.ieee.org/at-work/tech-careers/top-programming-language-2020

It would be an understatement to say it’s been a turbulent year since the last time IEEE Spectrum broke out the digital measuring tools to probe the relative popularity of programming languages. Yet one thing remains constant: the dominance of Python.

Since it’s impossible for even the most aggressive spy agency in the world to find out what language every single programmer uses when they sit down at their keyboards—especially the ones tapping away on retro computers or even programmable calculators—we rely on combining 11 metrics from online sources that we think are good proxies for the popularity of 55 languages.

Because different programmers have different interests and needs, our online rankings are interactive, allowing you to weight the metrics as you see fit. Think one measure is way more valuable than the others? Max it out. Disagree with us about the worth of another? Turn it off. We have a number of preset rankings that focus on things such as emerging languages or what jobs employers are looking to fill (big thanks to CareerBuilder for making it possible to query their database this year, now that it’s no longer accessible using a public application programming interface).
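Mechanically, re-weighting works like the following Python sketch; the metric values and weights here are made-up placeholders, not Spectrum’s actual data or formula:

```python
# Sketch of how an interactive weighted ranking works. The metric values and
# weights below are invented placeholders, not Spectrum's real data.

metrics = {                 # each value already normalized to 0-100 per metric
    "Python": {"search": 100, "twitter": 60, "jobs": 90},
    "Java":   {"search": 85,  "twitter": 40, "jobs": 95},
    "Cobol":  {"search": 10,  "twitter": 70, "jobs": 15},
}

def rank(weights):
    scores = {
        lang: sum(vals[m] * w for m, w in weights.items())
        for lang, vals in metrics.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank({"search": 1.0, "twitter": 0.2, "jobs": 0.8}))  # a "default"-style weighting
print(rank({"twitter": 1.0}))                              # Twitter-only: Cobol jumps up
```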

Our default ranking is weighted toward the interests of an IEEE member, and looking at the top entries, we see that Python has held onto its comfortable lead, with Java and C once again coming in second and third place, respectively. Arduino has seen a big jump, rising from 11th place to seventh. (Purists may argue that Arduino is not a language but rather a hardware platform that is programmed using a derivative of Wiring, which itself is derived from C/C++. But we have always taken a very pragmatic approach to our definition of “programming language,” and the reality is that when people are looking to use an Arduino-compatible microcontroller, they typically search for “Arduino code” or buy books about “Arduino programming,” not “Wiring code” or “C programming.”)

One interpretation of Python’s high ranking is that its metrics are inflated by its increasing use as a teaching language: Students are simply asking and searching for the answers to the same elementary questions over and over. There’s an historical parallel here. In the 1980s, BASIC was very visible—there were books, magazines, and even TV programs devoted to the language. But few professional programmers used it, and when the home computer bubble burst, so did BASIC’s, although some advanced descendants like Microsoft Visual Basic are still relatively popular professionally.

There are two counterarguments, though: The first is that students are people, too! If we pay attention only to what professional and expert coders do, we’re at risk of missing an important part of the picture. The second is that, unlike BASIC, Python is frequently used professionally and in high-profile realms, such as machine learning, thanks to its enormous collection of high-quality, specialized libraries.

However, the COVID-19 pandemic has left some traces on the 2020 rankings. For example, if you look at the Twitter metric alone in the interactive, you can see that Cobol is in seventh place. This is likely due to the fact that in April, when we were gathering the Twitter data, Cobol was in the news because unemployment benefit systems in U.S. states were crashing under the load as workers were laid off due to lockdowns. It turns out that many of these systems had not been significantly upgraded since they were created decades ago, and a call went out for Cobol programmers to help shore them up.

There’s always a vibrant conversation about Spectrum’s Top Programming Languages online, so we encourage you to explore the full rankings and leave comments there, particularly if you want to nominate an emerging language for inclusion in next year’s rankings.

This article appears in the August 2020 print issue as “The Top Programming Languages.”

Interactive: The Top Programming Languages 2020

Post Syndicated from Stephen Cass original https://spectrum.ieee.org/static/interactive-the-top-programming-languages-2020

Why Wait For Apple? Try Out The Original ARM Desktop Experience Today With A Raspberry Pi

Post Syndicated from Stephen Cass original https://spectrum.ieee.org/tech-talk/consumer-electronics/gadgets/why-wait-for-apple-try-out-the-original-arm-desktop-experience-today-with-a-raspberry-pi

In a generally anticipated announcement, Apple declared on Monday that it is moving its Macintosh computers away from Intel processors to its own custom silicon chips. These chips use the ARM architecture that’s found in over 95 percent of the world’s smartphones. Apple has already used such chips to great effect in its iPhone and iPad lines, building up its experience of adding custom circuitry around ARM cores to provide accelerated hardware support for things like machine learning applications.

Apple hopes to reap the advantages of tight integration between the new silicon and software running on the next version of its MacOS operating system. The company demonstrated an A.I.-enhanced version of Final Cut Pro that was able to automatically crop high resolution video to follow an off-road cyclist, for example. But this isn’t the first time an operating system specifically tailored to use ARM-based hardware has come to a consumer desktop computer.

Back in 1987, Acorn Computers released the first in its line of moderately successful Archimedes desktops. Acorn was the inventor of the ARM architecture: Its first CPU, the ARM1, was developed as an external coprocessor for power users of Acorn’s influential BBC Microcomputer (which itself used a 6502 processor, like the Apple II and Commodore 64). The ARM1 was never publicly released, but the ARM2 became the basis of the first Archimedes.

Along with the ARM2, Acorn also developed its own tightly integrated multitasking operating system for the Archimedes, RISC OS. RISC OS comes with a graphical user interface, a number of core applications, and a super-advanced version of the Basic programming language created for the BBC Microcomputer. One of the nice things about BBC Basic is that it lets you mix assembler code and Basic very easily, letting you optimize programs for considerable speed boosts without getting bogged down.

Although Archimedes computers stopped being made in the 1990s, RISC OS is still alive and kicking, thanks to a dedicated group of volunteers. As well as supporting old hardware and software, they have ported RISC OS to the ARM-based Raspberry Pi. So if you have a Pi and a spare SD card, you can try it today: A stable version is available for all versions of the Pi up to the Pi 3, and a beta release is available for the Pi 4. It’s interesting to play around with, especially for interface designers looking for examples that aren’t from the well-trodden worlds of Microsoft and Apple: there’s even a web browser built in.

Ironically, the fact that ARM became deeply associated with mobile devices and largely vanished from desktop and laptop computers until now is in no small part due to Apple. Apple played a big role in the early development of the ARM architecture when it created ARM Holdings in 1990 as a joint venture between itself, IC manufacturer VLSI, and Acorn Computers, the original inventors of the architecture. Apple wanted a chip for its Newton PDA, and when the Newton turned out to be something of a flop, ARM Holdings began shopping its technology around.

ARM became the darling of mobile manufacturers because the architecture is very power efficient, so it won out over other processors that delivered better performance but at a battery-draining higher energy cost. Now it’s back to competing on performance, and its ability to succeed will likely depend on how well Apple has matched its surrounding silicon hardware accelerators to the needs of developers and consumers.

Quickly Embed AI Into Your Projects With Nvidia’s Jetson Nano

Post Syndicated from Stephen Cass original https://spectrum.ieee.org/geek-life/hands-on/quickly-embed-ai-into-your-projects-with-nvidias-jetson-nano

When opportunity knocks, open the door: No one has taken heed of that adage like Nvidia, which has transformed itself from a company focused on catering to the needs of video gamers to one at the heart of the artificial-intelligence revolution. In 2001, no one predicted that the same processor architecture developed to draw realistic explosions in 3D would be just the thing to power a renaissance in deep learning. But when Nvidia realized that academics were gobbling up its graphics cards, it responded, supporting researchers with the launch of the CUDA parallel computing software framework in 2006.

Since then, Nvidia has been a big player in the world of high-end embedded AI applications, where teams of highly trained (and paid) engineers have used its hardware for things like autonomous vehicles. Now the company claims to be making it easy for even hobbyists to use embedded machine learning, with its US $100 Jetson Nano dev kit, which was originally launched in early 2019 and rereleased this March with several upgrades. So, I set out to see just how easy it was: Could I, for example, quickly and cheaply make a camera that could recognize and track chosen objects?

Embedded machine learning is evolving rapidly. In April 2019, Hands On looked at Google’s Coral Dev Board, which incorporates the company’s Edge tensor processing unit (TPU), and in July 2019, IEEE Spectrum featured Adafruit’s software library, which lets even a handheld game device do simple speech recognition. The Jetson Nano is closer to the Coral Dev board: With its 128 parallel processing cores, it, like the Coral, is powerful enough to handle a real-time video feed, and both have Raspberry Pi–style 40-pin GPIO connectors for driving external hardware.

But the Coral Dev board is designed to provide the minimal amount of support needed to prototype embedded applications. Its Edge TPU is on a removable module that can be plugged directly into target hardware as development progresses. The Coral carrier board that the processor module sits on has only one large USB port and is generally intended to be operated over a remote connection via a USB-C or Ethernet connector rather than being plugged in to a keyboard and screen directly.

The Jetson Nano has a similar module-and-carrier board approach, but its carrier board has a lot more going for stand-alone experimentation. It has four USB 3.0 ports for peripherals, HDMI and DisplayPort connectors, a Micro-USB port to supply power or allow remote operation, an Ethernet port, two ribbon connectors for attaching Raspberry Pi–compatible camera modules, and a barrel jack socket for providing the additional power needed for intensive computations. (I appreciated this last touch, because the Coral Dev board uses two USB-C ports side by side, one for power and the other for data, and I was always confusing the two.)

Nvidia’s “Getting Started” instructions stepped me through downloading a customized version of the Ubuntu operating system designed for the Nano. This includes things like a Nano version of the established Raspberry Pi RPi.GPIO library for accessing the GPIO header from Python scripts.

Then, as per the instructions, I began the free “Welcome to Getting Started With AI on Jetson Nano!” course. This includes an introductory primer on neural networks, followed by details on how to use the Nano in remote mode. In this mode, the Nano runs a local Web server that hosts a Jupyter notebook, the tool of choice for many researchers because it allows explanatory text and images to be intermingled with Python code.

Unfortunately, I couldn’t connect to the Nano’s server: After some poking around online, I discovered that the Ubuntu version required for the “Welcome…” course is different from the version linked in the “Getting Started” instructions. There are actually a few places where Nvidia’s generally excellent documentation, videos, and other tutorials have this kind of rough edge: Sometimes it’s because things haven’t been updated for the new board; other times it’s simple mistakes, such as a sample script used for testing the output of the GPIO header that tries to import the original RPi.GPIO library rather than the Jetson.GPIO library.

But soon I was up and running with the well-known ResNet-18 pretrained neural network, which can classify 1,000 objects. Through a provided notebook it is easy to modify the last layer of the network so that it recognizes just two things—say, thumbs-up and thumbs-down gestures—using images directly captured from the USB camera, and then retrain the network for gesture recognition.
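The course’s notebook handles this through its own helper code, but the underlying idea—freeze a pretrained network and retrain only a new final layer—looks roughly like this generic PyTorch sketch (an equivalent under stated assumptions, not the notebook’s actual code):

```python
import torch
import torchvision

# Generic transfer-learning sketch: swap ResNet-18's 1,000-class head for a
# two-class head and train only that layer. Not the Nano course's own code.

model = torchvision.models.resnet18(pretrained=True)   # ImageNet-pretrained network
for param in model.parameters():
    param.requires_grad = False                         # freeze the pretrained layers

model.fc = torch.nn.Linear(model.fc.in_features, 2)     # new head: e.g. thumbs-up / thumbs-down

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

def train_step(images, labels):
    """One training step; `images` is a batch of camera frames, `labels` is 0 or 1."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```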

Then I tried something a little more ambitious. Following along with Nvidia’s suggested “Hello AI World” tutorial, I was soon running a 10-line Python script directly on the Nano’s desktop that did real-time recognition with the ResNet-18 network. With a little bit of tweaking, I was able to extract the ID and center coordinates of every object recognized. A little more modification and the script toggled four GPIO pins to indicate if a particular type of object—say a bottle, or person—was above, below, left of, or right of center in the camera’s view.
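The recognition itself came from Nvidia’s jetson-inference examples, so in the Python sketch below that part is stubbed out; it shows only the GPIO side of the logic, with pin numbers and thresholds chosen arbitrarily for illustration:

```python
import Jetson.GPIO as GPIO

# Sketch of the GPIO logic described above. The object-detection call is stubbed
# out; pin numbers and center thresholds are arbitrary illustrative choices.

PINS = {"up": 11, "down": 13, "left": 15, "right": 16}   # 40-pin header, BOARD numbering
FRAME_W, FRAME_H = 1280, 720

GPIO.setmode(GPIO.BOARD)
for pin in PINS.values():
    GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

def signal_offset(cx, cy):
    """Raise one pin per axis when the detected object's center is off-center."""
    GPIO.output(PINS["left"],  GPIO.HIGH if cx < FRAME_W * 0.4 else GPIO.LOW)
    GPIO.output(PINS["right"], GPIO.HIGH if cx > FRAME_W * 0.6 else GPIO.LOW)
    GPIO.output(PINS["up"],    GPIO.HIGH if cy < FRAME_H * 0.4 else GPIO.LOW)
    GPIO.output(PINS["down"],  GPIO.HIGH if cy > FRAME_H * 0.6 else GPIO.LOW)

# Example: an object detected left of and above center raises "left" and "up".
signal_offset(200, 150)
```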

I hooked up an Arduino Uno with a motor shield to drive the two servos in a $19 pan/tilt head that I bought from Adafruit. I mounted my USB camera on the pan/tilt head and wired the Arduino to monitor the Nano’s GPIO pins. When the Nano senses an off-center object and turns on the appropriate GPIO pin, the Arduino nudges the pan/tilt head in the direction that should bring the object toward the center of its field of view.

I was genuinely surprised how quickly I was able to go from opening the Nano’s box to a working autonomous tracking camera (not counting the time I spent waiting for various supporting bits and pieces to arrive in the mail while isolated from my usual workbench at the Spectrum office!). And there are plenty of avenues for further expansion and experimentation. If you’re looking to merge physical computing with AI, the Jetson Nano is a terrific place to start. 

This article appears in the July 2020 print issue as “Nvidia Makes It Easy to Embed AI.”

SpaceX Returns U.S. Astronauts to Space

Post Syndicated from Stephen Cass original https://spectrum.ieee.org/tech-talk/aerospace/space-flight/spacex-returns-us-astronauts-to-space

With a “let’s light this candle,” an ear-shattering roar, and a blaze of light in Florida this afternoon, the United States rejoined an exclusive club of nations: those capable of launching people into space. Since the space shuttle fleet was decommissioned in 2011, American astronauts have had to hitch a ride on Russian Soyuz spacecraft. The launch today was the second attempt, after the first attempt earlier this week was canceled due to weather concerns. All went well, including a successful recovery of the Falcon booster’s first stage, with a soft landing on the drone ship Of Course I Still Love You, stationed in the Atlantic.

This nine-year gap has highlighted how moribund the United States’ official space program has become: after all, between 1960 and 1969, NASA developed and flew three new crewed spacecraft—Gemini, the Apollo Command and Service Module, and the Lunar Module—not to mention launch boosters such as the Saturn IB and Saturn V. But this return to human spaceflight marks more than just a return to the status quo; it heralds a new epoch in which commercial, rather than government, entities take the lead in designing, delivering, and operating new spacecraft. The Dragon 2 spacecraft and Falcon 9 booster were both created by SpaceX, the current leader in the still-nascent world of private spaceflight.

To be clear, NASA has always relied on commercial contractors to build most of its hardware. Gemini was built by McDonnell Aircraft (now absorbed into Boeing) for example, and each of the Saturn V’s three stages was built by a separate company—Boeing, North American (now part of Boeing), and Douglas (now also part of Boeing)—with the rocket’s instrumentation, telemetry, and computer module built by IBM. But during the Apollo era, NASA kept these contractors on a very tight leash, dictating many aspects of the design, and also had the advantage of vast budgets devoted to testing and debugging. NASA also exerted complete control of the entire flight from delivery of the hardware through launch and the subsequent mission.

But today, NASA took a back seat. While NASA’s mission control in Houston remains in charge of the overall mission to the International Space Station (ISS), SpaceX’s mission control in Hawthorne, California, was responsible for the launch and flight into orbit. The Dragon capsule won’t reach the ISS until tomorrow, when it will dock and essentially become part of the station until it is time for it to return.

The two astronauts flying onboard the Dragon—Robert Behnken and Douglas Hurley—are both veterans of the space shuttle program. SpaceX’s replacement is like the shuttle in that it is intended to be a largely reusable mode of transport, including a booster that can fly itself home to a safe landing, but the overall approach is more reminiscent of the Gemini spacecraft of the 1960s than anything since.

Few remember today, but the snub-nosed, matte-black-and-white Gemini spacecraft, with its stylish red trim, was originally designed with reusability in mind. Components were arranged to make servicing between flights easier, and a version was in the works to bring astronauts to and from a military space station, the Manned Orbiting Laboratory (MOL). The MOL was canceled abruptly, and so only one refurbished Gemini capsule was ever reflown in an uncrewed test, and its planned role as a reusable space station visitor was largely forgotten.

Like Gemini, the Dragon 2 is composed of two main components: a bell-shaped crew compartment with a curved heatshield, and an unpressurized “trunk” section behind the heatshield that both acts as an adapter for mating the spacecraft to the launch rocket and provides additional payload space. Like Gemini, when returning to Earth, the unpressurized section is jettisoned, and the crew capsule alone makes a so-called “blunt body” descent through the atmosphere before releasing parachutes and splashing down in the ocean for pickup.

But the Dragon 2 also features enormous improvements. For one thing, it’s much bigger than Gemini, capable of carrying up to seven people, compared to Gemini’s two. The unpressurized section is also coated with solar cells to generate power, a first for U.S. crewed spacecraft, which previously relied exclusively on batteries or fuel cells. The crew capsule is also outfitted with rockets that are powerful enough for orbital maneuvering, while Gemini relied on thrusters in its disposable section. These rockets also mean that the crew capsule is capable of being its own launch escape system, pulling the crew to safety in the event of a booster failure. Gemini relied on two airplane-style ejector seats for launch emergencies, something that few astronauts seemed to have any confidence in.

The Dragon spacecraft also has a sophisticated autonomous guidance system: in theory, astronauts could just sit back at launch and let the spacecraft fly itself all the way to a docking at the ISS. However, during this proving flight, the astronauts will take control of the spacecraft to test the manual backup controls, albeit in a much slicker manner than the old Gemini days, when Buzz Aldrin had to bust out a sextant to make a rendezvous after Gemini 12’s radar stopped working. The new system interface is more reminiscent of a video game than a piece of navigation equipment—in fact, you can try your own hand at space station docking using a replica of the interface online.

Exactly how long the crew will stay onboard the ISS is unknown, although there is a limit of 119 days due to concerns about the longevity of the Dragon’s solar panels in orbit.

This Classic Calculator Was Literally Reverse Engineered From the Bare Metal


Post Syndicated from Stephen Cass original https://spectrum.ieee.org/geek-life/hands-on/this-classic-calculator-was-literally-reverse-engineered-from-the-bare-metal

Was the Sinclair Scientific calculator elegant? It certainly was a hit, gracing the cover of publications like Popular Mechanics after its release in 1974. Cleverly written firmware dragooned its limited processor, intended only for basic arithmetic, into performing way beyond specifications. This allowed Sinclair to sell a scientific calculator to countless folks who otherwise could not have afforded one. But it was also slow and sometimes inaccurate, provided barely enough mathematical functions to qualify as a scientific calculator, and was difficult for the uninitiated to use.

I’d vaguely known of this calculator because of its place as a milestone toward the birth of the British microcomputer industry and such beloved machines as the Sinclair ZX Spectrum. So I clicked when I came across Chris Chung’s replica kit of the calculator on the Tindie marketplace. Then I read the description of how the original calculator worked—scientific notation only? no “equals” button?—and how the replica reproduced its behavior, by using an emulator running firmware that had been reverse engineered by visually examining the metal of an original processor. This I had to try.

Let’s first get to the hardware. The kit is one of a number of Sinclair calculator replicas, but it wins points for simplicity: One chip and a credit-card-size printed circuit board are combined with a small handful of discrete components. Chung actually offers two kits: his original, developed in 2014 but put on Tindie in late 2019, displays numbers using two small QDSP-6064 bubble LED modules [PDF], which have the classic look of 1970s calculators but are long discontinued and hard to get. His 2020 update of the kit uses modern seven-segment LED displays. The difference in rarity is reflected in the price: The 2020 version costs US $39, while the original version costs $79. However, in a nice touch, the original version lets you have your cake and eat it with regard to the bubble LEDs: The through holes in the PCB are sized to create a friction fit. This means you don’t have to solder in the modules, preserving them for some future project.

Both versions are functionally identical, based around an MSP430 microcontroller. The MSP430 eliminates the need for most of the other components found in the Sinclair Scientific, and runs an emulator of the TMS080x family of chips. The TMS080x family was built by Texas Instruments (TI), and specific versions, like the TMS0805 used in the Sinclair Scientific, were differentiated by the contents of the chip’s 3,520-bit ROM.

For many years, how Sinclair coaxed this chip into performing magic remained locked away in TMS0805 chip ROMs. That began to change in 2013, when Ken Shirriff got wind of the Sinclair Scientific after getting involved with the Visual 6502 team. This group likes to reverse engineer classic chips, for which the original design drawings are often lost. Sometimes this involves etching chip packages away with acid and carefully photographing the exposed silicon dies with a microscope to see the individual transistors. Shirriff was able to create a generic simulator of a TMS080x chip in JavaScript just by studying Texas Instruments’ patent filings, but the specific code used in the Sinclair’s TMS0805’s ROM still eluded him, until Visual 6502 team member John McMaster took photographs of an exposed die in 2014.

Shirriff is no stranger to IEEE Spectrum due to his extensive work in the history of computing, so I emailed him to ask how he’d gone from a micrograph to working code. By looking at how the metal oxide gates were laid down in the ROM section of the chip, “I was able to extract the raw bits the same day,” Shirriff wrote back. “Phil Mainwaring, Ed Spittles, and I spent another day figuring out how the raw bits corresponded to the code…. The code was 320 words of 11 bits, but the ROM was physically 55 rows and 64 columns…. Through a combination of examining the circuitry, analyzing patterns in the bits, and brute-force trying a bunch of combinations, we figured out the arrangement and could extract the code.”
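The arithmetic checks out—55 rows × 64 columns is 3,520 bits, exactly 320 words of 11 bits—and the Python sketch below shows the kind of regrouping involved. The interleaving used here is an invented example, not the arrangement the team actually worked out:

```python
# 55 rows x 64 columns = 3,520 bits = 320 words x 11 bits. This sketch shows the
# *kind* of rearrangement involved; the layout assumed below is purely illustrative.

ROWS, COLS, WORD_BITS = 55, 64, 11
assert ROWS * COLS == 320 * WORD_BITS

def words_from_grid(grid):
    """grid: 55 rows x 64 columns of bits. Returns 320 eleven-bit words under an
    assumed layout in which each column holds 5 words, one per 11-row block."""
    words = []
    for col in range(COLS):
        for block in range(ROWS // WORD_BITS):   # 5 word-blocks per column
            bits = [grid[block * WORD_BITS + b][col] for b in range(WORD_BITS)]
            words.append(int("".join(map(str, bits)), 2))
    return words

# Quick self-check with a dummy all-ones grid:
dummy = [[1] * COLS for _ in range(ROWS)]
assert len(words_from_grid(dummy)) == 320
assert words_from_grid(dummy)[0] == 0b11111111111
```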

Once he had the code loaded into his simulator, Shirriff could tease out its workings. The algorithms used throughout “were essentially the simplest brute-force algorithms that could get an answer. But there were some interesting mathematical tricks they used to get more accuracy, not to mention programming tricks to get the code to fit,” he explained. (If you want detailed explanations, Shirriff maintains an online version of his simulator that lets you step through the code, line by line.)

The Sinclair Scientific was able to reduce complexity by using reverse Polish notation, in which mathematical operators come after the numbers they are operating on—for instance, “5 + 4 =” becomes “5 4 +.” The trigonometric functions used an iterated approximation technique that could take several seconds to produce results and were often accurate only to the first three significant figures. The calculator also used fixed scientific notation for everything—there was no way to enter a decimal point. So rather than entering “521.4,” you’d enter “5214,” which is displayed as “5.214”; then you’d press “E” and enter “2,” making the number “5.214 × 10².” Only one number could be entered at a time.
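To make the entry scheme concrete, here’s a tiny Python sketch of my own (it models only the number format described above, not the calculator’s firmware): the digits build a mantissa displayed as d.ddd, “E” plus a digit sets the power of ten, and an operator is applied RPN-style once both operands exist.

```python
# Sketch of Sinclair-style number entry (not the TMS0805's real code).

def to_value(digits, exponent):
    """Entering '5214' then E 2 means 5.214 x 10^2, i.e. 521.4."""
    mantissa = int(digits) / 10 ** (len(digits) - 1)
    return mantissa * 10 ** exponent

x = to_value("5214", 2)  # 521.4
y = to_value("4", 0)     # 4.0

# "521.4 4 +" in reverse Polish notation: the operator comes last.
print(x + y)             # 525.4, give or take float rounding
```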

On paper this looks terrible, something you’d have used only if you couldn’t afford, say, the HP-35, whose designers prided themselves on accuracy and functionality (although the HP-35 also used reverse Polish notation, it did so in a more sophisticated way).

But Sinclair wasn’t trying to compete against other calculators; he was competing against slide rules. I’d read that point in histories before, but I didn’t really understand its meaning until I held this kit in my hand. By chance, I’d also recently purchased a vintage Pickett slide rule and learned how to do basic operations with it, thanks to the lessons available on the International Slide Rule Museum website. As I used the Sinclair Scientific, I was struck by how conceptually similar it was to using my slide rule. There, two- to three-figure accuracy is the norm, the rule’s sliding “cursor” means you’re passing just one number between scales, and you typically don’t bother about magnitudes—you work with the most significant figures and make a mental estimate to insert the decimal point at the end, so that 52 × 2 and 5,200 × 20 are calculated in exactly the same way.

This realization is why I feel replica kits are so important—they are an easy way to provide the understanding that only comes from operating a device with your own hands. It was a great reminder that, as folks like Henry Petroski have pointed out, good design does not exist as an abstract ideal but only within a specific context. So, was the Sinclair Scientific elegant? For me, the answer is a resounding yes.

This article appears in the June 2020 print issue as “The Sinclair Scientific, Remade.”

9 New Suggestions For Bored Engineers

Post Syndicated from Stephen Cass original https://spectrum.ieee.org/tech-talk/geek-life/hands-on/9-new-suggestions-for-bored-engineers


Like many of you, IEEE Spectrum’s staff is still working from home. As the weeks have turned into months, we’ve had to find ways to occupy our minds and hands. A few weeks ago, we looked through Spectrum’s archives to find 19 things you can do with minimal space and resources. Now here’s a new selection of ideas to keep cabin fever at bay:

Al Alcorn, Creator of Pong, Explains How Early Home Computers Owe Their Color Graphics to This One Cheap, Sleazy Trick

Post Syndicated from Stephen Cass original https://spectrum.ieee.org/tech-talk/tech-history/silicon-revolution/al-alcorn-creator-of-pong-explains-how-early-home-computers-owe-their-color-to-this-one-cheap-sleazy-trick

In March, we published a Hands On article about Matt Sarnoff’s modern homebrew computer that uses a very old hack: NTSC artifact color. This hack allows digital systems without specialized graphics hardware to produce color images by exploiting quirks in how TVs decode analog video signals.

NTSC artifact color was used most notably by the Apple II in 1977, where Steve “Woz” Wozniak’s use of the hack brought it to wide attention; it was later used in the IBM PC and TRS-80 Color Computers. But it was unclear where the idea had originally come from, so we were thrilled to see that video game and electrical engineering legend Allan Alcorn left a comment on the article with an answer: the first color computer graphics that many people ever saw owe their origin to a cheap test tool used in a Californian TV repair shop in the 1960s. IEEE Spectrum talked to Alcorn to find out more:

Stephen Cass: Analog NTSC televisions generate color by looking at the phase of a signal relative to a reference frequency. So how did you come across this color test tool, and how did it work?

Al Alcorn: When I was 13, 14, my neighbor across the street had a television repair shop. I would go down there and, at the same time, I had my father sign me up for an RCA correspondence course on radio and television repair. So, by the time I got to Berkeley, I was a journeyman TV repairman and actually paid my way through college through television. In one repair shop, there was a real cheap, sleazy color bar generator [for testing televisions]. And instead of doing color properly by synthesizing the phases and stuff like that, it simply used a crystal that was 3.58 megahertz [the carrier frequency for the color signal] minus 15.750 kilohertz, which was the horizontal scan frequency. So it slipped one phase, 360 degrees, every scan line. You put that signal on the screen and you’ve got a color bar from left to right. It really was the cheapest, sleaziest way of doing it!
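(To see why that crystal choice paints a rainbow, here’s a back-of-envelope Python sketch of mine using Alcorn’s rounded numbers—not anything from the original test tool: relative to the TV’s 3.58-megahertz color reference, the generator’s phase slips through a full 360 degrees over each scan line, so the hue sweeps smoothly from one side of the screen to the other.)

```python
# Rounded NTSC figures, for illustration only.
F_REF  = 3.58e6           # color subcarrier reference the TV locks to (Hz)
F_LINE = 15.750e3         # horizontal scan frequency (Hz)
F_GEN  = F_REF - F_LINE   # the cheap test tool's crystal

line_time = 1.0 / F_LINE                  # about 63.5 microseconds
slip = (F_REF - F_GEN) * line_time        # relative phase slip per line, in cycles
print(round(slip, 6))                     # 1.0 -> one full 360-degree slip per line

# Hue is set by relative phase, so it varies smoothly across each line:
for frac in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"{frac:.0%} across the line -> {360 * frac:.0f} degrees")
```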

SC: How did that idea of not doing NTSC “by the book” enter into your own designs?

AA: So, I learned the cheap, sleazy way doing repair work. But then I got a job at Ampex [a leader in audio/visual technology at the time]. Initially, I wanted to be an analog engineer, digital was not as interesting. At Ampex, it was the first time I saw video being done by digital circuits; they had gotten fast enough, and that opened my eyes up. [Then I went to Atari]. Nolan [Bushnell, co-founder of Atari] decided we wanted to be in the home consumer electronics space. We had done this [monochrome] arcade game [1972’s Pong] which got us going, but he always wanted to be in the consumer space. I worked with another engineer and we reduced the entire logic of the Pong game down to a single N-channel silicon chip. Anyway, part of the way into the design, Nolan said, “Oh, by the way, it has to be colored.” But I knew he was going to pull this stunt, so I’d already chosen the crystal [that drove the chip] to be 3.58 MHz, minus 15.750 kilohertz.

SC: Why did you suspect he was going to do that?

AA: Because there never was a plan. We had no outline or business plan, [it was just Nolan]. I’m sure you’ve heard that the whole idea behind the original arcade Pong was that it was a test for me just to practice, building the simplest possible game. But Nolan lied to me and said it was going to be a home product. Well, at the end it was kind of sad, a failure, because I had like 70 ICs in it, and that was [too expensive] for a home game. But [then Nolan decided] it would work for an arcade game! And near the end of making [arcade] Pong, Nolan said, “Well, where’s the sound?” I said “What do you mean, sound?” I didn’t want to add in any more parts. He said “I want the roar of the crowd of thousands applauding.” And [Ted] Dabney, the other owner, said “I want boos and hisses.” I said to them “Okay, I’ll be right back.” I just went in with a little probe, looking around the vertical sync circuit for frequencies that [happened to be in the audible range]. I found a place and used a 555 timer [to briefly connect the circuit to a loudspeaker to make blip sounds when triggered]. I said “There you go Nolan, if you don’t like it, you do it.” And he said “Okay.” Subsequently, I’ve seen articles about how brilliant the sound was! The whole idea is to get the maximum functionality for the minimum circuitry. It worked. We had $500 in the bank. We had nothing and so we put it out there. Time is of the essence.

SC: So, in the home version of Pong, the graphics would simply change color from one side of the screen to the other?

AA: Right, the whole goal for doing this was just to put on the box: “Color!” Funny story—home Pong becomes a hit. This is like in 1974, 75. It’s a big hit. And they’re creating advertisements for television. Trying to record the Pong signal onto videotape. I get a call from some studio somewhere, saying, “We can’t get it to play on the videotape recorder, why?”  I say, “Well, it’s not really video! There’s no interlace… Treat it as though it’s PAL, just run up through a standard converter.”

SC: How does Wozniak get wind of this?

AA: In those days, in Silicon Valley, we didn’t keep secrets. I hired Steve Jobs on a fluke, and he’s not an engineer. His buddy Woz was working at HP, but we were a far more fun place to hang out. We had a production floor with about 30 to 50 arcade video games being shipped, and they were on the floor being burnt in. Jobs didn’t get along with the other guys very well, so he’d work at night. Woz would come in and play while Jobs did his work, or got Woz to do it for him. And I enjoyed Woz. I mean, this guy is a genius, I mean, a savant. It’s just like, “Oh my God.”

When the Apple II was being done, I helped them. I mean, I actually loaned them my oscilloscope, I had a 465 Tektronix scope, which I still have, and they designed the Apple II with it. I designed Pong with it. I did some work, I think, on the cassette storage. And then I remember showing Woz the trick for the hi-res color, explaining, sitting him down and saying, “Okay, this is how NTSC is supposed to work.” And then I said, “Okay. Now the reality is that if you do everything at this clock [frequency] and you do this with a pulse of square waves…” And basically explained the trick. And he ran with it.  That was the tradition. I mean, it was cool. I was kind of showing off!

SC: When people today are encouraged to tinker and experiment with electronics, it’s typically using things like the Arduino, which are heavily focused on digital circuits. Do you think analog engineering has been neglected?

AA: Well, it certainly is. There was a time, I think it was in the ’90s, where it got so absurd that there just weren’t any good analog engineers out there. And you really need analog engineers on certain things. A good analog engineer at that time was highly paid. Made a lot of money because they’re just rare. So, yeah. But most kids want to be—well, just want to get rich. And the path is through programming something on an iPhone. And that’s it. You get rich and go. But there is a lot of value in analog engineering. 

Lessons Learned by NYC Makers Producing Personal Protective Equipment for Medics

Post Syndicated from Stephen Cass original https://spectrum.ieee.org/tech-talk/biomedical/devices/lessons-learned-nyc-makers-producing-personal-protective-equipment-medics

A coalition of 14 makerspaces, hospitals, and unions has organized to create and distribute face shields and other protective equipment to frontline nurses and doctors coping with a wave of COVID-19 cases in New York City.

They made their first delivery of 97 shields on 22 March, and currently have enough capacity to produce between 500 and 1,000 shields a day, according to Jake Lee, a computer science master’s student at Columbia University and spokesperson for the coalition, called NYC Makes PPE. And they have some tips for other engineers and makers itching to put their skills and facilities to use.

Show The World You Can Write A Cool Program Inside A Single Tweet

Post Syndicated from Stephen Cass original https://spectrum.ieee.org/tech-talk/computing/software/show-the-world-you-can-write-a-cool-program-inside-a-single-tweet

Want to give your coding chops a public workout? Then prove what you can do with the BBC Micro Bot. Billed as the world’s first “8-bit cloud,” and launched on 11 February, the BBC Micro Bot is a Twitter account that waits for people to tweet at it. Then the bot takes the tweet, runs it through an emulator of the classic 1980s BBC Microcomputer running Basic, and tweets back an animated gif of three seconds of the emulator’s output. It might sound like that couldn’t amount to much, but folks have been using it to demonstrate some amazing feats of programming, most notably including Eben Upton, creator of the Raspberry Pi.

“The Bot’s [output] posts received over 10 million impressions in the first few weeks, and it’s running around 1000 Basic programs per week,” said the account’s creator, Dominic Pajak, in an email interview with IEEE Spectrum.

Upton, for example, performed a coding tour de force with an implementation of Conway’s Game of Life, complete with a so-called Gosper glider gun, all running fast enough to see the gun spit out glider patterns in real time. (For those not familiar with Conway’s Game of Life, it’s a set of simple rules for cellular automata that exist on a flat grid. Cells are turned on and off based on the state of neighboring cells according to those rules. Particularly interesting patterns that emerge from the rules have been given all sorts of names.)
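If you’d like to see those rules in action without reverse engineering Upton’s tweet, here’s a plain Python sketch of my own—nothing like his hand-packed 6502 code—in which each cell’s fate depends only on how many of its eight neighbors are alive:

```python
from collections import Counter

def step(live):
    """live is a set of (x, y) cells; returns the next generation."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next turn if it has exactly 3 live neighbors,
    # or exactly 2 live neighbors and is already alive.
    return {c for c, n in neighbor_counts.items()
            if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the same glider shape, shifted one cell diagonally
```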

Upton did this by writing 150 bytes of data and machine code for the BBC Microcomputer’s original CPU, the 6502, which the emulator behind the BBC Micro Bot is comprehensive enough to handle. He then converted this binary data into tweetable text using Base64 encoding, and wrapped that data with a small Basic program that decoded it and launched the machine code. Since then, people have been using even more elaborate encoding schemes to pack even more in. 
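To see the arithmetic behind the Base64 trick, here’s a rough Python sketch of my own (the payload bytes and the wrapping BASIC line are stand-ins, not Upton’s actual program):

```python
import base64

payload = bytes(range(150))                   # stand-in for ~150 bytes of 6502 code and data
encoded = base64.b64encode(payload).decode()  # tweet-safe ASCII text

tweet = '1REM"' + encoded + '":REM decode & CALL would go here'
print(len(encoded), len(tweet) <= 280)        # 200 characters of data; still fits a tweet

assert base64.b64decode(encoded) == payload   # what the BASIC stub reconstructs in memory
```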

Pajak, who is the vice-president of business development for Arduino, created the BBC Micro Bot because he is a self-described fan of computer history and Twitter. “It struck me that I could combine the two.” He chose the BBC Micro as his target because “growing up in the United Kingdom during the 1980s, I learnt to program the BBC Micro at school. I certainly owe my career to that early start.” Pajak adds that, “There are technical reasons too. BBC Basic was largely developed by Sophie Wilson (who went on to create the Arm architecture) and was by far the best Basic implementation of the era, with some very nice features allowing for ‘minified’ code.”

Pajak explains that the bot is written in JavaScript for the Node.js runtime environment, and acts as a front end to Matt Godbolt’s JSbeeb emulation of the BBC Micro. When the bot spots a tweet intended for it, “it does a little bit of filtering and then injects the text into the emulated BBC Micro’s keyboard buffer. The bot uses ffmpeg to create a 3-second video after 30 seconds of emulation time.” Originally the bot was running on a Raspberry Pi 4, but he’s since moved it to Amazon Web Services.

Pajak has been pleasantly surprised by the response: “The fact that BBC BASIC is being used for the first time by people across the world via Twitter is definitely a surprise, and it’s great to see people discovering and having fun with it. There were quite a lot of slogans and memes being generated by users in Latin America recently. This I didn’t foresee for sure.”

The level of sophistication of the programs has risen sharply, from simple Basic programs through Upton’s Game of Life implementation and beyond. “The barriers keep getting pushed. Every now and then I have to do a double take: Can this really be done with 280 characters of code?” Pajak points to Katie Anderson’s tongue-in-cheek encoding of the Windows 3.1 logo, and the replication of a classic bouncing ball demo by Paul Malin—game giant Activision’s technical director—which, Pajak says, uses “a special encoding to squeeze 361 ASCII characters of code into a 280 Unicode character tweet.”

If you’re interested in trying to write a program for the Bot, there are a host of books and other coding advice about the BBC Micro available online, with Pajak hosting a starting set of pointers and a text encoder on his website, www.dompajak.com.

As for the future and other computers, Pajak says he’s given some tips to people who want to build similar bots for the Apple II and Commodore computers. For himself, he’s contemplating finding a way to execute the tweets on a physical BBC Micro, saying “I already have my BBC Micro connected to the Internet using an Arduino MKR1010…”

19 Suggestions for Bored Engineers

Post Syndicated from Stephen Cass original https://spectrum.ieee.org/tech-talk/geek-life/reviews/tips-for-bored-engineers

Thanks to the COVID-19 pandemic, a lot of us find ourselves working from home (including all of your editors here at IEEE Spectrum). To help stave off cabin fever, we looked through our recent archives for things that would best occupy your minds and hands, even if you don’t have much space. In the coming days we’ll be on the lookout for more ideas—so if you have any tips or suggestions for your fellow readers, please send them to [email protected].

4 Missions Will Search the Martian Atmosphere and Soil for Signs of Life

Post Syndicated from Stephen Cass original https://spectrum.ieee.org/aerospace/robotic-exploration/4-missions-will-search-the-martian-atmosphere-and-soil-for-signs-of-life


In 2020, Earth and Mars will align. Every 26 months, a launch window for a low-energy “Hohmann” transfer orbit opens between the two. This July, no fewer than four missions are hoping to begin the nine-month journey.

ExoMars 2020

A joint mission between the European Space Agency (ESA) and Roscosmos, the Russian space agency, ExoMars 2020 is dedicated to the search for life, including anything that might be alive today. A Russian-built lander is intended to carry a European rover to the Martian surface. But the mission is technically challenging, and some experts doubt that Roscosmos is up to the task. “They’re relying on a Russian-built lander, which is worrying,” says Emily Lakdawalla, a senior editor at the Planetary Society. “Russia has not launched a successful planetary mission since 1984.” ESA has also acknowledged that there’s some doubt about whether the mission will launch at all, as the parachutes that will be used during landing have run into problems during high-altitude testing.

Assuming the spacecraft does alight on the Martian soil, the lander, called Kazachok, will monitor the climate and probe for subsurface water at the landing location. Meanwhile the rover, named Rosalind Franklin, will go further afield. Rosalind Franklin is equipped with a drill that can penetrate up to 2 meters beneath the ground, where organic molecules are more likely to be preserved than on the harsh surface. An onboard laboratory will analyze samples and in particular look at the molecules’ “handedness.” Complex molecules typically come in “right-handed” and “left-handed” versions. Nonbiological chemical reactions involve both in equal measure, but biology as we know it strongly prefers either right- or left-handed molecules in any given metabolic reaction.

Mars 2020

NASA’s mission will explore the Martian surface using substantially the same rover design as that of the plutonium-powered Curiosity, which has been trundling around the Gale crater since 2012. The big difference is in the payload. Mars 2020 has a suite of scientific instruments based on the premise that Mars was much warmer and wetter billions of years ago, and that life may have originated then. Mars 2020 will look for evidence of ancient habitable environments and for chemical signatures of any microbes that lived in them.

Mars 2020 is also a test bed, with two major prototype systems. The first is the Mars Oxygen In-Situ Resource Utilization Experiment [PDF], or MOXIE. About the size of a car battery, MOXIE converts the carbon dioxide atmosphere of Mars into oxygen using electrolysis. Demonstrating this technology, albeit on a small scale, is a critical step toward human exploration of Mars.
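For a back-of-envelope sense of the chemistry (my own arithmetic, not a MOXIE specification), the reaction splits carbon dioxide into carbon monoxide and oxygen—2 CO2 → 2 CO + O2—so producing 10 grams of oxygen consumes roughly 27.5 grams of CO2:

```python
# Back-of-envelope stoichiometry for CO2 electrolysis (2 CO2 -> 2 CO + O2);
# molar masses are standard values, and the target figure is just an example.
M_CO2, M_O2 = 44.01, 32.00            # grams per mole

grams_O2_wanted = 10.0                # example target, not a MOXIE spec
moles_O2 = grams_O2_wanted / M_O2
grams_CO2_needed = 2 * moles_O2 * M_CO2
print(round(grams_CO2_needed, 1))     # about 27.5 g of CO2 per 10 g of O2
```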

The other prototype is the Mars Helicopter, a solar-powered, 1.8-kilogram autonomous twin-rotor drone. To cope with the thin Martian atmosphere, the rotors will spin 10 times as fast as those of a helicopter on Earth. If it can successfully take off, it will be the first aircraft to fly on another world. Currently five flights are planned for distances of up to a few hundred meters. “It makes the [rover] engineers twitch a little bit to help to accommodate that kind of stuff, but it is awfully fun,” says Lakdawalla.

HX-1

In 2011, the first attempt by the China National Space Administration (CNSA) to send a probe to Mars failed when the Russian rocket carrying it couldn’t get out of Earth orbit. So the Chinese are going with their own Long March rocket for their next try. The HX-1 mission will search for signs of life with an orbiter and a small rover. The rover has been fitted with a ground-penetrating radar that should have a much deeper range—up to 100 meters—than similar radars on ExoMars and Mars 2020, which can reach only a few meters down.

Although technical details about the mission are sparse, the CNSA has momentum behind it. “It’s really astonishing what they have accomplished with their lunar missions,” says Lakdawalla. She’s impressed that the Chinese engineers succeeded in depositing a lander and rover on the moon’s surface in 2013 “on their first-ever effort to land on another world.” Such success bodes well for Mars, she says: “They’ve clearly demonstrated technical capability and interplanetary navigation.”

Hope

The United Arab Emirates is another country hoping to make a big leap. The UAE space agency was officially established only in 2014, and the country’s space experience is limited to a few Earth-orbiting observation and communications satellites. Now the country plans to send an orbiter called Hope to Mars to study its atmosphere, in hopes of understanding how it evolved into today’s tenuous blanket of carbon dioxide and other trace gases. It plans to use a Japanese rocket and launch site.

Despite the UAE’s inexperience, its chances of entering the Mars club aren’t bad, explains Lakdawalla. “It is just an orbiter, so I think success on the first try is a lot easier, given how routine it’s become to put things in orbit,” she says. Lakdawalla notes that the UAE has excellent international partnerships, and says the new space agency isn’t focused on doing all of the technology development itself: “The UAE is about hiring the best consultants and getting help to leapfrog their own technology.”

This article appears in the January 2020 print issue as “Life on Mars?”

Build a Vector Graphics Display Clock with a Cathode-Ray Tube

Post Syndicated from Stephen Cass original https://spectrum.ieee.org/geek-life/hands-on/build-a-vector-graphics-display-clock-with-a-cathoderay-tube

Once upon a time, there was a type of particle accelerator so popular that it was mass-produced by the million. Engineers and scientists at their benches, and folks at home in their living rooms, would carefully arrange themselves to watch the dancing glow of a beam of subatomic particles smashing into a phosphorescent screen. This attention-hogging accelerator was, of course, the cathode-ray tube (CRT), which reigned supreme as the electronic display technology for decades, before being unceremoniously relegated to the figurative and literal trash heap of history by flat-screen technologies.

But there are still CRTs to be found, and you can put some of them to great use with Howard Constantine’s US $100 Oscilloscope Clock Kit. The kit works with many CRTs that were designed to be used in oscilloscope-type displays and operated with relatively low voltages—in the range of hundreds, rather than thousands, of volts.

As CRTs become as unfamiliar to modern engineers as amplifier tubes did to the transistor generation, a quick recap of a few salient points is likely in order here. Oscilloscope-type CRTs are different from those found in televisions and most computer monitors. TV-type CRTs use magnetic fields generated by coils located outside the vacuum tube to deflect an electron beam, which is scanned line by line across the screen to build up what’s called a raster image. Oscilloscope-type CRTs use two pairs of horizontally and vertically oriented plates located inside the tube to electrostatically deflect the beam: This approach was handy for oscilloscopes because an analog input voltage can control the vertical position of the beam directly (albeit filtered through some signal-conditioning circuitry), while an internal timing circuit controls the horizontal deflection, letting engineers see time-varying signals.

The beam’s horizontal deflection can also be controlled with a second input voltage (called X-Y, or vector, mode). This made oscilloscope CRTs appealing to early computer-graphics pioneers, who pressed them into service as displays for things like radar defense networks. Some seminal computer games were made using vector displays, including arguably the first-ever video game, the 1958 Tennis for Two, and the massive 1979 arcade hit Asteroids. But vector displays struggled when it came to, say, showing bitmaps or even simply a large area filled with a solid color, and so eventually lost out to raster displays.
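To make “vector mode” a bit more concrete, here’s a short Python sketch of my own (not the clock kit’s firmware) that builds the X-Y coordinate list for a clock face and a minute hand; each pair would become a pair of deflection-plate voltages, and replaying the list fast enough—with a little help from phosphor persistence—draws a steady image.

```python
import math

def circle(cx, cy, r, n=120):
    """Points around the clock-face outline."""
    return [(cx + r * math.cos(2 * math.pi * k / n),
             cy + r * math.sin(2 * math.pi * k / n)) for k in range(n)]

def hand(cx, cy, length, minutes, n=20):
    """Points along a minute hand; 0 minutes points straight up."""
    angle = math.pi / 2 - 2 * math.pi * minutes / 60
    return [(cx + length * t / n * math.cos(angle),
             cy + length * t / n * math.sin(angle)) for t in range(n + 1)]

# Each (x, y) pair maps to horizontal and vertical deflection-plate voltages.
display_list = circle(0, 0, 1.0) + hand(0, 0, 0.9, minutes=20)
print(len(display_list), display_list[0])
```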

But CRT vector displays have a distinct look that’s hard to replicate—not least when the screen is round, which was the easiest shape to make back in the day. (I do find it a little ironic that after decades of engineers striving to create the most perfectly flat and rectangular displays possible, smartphone makers have begun rhapsodizing about offering partially curved screens with rounded corners.)

The Oscilloscope Clock Kit allows you to recapture that look. The kit itself comprises the printed circuit board and all the accompanying components, including two PIC microcontrollers—one controls the clock and generates the graphics and text, while the other is dedicated to periodically shifting the clock around the screen to avoid phosphor burn-in. Normally you have to supply your own enclosure and CRT (eBay usually has listings), but as Constantine has a limited stock he uses to make fully assembled clocks for sale, he kindly let me buy a 7-centimeter-wide DG7-6 from him for $75 and one of his acrylic enclosures for $40.

Getting a clear enclosure was important to me because I wanted to be able to show off the tube for maximum effect, while also keeping fingers safely away from the electronics. This is important because even though the CRT is considered “low voltage,” that still means 300 volts in some parts of the circuitry. Perhaps I was particularly skittish on this topic because of childhood memories: When I demonstrated a burgeoning propensity for taking things apart with a screwdriver, my father, a broadcast engineer, headed off any designs I might have had on our TV set with lurid true tales of people being zapped by the charge stored in a television’s high-voltage capacitors even after the set had been unplugged.

Fortunately for my nerves, the clock kit’s smaller capacitors pose much less of a hazard. Nonetheless, now that even 5 V is increasingly shunned as a circuit-melting torrent of electricity, I recommend builders work with a little more caution than they are used to, especially as checking stages of the build requires probing the board with a multimeter while it is plugged in.

Soldering the through-hole components was straightforward, although because the tallest component—a transformer—is one of the first required, it’s likely you’ll need to use some kind of circuit-board holder rather than trying to lay the board flat when working. The biggest obstacle came when it was time to wire up my DG7-6 CRT. The kit provides 10 leads for operating a CRT—two that supply a few volts of alternating power to the heater filament to raise the cathode temperature enough for thermionic emission to come into play, one that has a constant negative voltage of about 295 V to provide the cathode’s supply of electrons, three that connect to a train of accelerating and focusing electrodes in the neck of the CRT, and four that connect to the horizontal and vertical deflecting plate pairs. But my DG7-6 had only nine pins! A check of the DG7-6’s data sheet [PDF] (which I found on Frank Philipse’s wonderful archive) showed that the cathode and one side of the heater filament shared a pin. A quick email to Constantine revealed a simple fix: All I had to do was jumper the cathode connection to one of the filament leads. After that, the instructions walked me through the calibration required to produce a sharp, bright test dot in the center of the screen rather than a fuzzy, dim elliptical blob off to the side.

When building the kit, you can incorporate one of two optional $40 add-on circuits that eliminate the need to set the clock manually—a Wi-Fi module and a GPS module. Without one of those, the clock automatically detects whether it is plugged into a 50- or 60-hertz wall socket and uses that frequency as a reference time base. Setting the clock manually is simply a matter of pushing a “fast set” and “slow set” button until the clock shows the correct time.
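Here’s a minimal sketch of my own, in Python rather than the kit’s PIC firmware, of how a mains-referenced time base works: count cycles of the 50- or 60-hertz supply and carry over into seconds, minutes, and hours.

```python
def tick(state, cycles_per_second):
    """Advance the clock by one mains cycle; state is (cycles, hours, minutes, seconds)."""
    cycles, h, m, s = state
    cycles += 1
    if cycles == cycles_per_second:
        cycles, s = 0, s + 1
        if s == 60:
            s, m = 0, m + 1
        if m == 60:
            m, h = 0, m and h + 1 or h + 1  # carry the hour
        if h == 24:
            h = 0
    return (cycles, h, m, s)

state = (0, 12, 0, 0)
for _ in range(60 * 90):   # 90 seconds' worth of 60 Hz mains cycles
    state = tick(state, 60)
print(state)               # (0, 12, 1, 30)
```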

The end result is an eye-catching timepiece that restores a CRT to its rightful place: the center of attention.

This article appears in the January 2020 print issue as “Oscilloscope Clock.”

IEEE Spectrum’s 2019 Gift Guide

Post Syndicated from Stephen Cass original https://spectrum.ieee.org/geek-life/tools-toys/ieee-spectrums-2019-gift-guide

  • Pinball

    As IEEE Spectrum reported at the end of last year, pinball is having a revival, driven in part by the shift to e-commerce, which is turning erstwhile big-box retail stores into cheap real estate for family entertainment centers. Modern pinball machines, with enhancements like upgradable software, are vastly more sophisticated than their electromechanical ancestors. Stern Pinball is in the vanguard of this renaissance, making home and arcade versions of many of its games. The latest title available in a home version is the US $4,500 Star Wars Pin, based on comics artwork and models inspired by the original movie trilogy.

  • Classic Microprocessor Kit

    Wichit Sirichote is a professor at King Mongkut’s Institute of Technology, in Bangkok. He’s also the maker of a terrific line of single-board computers based on classic CPUs such as the 6502, Z80, and 8088. Prices range from $85 to $175, depending on the CPU. They are bare-bones machines designed for learning and prototyping, but they are very flexible: You can upload code through an RS-232 port, plug in a standard LCD character display directly using an onboard connector, and add other custom hardware via a bus-expansion slot. Sirichote wrote his own monitor software for the boards that lets you, for example, examine the contents of CPU registers, and extensive documentation is available.

  • Artie 3000

    Artie 3000 is a modern version of the classic turtle drawing robot that could be found in classrooms connected to a microcomputer running the Logo language. But what’s really nice about the $70 Artie is that unlike with the turtles of old, you can easily program it in a variety of ways. At the most basic level, you can control Artie and its pen directly, remote-control style, and then move on up through various graphical block languages and into full-fledged Python and JavaScript. Budding programmers benefit from a gentle learning curve, and advanced coders interested in procedurally generated art get access to a cheap robot.

  • Musical Tesla Coil Kit

    It’s not going to win awards for the quality of its sound, but it is a crowd pleaser. OneTesla’s $400 Musical Tesla Coil Kit can be driven directly by a MIDI-enabled instrument or play a MIDI file. The frequency of the notes is used to modulate the output of the coil with a square wave, producing buzzing notes and impressive sparks over half a meter long. A smaller version is also available for $230.

  • Doppel

    Doppel is worn like a watch, except you wear it on the inside of your wrist rather than on the outside. A rotating weight inside creates a rhythmic vibration. The purpose of the $280 wearable is to improve focus and reduce stress, by using rhythms that are faster or slower, respectively, than your normal resting heart rate. (The accompanying smartphone app measures your heart rate when you place your finger over your phone’s camera lens and looks at changes in the ambient light that gets filtered through.) It did help reduce my anxiety levels somewhat, but I found it worked best with deliberate mindfulness techniques, so if you’re not already familiar with those, your mileage may vary.

  • Piper Computer Kit

    Minecraft is already used to introduce children to writing software. The $300 Piper Computer Kit, aimed at 8- to 13-year-olds, extends that idea to hardware. Kids first assemble the wooden case and plug together the basic components of the kit, which is based on a Raspberry Pi and comes with its own screen. While the kit includes a mouse, there is no keyboard. Instead a breadboarding module is provided, which can be used to, for example, wire up buttons to control events in Minecraft through a series of game levels.

  • Books

    If you’re looking for a nonfiction book to give, try one from the Platform Studies series from MIT Press. These books describe influential platforms in the history of digital media, examining how the specific technical details and hardware capabilities of each platform (or, in academia-speak, “affordances”) shaped the software that ran on them and how that combination in turn affected the industry and wider culture. The most recent 2019 title (The Media Snatcher) dissects the PC Engine/TurboGrafx-16 console. The highlights of the series so far for me are Racing the Beam, about the Atari 2600, and Minitel.

    In the fiction department, Spectrum’s recommendation is Fall, or Dodge in Hell (William Morrow, 2019) by Neal Stephenson, author of the cyberpunk classic Snow Crash. Fall is a sequel of sorts to his 2011 novel, Reamde, but it can be read completely independently of Reamde (and is in fact a much better book). Reamde is an entertaining enough technothriller, but Fall is Stephenson at his best, weaving together deep philosophical questions against the background of a compelling vision of the future (a chapter featuring a journey across an America that’s been utterly fragmented by competing social-media feeds is plausibly chilling). In Fall, the lead character awakens in a digital afterlife, in which his first order of business is to create a universe to live in.

This article appears in the December 2019 print issue as “2019 Holiday Gift Guide.”