All posts by David Schneider

Deep Learning at the Speed of Light

Post Syndicated from David Schneider original

In 2011, Marc Andreessen, general partner of venture capital firm Andreessen Horowitz, wrote an influential article in The Wall Street Journal titled “Why Software Is Eating the World.” A decade later, it’s deep learning that’s eating the world.

Deep learning, which is to say artificial neural networks with many hidden layers, is regularly stunning us with solutions to real-world problems. And it is doing that in more and more realms, including natural-language processing, fraud detection, image recognition, and autonomous driving. Indeed, these neural networks are getting better by the day.

But these advances come at an enormous price in the computing resources and energy they consume. So it’s no wonder that engineers and computer scientists are making huge efforts to figure out ways to train and run deep neural networks more efficiently.

An ambitious new strategy that’s coming to the fore this year is to perform many of the required mathematical calculations using photons rather than electrons. In particular, one company, Lightmatter, will begin marketing a neural-network accelerator chip late this year that calculates with light. It will be a refinement of the prototype Mars chip that the company showed off at the virtual Hot Chips conference last August.

While the development of a commercial optical accelerator for deep learning is a remarkable accomplishment, the general idea of computing with light is not new. Engineers regularly resorted to this tactic in the 1960s and ’70s, when electronic digital computers were too feeble to perform the complex calculations needed to process synthetic-aperture radar data. So they processed the data in the analog domain, using light.

Because of the subsequent Moore’s Law gains in what could be done with digital electronics, optical computing never really caught on, despite the ascendancy of light as a vehicle for data communications. But all that may be about to change: Moore’s Law may be nearing an end, just as the computing demands of deep learning are exploding.

There aren’t many ways to deal with this problem. Deep-learning researchers may develop more efficient algorithms, sure, but it’s hard to imagine those gains will be sufficient. “I challenge you to lock a bunch of theorists in a room and have them come up with a better algorithm every 18 months,” says Nicholas Harris, CEO of Lightmatter. That’s why he and his colleagues are bent on “developing a new compute technology that doesn’t rely on the transistor.”

So what then does it rely on?

The fundamental component in Lightmatter’s chip is a Mach-Zehnder interferometer. This optical device was jointly invented by Ludwig Mach and Ludwig Zehnder in the 1890s. But only recently have such optical devices been miniaturized to the point where large numbers of them can be integrated onto a chip and used to perform the matrix multiplications involved in neural-network calculations.

Keren Bergman, a professor of electrical engineering and the director of the Lightwave Research Laboratory at Columbia University, in New York City, explains that these feats have become possible only in the last few years because of the maturing of the manufacturing ecosystem for integrated photonics, needed to make photonic chips for communications. “What you would do on a bench 30 years ago, now they can put it all on a chip,” she says.

Processing analog signals carried by light slashes energy costs and boosts the speed of calculations, but the precision can’t match what’s possible in the digital domain. “We have an 8-bit-equivalent system,” says Harris. This limits his company’s chip to neural-network inference calculations—the ones that are carried out after the network has been trained. Harris and his colleagues hope their technology might one day be applied to training neural networks, too, but training demands more precision than their optical processor can now provide.

Lightmatter is not alone in the quest to harness light for neural-network calculations. Other startups working along these lines include Fathom Computing, LightIntelligence, LightOn, Luminous, and Optalysis. One of these, Luminous, hopes to apply optical computing to spiking neural networks, which take advantage of the way the neurons of the brain process information—perhaps accounting for why the human brain can do the remarkable things it does using just a dozen or so watts.

Luminous expects to develop practical systems sometime between 2022 and 2025. So we’ll have to wait a few years yet to see what pans out with its approach. But many are excited about the prospects, including Bill Gates, one of the company’s high-profile investors.

It’s clear, though, that the computing resources being dedicated to artificial-intelligence systems can’t keep growing at the current rate, doubling every three to four months. Engineers are now keen to harness integrated photonics to address this challenge with a new class of computing machines that are dramatically different from conventional electronic chips yet are now practical to manufacture. Bergman boasts: “We have the ability to make devices that in the past could only be imagined.”

This article appears in the January 2021 print issue as “Deep Learning at the Speed of Light.”

At Last, First Light for the James Webb Space Telescope



Back in 1990, after significant cost overruns and delays, the Hubble Space Telescope was finally carried into orbit aboard the space shuttle. Astronomers rejoiced, but within a few weeks, elation turned to dread. Hubble wasn’t able to achieve anything like the results anticipated, because, simply put, its 2.4-meter mirror was made in the wrong shape.

For a time, it appeared that the US $5 billion spent on the project had been a colossal waste. Then NASA came up with a plan: Astronauts would compensate for Hubble’s distorted main mirror by adding small secondary mirrors shaped in just the right way to correct the flaw. Three years later, that fix was carried out, and Hubble’s mission to probe some of the faintest objects in the sky was saved.

Fast-forward three decades. Later this year, the James Webb Space Telescope is slated to be launched. Like Hubble, the Webb telescope project has been plagued by cost overruns and delays. At the turn of the millennium, the cost was thought to be $1 billion. But the final bill will likely be close to $10 billion.

Unlike Hubble, the Webb telescope won’t be orbiting Earth. Instead, it will be sent to the Earth-Sun L2 Lagrange point, which is about 1.5 million kilometers away in the opposite direction from the sun. This point is cloaked in semidarkness, with Earth shadowing much of the sun. Such positioning is good for observing, but bad for solar powering. So the telescope will circle around L2 instead of positioning itself there.

The downside will be that the telescope will be too distant to service, at least with any kind of spacecraft available now. “Webb’s peril is that it will be parked in space at a place that we can’t get back to if something goes wrong,” says Eric Chaisson, an astrophysicist at Harvard who was a senior scientist at the Space Telescope Science Institute, at Johns Hopkins, when Hubble was commissioned. “Given its complexity, it’s hard to believe something won’t, though we can all hope, as I do, that all goes right.”

Why then send the Webb telescope so far away?

The answer is that the Webb telescope is intended to gather images of stars and galaxies created soon after the big bang. And to look back that far in time, the telescope must view objects that are very distant. These objects will be extremely faint, sure. More important, the visible light they gave off will be shifted into the infrared.

For this reason, the telescope’s detectors are sensitive to comparatively long infrared wavelengths. And those detectors would be blinded if the body of the telescope itself was giving off a lot of radiation at these wavelengths due to its heat. To avoid that, the telescope will be kept far from the sun at L2 and will be shaded by a tennis-court-size sun shield composed of five layers of aluminum- and silicon-coated DuPont Kapton. This shielding will keep the mirror of the telescope at a chilly 40 to 50 kelvins, considerably colder than Hubble’s mirror, which is kept essentially at room temperature.

Clearly, for the Webb telescope to succeed, everything about the mission has to go right. Gaining confidence in that outcome has taken longer than anyone involved in the project had envisioned. A particularly troubling episode occurred in 2018, when a shake test of the sun shield jostled some screws loose. That and other problems, attributed to human error, shook legislators’ confidence in the prime contractor, Northrop Grumman. The government convened an independent review board, which uncovered fundamental issues with how the project was being managed.

In 2018 testimony to the House Committee on Science, Space, and Technology, Wesley Bush, then the CEO of Northrop Grumman, came under fire when the chairman of the committee, Rep. Lamar Smith (R-Texas), asked him whether Northrop Grumman would agree to pay for $800 million of unexpected expenditures beyond the nominal final spending limit.

Naturally, Bush demurred. He also marshalled an argument that has been used to justify large expenditures on space missions since Sputnik: the need to demonstrate technological prowess. “It is especially important that we take on programs like Webb to demonstrate to the world that we can lead,” said Bush.

During the thoroughgoing reevaluation in 2018, launch was postponed to March of 2021. Then the pandemic hit, delaying work up and down the line. In July of 2020, launch was postponed yet again, to 31 October 2021.

Whether Northrop Grumman will really hit that target is anyone’s guess: The company did not respond to requests from IEEE Spectrum for information about how the pandemic is affecting the project timeline. But if this massive, quarter-century-long undertaking finally makes it into space this year, astronomers will no doubt be elated. Let’s just hope that elation over a space telescope doesn’t again turn into dread.

This article appears in the January 2021 print issue as “Where No One Has Seen Before.”

Use Your Bike as a Backup to Your Backup Power Supply


As this article goes to press, Hurricane Delta is making landfall not far from where Hurricanes Sally and Laura came ashore earlier in the season. I live on the East Coast of the United States, and fairly far inland, so such storms are not as frequent or intense as they are in states bordering on the Gulf of Mexico. But they are still a concern, if only because they can topple trees and cause widespread power outages. And ice storms during the winter here are also apt to bring down power lines.

I don’t mind the resulting darkness so much. What I really don’t relish, though, is losing Internet access—especially now that it is my main connection to the world due to the pandemic. And the pandemic is reducing the number of field crews available to fix power lines, making outages last that much longer. So this year, I figured I’d get prepared for a blackout in the most self-sufficient way possible.

I could, of course, just purchase a conventional small gasoline-powered generator. But I didn’t want to do that for a few reasons. In particular, I recalled stories about what went on after Hurricane Sandy in 2012, when many people using such backup generators had trouble finding fuel, including the IEEE Operations Center, in New Jersey.

My first thought was to use photovoltaics, so I purchased two 100-watt panels on Amazon for less than US $1/W. I had a 35-ampere DC-to-DC converter from an earlier project to use as a charge controller, so my next step was to spec out a deep-cycle lead-acid battery to keep things going at night.

Some experiments with a watt meter led me to conclude that a battery of at least 300 watt-hours capacity could keep four laptops, a cable modem, and a wireless router running for about 4 hours, while also charging the family’s phones and flashlights. That should get us through dark evenings. It would also suffice at a reduced load if the sun were hidden behind clouds all day. So I purchased a 12-volt, 35-ampere-hour battery (which nominally can store 420 Wh).
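That sizing arithmetic is simple enough to sketch in a few lines of Python. The individual load figures below are illustrative assumptions rather than my actual meter readings (only the battery's 12-volt, 35-ampere-hour rating comes from the text), and the usable fraction of a lead-acid battery is a judgment call:

```python
# Blackout-kit battery sizing: a back-of-the-envelope sketch.
# Individual load figures are assumptions for illustration; the battery
# numbers (12 V, 35 Ah) match what was purchased.
loads_w = {
    "four laptops": 4 * 13,              # ~13 W each under light use (assumed)
    "cable modem": 8,                    # assumed
    "wireless router": 7,                # assumed
    "phone and flashlight charging": 8,  # assumed average
}
total_w = sum(loads_w.values())          # combined draw in watts

battery_wh = 12 * 35                     # 12 V x 35 Ah = 420 Wh nominal
usable_fraction = 0.7                    # somewhat generous for lead-acid
usable_wh = battery_wh * usable_fraction
runtime_h = usable_wh / total_w

print(f"Total load: {total_w} W")            # 75 W
print(f"Usable energy: {usable_wh:.0f} Wh")  # ~294 Wh
print(f"Runtime: {runtime_h:.1f} h")         # ~3.9 h
```

With these assumed loads, the estimate lands close to the "about 4 hours" observed with the watt meter.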

But what if skies remained gray for many days in a row? Rather than trying to purchase enough battery storage to cover all reasonable eventualities, I decided that my backup source of electricity needed a backup itself, one that I could use to charge that battery during times when my photovoltaic panels won’t function. When necessary, I’d simply detach the battery from the panels and attach it to my backup source to be recharged. I briefly considered whether a wind turbine might serve that role, but then opted for something I figured would be more dependable: my two legs.

I cycle regularly for about an hour a day, during which I probably put out an average of 80 W, based on some rough calculations (I’m small and typically cycle around 22 kilometers per hour). That 80 Wh is only a fraction of what my solar panels can provide in a day, but it would be enough to keep my laptop connected to the Internet for a few hours, charge phones, and so forth. And pedaling my own power seemed like a healthy, stress-reducing activity to pass the time during a power outage.

I discovered from one blogger that it wouldn’t be hard to modify a stationary bike stand to generate electrical power. Although I had a bike stand already, mine provides frictional drag using a fluid-filled chamber, which I was reluctant to crack open. Instead, I purchased one similar to the one that the blogger used, which employs magnets and eddy currents to create drag forces on a shaft that presses against the back wheel to increase exercise intensity.

I ripped out all that drag-inducing stuff and attached a brushless motor to the shaft using a flexible coupler and a wooden spacer. Then I connected the three leads of the motor—originally intended to motorize a skateboard and now acting as a generator—to a three-phase bridge rectifier. The output of the rectifier in turn is connected to my battery through a Drok meter. This meter allows me to monitor the voltage, current, wattage, and total energy produced.

Testing my power-producing bicycle stand quickly revealed a flaw in my logic. Pedaling at a comfortable pace, meaning one that I could keep up for a long time, produced only about 60 W, not 80. In retrospect, I decided that I had failed to consider the inefficiencies of power conversion, which surely are significant because my motor/generator gets pretty hot after a while. But even 60 Wh would do in a pinch. And there’s no rule that says I couldn’t cycle for longer than an hour. Even better, I can get my kids to contribute a little sweat to support their phone and computer use during the gray days of a winter power outage.
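Working backward from the meter reading gives a rough sense of where the power goes. Only the 80-watt estimate and the 60-watt reading come from my measurements; the stage-by-stage efficiencies below are hypothetical numbers chosen just to show how losses compound:

```python
# Rider-to-battery power chain for the pedal generator. Only rider_w and
# measured_w reflect actual observations; the per-stage efficiencies are
# assumed values for illustration.
rider_w = 80.0      # estimated sustained pedaling output
measured_w = 60.0   # what the meter reported at the battery

overall_eff = measured_w / rider_w
print(f"Overall conversion efficiency: {overall_eff:.0%}")  # 75%

# One plausible breakdown that multiplies out to about 75 percent:
tire_to_roller = 0.93  # friction drive against the rear wheel
generator = 0.85       # brushless motor run as a generator (it gets hot!)
rectifier = 0.95       # three-phase bridge rectifier
chain = tire_to_roller * generator * rectifier
print(f"Hypothetical stage-by-stage product: {chain:.0%}")  # ~75%
```

The generator is likely the biggest single loss, which squares with how hot it gets after a while.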

Actually, generating power with their muscles provides a valuable lesson for kids, whether or not the power goes out. Every time they switch on a lightbulb or a television, they will think more about what this energy consumption means, given the considerable effort it takes to produce those watts yourself.

This article appears in the November 2020 print issue as “Pedaling Out of the Dark.”

The Subtle Effects of Blood Circulation Can Be Used to Detect Deep Fakes


You probably have etched in your mind the first time you saw a synthetic video of someone that looked good enough to convince you it was real. For me, that moment came in 2014, after seeing a commercial for Dove Chocolate that resurrected the actress Audrey Hepburn, who died in 1993.

Awe about what image-processing technology could accomplish changed to fear, though, a few years later, after I viewed a video that Jordan Peele and Buzzfeed had produced with the help of AI. The clip depicted Barack Obama saying things he never actually said. That video went viral, helping to alert the world to the danger of faked videos, which have become increasingly easy to create using deep learning.

Dubbed deep fakes, these videos can be used for various nefarious purposes, perhaps most troublingly for political disinformation. For this reason, Facebook and some other social-media networks prohibit such fake videos on their platforms. But enforcing such prohibitions isn’t straightforward.

Facebook, for one, is working hard to develop software that can detect deep fakes. But those efforts will no doubt just motivate the development of software for creating even better fakes that can pass muster with the available detection tools. That cat-and-mouse game will probably continue for the foreseeable future. Still, some recent research promises to give the upper hand to the fake-detecting cats, at least for the time being.

This work, done by two researchers at Binghamton University (Umur Aybars Ciftci and Lijun Yin) and one at Intel (Ilke Demir), was published in IEEE Transactions on Pattern Analysis and Machine Intelligence this past July. In an article titled “FakeCatcher: Detection of Synthetic Portrait Videos using Biological Signals,” the authors describe software they created that takes advantage of the fact that real videos of people contain physiological signals that are not visible to the eye.

In particular, video of a person’s face contains subtle shifts in color that result from pulses in blood circulation. You might imagine that these changes would be too minute to detect merely from a video, but viewing videos that have been enhanced to exaggerate these color shifts will quickly disabuse you of that notion. This phenomenon forms the basis of a technique called photoplethysmography, or PPG for short, which can be used, for example, to monitor newborns without having to attach anything to their very sensitive skin.
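To get a feel for how a pulse can be pulled out of nothing but pixel values, here is a toy reconstruction in Python. The patch of "skin" and its 72-beat-per-minute pulse are simulated, and this is emphatically not the FakeCatcher pipeline, just the underlying idea: average a color channel over a region in each frame, then look for a spectral peak at a plausible heart-rate frequency:

```python
import numpy as np

# Toy photoplethysmography (PPG): recover a simulated pulse from frame-to-
# frame color changes. Real systems work on detector-located face regions;
# here the skin patch and its 72 bpm pulse are synthesized.
fps, seconds, bpm = 30, 10, 72
t = np.arange(fps * seconds) / fps
pulse = 0.5 * np.sin(2 * np.pi * (bpm / 60) * t)  # tiny brightness wobble

rng = np.random.default_rng(0)
# frames: (time, height, width) green-channel values around a baseline of
# 120, plus the pulse and per-pixel sensor noise
frames = 120 + pulse[:, None, None] + rng.normal(0, 2, (t.size, 32, 32))

signal = frames.mean(axis=(1, 2))   # spatial average per frame
signal = signal - signal.mean()     # remove the DC level
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1 / fps)
peak_hz = freqs[spectrum.argmax()]
print(f"Recovered pulse: {peak_hz * 60:.0f} beats per minute")
```

Averaging over many pixels is what makes the faint signal recoverable: noise that is independent from pixel to pixel largely cancels, while the pulse, common to the whole patch, survives.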

Deep fakes don’t lack such circulation-induced shifts in color, but they don’t recreate them with high fidelity. The researchers at SUNY and Intel found that “biological signals are not coherently preserved in different synthetic facial parts” and that “synthetic content does not contain frames with stable PPG.” Translation: Deep fakes can’t convincingly mimic how your pulse shows up in your face.

The inconsistencies in PPG signals found in deep fakes provided these researchers with the basis for a deep-learning system of their own, dubbed FakeCatcher, which can categorize videos of a person’s face as either real or fake with greater than 90 percent accuracy. And these same three researchers followed this study with another demonstrating that this approach can be applied not only to revealing that a video is fake, but also to show what software was used to create it.

That newer work, posted to the arXiv pre-print server on 26 August, was titled “How Do the Hearts of Deep Fakes Beat? Deep Fake Source Detection via Interpreting Residuals with Biological Signals.” In it, the researchers showed that they can distinguish with greater than 90 percent accuracy whether the video was real, or which of four different deep-fake generators (DeepFakes, Face2Face, FaceSwap, or NeuralTex) was used to create a bogus video.

Will a newer generation of deep-fake generators someday be able to outwit this physiology-based approach to detection? No doubt, that will eventually happen. But for the moment, knowing that there’s a promising new means available to thwart fraudulent videos warms my deep fake–revealing heart.

Lightmatter’s Mars Chip Performs Neural-Network Calculations at the Speed of Light


For some years now, electrical engineers and computer scientists have been trying hard to figure out how to perform neural-network calculations faster and more efficiently. Indeed, the design of accelerators suitable for neural-network calculations has lately become a hotbed of activity, with the most common solution, GPUs, vying with various application-specific ICs (think Google’s Tensor Processing Unit) and field-programmable gate arrays.

Well, another contender has just entered the arena, one based on an entirely different paradigm: computing with light. An MIT spinoff called Lightmatter described its “Mars” device at last week’s Hot Chips virtual conference. Lightmatter is not the only company pursuing this novel strategy, but it seems to be ahead of its competition.

It’s somewhat misleading, though, for me to call this approach “novel.” Optical computing, in fact, has a long history. It was used as far back as the late 1950s, to process some of the first synthetic-aperture radar (SAR) images, which were constructed at a time when digital computers were not up to the task of carrying out the necessary mathematical calculations. The lack of suitable digital computers back in the day explains why engineers built analog computers of various kinds, ones based on spinning disks, sloshing fluids, continuous amounts of electric charge, or even light.

Over the decades, researchers have from time to time resurrected the idea of computing things with light, but the concept hasn’t proven widely practical for anything. Lightmatter is trying to change that now when it comes to neural-network calculations. Its Mars device has at its heart a chip that includes an analog optical processor, designed specifically to perform the mathematical operations that are fundamental to neural networks.

The key operations here are matrix multiplications, which consist of multiplying pairs of numbers and adding up the results. That you can perform addition with light isn’t surprising, given that the electromagnetic waves that constitute light add together when two light beams are combined.

What’s trickier to understand is how you can do multiplication using light. Let me sketch that out here, although for a fuller account I’d recommend reading the very nice description of its technology that Lightmatter has provided on its blog.

The basic unit of Lightmatter’s optical processor is what’s known as a Mach-Zehnder interferometer. Ludwig Mach and Ludwig Zehnder invented this device in the 1890s, so we’re not talking about something exactly modern here. What’s new is the notion of shrinking many Mach-Zehnder interferometers down to a size that’s measured in nanometers and integrating them together on one chip for the purpose of speeding up neural-network calculations.

Such an interferometer splits incoming light into two beams, which then take two different paths. The resulting two beams are then recombined. If the two paths are identical, the output looks just like the input. If, however, one of the two beams must travel farther than the other or is slowed, it falls out of phase with the other beam. At an extreme, it can be a full 180 degrees (one half wavelength) out of phase, in which case the two beams interfere destructively when recombined, and the output is entirely nulled.

More generally, the field amplitude of the light at the output will be the amplitude of the light at the input times the cosine of half of the phase difference between the light traveling in its two arms. If you can control that phase difference in some convenient way, you then have a device that works for multiplication.
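That relationship is easy to verify numerically. The sketch below models a single idealized (lossless, noiseless) interferometer: split the field, phase-shift one arm, and recombine, so the output amplitude is the input times the cosine of half the phase difference. Choosing a phase difference of 2·arccos(w) therefore multiplies by any weight w between −1 and 1. This is only the textbook math, not Lightmatter's actual circuit:

```python
import numpy as np

def mzi_multiply(x, w):
    """Multiply x by a weight w in [-1, 1] with an idealized Mach-Zehnder
    interferometer: encode x as the input field amplitude, choose the
    inter-arm phase difference so that cos(delta_phi / 2) equals w, then
    split, shift, and recombine. Loss, noise, and the 8-bit precision of
    real hardware are all ignored here."""
    delta_phi = 2 * np.arccos(w)
    # half the field goes down each arm; one arm picks up the phase shift
    out = (x / 2) * (1 + np.exp(1j * delta_phi))
    # strip the common phase factor exp(i*delta_phi/2) to expose the
    # real-valued result x * cos(delta_phi / 2)
    return (out * np.exp(-1j * delta_phi / 2)).real

print(mzi_multiply(0.8, 0.5))    # ~0.4
print(mzi_multiply(1.0, -0.25))  # ~-0.25
```

A mesh of such interferometers, each holding one weight, can then carry out the multiply-and-add pattern of a matrix multiplication as light propagates through it.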

Lightmatter’s tiny Mach-Zehnder interferometers are constructed by fashioning appropriately small waveguides for light inside a nanophotonic chip. By using materials whose refractive index depends on the electric field they are subjected to, the relative phase of the split beam can be controlled simply by applying a voltage to create an electric field, as happens when you charge a capacitor. In Lightmatter’s chip, that’s done by applying an electric field of one polarity to one arm of the interferometer and an electric field of the opposite polarity to the other arm.

As is true for a capacitor, current flows only while charge is being built up. Once there is sufficient charge to provide an electric field of the desired strength, no more current flows and thus no more energy is required. That’s important here, because it means that once you have set the value of the multiplier you want to apply, no more energy is needed if that value (a “weight” in the neural-network calculation) doesn’t subsequently change. The flow of light through the chip similarly doesn’t consume energy. So you have here a very efficient system for performing multiplication, one that operates at, well, the speed of light.

One of the weaknesses of analog computers of all kinds has been the limited accuracy of the calculations they can perform. That, too, is a shortcoming of Lightmatter’s chip—you just can’t specify numbers with as fine a resolution as you can using digital circuitry. Fortunately, the “inference” calculations that neural networks carry out once they have been trained don’t need much resolution. Training a neural network, however, does. “Training requires higher dynamic range; we’re focused on inference because of that,” says Nicholas Harris, Lightmatter’s CEO and one of the company’s founders. “We have an 8-bit-equivalent system.”
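A quick numerical experiment illustrates why 8 bits tends to be adequate for inference. Quantizing a weight matrix to 256 levels perturbs the output of a matrix-vector product only slightly; the layer size and the uniform random distributions below are arbitrary choices for illustration, not anything taken from Lightmatter's design:

```python
import numpy as np

# How much damage does 8-bit weight quantization do to one layer's output?
rng = np.random.default_rng(1)
W = rng.uniform(-1, 1, (64, 1024))  # a made-up layer weight matrix
x = rng.uniform(-1, 1, 1024)        # a made-up input activation vector

scale = np.abs(W).max() / 127
W_q = np.round(W / scale) * scale   # symmetric 8-bit-style quantization

y_exact = W @ x
y_approx = W_q @ x
rel_err = np.linalg.norm(y_exact - y_approx) / np.linalg.norm(y_exact)
print(f"Relative output error with 8-bit weights: {rel_err:.2%}")
```

The per-weight rounding errors are small and largely uncorrelated, so they mostly cancel in the long sums of a matrix product; gradient updates during training, by contrast, are often smaller than one quantization step, which is why training demands more dynamic range.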

You might imagine that Lightmatter’s revolutionary new equipment for performing neural-network calculations with light is at this stage just a laboratory prototype, but that would be a mistake. The company is quite far along in producing a practical product, one that can be added to any server motherboard with a PCI Express slot and immediately programmed to start cranking out neural-network inference calculations. “We are very focused on making it so that it doesn’t look like alien technology,” says Harris. He explains that Lightmatter not only has this hardware built, it has also created the necessary software toolchains to support its use with standard neural-network frameworks (TensorFlow and PyTorch).

Lightmatter expects to go into production with its Mars device late in 2021. Harris says that the company’s chips, sophisticated as they are, have good yields, in large part because the nanophotonic components involved are not really that small compared with what’s found in cutting-edge electronic devices. “You don’t get the same point defects that destroy things.” So it shouldn’t be difficult to keep yields high and the pricing for the Mars device low enough to be competitive with GPUs.

And who knows, perhaps other companies such as LightIntelligence, LightOn, Optalysis, or Fathom Computing will introduce their own light-based neural-network accelerator cards by then. Harris isn’t worried about that, though. “I’d say we’re pretty far ahead.”

The Electric Weed-Zapper Renaissance


In the 1890s, U.S. railroad companies struggled with what remains a problem for railroads across the world: weeds. The solution that 19th-century railroad engineers devised made use of a then-new technology—high-voltage electricity, which they discovered could zap troublesome vegetation overgrowing their tracks. Somewhat later, the people in charge of maintaining tracks turned to using fire instead. But the approach to weed control that they and countless others ultimately adopted was applying chemical herbicides, which were easier to manage and more effective.

The use of herbicides, whether on railroad rights of way, agricultural fields, or suburban gardens, later raised health concerns, though. More than 100,000 people in the United States, for example, have claimed that Monsanto’s Roundup weed killer caused them to get cancer—claims that Bayer, which now owns Monsanto, is trying hard of late to settle.

Meanwhile, more and more places are banning the use of Roundup and similar glyphosate herbicides. Currently, half of all U.S. states have legal restrictions in place that limit the use of such chemical weed killers. Such restrictions are also in place in 19 other countries, including Austria, which banned the chemical in 2019, and Germany, which will be phasing it out by 2023. So, it’s no wonder that the concept of using electricity to kill weeds is undergoing a renaissance.

Actually, the idea never really died. A U.S. company called Lasco has been selling electric weed-killing equipment for decades. More recently, another U.S. company has been marketing this technology under the name “The Weed Zapper.” But the most interesting developments along these lines are in Europe, where electric weed control seems to be gaining real traction.

One company trying to replace herbicides with electricity is RootWave, based in the U.K. Andrew Diprose, RootWave’s CEO, is the son of Michael Diprose, who spent much of his career as a researcher at the University of Sheffield studying ways to control weeds with electricity.

Electricity, the younger Diprose explains, boasts some key benefits over other non-chemical forms of weed control, which include using hot water, steam, and mechanical extraction. In particular, electric weed control doesn’t require any water. It’s also considerably more energy efficient than using steam, which requires an order of magnitude more fuel. And unlike mechanical means, electric weed killing is also consistent with modern “no till” agricultural practices. What’s more, Diprose asserts, the cost is now comparable with chemical herbicides.

Unlike the electric weed-killing gear that’s long been sold in the United States, RootWave’s equipment runs at tens of kilohertz—a much higher frequency than the power mains. This brings two advantages. For one, it makes the equipment lighter, because the transformers required to raise the voltage to weed-zapping levels (thousands of volts) can be much smaller. It also makes the equipment safer, because higher frequencies pose less of a threat of electrocution. Should you accidentally touch a live electrode “you will get a burn,” says Diprose, but there is much less of a threat of causing cardiac arrest than there would be with a system that operated at 50 or 60 hertz.

RootWave has two systems, a hand-carried one operating at 5 kilowatts and a 20-kilowatt version carried by a tractor. The company is currently collaborating with various industrial partners, including another U.K. startup called Small Robot Company, which plans to outfit an agricultural robot for automated weed killing with electricity. 

And RootWave isn’t the only European company trying to revive this old idea. Netherlands-based CNH Industrial is also promoting electric weed control with a tractor-mounted system it has dubbed “XPower.” As with RootWave’s tractor-mounted system, its electrodes are swept over a field at a prescribed height, killing the weeds that poke up higher than the crop to be preserved.

Among the many advantages CNH touts for its weed-electrocution system (one that presumably applies to all such systems, ever since the 1890s) is “No specific resistance expectable.” I should certainly hope not. But I do think that a more apropos wording here, for something that destroys weeds by placing them in a high-voltage electrical circuit, might be a phrase that both Star Trek fans and electrical engineers could better appreciate: “Resistance is futile.”

How to Look People in the Eye While Videoconferencing


Many of us are spending more time videoconferencing than we’ve ever done before. And that situation probably won’t be changing anytime soon.

One thing that’s become painfully obvious to me during the past few months is that some people are more conscientious than others about how they present themselves. The worst offenders position themselves in front of a window, forcing others to view them in silhouette and making me wonder whether I’m in a business meeting or watching “Mystery Science Theater 3000.”

I try to avoid such obvious videoconferencing faux pas. But there’s one fundamental awkwardness that no amount of strategic laptop positioning can solve: not being able to make eye contact with other participants because they, like me, are looking at their screens rather than their Web cameras. Everyone appears to be casting their gaze downward, as if bored or perhaps telling a lie. It’s nearly impossible to avoid that annoying tendency—not without some drastic action.

The action in my case was to construct something I’d seen featured on Hackaday, a gizmo designed by a video blogger that makes clever use of a semitransparent mirror—basically the same strategy used for teleprompters but at a fraction of the cost and with materials I could easily scavenge or order. With such a mirror, you can view the screen straight on while also looking directly at an external webcam. This contraption, which I’ve taken to calling my Zoom Box, allows me to look other people right in the eye while in a Zoom or Webex meeting with them.

Construction began with…well, deconstruction. A dead laptop languishing in the closet finally met its destiny: I pulled its display apart and extracted the built-in HD webcam. That’s because I needed an external webcam that was as flat as possible. Before starting this project, I didn’t realize that webcams scavenged from a laptop can be made to function perfectly well as external USB webcams. And indeed I was reassured to see that the one I pulled said “USB5V” on its circuit board.

This camera board housed dual microphones, too, which meant there were quite a few wires coming out of its shorn cable. Identifying the ground wire was easy enough—by checking continuity with the copper ground plane on the board. The +5-volt line was a little harder to figure out: I needed first to identify the voltage regulator, to which the +5-V line was attached, and then do some more continuity checking.

The two USB signal wires coming out of the camera board could be easily discerned—they make up the one twisted pair present. When wiring these signal lines to a USB cable, I just had to guess which member of the twisted pair was data+ and which was data–. As luck had it, my guess was correct: The webcam worked just fine when plugged into the various computers I tried it with, which included two Windows machines and a Mac.

Other materials necessary for completing this project include black duct tape, a supply of black foam board, and a semitransparent acrylic mirror. The mirror I purchased is only 1 millimeter thick, which is good and bad. It’s good because the thinness made it easy to cut to size—just score and snap. It’s bad because, being so thin, it flexes easily. So it needs good support on all four sides to stay flat.

That support comes from strategically shaped pieces of foam board attached to the interior of a rectangular foam-board box I constructed. Duct tape holds the sides together. The box includes an overhang at the back, which with the help of some binder clips is attached to the face of the large monitor I was using for this project.

Having plenty of foam-board scraps left over, I carved an appropriately shaped opening in a rectangle of foam to house the webcam, placing a small hole in it above the camera’s lens. A second scrap of foam of the same size taped to the first one holds the camera in place.

As a final embellishment to my videoconferencing studio-in-a-box, I added lighting, which helps especially when participating in a videoconference at night—normal ceiling lights are often problematic distractions by either casting weird shadows or playing havoc with the exposure when in the field of view. My solution was to use three white high-power LEDs, ones with a relatively low (“warm”) color temperature. These I wired in series and powered with a 9-V DC wall wart, through an inexpensive pulse-width-modulation board designed to control motors. They are mounted on heat sinks, with one positioned at the top of the box and the other two on the sides.
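As a rough sanity check on that lighting circuit (the forward-voltage figure below is my assumption, not from the article; warm-white power LEDs typically drop around 2.8 to 3 volts), three LEDs in series just fit under a 9-volt supply:

```python
# Assumed forward voltage per warm-white power LED, in volts.
V_FORWARD = 2.9
SUPPLY_V = 9.0
N_LEDS = 3

string_drop = N_LEDS * V_FORWARD   # total drop across the series string
headroom = SUPPLY_V - string_drop  # voltage left for wiring and the PWM board
print(headroom > 0)  # True: the string just fits under the supply
```

With so little headroom, the string draws only modest current, which is one reason a small PWM controller can dim it without burning off much power.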

Okay, perhaps hacking all this together was going a little overboard for the sake of making me look presentable. But we may be videoconferencing quite a bit yet. And as Mark Pesce explains elsewhere in this issue (“A Solution for Zoom Fatigue May Be Near,” Macro & Micro), it’s hard to make out people’s social cues on your screen when peering at tiny images of them. I figure the least I can do for my friends and colleagues is to keep my face well lit and look them in the eye.

This article appears in the August 2020 print issue as “The Zoom Box.”

More Worries over the Security of Web Assembly

Post Syndicated from David Schneider original

In 1887, Lord Acton famously wrote, “Power tends to corrupt, and absolute power corrupts absolutely.” He was, of course, referring to people who wield power, but the same could be said for software.

As Luke Wagner of Mozilla described in these pages in 2017, the Web has recently adopted a system that affords software running in browsers much more power than was formerly available—thanks to something called Web Assembly, or Wasm for short. Developers take programs written, say, in C++, originally designed to run natively on the user’s computer, and compile them into Wasm that can then be sent over the Web and run on a standard Web browser. This allows the browser-based version of the program to run nearly as fast as the native one—giving the Web a powerful boost. But as researchers continue to discover, with that additional power comes additional security issues.

One of the earliest concerns with Web Assembly was its use for running software that would mine cryptocurrency using people’s browsers. Salon, to cite a prominent example, began in February 2018 to let users browse its content without viewing advertisements, so long as they allowed Salon to use their spare CPU cycles to mine the cryptocurrency Monero. This represented a whole new approach to web publishing economics, one that many might prefer to being inundated with ads.

Salon was straightforward about what it was doing, allowing readers to opt in to cryptomining or not. Its explanation of the deal it was offering could perhaps be faulted for being a little vague, but it did address such questions as “Why are my fans turning on?”

To accomplish this in-browser crypto-mining, Salon used software developed by a now-defunct operation called CoinHive, which made good use of Web Assembly for the required number crunching. Such mining could also have been carried out in the Web’s traditional in-browser programming language, JavaScript, but much less effectively.

Although there was debate within the computer-security community for a while about whether such cryptocurrency mining really constituted malware or just a new model for monetizing websites, in practice it amounted to malware, with most sites involved not informing their visitors that such mining was going on. In many cases, you couldn’t fault the website owners, who were oblivious that mining code had been sneaked onto their websites.

A 2019 study conducted by researchers at the Technical University of Braunschweig in Germany investigated the top 1 million websites and found Web Assembly to be used in about 1,600 of them. More than half of those instances were for mining cryptocurrency. Another shady use of Web Assembly they found, though far less prevalent, was for code obfuscation: to hide malicious actions running in the browser that would be more apparent if done using JavaScript.

To make matters even worse, security researchers have increasingly been finding vulnerabilities in Web Assembly, including some that had been identified and rectified for native programs years ago. The latest discoveries in this regard appear in a paper posted online by Daniel Lehmann and Michael Pradel of the University of Stuttgart, and Johannes Kinder of Bundeswehr University Munich, submitted to the 2020 Usenix Security Conference, which is to take place in August. These researchers show that Web Assembly, at least as it is now implemented, contains vulnerabilities that are much more subtle than just the possibility that it could be used for surreptitious cryptomining or for code obfuscation.

One class of vulnerabilities stems fundamentally from how Web Assembly manages memory compared with what goes on natively. Web Assembly code runs on a virtual machine, one the browser creates. That virtual machine includes a single contiguous block of memory without any holes. That’s different from what takes place when a program runs natively, where the virtual memory provided for a program has many gaps—referred to as unmapped pages. When code is run natively, a software exploit that tries to read or write to a portion of memory that it isn’t supposed to access could end up targeting an unmapped page, causing the malicious program to halt. Not so with Web Assembly.
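The difference is easy to see in a toy model. Here is a sketch in Python (the buffer names and sizes are my own, and Python stands in for compiled Wasm) of why the absence of unmapped pages matters:

```python
# Model Wasm's single contiguous linear memory as one bytearray: no
# unmapped "guard" pages exist between logical allocations inside it.
linear_memory = bytearray(64)

BUF_A = 0    # a 16-byte buffer starts here
BUF_B = 16   # its neighbor starts immediately afterward

# A buggy 20-byte write into the 16-byte buffer A: because everything
# stays inside linear memory, nothing traps -- the extra 4 bytes
# silently clobber the start of buffer B.
payload = b"A" * 20
linear_memory[BUF_A:BUF_A + len(payload)] = payload

print(bytes(linear_memory[BUF_B:BUF_B + 4]))  # buffer B now begins b'AAAA'
```

Run natively, the same out-of-bounds write has a chance of hitting an unmapped page and crashing the program; in Wasm’s gapless linear memory, the corruption proceeds silently.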

Another memory-related vulnerability of Web Assembly arises from the fact that an attacker can deduce how a program’s memory will be laid out simply by examining the Wasm code. For a native application, the computer’s operating system offers what’s called address space layout randomization, which makes it harder for an attacker to target a particular spot in program memory.

To help illustrate the security weaknesses of Web Assembly, these authors describe a hypothetical Wasm application that converts images from one format to another. They imagine that somebody created such a service by compiling a program that uses a version of the libpng library that contains a known buffer-overflow vulnerability. That wouldn’t likely be a problem for a program that runs natively, because modern compilers include what are known as stack canaries—a protection mechanism that prevents exploitation of this kind of vulnerability. Web Assembly includes no such protections and thus would inherit a truly problematic vulnerability.

Although the creators of Web Assembly took pains to make it safe, it shouldn’t come as a great surprise that unwelcome applications of its power and unexpected vulnerabilities of its design have come to light. That’s been the story of networked computers from the outset, after all.

U.S. Eases Restrictions on Private Remote-Sensing Satellites

Post Syndicated from David Schneider original

Later this month, satellite-based remote-sensing in the United States will be getting a big boost. Not from a better rocket, but from the U.S. Commerce Department, which will be relaxing the rules that govern how companies provide such services.

For many years, the Commerce Department has been tightly regulating those satellite-imaging companies, because of worries about geopolitical adversaries buying images for nefarious purposes and compromising U.S. national security. But the newly announced rules, set to go into effect on July 20, represent a significant easing of restrictions.

Previously, obtaining permission to operate a remote-sensing satellite was a gamble—the criteria by which a company’s plans were judged were vague, as was the process, an inter-agency review requiring input from the U.S. Department of Defense as well as the State Department. But in May of 2018, the Trump administration’s Space Policy Directive-2 made it apparent that the regulatory winds were changing. In an effort to promote economic growth, the Commerce Department was directed to rescind or revise regulations established under the Land Remote Sensing Policy Act of 1992, a piece of legislation that compelled remote-sensing satellite companies to obtain licenses and required that their operations not compromise national security.

Following that directive, in May of 2019 the Commerce Department issued a Notice of Proposed Rulemaking in an attempt to streamline what many in the satellite remote-sensing industry saw as a cumbersome and restrictive process.

But the proposed rules didn’t please industry players. To the surprise of many of them, though, the final rules announced last May were significantly less strict. For example, they allow satellite remote-sensing companies to sell images of a particular type and resolution if substantially similar images are already commercially available in other countries. The new rules also drop earlier restrictions on nighttime imaging, radar imaging, and short-wave infrared imaging.

On June 25th, Commerce Secretary Wilbur Ross explained at a virtual meeting of the National Oceanic and Atmospheric Administration’s Advisory Committee on Commercial Remote Sensing why the final rules differ so much from what was proposed in 2019:

Last year at this meeting, you told us that our first draft of the rule would be detrimental to the U.S. industry and that it could threaten a decade’s worth of progress. You provided us with assessments of technology, foreign competition, and the impact of new remote sensing applications. We listened. We made the case with our government colleagues that the U.S. industry must innovate and introduce new products as quickly as possible. We argued that it was no longer possible to control new applications in the intensifying global competition for dominance.

In other words, the cat was already out of the bag: there’s no sense prohibiting U.S. companies from offering satellite-imaging services already available from foreign companies.

One area where the new rules remain relatively strict, though, concerns the taking of pictures of other objects in orbit. Companies that want to offer satellite inspection or maintenance services would need rules that allow what regulators call “non-Earth imaging.” But there are national security implications here, because pictures obtained in this way could blow the cover of U.S. spy satellites masquerading as space debris.

While the extent to which spy satellites cloak themselves in the guise of space debris isn’t known, it seems clear that this would be an ideal tactic for avoiding detection. That strategy won’t work, though, if images taken by commercial satellites reveal a radar-reflecting object to be a cubesat instead of a mangled mass of metal.

Because of that concern, the current rules demand that companies limit the detailed imaging of other objects in space to ones for which they have obtained permission from the satellite owner and from the Secretary of Commerce at least 5 days in advance of obtaining images. But that stipulation raises a key question: Who should a satellite-imaging company contact if it wants to take pictures of a piece of space debris? Maybe imaging space debris would only require the permission of the Secretary of Commerce. But then, would the Secretary ever give such a request a green light? After all, if permission were typically granted, instances when it wasn’t would become suspicious.

More likely, imaging space debris—or spy satellites trying to pass as junk—is going to remain off the table for the time being. So even though the new rules are a welcome development to most commercial satellite companies, some will remain disappointed, including those companies that make up the Consortium for the Execution of Rendezvous and Servicing Operations (CONFERS), which had recommended that “the U.S. government should declare the space domain as a public space and the ability to conduct [non-Earth imaging] as the equivalent of taking photos of public activities on a public street.”

Biking’s Bedazzling Boom

Post Syndicated from David Schneider original

It might seem odd that, earlier this month, Stuttgart-based Bosch, a leading global supplier of automotive parts and equipment, seemed to be asking political leaders to reduce the amount of space on roadways they are allowing for cars and trucks.

This makes more sense when you realize that this call for action came from the folks at Bosch eBike Systems, a division of the company that makes electric bicycles. Their argument is simple enough: The COVID-19 pandemic has prompted many people to shift from traveling via mass transit to bicycling, and municipal authorities should respond to this change by beefing up the bike infrastructure in their cities and towns.

There’s no doubt that a tectonic shift in people’s interest in cycling is taking place. Indeed, the current situation appears to rival in ferocity the bike boom of the early 1970s, which was sparked by a variety of factors, including: the maturing into adulthood of many baby boomers who were increasingly concerned about the environment; the 1973 Arab oil embargo; and the mass production of lightweight road bikes.

While the ’70s bike boom was largely a North American affair, the current one, like the pandemic itself, is global. Detailed statistics are hard to come by, but retailers in many countries are reporting a surge of sales, for both conventional bikes and e-bikes—the latter of which may be this bike boom’s technological enabler the way lightweight road bikes were to the boom that took place 50 years ago. Dutch e-bike maker VanMoof, for example, reported a 50 percent year-over-year increase in its March sales. And that’s when many countries were still in lockdown.

Eco Compteur, a French company that sells equipment for tracking pedestrian and bicycle traffic, is documenting the current trend with direct observations. It reports bicycle use in Europe growing strongly since lockdown measures eased. And according to its measurements, in most parts of the United States, bicycle usage is up by double or even triple digits over the same time last year.

Well before Bosch’s electric-bike division went public with its urgings, local officials had been responding with ways to help riders of both regular bikes and e-bikes. In March, for example, the mayor of New York City halted the police crackdown on food-delivery workers using throttle-assisted e-bikes. (Previously, they had been treated as scofflaws and ticketed.) And in April, New York introduced a budget bill that will legalize such e-bikes statewide.

Biking in all forms is indeed getting a boost around the world, as localities create or enlarge bike lanes, accomplishing at breakneck speed what typically would have taken years. Countless cities and towns—including Boston, Berlin, and Bogota, where free e-bikes have even been provided to healthcare workers—are fast creating bike lanes to help their many new bicycle riders get around.

Maybe it’s not accurate to characterize these local improvements to biking infrastructure as countless; some people are indeed trying to keep a tally of these developments. The “Local Actions to Support Walking and Cycling During Social Distancing Dataset” has roughly 700 entries as of this writing. That dataset is the brainchild of Tabitha Combs at the University of North Carolina in Chapel Hill, who does research on transportation planning.

“That’s probably 10 percent of what’s happening in the world right now,” says Combs, who points out that one of the pandemic’s few positive side effects has been its influence on cycling. “You’ve got to get out of the house and do something,” she says. “People are rediscovering bicycling.”

The key question is whether the changes in people’s inclination to cycle to work or school or just for exercise—and the many improvements to biking infrastructure that the pandemic has sparked as a result—will endure after this public-health crisis ends. Combs says that cities in Europe appear more committed than those in the United States in this regard, with some allocating substantial funds to planning their new bike infrastructure.

Cycling is perhaps one realm where responding to the pandemic doesn’t force communities to sacrifice economically: Indeed, increasing the opportunities for people to walk and bike often “facilitates spontaneous commerce,” says Combs. And researchers at Portland State have shown that cycling infrastructure can even boost nearby home values. So lots of people should be able to agree that having the world bicycling more is an excellent way to battle the pandemic.

Q&A: Architect of New “Inspire” Quantum-Computing Platform on Spin Qubits and Programming Quantum Chips

Post Syndicated from David Schneider original

IEEE Spectrum spoke with Richard Versluis of QuTech in the Netherlands, which last week launched “Inspire,” Europe’s first public-access quantum-computing platform. Versluis, who is the system architect at QuTech, wrote about the challenges of building practical quantum computers in Spectrum’s April issue.


Spectrum: Tell me about QuTech. Is it a company, a government agency, a university, or some combination of the above?

Versluis: QuTech is the advanced research center for Quantum Computing and Quantum Internet, a collaboration founded in 2014 by Delft University of Technology (TU Delft) and the Netherlands Organisation for Applied Scientific Research (TNO). TNO is an independent research institute. About 70 percent of our funding is from other businesses. But because we receive considerable base funding from the government, we are not allowed to make a profit. Our status is very much like the national labs in the United States.

A DIY Audio Induction Loop for the Hard of Hearing

Post Syndicated from David Schneider original

I regularly attend an unprogrammed Quaker meeting, where a lot of the folks are older and have hearing impairments. This style of worship poses a special challenge for these people because, unlike the convention at many other religious gatherings, there is no one person who addresses the congregation. At an unprogrammed Quaker meeting, anyone may speak from anywhere in the room.

Some years ago, the congregation installed microphones in the room, which pick up voices and amplify them. These signals are then transmitted by FM radio over special frequencies near the radio broadcast band. The thing is, a person who needs help hearing others must use a small FM receiver and earphones, which is a little awkward, in part because it makes you stand out.

One way to mitigate this problem is to take advantage of the fact that some of these people wear hearing aids equipped with a telecoil (or T-coil). These hearing aids can be switched to a mode whereby they pick up audio-frequency signals electronically instead of using their built-in microphones. This system for passing the signal wirelessly doesn’t involve radio transmissions—it just uses magnetic induction. Suitable audio induction loops for energizing T-coils are found in all sorts of places, including museums and theaters; even some taxicabs are equipped with them. You may see their presence advertised with a blue-and-white sign that looks like an ear with a “T” next to it.

Many of the hearing-aid wearers who attend our Quaker meeting live in a nearby retirement community where induction loops have been installed to help hearing-impaired residents enjoy lectures and films. These people have had good experiences in such settings, and so they have been hoping that an audio induction loop would help them to hear what’s being said at our Quaker meeting, too.

I worried, though, that their disappointment with our current system reflected the difficulty of picking up voices cleanly in a reverberant room using microphones positioned at the periphery. And commercial installers of this equipment charge thousands of dollars—a lot for something that’s probably not going to solve our particular problem. But for very little money, I figured, we could install a DIY audio induction loop and at least let people try it out.

A little searching online uncovered some brief commentary posted by someone who installs such systems at music festivals. What he described was quite simple—creating a multiturn loop using a multiconductor wire (one for which the total resistance is between 4 and 8 ohms), and attaching it to a 200-watt audio amplifier, just as you would an 8-ohm speaker.

Before diving into this, I enlisted an older friend (a retired electrical engineer) who wears a T-coil hearing aid to perform an experiment. I constructed an induction coil from a six-turn square loop of magnet wire that was about a half meter on a side (I taped the wire to a flattened cardboard box), using wire of the right diameter to make the loop resistance 8 ohms. I then attached it to the speaker terminals of an ordinary stereo receiver, one that was collecting dust in the back of my garage.

Using that arrangement, I was able to convey audio to the T-coil in his hearing aid with his head about a meter and a half away (on axis) from my coil. This test was tougher than needed to assess requirements, because the magnetic field at my friend’s head was mostly horizontal, and my understanding is that T-coils are positioned to pick up vertical fields.

Based on that test (and the Biot-Savart Law), I figured out that I should be able to create a magnetic field of the same strength about 3 meters away from a long wire, which is exactly what you would need if said long wire were part of a loop installed in the attic of the building where the meeting is held.
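For reference, the relevant Biot-Savart result for a long straight wire carrying current $I$, evaluated at distance $r$, is:

```latex
B = \frac{\mu_0 I}{2\pi r}
```

Because this field falls off only as $1/r$, much more gently than the field of a small coil, a loop in the attic a few meters overhead can plausibly match the field strength my little test coil produced at close range, given enough current and turns.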

Next I used a wire-resistance table to determine that 240 meters of 20-gauge wire should be around 8 ohms. That would be long enough for me to create a six-turn loop in the attic. So I bought a roll of 1,000 feet (304 meters) of two-conductor 20-gauge wire for about US $125 on Amazon. Using two conductors would allow me to create two parallel loops, powering each through one channel of my stereo receiver. Because each channel of the receiver can output as much as 80 watts, I figured I’d be close to the 200-watt figure I saw quoted online. Preliminary tests with six turns of this wire laid out on the floor of the building showed the signal-pickup range to be about 5 meters.
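The resistance figure checks out; here’s the back-of-envelope version (the ohms-per-kilometer value for 20-gauge copper is a standard wire-table figure):

```python
# Resistance of a 240-m run of 20-gauge copper wire.
OHMS_PER_KM_20AWG = 33.3   # standard table value for 20 AWG copper
length_m = 240
resistance = OHMS_PER_KM_20AWG * length_m / 1000
print(round(resistance, 1))  # ~8.0 ohms, a good match for a speaker output

# Two parallel loops, one per 80-W stereo channel, approach the
# 200-W figure quoted online.
total_watts = 2 * 80
```
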

Encouraged by those initial tests, another friend and I recently placed a similar wire loop in the attic of the building—which was a lot harder than laying it on the floor because we had to snake the wire around an obstacle course of roof trusses and HVAC ducting. We ended up with a loop that’s got a rather complicated geometry and doesn’t lay flat on the joists in many places.

Nevertheless, this cobbled-together induction loop appears to convey signals just fine to hearing-aid wearers below. At least that’s what my retired electrical engineer friend reported when I set the stereo receiver attached to the loop to output a program from our local public-radio station. The only dead spots he found as he walked around the room were in the far corners.

That said, I don’t expect the addition of this informally constructed induction loop—or even a professionally installed one—could ever make up for the weak link in our system: the mounting of the microphones on the walls of the room, far from the people speaking. Unfortunately, there’s not much of an alternative, short of giving everyone TED-talk-style microphones to wear, which wouldn’t really mesh with the Quaker penchant for simplicity.

This article appears in the February 2020 print issue as “A DIY Audio Induction Loop.”

U.S. Commercial Drone Deliveries Will Finally Be a Thing in 2020

Post Syndicated from David Schneider original


When Amazon made public its plans to deliver packages by drone six years ago, many skeptics scoffed—including some at this magazine. It just didn’t seem safe or practical to have tiny buzzing robotic aircraft crisscrossing the sky with Amazon orders. Today, views on the prospect of getting stuff swiftly whisked to you this way have shifted, in part because some packages are already being delivered by drone, including examples in Europe, Australia, and Africa, sometimes with life-saving consequences. In 2020, we should see such operations multiply, even in the strictly regulated skies over the United States.

There are several reasons to believe that package delivery by drone may soon be coming to a city near you. The most obvious one is that technical barriers standing in the way are crumbling.

The chief challenge, of course, is the worry that an autonomous package-delivery drone might collide with an aircraft carrying people. In 2020, however, it’s going to be easier to ensure that won’t happen, because as of 1 January, airplanes and helicopters are required to broadcast their positions by radio using what is known as automatic dependent surveillance–broadcast out (ADS-B Out) equipment carried on board. (There are exceptions to that requirement, such as for gliders and balloons, or for aircraft operating only in uncontrolled airspace.) This makes it relatively straightforward for the operator of a properly equipped drone to determine whether a conventional airplane or helicopter is close enough to be of concern.
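As a sketch of the kind of proximity check this enables (the function, positions, and alert radius below are my own invention, not DJI’s or the FAA’s), a drone operator’s software can compare its position against a decoded ADS-B report with a great-circle distance test:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6_371_000  # mean Earth radius, m
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

ALERT_RADIUS_M = 5_000  # assumed warning threshold, not a regulatory figure

drone = (35.78, -78.64)     # hypothetical drone position
aircraft = (35.80, -78.60)  # position decoded from an ADS-B broadcast
distance = haversine_m(*drone, *aircraft)
print(distance < ALERT_RADIUS_M)  # True: close enough to warn the operator
```

A real system would also compare altitudes and velocities, but the core idea is this simple: ADS-B makes the conventional aircraft’s position a known quantity.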

Indeed, DJI, the world’s leading drone maker, has promised that from here on out it will equip any drone it sells weighing over 250 grams (9 ounces) with the ability to receive ADS-B signals and to inform the operator that a conventional airplane or helicopter is flying nearby. DJI calls this feature AirSense. “It works very well,” says Brendan Schulman, vice president for policy and legal affairs at DJI—noting, though, that it works only “in one direction.” That is, pilots don’t get the benefit of ADS-B signals from drones.

Drones will not carry ADS-B Out equipment, Schulman explains, because the vast number of small drones would overwhelm air-traffic controllers with mostly useless information about their whereabouts. But it will eventually be possible for pilots and others to determine whether there are any drones close enough to worry about; the key is a system for the remote identification of drones that the U.S. Federal Aviation Administration is now working to establish. The FAA took the first formal step in that direction yesterday, when the agency published a Notice of Proposed Rulemaking on remote ID for drones.

Before the new regulations go into effect, the FAA will have to receive and react to public comments on its proposed rules for drone ID. That will take many months. But some form of electronic license plates for drones is definitely coming, and we’ll likely see that happening even before the FAA mandates it. This identification system will pave the way for package delivery and other beyond-line-of-sight operations that fly over people. (Indeed, the FAA has stated that it does not intend to establish rules for drone flights over people until remote ID is in place.)

One of the few U.S. sites where drones are making commercial deliveries already is Wake County, N.C. Since March of last year, drones have been ferrying medical samples at WakeMed’s sprawling hospital campus on the east side of Raleigh. Last September, UPS Flight Forward, the subsidiary of United Parcel Service that is carrying out these drone flights, obtained formal certification from the FAA as an air carrier. The following month, Wing, a division of Alphabet, Google’s parent company, launched the first residential drone-based delivery service to begin commercial operations in the United States, ferrying small packages from downtown Christiansburg, Va., to nearby neighborhoods. These projects in North Carolina and Virginia, two of a handful being carried out under the FAA’s UAS Integration Pilot Program, show that the idea of using drones to deliver packages is slowly but surely maturing.

“We’ve been operating this service five days a week, on the hour,” says Stuart Ginn, a former airline pilot who is now a head-and-neck surgeon at WakeMed. He was instrumental in bringing drone delivery to this hospital system in partnership with UPS and California-based Matternet.

Right now the drone flying at WakeMed doesn’t travel beyond the operators’ line of sight. But Ginn says that he and others behind the project should soon get FAA clearance to fly packages to the hospital by drone from a clinic located some 16 kilometers away. “I’d be surprised and disappointed if that doesn’t happen in 2020,” says Ginn. The ability to connect nearby medical facilities by drone, notes Ginn, will get used “in ways we don’t anticipate.”

This article appears in the January 2020 print issue as “The Delivery Drones Are Coming.”

Track the Movement of the Milky Way With This DIY Radio Telescope

Post Syndicated from David Schneider original

A young friend recently spent a week learning about radio astronomy at the Green Bank Observatory in West Virginia. His experience prompted me to ask: How big a radio antenna would you need to observe anything interesting? It turns out the answer is a half meter across. For less than US$150 I built one that size, and it can easily detect the motions of the spiral arms of the Milky Way galaxy. Wow!

My quest began with a used satellite-TV dish and a “cantenna” waveguide made from a coffee can placed at the dish’s focus. Unfortunately, I had to abandon this simple approach when I figured out that a coffee can was too small to work for the wavelength I was most interested in: 21 centimeters. That is the wavelength of neutral hydrogen emissions, a favorite of radio astronomers because it can be used to map the location and motion of clouds of interstellar gas, such as those in the spiral arms of our galaxy.
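A quick calculation shows why this same emission is also known by its frequency, roughly 1420 megahertz:

```python
# Convert the neutral-hydrogen emission wavelength to its frequency.
C = 299_792_458.0        # speed of light, m/s
wavelength = 0.21106     # the "21-cm" hydrogen line, in meters

frequency_mhz = C / wavelength / 1e6
print(f"{frequency_mhz:.1f} MHz")   # ~1420.4 MHz
```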

Some research revealed that amateur radio astronomers were having success with modest horn antennas. So I copied the example of many in an online group called Open Source Radio Telescopes and purchased some aluminized foam-board insulation as antenna construction material. But I was troubled when my multimeter showed no evidence that the aluminized surface could conduct electricity. To make sure the material would have the desired effect on radio waves, I built a small box out of this foam board and put my cellphone inside it. It should have been completely shielded, but my cellphone received calls just fine.

That experiment still perplexes me, because people definitely do build radio telescopes out of aluminized foam board. In any case, I abandoned the board and for $13 purchased a roll of 20-inch-wide (51-centimeter-wide) aluminum flashing—the thin sheet metal used to weatherproof tricky spots on roofs.

The width of my roll determined the aperture of my horn’s wide end. The roll was 10 feet (3 meters) long, which limited the length of the four sides to 75 cm. An online calculator showed that a horn of those dimensions would have a respectable directional gain of 17 decibels. Some hours with snips and aluminized HVAC tape ($8) resulted in a small horn antenna. Attaching a square of ordinary foam board (not the aluminized kind) to the open end made it plenty robust.
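If you want to check that gain figure yourself, the standard aperture-gain formula gets you close. Here's a Python sketch (the 0.7 aperture efficiency is an assumption on my part; real horns typically run somewhere between 0.5 and 0.8, and I don't know exactly what the online calculator assumed):

```python
import math

C = 299_792_458.0            # speed of light, m/s
wavelength = C / 1420.4e6    # ~0.211 m at the hydrogen line

side = 0.51                  # horn aperture: a 51-cm square, set by the roll width
area = side * side
efficiency = 0.7             # assumed aperture efficiency (typical range ~0.5-0.8)

# Aperture-antenna gain: G = e * 4 * pi * A / lambda^2, expressed in decibels.
gain_db = 10 * math.log10(efficiency * 4 * math.pi * area / wavelength ** 2)
print(f"{gain_db:.1f} dB")   # ~17 dB, in line with the online calculator
```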

I also purchased a 1-gallon can of paint thinner ($9) and gave away its contents. The empty can serves as a waveguide feed at the base of the horn antenna. A handy online waveguide calculator told me this feed would have an operating range that nicely brackets the neutral-hydrogen line frequency of 1420 megahertz.

Some folks contributing to Open Source Radio Telescopes were using similar cans. But none of the projects’ documentation showed exactly how to construct the feed’s “pin”: the part that picks up signals inside the waveguide and passes them to the telescope’s receiver. Many cantenna tutorials say to make the pin a quarter of a wavelength long, which in this case works out to 53 millimeters. The tricky part is figuring out where to place it in the can—it needs to be a quarter of a wavelength from the base. However, in this case the relevant wavelength isn’t 21 centimeters but what’s called the guide wavelength, which corrects for the difference between how the signal propagates in free space versus inside the waveguide. An online tutorial and another calculator showed the appropriate distance from the base to be 68 mm. So that’s where I drilled a hole to accommodate an N-type coaxial bulkhead connector that I had purchased online for $5, along with an N-to-SMA adapter ($7).
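For anyone repeating this calculation, here's the math in Python. The can diameter below is an assumed placeholder, not a measurement of my can; plug in your own can's dimensions, which is why your pin-to-base distance may differ from the 68 mm the online calculators gave me:

```python
import math

C = 299_792_458.0
freq = 1420.4e6                       # hydrogen-line frequency, Hz
wavelength = C / freq                 # free-space wavelength, ~211 mm

# The pin itself is a free-space quarter wave.
pin_length_mm = wavelength / 4 * 1000
print(f"pin length: {pin_length_mm:.0f} mm")   # ~53 mm

# The pin-to-base distance uses the *guide* wavelength of the can's
# dominant TE11 mode, whose cutoff wavelength is 1.706 times the diameter.
diameter = 0.165                      # assumed can diameter, m; measure yours
cutoff_wavelength = 1.706 * diameter
guide_wavelength = wavelength / math.sqrt(1 - (wavelength / cutoff_wavelength) ** 2)
print(f"pin-to-base distance: {guide_wavelength / 4 * 1000:.0f} mm")
```

Note that the guide wavelength is always longer than the free-space wavelength, so the pin always sits farther from the base than a naive quarter-wave estimate would suggest.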

For my receiver, I went with a USB dongle that contains a television tuner, paired with a free software-defined radio application called HDSDR. (I chose the software on the basis of a report from two amateur radio astronomers in Slovenia who had used it to good effect.)

I purchased the dongle ($37) from a company that had also recently started selling a gizmo that seemed perfect for my application: It contains two low-noise amplifiers and a surface-acoustic-wave (SAW) filter centered on 1420 MHz ($38). The dongle itself provides power for the amplifier through the coaxial cable that connects them, a 30-cm (12-inch) length of coax purchased online ($9). The dongle just sits on the ground next to my horn and is attached to a Windows laptop through a USB extension cable.

At my instrument’s “first light,” I was able to detect the neutral hydrogen line with just a little squinting. After getting more familiar with the HDSDR software, I figured out how to time-average the signal and focus on the spectral plot, which I adjusted to display average power.

This plot distinctly showed a hydrogen “line” (really a fat bump) when I pointed my horn at the star Deneb, which is a convenient guide star in the constellation of Cygnus. Point at Cygnus and you’ll receive a strong signal from the local arm of the Milky Way very near the expected 1420.4-MHz frequency. Point it toward Cassiopeia, at a higher galactic longitude, and you’ll see the hydrogen-line signal shift to 1420.5 MHz—a subtle Doppler shift indicating that the material giving off these radio waves is speeding toward us in a relative sense. With some hunting, you may be able to discern two or more distinct signals at different frequencies coming from different spiral arms of the Milky Way.
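That 0.1-MHz shift corresponds to a surprisingly brisk speed. The nonrelativistic Doppler formula gives the number directly:

```python
C = 299_792_458.0           # speed of light, m/s
f_rest = 1420.4e6           # hydrogen line from the local arm, Hz
f_observed = 1420.5e6       # shifted line seen toward Cassiopeia, Hz

# A blueshift (higher observed frequency) means the gas is approaching us.
v = C * (f_observed - f_rest) / f_rest
print(f"{v / 1000:.0f} km/s toward us")   # ~21 km/s
```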

Don’t expect to hear E.T., but being able to map the Milky Way in this fashion feels strangely empowering. It’ll be $150 well spent.

This article appears in the October 2019 print issue as “Build Your Own Radio Telescope.”

Squeezing Rocket Fuel From Moon Rocks

Post Syndicated from David Schneider original

Here’s how lunar explorers will mine the regolith to make rocket fuel

The most valuable natural resource on the moon may be water. In addition to sustaining lunar colonists, it could also be broken down into its constituent elements—hydrogen and oxygen—and used to make rocket propellant.

Although the ancients called the dark areas on the moon maria (Latin for “seas”), it has long been clear that liquid water can’t exist on the lunar surface, where it would swiftly evaporate. Since the 1960s, though, scientists have hypothesized that the moon indeed harbors water, in the form of ice. Because the moon has a very small axial tilt—just 1.5 degrees—the floors of many polar craters remain in perpetual darkness. Water could thus condense and survive in such polar “cold traps,” where it might one day be mined.

Build a Retro Timekeeper That Never Was

Post Syndicated from David Schneider original

Celebrate the first moon landing with a GPS clock that uses Apollo-era panel meters

Two years ago, on the 48th anniversary of the Apollo 11 landing, a cloth bag that Neil Armstrong used to return the first lunar samples to Earth was sold at a Sotheby’s auction for US $1.8 million. The seller had purchased it online two years earlier for a mere $995—a fantastically good deal for what turned out to be a precious artifact of the Apollo era.

While I wasn’t nearly so fortunate, I, too, got a good deal online for some hardware that probably contributed in some way to the Apollo program, though I don’t know how exactly. I obtained three vintage analog panel voltmeters for $15 each from an eBay seller who had bought them from NASA’s Marshall Space Flight Center, in Huntsville, Ala.

I could tell from their art deco–inspired faces that the three Weston voltage meters were old when I first saw them online, but I didn’t know how old. To my delight, I discovered manufacturing dates written on the back of the faceplates. These meters, it turns out, were made between May and December of 1955—and presumably shipped to Huntsville shortly afterward.

At the time, NASA did not yet exist. Huntsville was, however, home to an Army installation, known as Redstone Arsenal, where missiles were being developed. In 1955, Wernher von Braun and numerous other German rocket scientists were hard at work there building rockets as part of the United States’ ballistic missile program. This team would build the first U.S. satellite launcher, and later, after the site had become the Marshall Space Flight Center, von Braun and others in Huntsville helped to develop the giant Saturn boosters, which eventually sent the Apollo astronauts to the moon.

Having scored three classy panel meters of some vague historical significance (I can imagine von Braun having peered at their twitching needles), I wanted to do something fun with them. Inspired by a project I had seen on Hackaday, I decided to make a clock that would indicate the hours, minutes, and seconds by deflecting the needle on the analog voltmeters. But I would go a step further than the Hackaday project and combine the modern space age with the old. My clock would be synchronized with the atomic clocks carried on GPS satellites, while still looking like something that would be at home on a rack of instrumentation at some NASA facility during the Apollo program.

I had in my junk box a now-outdated GPS module, so I hooked that to an Arduino Nano, which I programmed to extract the time from the GPS signal. Then I opened up the meters and replaced the current-limiting resistors inside with ones of my own choosing, so that the full-scale reading on each meter (originally 10 or 50 volts) would be somewhat less than 5 V. That allowed me to control the needle position on each meter using pulse-width modulation, which is easily output on certain dedicated pins of the Arduino.
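The mapping from time to needle position is simple enough to sketch. Here it is in Python (the actual code runs on the Arduino and hands the computed duty to analogWrite(); the linear scaling and 8-bit duty range are just the usual Arduino PWM conventions, and the function name is my own):

```python
def duty_for_value(value, max_value, full_scale_duty=255):
    """Map a time value (e.g. seconds, 0-59) onto an 8-bit PWM duty cycle,
    which the meter's needle averages into a proportional deflection."""
    return round(value / max_value * full_scale_duty)

print(duty_for_value(30, 59))   # roughly mid-scale: 130
```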

The biggest challenge was to make it look like a period piece of equipment. For that, I first obtained a standard rack-mount “blanking” panel, one painted in a putty color, which contrasts nicely with the black Bakelite housings of the meters. I also purchased five vintage panel-lamp housings, into which I inserted LEDs for use as indicator lights.

Photo of the back of the three mounted voltmeters on a rack panel.

Next came the trickiest part: changing the faces of the meters so that they could indicate time. The first step in that process was to remove the faceplate from one meter and scan it. Using that as a template, I designed new faceplates with Adobe Illustrator, ones that show hours, minutes, and seconds yet preserve the style of the original faces. I printed the new faces on ivory-colored stock cut from a manila folder, which helped to give them a vintage appearance.

A fiddly bit was cutting out arc-shaped openings for the meters’ mirrors. Younger readers might not be aware that better analog meters, especially those used in volt-ohm meters (“VOMs,” the predecessor to today’s digital multimeters), included mirrors behind the faceplate, which helped you to judge whether you were viewing the needle square on. Without that, you could easily misread the indicated value because of parallax. My Marshall Space Flight Center meters included such mirrors, and I didn’t want to cover them with my new faces. An X-Acto knife worked passably well for this task, although if you look hard, you can see the flaws.

The final flourish was to affix a black plastic nameplate with the words “Satellite Time” in white lettering. That cost just a few dollars to have made and helps my clock look the part.

The three meters now show GPS time in my local time zone. The five panel lights indicate the number of GPS satellites in the sky. In theory, there can be as many as 16 overhead at any one time, so I display the number in binary using the five lights arrayed across the bottom of the clock.
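The binary encoding itself is trivial. Here's the idea in Python (function name mine):

```python
def satellite_lights(count, n_lights=5):
    """Return on/off states for the five panel lamps, most significant bit first.
    Five lamps can show counts from 0 through 31, plenty for 16 satellites."""
    assert 0 <= count < 2 ** n_lights
    return [(count >> i) & 1 for i in range(n_lights - 1, -1, -1)]

print(satellite_lights(11))   # [0, 1, 0, 1, 1]
```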

Because my living room doesn’t include a rack for my rack-mount panel, I cut a couple of supports out of half-inch (1.27-centimeter) thick gray PVC, allowing the clock to stand upright on any horizontal surface. Power is supplied to the Arduino through a standard wall wart.

Despite its unconventional appearance, this panel-meter-based clock is easy enough to read. And with its ear to the GPS constellation, it always tells the correct time. More important, I get a chuckle looking at it, knowing that it’s built from components that probably contributed in some small way to putting astronauts on the moon a half century ago.

This article appears in the July 2019 print issue as “A Retro Timekeeper That Never Was.”

A Neural-Net Based on Light Could Best Digital Computers

Post Syndicated from David Schneider original

Researchers turn to optical computing to carry out neural-network calculations

We now perform mathematical calculations so often and so effortlessly with digital electronic computers that it’s easy to forget that there was ever any other way to compute things. In an earlier era, though, engineers had to devise clever strategies to calculate the solutions they needed using various kinds of analog computers.

Some of those early computers were electronic, but many were mechanical, relying on gears, balls and disks, hydraulic pumps and reservoirs, or the like. For some applications, like the processing of synthetic-aperture radar data in the 1960s, the analog computations were done optically. That approach gave way to digital computations as electronic technology improved.

Curiously, though, some researchers are once again exploring the use of analog optical computers for a modern-day computational challenge: neural-network calculations.

The calculations at the heart of neural networks (matrix multiplications) are conceptually simple—a lot simpler than, say, the Fourier transforms needed to process synthetic-aperture radar data. For readers unfamiliar with matrix multiplication, let me try to demystify it.

A matrix is, well, a matrix of numbers, arrayed into rows and columns. When you multiply two matrices together, the result is another matrix, whose elements are determined by multiplying various pairs of numbers (drawn from the two matrices you started with) and summing the results. That is, multiplying matrices just amounts to a lot of multiplying and adding.
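For the curious, the whole operation fits in a few lines of Python; notice that each element of the result is nothing more than multiplies followed by a sum:

```python
def matmul(X, Y):
    """Multiply two matrices, given as lists of rows. Each output element
    is a dot product: pairwise multiplications, then a sum."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))   # [[19, 22], [43, 50]]
```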

But neural networks can be huge, many-layer affairs, meaning that the arithmetic operations required to run them are so numerous that they can tax the hardware (or energy budget) that’s available. Often graphics processing units (GPUs) are enlisted to help with all the number crunching. Electrical engineers have also been busy designing all sorts of special-purpose chips to serve as neural-network accelerators, Google’s Tensor Processing Unit probably being the most famous. And now optical accelerators are on the horizon.

Two MIT spin-offs—Lightelligence and Lightmatter—are of particular note. These startups grew out of work on an optical-computing chip for neural-network computations that MIT researchers published in 2017.

More recently, another set of MIT researchers (including two who had contributed to the 2017 paper) has developed a different approach to carrying out neural-network calculations optically. Although it’s still years away from commercial application, it neatly illustrates how optics (or, more properly, a combination of optics and electronics) can be used to perform the necessary calculations.

The new strategy is entirely theoretical at this point, but Ryan Hamerly, lead author on the recently published paper describing the approach, says, “We’re building a demonstration experiment.” And while it might take many such experiments and several years of chip development to really know whether it works, their approach “promises to be significantly better than what can be done with current-day electronics,” according to Hamerly.

So how does the new strategy work? I’m not sure I could explain all the details even if I had the space, but let me try to give you a flavor here.

The necessary matrix multiplications can be done using three simple kinds of components: optical beam splitters, photodiodes, and capacitors. That sounds rather remarkable, but recall that matrix multiplications are really just a bunch of multiplications and additions. So all we really need here is an analog gizmo that can multiply two values together and another analog gizmo to sum up the results.

It turns out that you can build an analog multiplier with a beam splitter and a photodiode. A beam splitter is an optical device that takes two optical inputs and provides two optical outputs. If it is configured in a certain way, the amplitude of light that it outputs on one side will be the sum of the amplitudes of its two inputs; the amplitude of its other output will be the difference of the two inputs. A photodiode outputs an electronic signal that is proportional to the intensity of the light impinging on it.

The essential thing to realize here is that the intensity of light (a measure of the power it carries) is proportional to its amplitude squared. That’s key because if you square the sum of two light signals (let’s denote this as A + B), you will get A² + 2AB + B². If you square the difference of these same two light signals (A − B), you will get A² − 2AB + B². Subtract the latter from the former and you get 4AB, which you will notice is proportional to the product of the two inputs, A and B.

So by scaling your analog signals appropriately, a beam splitter and photodiode in combination can serve as an analog multiplier. What’s more, you can do a series of multiplications just by presenting the appropriate light signals, one after the other, to this kind of multiplier. Feed the series of electronic outputs of your multiplier into a capacitor and you’ll be adding up the results of each multiplication, forming the result you need to define one element in the product matrix. Rinse and repeat enough times, and you have just multiplied two matrices!
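Here's a small Python simulation of that idea. The two squared amplitudes play the role of the photodiode readings on the beam splitter's two output ports, and a running total plays the role of the charge accumulating on the capacitor. (This is an idealized, noise-free sketch of the principle, not the researchers' actual design.)

```python
def optical_multiply(a, b):
    """One beam-splitter-plus-photodiode multiplication of two amplitudes."""
    sum_intensity = (a + b) ** 2    # photodiode on the "sum" output port
    diff_intensity = (a - b) ** 2   # photodiode on the "difference" port
    return (sum_intensity - diff_intensity) / 4.0   # equals a * b

def dot_product(signals, weights):
    """Feed successive multiplier outputs into a 'capacitor' that sums them,
    producing one element of the product matrix."""
    charge = 0.0
    for x, w in zip(signals, weights):
        charge += optical_multiply(x, w)
    return charge

print(dot_product([1.0, 2.0, 3.0], [0.5, -1.0, 2.0]))   # 4.5
```

Repeating that accumulation once per output element is exactly the "rinse and repeat" that builds up the full matrix product.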

There are some other mathematical manipulations, too, that you’d need to run a neural network; in particular you have to apply a non-linear activation function to each neuron. But that can easily be done electronically. The question is what kind of signal-to-noise ratio a real device could maintain while doing all this, which will control the resolution of the calculations it performs. That resolution might not end up being very high. “That’s a downside of any analog system,” says Hamerly. Happily, at least for inference calculations (during which a neural network that has already been trained does its thing), relatively low resolution is normally fine.

It’s hard to know how fast an electro-optical accelerator chip designed along these lines would compute, explains Hamerly, because the metric normally used to judge such performance depends on both throughput and chip area, and he isn’t yet prepared to estimate what sort of area the chip he is envisioning would require. But he’s optimistic that this approach could slash the energy required for such calculations.

Indeed, Hamerly and his colleagues argue that their approach could use less energy than even the theoretical minimum for a gate-based digital device of equivalent accuracy—a value known as the Landauer limit. (It’s impossible to reduce the energy of computation to anything less than this limit without resorting to some form of reversible computing.) If that’s true for this or any other optical accelerator on the drawing board, many neural network calculations would no doubt be done using light rather than just electrons.

With the remarkable advances electronic computers have made over the past 50 years, optical computing never really gained traction, but maybe neural networks will finally provide the killer app for it. As Hamerly’s colleague and coauthor Liane Bernstein notes: “This could be the time for optics.”

DJI Promises to Add “AirSense” to Its New Drones

Post Syndicated from David Schneider original

The company plans to include ADS-B receivers in next year’s drones weighing more than 250 grams

There’s been a lot of talk over the years about endowing drones with the ability to “sense and avoid” nearby aircraft—adding radar or optical sensors that would mimic what a pilot does when she peers out the window in all directions to be sure there are no other aircraft nearby on a collision course. But incorporating such sense-and-avoid capability is quite challenging technically, and would probably never be possible to add to small consumer drones.

Thankfully, the threat of mid-air collision can be reduced using a strategy that is much easier to implement: Have aircraft track their positions with GPS and then broadcast that information to one another, so that everybody knows where everybody else is flying. Drones could at least receive those broadcasts and alert their operators or automatically adjust their flying to steer well clear of planes and helicopters.

Indeed, DJI, the world’s leading drone manufacturer, has already outfitted some of its drones with just this ability, and today it announced that by 2020 it would include that feature, which it calls “AirSense,” in all new drones it releases that weigh more than 250 grams.