Mojo Vision, a Silicon Valley startup developing augmented reality contact lenses, announced today that it had raised an additional US $51 million to move forward with development. This brings total investment in the company to $159 million.
With workers confined to their homes by the coronavirus pandemic, video conferencing is providing a vital lifeline for businesses. But people are quickly realising that video chats can be a poor substitute for in-person meetings.
A viral video from filmmaker H.P. Mendoza perfectly captured the awkwardness of video conferences: people frequently talk over each other, someone always seems to have microphone or camera issues, and there’s often lag. The format also fails to capture many of the non-verbal cues that help people communicate effectively.
These shortcomings may provide an opportunity for virtual reality (VR) technology as leading VR hardware makers start to incorporate gesture, eye gaze, and facial expression tracking into their products.
Post Syndicated from G. Pascal Zachary original https://spectrum.ieee.org/tech-talk/consumer-electronics/gadgets/wrong-right-to-repair
THE ENGINEER’S PLACE

The end came with a whimper. My personal laser printer showed a persistent error message. In the past, closing the cover cleared the message and let me print. Not this time. I surveyed guidance on the Web and even studied the remedies proposed by the printer’s maker. No joy.
After weeks, and then months, of opening and closing the cover and turning the printer off and on, I surrendered. Last week, I unplugged it, removed the ink cartridge (for reuse), and carried the printer to a nearby responsible electronics recycler.
I cringed and wondered: Should I feel shame for contravening the nifty dictum of the self-styled “right to repair” movement, which insists that “instead of throwing things out,” we should “reuse, salvage and rebuild”?
In the case of my zombie printer, I’m convinced the recycler was the best destination. A near-identical model, brand new, sells on Amazon for $99. The ink cartridge costs a third as much. Even if the printer could be repaired, at what expense in parts and labor?
So I bought a new printer.
When I ponder the wisdom of my decision, I think “Shame on me.” Rather than fight to repair my wounded device, I did what Big Tech and other manufacturers increasingly want owners to do. I threw it away.
Today repair remains an option, but one that makers want to monopolize or eliminate. Apple, the world’s most valuable company, is the worst offender, effectively forbidding owners to repair or maintain their smartphones. Not even the battery is replaceable by an owner, and owners are likewise forbidden from repairing cracked screens themselves. Attempting such repairs voids Apple’s warranty.
Many people have a tale of trying to bootleg an iPhone repair. My favorite is the time I found a guy on Yelp who asked me to meet him inside a Starbucks. His nom de repair is ScreenDoc, and he ran our rendezvous like a drug buy. He only entered the shop after I ordered a coffee and sat down. Seated at my table, working with tiny tools, he swapped my broken screen for a new one. I slipped him $90 in cash, and he left.
Sound tawdry? The nationwide campaign led by Repair.Org agrees, which is why the group supports legislation in at least 20 states to promote “your right to repair” by requiring manufacturers “to share the information necessary for repair.”
Long before the advent of the repair campaign, and a related movement called the Maintainers, there were loud critics of “planned obsolescence.” In Depression-era America, an influential book published in 1932 advocated “creative waste”: the idea that throwing things away and buying new things can fuel a strong economy. One advocate, Bernard London, wrote a paper that year, “Ending the Depression Through Planned Obsolescence,” in which he called on the federal government to print expiration dates on manufactured goods. “Furniture and clothing and other commodities should have a span of life, just as humans have,” he wrote. “They should be retired, and replaced by fresh merchandise.”
Manufacturers purposely made stuff that broke or wore out, so consumers would have to buy the stuff again. Echoes of this practice persist. In shopping for new tires, for instance, drivers pay more for those “rated” to last longer.
The big threat to devices today isn’t failure but “creative destruction”: the advent of new and improved stuff. Who needs to think about repairs when we are dazzled by the latest “upgrade”?
The newest iPhones, for instance, are promoted on the appeal of their improved cameras. The latest Apple Watch series boasts new band colors. Such incremental improvements long predate Apple’s popularity. One hundred years ago, General Motors decided to release new models, new colors, and faster engines every year. “The changes in the new model should be so novel and attractive as to create demand…and a certain amount of dissatisfaction with past models as compared with the new one,” wrote Alfred Sloan, then the automaker’s CEO, in his 1963 memoir My Years With General Motors.
Some of us never grow disenchanted with certain machines. We love them forever, and we strive to keep them going. Some cherished cars fall into this category, and computers do, too. I’m typing this article on my beloved 2014 MacBook Pro. Its battery is toast, so I can use the laptop only while it’s plugged in. And I type on an external keyboard because the original keys are so worn out that a few won’t function at all, even though Apple has twice replaced the key caps for me.
I don’t want my MacBook Pro to die; yet my repair options are ruled by Apple. And a cruel master she is. My best path forward is to ask Apple to replace the keyboard and battery. I dread finding out whether Apple still offers this option. Though I feel no shame about my utter dependence on Apple for repairs, I do feel outrage and puzzlement. I am well aware of the do-it-yourself (DIY) movement that has transformed how we maintain our homes and our bodies, how we eat and drink, work and play.
But DIY maintenance is not for everybody or appropriate for every situation. Nor does it inevitably produce greater “caring.” Results vary, and quality can suffer. While a person’s self-esteem may rise with every home improvement they carry out, the value of their home may decline as a result (because of the quality of the DIY fixes). I favor a simple rule: encourage consumers to repair if they wish, but don’t insist on self-repair in every circumstance, and leave open the possibility that the original makers of complex devices will repair them best. (Tesla owners, take heed!)
When self-reliance becomes non-negotiable, the results can be dispiriting. But when the impulse to do things yourself, like brewing your own beer, baking your own bread, raising your own chickens and building your own computers, takes hold, the results can be good for your soul.
In 1974, a repair enthusiast named Robert Pirsig published a book that proved highly influential and sold millions of copies. Zen and the Art of Motorcycle Maintenance came to define a spiritual and mental outlook by contrasting the approaches of two bike owners. One rides an expensive new bike and relies on professionals to repair it. The other rides an older bike that he repairs on his own and, by doing so, hones his problem-solving abilities and, unexpectedly, connects to a deeper wisdom that enhances his sense of dignity and endows his life with greater meaning.
The shift in attitudes a half-century ago was dramatic, reflecting the profound expansion of the human-built world. Once humans sought to “connect” with nature; now they wished to do the same (or more) with their machines. In many ways, the repair movement is a revival of this venerable counter-cultural tradition.
Today’s repair enthusiasts would have us believe that the well-maintained artifact is the new beautiful. But denying consumers the ability to repair their stuff is, to me, chiefly an economic, not a spiritual or aesthetic, issue.
The denial of the repair option is not limited to laptops and smart phones. Automobiles are now essentially computers on wheels. Digital diagnostics make repair no longer the dominion of the clever tinkerer. Specialized software, reading reports from the sensors scattered throughout your car, decides which “modules” to replace. The ease comes at a price. Your dealer now dominates the repair business. Independent car shops often can’t or won’t invest in the car manufacturer’s expensive software. And the hardy souls that once maintained their own vehicles, in their driveway or on the street, are as close to extinction as the white rhino.
The predatory issue is central. The denial of the repair option is often a form of profiteering: the manufacturer earns money from what it considers the “aftermarket.” Many makers of popular devices now see repair and maintenance as a kind of annuity, a stream of revenue similar in type to that provided by sales of printer cartridges or razor blades. For auto dealers, profits from “service” can now exceed profits from sales of new cars. Increasingly, products across many categories are designed to make repair by the owner impossible, or to greatly limit it.
I am not sure the practice is wrong, and certainly not wrong in all cases. The profits from repair are often justified by claims of superior service. Brand-name makers, in theory, can control reliability by maintaining their own devices. Reliability easily conflates with “peace of mind,” so that the repair path collides squarely with another basic human urge: convenience.
Not everyone opposes convenience, so the repair movement may come to regret advocating for a “right” to repair rather than an “option.” An option implies protecting a consumer’s choice, not mandating a specific repair scenario. I’m skeptical about applying the language of legal rights to the problem of repair and maintenance, because in many cases technology companies, especially, have an obligation to repair problems themselves rather than foist them onto their customers.
Here’s a live example. Among the chief reasons for my loyalty to the iPhone is that Apple supplies updated software that protects me against viruses and security hacks; Apple even installs this software on my phone, sometimes without my conscious assent or awareness. If I had to assent explicitly to each iPhone software update, I would invariably fail to have the latest protection and then suffer the negative consequences. So I don’t want to be responsible for repairing or maintaining a phone that is inherently collective in nature. I am freer and happier when Apple does it.
I understand that ceding the repair to an impersonal System might seem to libertarians like a road to serfdom. But having the System in charge of repair probably makes sense for essential products and services.
The artifacts in our world are profoundly networked now, and even though some devices look and feel individual to us, they are not. Their discreteness is an illusion. Increasingly no person is a technological island. Our devices are part of systems that depend on collective action and communal support.
Given the deep interconnectedness of our built environment, the distinction between repairing your own devices and letting others do so breaks down; and insisting on maintaining the distinction strikes me as inherently anti-social and destructive to the common good. At the very least the question of who repairs what should be viewed as morally neutral. Our answers should be shaped by economics and practicality, not romantic notions about individual freedom and responsibility.
Because the right-to-repair movement is based on a romantic notion, and pits those who maintain against those who don’t, a backlash against the concept is inevitable. A healthier approach to the genuine challenge of maintaining technological systems, and their dependent devices, would be to strengthen collective responses and systems of repair and maintenance as well.
Much is at stake in this argument. Thinking about who is responsible for what aspects of our techno-human condition helps clarify what forms of resistance are possible in a world dominated by Big Tech companies and complex socio-technical systems. Resistance can and should take many forms, but resistance will be far more effective, I submit, if we do not choose repair and maintenance as a proxy for democratic control over innovation.
So I offer a different solution. Rather than burden individuals with enhanced rights and duties for the repair and maintenance of our devices, let’s demand that makers of digitally controlled stuff make repairs at fair prices, quickly and reliably. Or maybe we go further and demand that these companies repair and maintain their products at a slight loss, or even a large one, to give them an incentive to design and build high-quality stuff in the first place: stuff that requires less maintenance and fewer repairs.
By ensuring, by law and custom, that repair is fair, reliable, and low cost, we can achieve the best of both worlds: keep our gadgets running and feel good knowing that the quality of our stuff is not the measure of ourselves.
On the CES floor in Las Vegas this past January, I saw dozens of companies showing off products designed to help us adapt to climate change. It was an unsettling reminder that we’ve tipped the balance on global warming and that hotter temperatures, wildfires, and floods are the new reality.
Based on our current carbon dioxide emissions, we can expect warming of up to 1.5 °C by 2033. Even if we stopped spewing carbon today, temperatures would continue to rise for a time, and weather would grow still more erratic.
The companies at CES recognize that it’s too late to stop climate change. Faced with that realization, this group of entrepreneurs is focusing on climate adaptation. For them, the goal is to make sure that people and the global economy can still thrive across as much of the world as possible. Their companies are developing practical products, such as garments that adapt to the weather and new building materials with higher melting points, so that roads won’t crack in extreme temperatures.
One of the biggest risks in a warming world is that both outdoor workers and their equipment will overheat more often. Scientists expect people to migrate away from parts of the world where temperature and humidity combine to repeatedly create heat indexes of 40.6 °C, beyond which humans have a hard time surviving. But even in more temperate locations, the growing number of hot days will make it tough for outdoor workers.
Embr Labs is building a bracelet that the company says can lower a person’s perceived temperature a few degrees simply by changing the temperature on their wrist. The bracelet doesn’t change actual body temperature, so it can’t help outdoor workers avoid risk on a sweltering day. But it could still be used to keep workers cooler on safe yet still uncomfortably warm days. It might also allow companies to raise their indoor temperatures, saving on air-conditioning costs.
Elsewhere, Epicore Biosystems is building wearable microfluidic sensors that monitor people for dehydration or high body temperatures. The Epicore sensors are already being used for athletes. But it’s not hard to imagine that in the near future there’d be a market for putting them on construction, farm, and warehouse workers who have to perform outside jobs in hot weather.
Extreme temperatures—and extreme fluctuations between temperatures—are also terrible for our existing road and rail infrastructure. Companies such as RailPod, as well as universities, are building AI-powered drones and robots that can monitor miles of roadway or track and send back data on repairs.
And then there’s flooding. Coastal roads and roads near rivers will need to withstand king tides, flash floods, and sustained floodwaters. Pavement engineers are working on porous concrete to mitigate flood damage and on embedded sensors to communicate a road’s status in real time to transportation officials.
There are so many uncertainties about our warming planet, but what isn’t in doubt is that climate change will damage our infrastructure and disrupt our patterns of work. Plenty of companies are focused on the admirable goal of preventing further warming, but we need to also pay attention to the companies that can help us adapt. A warmer planet is already here.
This article appears in the April 2020 print issue as “Tech for a Warming World.”
Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/view-from-the-valley/consumer-electronics/portable-devices/heres-how-facebooks-braincomputer-interface-development-is-progressing
In 2017, Facebook announced that it had assigned at least 60 engineers to an effort to build a brain-computer interface (BCI). The goal: allow mobile device and computer users to communicate at a speed of at least 100 words per minute—far faster than anyone can type on a phone.
Last July, Facebook-supported researchers at the University of California San Francisco (UCSF) published the results of a study demonstrating that Facebook’s prototype brain-computer interface could be used to decode speech in real time—at least speech in the form of a limited range of answers to questions.
That month, Facebook published a blog post explaining a bit about the technology developed so far. The post described a device that shines near-infrared light into the skull and uses changes in the way brain tissue absorbs that light to measure the blood oxygenation of groups of brain cells.
Said the blog post:
Think of a pulse oximeter—the clip-like sensor with a glowing red light you’ve probably had attached to your index finger at the doctor’s office. Just as it’s able to measure the oxygen saturation level of your blood through your finger, we can also use near-infrared light to measure blood oxygenation in the brain from outside of the body in a safe, non-invasive way….And while measuring oxygenation may never allow us to decode imagined sentences, being able to recognize even a handful of imagined commands, like “home,” “select,” and “delete,” would provide entirely new ways of interacting with today’s VR systems—and tomorrow’s AR glasses.
The company hasn’t talked much about the project since—until this month, when Mark Chevillet, research director for Facebook Reality Labs and the BCI project leader, gave an update at ApplySci’s Wearable Tech, Digital Health, and Neurotech Silicon Valley conference.
For starters, the team has been finishing up a move to its new hardware design. It’s not, by any means, the final version, but they say it is vastly more usable than the initial prototype.
The hardware used for UCSF’s research was big, expensive, and not all that wearable, Chevillet admitted. But the team has developed a cheaper and more wearable version, using lower-cost components and some custom electronics. This so-called research kit, shown in the July blog post, is currently being tested to confirm that it is just as sensitive as the larger device, he says.
Meanwhile, the researchers are focusing their efforts on speed and noise reduction.
“We are measuring the hemodynamic response,” Chevillet says, “which peaks about five seconds after the brain signal.” The current system detects the response at the peak, which may be too slow for a truly useful brain-computer interface. “We could detect it earlier, before the peak, if we can drive up our signal and drive down the noise,” says Chevillet.
The new headsets will help this effort, Chevillet indicated, because the biggest source of noise is movement. The smaller headset sits tightly on the head, resulting in fewer shifts in position than is the case with the larger research device.
The team is also looking into increasing the size of the optical fibers that collect the signal in order to detect more photons, he says.
And it has built and is testing a system that uses time-domain measurements to eliminate noise, Chevillet reports. By sending in pulses of light instead of continuous light, he says, the team hopes to distinguish the photons that travel only through the scalp and skull before being reflected (the noise) from those that actually make it into brain tissue. “We hope to have the results to report out later this year,” he says.
Another way to improve the signal-to-noise ratio of the device, he suggests, is increasing the contrast. You can’t necessarily increase the brightness of the light, he says; it has to stay below a safe level for brain tissue. But the team can increase the number of pixels in the photodetector array. “We are trying a 32-by-32-pixel single photon detector array to see if we can improve the signal-to-noise ratio, and will report that out later this year,” Chevillet says.
But, he admits, “even with what we are doing to get a better signal, it will be noisy.”
That’s why, Chevillet explained, the company is focusing on detecting the mental efforts that produce speech—it doesn’t actually read random thoughts. “We can use noisy signals with speech algorithms,” he says, “because we have speech algorithms that have been trained on huge amounts of audio, and we can transfer that training over.”
This approach to the brain-computer interface is intriguing but won’t be easy to pull off, says Roozbeh Ghaffari, a biomedical researcher at Northwestern University and CEO of Epicore Biosystems. “There may indeed be ways to relate neurons firing to changes in local blood oxygenation levels,” Ghaffari told Spectrum. “But the changes in blood oxygenation levels that map to neuronal activity are highly localized; the ability to map these localized changes to speech activity—from the skin surface, on a cycle-by-cycle basis—could be challenging.”
Post Syndicated from Peter Palomaki original https://spectrum.ieee.org/consumer-electronics/audiovideo/move-over-cmos-here-come-snapshots-by-quantum-dots
In the early 2000s, the commercialization of CMOS image sensors led to smaller and smaller—and cheaper and cheaper—digital cameras. Now the thinnest of mobile phones contains at least two camera modules, and all except the most dedicated photographers have stopped carrying a separate camera, concluding that the camera sensors in their phones take pictures that are good enough.
But do they? In bright sun, parts of an image are often washed out. In low light, images become grainy and unclear. Colors do not quite pop like those taken with a professional camera. And those are just the problems with cameras that record visible light. Although it would be great to have night vision in our cameras, infrared sensors cost a lot more for much poorer image quality than their visible-light brethren.
It’s time for another revolution in imaging technology. This one will be brought to you by the quantum dot, a nanometer-size particle of semiconductor material, which acts much differently from its bulk counterpart.
When a semiconductor material absorbs light, it releases an electron from a chemical bond, and that electron is free to roam. The same process happens in a quantum dot (QD). But one thing is different: Although an electron is indeed released, it can’t roam as easily; it gets squeezed by the edges of the particle, because the quantum dot is only a few nanometers in diameter. This squeeze is called quantum confinement, and it gives the particle some special properties.
The most useful property for imaging is that the light absorbed by the quantum dot is tunable—that is, the color can be continuously adjusted to almost any wavelength in the visible and infrared spectrum simply by choosing the right material and the right particle size. This tunability works in reverse as well—the color of the light emitted when the electron recombines can be selected precisely. It is this light-emission tunability that in recent years inspired the manufacturers of TVs and other kinds of displays to use quantum dots to improve color reproduction. (They’ve given the enhancement a number of names; the most common branding is “QLED.”)
In addition to tunability, quantum dots have a few other nice features. Their small size allows these particles to be incorporated into printable inks, making quantum dots easy to slip into a manufacturing process. Quantum dots can absorb light more efficiently than silicon, which could allow camera makers to produce thinner image sensors. And QDs are sensitive across a broad dynamic range, from very low light to very high brightness.
Before we tell you how quantum-dot cameras will work—and when they will likely be commercially available—we should explain something about the CMOS sensor, today’s state of the art for digital images. Clearly there has been considerable progress in the underlying technology in the past decade or two, particularly in making it smaller and cheaper. But the way in which it converts light into an image has largely remained unchanged.
In a typical camera, like the one in your phone, light passes through a series of lenses and a mosaic of red, green, and blue filters before being absorbed by one of the sensor pixels (sometimes called a photosite, to distinguish it from a pixel on an image) on the silicon CMOS chip. The filters determine which color each photosite will record.
When a photosite absorbs a photon, an electron is freed from a chemical bond and moves to an electrode at the edge of the pixel, where it is stored in a capacitor. A readout circuit converts the charge collected in each photosite over a set time to a voltage. The voltage determines the brightness for that pixel in the image.
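The chain described above, from photons to stored charge to voltage to a brightness value, can be sketched in a few lines. Every number below (quantum efficiency, full-well capacity, conversion gain, ADC range) is an illustrative assumption, not a figure from any real sensor:

```python
def photosite_to_dn(photons, qe=0.6, full_well=10_000,
                    gain_uv_per_e=50.0, v_ref=0.5, adc_bits=10):
    """Toy model of one photosite: photons -> electrons -> voltage -> digital number.

    All parameter values are assumed for illustration, not taken from a real sensor.
    """
    # Only a fraction of photons (the quantum efficiency) free an electron.
    electrons = photons * qe
    # The capacitor saturates at the full-well capacity, clipping bright highlights.
    electrons = min(electrons, full_well)
    # Conversion gain turns the stored charge into a voltage...
    voltage = electrons * gain_uv_per_e * 1e-6
    # ...which the ADC quantizes into the pixel's brightness value.
    return round(min(voltage, v_ref) / v_ref * (2**adc_bits - 1))

dim = photosite_to_dn(1_000)      # well below full well
bright = photosite_to_dn(50_000)  # clipped at full well
```

With these toy parameters, any photosite collecting more than a full well of charge clips to the maximum code, which is the “washed out” highlight behavior mentioned earlier.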
A common manufacturing process creates both the silicon detectors and the readout circuits. This process involves a long but well-established series of steps of photolithography, etches, and growths. Such fabrication keeps costs low and is relatively simple. But it saddles silicon detectors with some disadvantages.
Typically, the readout electronics go on top of the detector, in what are called front-illuminated devices. Because of this placement, the metal contacts and traces reflect some of the incident light, decreasing efficiency. Back-illuminated devices avoid this reflection by having the readout electronics under the detector, but this placement increases fabrication cost and complexity. Only in the last decade has the cost of back-illuminated sensors dropped enough for them to be used in consumer devices, including phones and digital cameras.
Finally, silicon absorbs only wavelengths less than about 1 micrometer, so it won’t work for imaging beyond the near-infrared range.
Now let’s look at how quantum dots can change this equation.
As we mentioned before, by precisely tailoring the size of quantum dots, manufacturers of the materials can select exactly what wavelengths of light they absorb. The largest quantum dots in the visible spectrum, about 10 nanometers in diameter, absorb ultraviolet (UV), blue, and green light, and they emit red light, which is to say they’re fluorescent. The smaller the QD, the more its absorption and emission shift toward blue in the color spectrum. For example, cadmium selenide QDs of about 3 nm absorb UV and blue light and emit green light.
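The size-to-color relationship can be estimated with the Brus equation, which adds a quantum-confinement term to the bulk bandgap and subtracts an electron-hole Coulomb term. The sketch below uses commonly quoted CdSe parameters (assumed here, not taken from the article), and the model is known to overestimate the shift for the smallest dots, so treat its output as qualitative:

```python
import math

H = 6.626e-34         # Planck constant, J*s
M0 = 9.109e-31        # electron rest mass, kg
E_CHARGE = 1.602e-19  # elementary charge, C
EPS0 = 8.854e-12      # vacuum permittivity, F/m

def brus_gap_ev(radius_m, eg_bulk_ev=1.74, me=0.13, mh=0.45, eps_r=10.6):
    """Approximate bandgap of a spherical quantum dot via the Brus equation.

    Defaults are commonly quoted CdSe values (assumed): bulk gap 1.74 eV,
    electron/hole effective masses 0.13 and 0.45 m0, dielectric constant 10.6.
    """
    # Confinement raises the gap as 1/R^2; the Coulomb attraction lowers it as 1/R.
    confinement = (H**2 / (8 * radius_m**2 * M0)) * (1/me + 1/mh) / E_CHARGE
    coulomb = 1.8 * E_CHARGE / (4 * math.pi * EPS0 * eps_r * radius_m)
    return eg_bulk_ev + confinement - coulomb

def gap_to_wavelength_nm(gap_ev):
    return 1240.0 / gap_ev  # photon energy (eV) times wavelength (nm) ~ 1240

small = brus_gap_ev(1.5e-9)  # 3 nm diameter: wider gap, bluer light
large = brus_gap_ev(5e-9)    # 10 nm diameter: gap closer to the bulk value
```

The qualitative trend matches the text: shrinking the dot widens the gap, shifting both absorption and emission toward the blue.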
Cameras with quantum-dot–based detectors operate basically the same way as their silicon CMOS counterparts. When a QD in a photosite absorbs a photon, an electron escapes its localized bond. The edge of the QD confines the electron’s travels. However, if another QD is close enough, the free electron can “hop” over to it and, through sequential hops between QDs, reach the photosite’s electrode where it can be counted by the pixel’s readout circuit.
The readout circuits are manufactured in the same way as those built for silicon photodetectors—fabricated directly on a wafer. Adding the quantum dots to the wafer does add a processing step but an extremely simple one: They can be suspended in a solution as a sort of ink and printed or spin-coated over the circuitry.
Made in this way, quantum-dot photodetectors have the performance advantage of back-illuminated pixels, where nearly all the incident light reaches the detectors, without that technology’s added cost and complexity.
And quantum dots have another advantage. Because they absorb light better than silicon, it takes only a thin layer atop the readout circuitry to gather almost all of the incoming photons, meaning the absorbing layer doesn’t need to be nearly as thick as in standard CMOS image sensors. As a bonus, this thin, highly absorbing layer of QDs excels in both low light and high brightness, giving the sensor a better dynamic range.
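The thin-layer claim can be made concrete with the Beer-Lambert law: a layer of thickness d absorbs a fraction 1 − e^(−αd) of the incident light. The two absorption coefficients below are assumed order-of-magnitude values (crystalline silicon at green wavelengths versus a dense QD film), chosen only to illustrate the scaling:

```python
import math

def fraction_absorbed(alpha_per_m, thickness_m):
    """Beer-Lambert law: fraction of incident light absorbed in the layer."""
    return 1.0 - math.exp(-alpha_per_m * thickness_m)

def thickness_for_fraction(alpha_per_m, fraction):
    """Layer thickness needed to absorb a given fraction of incident light."""
    return -math.log(1.0 - fraction) / alpha_per_m

ALPHA_SI = 7e5  # rough assumed value for silicon at green wavelengths, 1/m
ALPHA_QD = 1e7  # rough assumed value for a dense quantum-dot film, 1/m

# Thickness needed to capture 90 percent of incoming green photons:
d_si = thickness_for_fraction(ALPHA_SI, 0.9)  # on the order of micrometers
d_qd = thickness_for_fraction(ALPHA_QD, 0.9)  # on the order of hundreds of nm
```

Under these assumptions the QD film reaches the same absorption in roughly a tenth of the thickness, which is why it can sit as a thin coat atop the readout circuitry.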
And, as Steve Jobs used to say, “there’s one more thing.” Quantum-dot–based cameras have huge potential to bring infrared photography mainstream, because their tunability extends into infrared wavelengths.
Today’s infrared cameras function just like visible-light cameras, although the materials used for light absorption are quite different. Traditional infrared cameras use semiconductors with a small bandgap—such as lead selenide, indium antimonide, mercury cadmium telluride, or indium gallium arsenide—to absorb light that silicon does not. Pixel arrays made from these materials must be fabricated separately from the silicon CMOS circuits used to measure currents and generate an image. The detector array and circuit must then be connected at every pixel, typically by metal-to-metal bonding.
This time-consuming process, also known as hybridization, involves putting a small bump of low-melting-point indium on every pixel of both the detector array and the CMOS circuitry. The manufacturing machinery must then line the two up and press them together, then briefly melt the indium to create electrical connections. The complexity of this process limits the possible array sizes, pixel sizes, and sensor resolutions. Worse still, because it’s done one camera chip at a time, hybridization is a low-throughput, costly process.
But quantum dots that are just as sensitive to infrared light as these traditional materials can be synthesized using inexpensive, large-scale chemical processing techniques. And, just as with their visible-light cousins, infrared-absorbing QDs can be painted onto chips after the silicon circuitry is complete, a quick and easy process needing no hybridization. Eliminating hybridization means that the resolution—the pixel size—can be less than the 15 µm or so needed to accommodate indium bumps, allowing for more pixels in a smaller area. A smaller sensor means smaller optics—and new shapes and sizes of infrared cameras at a far lower cost.
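The resolution payoff from eliminating indium bumps is simple geometry: pixel count scales with the inverse square of the pixel pitch. A quick comparison for a hypothetical 6 mm by 5 mm infrared sensor (die size and the smaller pitch are assumptions for illustration):

```python
def pixel_count(width_m, height_m, pitch_m):
    """Approximate number of pixels that fit on a sensor at a given pitch."""
    return round(width_m / pitch_m) * round(height_m / pitch_m)

SENSOR_W, SENSOR_H = 6e-3, 5e-3  # hypothetical die size

hybridized = pixel_count(SENSOR_W, SENSOR_H, 15e-6)  # indium-bump-limited pitch
qd_film = pixel_count(SENSOR_W, SENSOR_H, 3e-6)      # pitch feasible without bumps
# A 5x smaller pitch yields roughly 25x more pixels in the same area.
```

The same arithmetic run in reverse shows why smaller pixels also shrink the optics: for a fixed resolution, the die itself can be a fifth the width.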
All these factors make quantum dots seem like a perfect imaging technology. But they aren’t without challenges. Right now, the main obstacles to commercialization are stability, efficiency, and uniformity.
Manufacturers mainly solved these issues for the light-emitting quantum dots used in television displays by developing scalable chemical processes that enable the creation of high-efficiency dots in large quantities with very few defects. But quantum dots still oxidize in air, causing imperfections and changes to the sensor properties, including reduced sensitivity, increased noise, slower response time, and even shorting.
This stability problem didn’t get in the way of commercialization of displays, however, because protecting the QDs used there from the atmosphere isn’t terribly difficult. In the way that QDs are currently used in displays, the QD absorbs light from a blue LED and the photogenerated charge carriers stay within each individual quantum dot to recombine and fluoresce. So these QDs don’t need to connect directly to circuitry, meaning that they can be protected by a surrounding polymer matrix with a barrier layer added on both sides of the polymer film, to prevent atmospheric exposure.
But for use in photodetection, sealing off individual QDs in a polymer won’t work: The ejected electrons need to be free to migrate to the electrodes, where they can be counted.
One approach to allowing this migration while protecting the QDs from the ravages of the atmosphere would be to encapsulate the full layer of QDs or the entire device. That will likely be the initial solution. Alternatively, the QDs themselves could be specifically engineered to reduce the impact of oxidation without creating a barrier to charge transport, all while maintaining stability and processibility. Researchers are working toward that goal, but it’s a tall order.
Another hurdle comes from the organic surfactants used today to maintain a stable solution of the quantum dots. These surfactants act as insulators, so they keep charge carriers from moving easily through the film of QDs to the electrode that collects the signal. Right now, manufacturers deal with this by depositing the QDs as a thin film and then replacing the long surfactant molecules with shorter ones that increase conductivity. But this adds a processing step and can make the QDs more susceptible to degrading over time, as the replacement process can damage the outer layer of QDs.
There is also a problem with the efficiency of photon detection. Due in part to their small size and large surface area, quantum dots can have many defects—imperfections in their crystal lattices that can cause photogenerated charges to recombine before the electron can reach an electrode. When this happens, the photon that initially hit the quantum dot is never detected by the circuitry, reducing the signal that ultimately reaches the camera’s processor.
In traditional photodetectors—ones that contain single-crystal semiconductors—the defects are few and far between, resulting in efficiencies of greater than 50 percent. For QD-based photodetectors, this number is typically less than 20 percent. So in spite of the QDs themselves being better than silicon at absorbing light, the overall efficiency of QD-based photodetectors can’t yet compete. But quantum-dot materials and device designs are improving steadily, and detection efficiencies continue to climb.
Because manufacturers use chemical processes to make quantum dots, there is some inherent variation in their size. And because the optical and electronic properties of a QD are driven by its size, any deviation from the desired diameter will cause a change in the color of light absorbed. With variations in the source chemicals, along with those in synthesis, purification, and storage, there can be significant size differences between one batch of QDs and another. The manufacturers must control their processes carefully to avoid this. Major companies with experience in this area have gotten quite good at maintaining uniformity, but smaller manufacturers often struggle to produce a consistent product.
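The link between dot size and absorption color can be sketched with the textbook Brus effective-mass model. This is a rough illustration only: the material parameters below (bulk bandgap and effective masses loosely inspired by lead sulfide) are assumptions, and the Coulomb correction term is omitted, so the computed gaps overestimate real shifts. The point is simply that a roughly 10 percent change in radius visibly moves the absorption edge.

```python
# Sketch of the Brus effective-mass model: confinement widens a quantum
# dot's bandgap as the dot shrinks. Parameters are illustrative, not
# production values; the Coulomb term is omitted for simplicity.
import math

H = 6.626e-34          # Planck constant, J*s
M0 = 9.109e-31         # electron rest mass, kg
E_CHARGE = 1.602e-19   # elementary charge, C (J per eV)

def brus_gap_ev(radius_nm, bulk_gap_ev=0.41, me=0.085, mh=0.085):
    """Bulk gap plus the first-order quantum-confinement term, in eV."""
    r = radius_nm * 1e-9
    confinement_j = (H**2 / (8 * r**2)) * (1 / (me * M0) + 1 / (mh * M0))
    return bulk_gap_ev + confinement_j / E_CHARGE

# A ~10% spread in radius between batches shifts the absorption edge:
for r in (2.0, 2.2):
    print(f"radius {r} nm -> effective gap ~{brus_gap_ev(r):.2f} eV")
```

Because the confinement term scales as 1/r², smaller dots are also the most sensitive to batch-to-batch size variation, which is why tight process control matters most at short wavelengths.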
In spite of these challenges, companies have begun commercializing QD-based cameras, and these products are on the road to becoming mainstream.
A good early example is the Acuros camera, available from SWIR Vision Systems. That company is focused on manufacturing shortwave infrared quantum-dot cameras for use in applications where existing infrared cameras are too expensive. Its camera uses lead sulfide quantum dots, which absorb visible through shortwave infrared light. The detector in this camera currently has an average efficiency of 15 percent for infrared wavelengths, meaning that 15 percent of the photons that hit the detector end up as measurable signal. This is considerably lower than the efficiency of existing indium gallium arsenide technology, which can reach 80 percent. But with 15-µm pixels, the Acuros camera has a higher resolution than most infrared cameras. And it’s sold at a price that, the company indicates, should be attractive to commercial users who cannot afford a traditional infrared camera—for applications like maritime imaging, produce inspection, and industrial-process monitoring.
As for the consumer camera market, in 2017 TechCrunch reported that Apple had acquired InVisage, a company dedicated to creating quantum-dot cameras for use in smartphones. Apple, as usual, has been quiet about its plans for this technology.
It may be that Apple is more interested in the infrared capabilities of QD-based cameras than their visible-light performance. Apple uses infrared light and sensors in its facial recognition technology, and cheaper chips with higher resolution for this purpose would clearly interest the company.
Other companies are also pushing hard to solve the stability and efficiency problems with quantum-dot photo sensors and to extend the boundaries of what is possible in terms of wavelength and sensitivity. BAE Systems, Brimrose, Episensors, and Voxtel are among those working to commercialize quantum-dot camera technology. Academic groups around the world are also deeply involved in QD-based sensor and camera research, including teams at MIT, University of Chicago, University of Toronto, ETH Zurich, Sorbonne University, and City University of Hong Kong.
Within five years, it’s likely that we will have QD-based image sensors in our phones, enabling us to take better photos and videos in low light, improve facial recognition technology, and incorporate infrared photodetection into our daily lives in ways we can’t yet predict. And they will do all of that with smaller sensors that cost less than anything available today.
This article appears in the March 2020 print issue as “Snapshots by Quantum Dots.”
About the Authors
Peter Palomaki is the owner and chief scientist of Palomaki Consulting, where he helps companies understand and implement quantum dot technology. Sean Keuleyan is lead scientist at Voxtel, in Beaverton, Ore.
Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/tech-talk/consumer-electronics/gadgets/bosch-ar-smartglasses-tiny-eyeball-lasers
My priority at CES every year is to find futuristic new technology that I can get excited about. But even at a tech show as large as CES, this can be surprisingly difficult. If I’m very lucky, I’ll find one or two things that really blow my mind. And it almost always takes a lot of digging, because the coolest stuff is rarely mentioned in keynotes or put on display. It’s hidden away in private rooms because it’s still a prototype that likely won’t be ready for the CES spotlight for another year or two.
Deep inside Bosch’s CES booth, I found something to get excited about. After making a minor nuisance of myself, Bosch agreed to give me a private demo of its next-generation Smartglasses, which promise everything that Google Glass didn’t quite manage to deliver.
Post Syndicated from Mark Pesce original https://spectrum.ieee.org/consumer-electronics/gaming/sense-and-sensoribility
I’m writing a book about augmented reality, which forced me to confront a central question: When will this technology truly arrive? I’m not talking about the smartphone-screen versions offered up by the likes of Pokémon Go and Minecraft Earth, but in that long-promised form that will require nothing more cumbersome than what feels like a pair of sunglasses.
Virtual reality is easier. It can now be delivered, in reasonable quality, for a few hundred dollars. The nearest equivalent for AR, Microsoft’s second-generation HoloLens, costs an order of magnitude more while visually delivering a lot less. Ivan Sutherland’s pioneering Sword of Damocles AR system, built in 1968, is more than a half-century old, so you might expect that we’d be further along. Why aren’t we?
Computation proved to be less of a barrier to AR than anyone believed back in the 1960s, as general-purpose processors evolved into application-specific ICs and graphics processing units. But the essence of augmented reality—the manipulation of a person’s perception—cannot be achieved by brute computation alone.
Connecting what’s inside our heads to what is outside our bodies requires a holistic approach, one that knits into a seamless cloth the warp of the computational and the weft of the sensory. VR and AR have always lived at this intersection, limited by electronic sensors and their imperfections—all the way back to the mechanical arm that dangled from the ceiling and connected to the headgear in Sutherland’s first AR system, inspiring its name.
Today’s AR technology is much more sophisticated than Sutherland’s contraption, of course. To sense the user’s surroundings, modern systems employ photon-measuring time-of-flight lidar or process images from multiple cameras in real time—computationally expensive solutions even now. But much more is required.
Human cognition integrates various forms of perception to provide our sense of what is real. To reproduce that sense, an AR system must hitch a ride on the mind’s innate workings. AR systems focus on vision and hearing. Stimulating our eyes and ears is easy enough to do with a display panel or a speaker situated meters away, where it occupies just a corner of our awareness. The difficulty increases exponentially as we place these synthetic information sources closer to our eyes and ears.
Although virtual reality can now transport us to another world, it does so by effectively amputating our bodies, leaving us to explore these ersatz universes as little more than a head on a stick. The person doing so feels stranded, isolated, alone, and all too frequently motion sick. We can network participants together in these simulations—the much-promised “social VR” experience—but bringing even a second person into a virtual world is still beyond the capabilities of broadly available gear.
Augmented reality is even harder. It doesn’t ask us to sacrifice our bodies or our connection to others. An AR system must measure and maintain a model of the real world sufficient to enable a smooth fusion of the real with the synthetic. Today’s technology can just barely do this, and not at a scale of billions of units.
Like autonomous vehicles (another blend of sensors and computation that looks easier on paper than it proves in practice), augmented reality continues to surprise us with its difficulties and dilemmas. That’s all to the good. We need hard problems, ones that can’t be solved with a straightforward technological fix but require deep thought, reflection, insight, even a touch of wisdom. Getting to a solution means more than building a circuit. It means deepening our understanding of ourselves, which is always a good thing.
Augmented reality in a contact lens? Science fiction writers envisioned the technology decades ago, and startups have been working on developing an actual product for at least 10 years.
Today, Mojo Vision announced that it has done just that—put 14K pixels-per-inch microdisplays, wireless radios, image sensors, and motion sensors into contact lenses that fit comfortably in the eyes. The first generation of Mojo Lenses is powered wirelessly, though future generations will have batteries on board. A small external pack, besides providing power, handles sensor data and sends information to the display. The company is calling the technology Invisible Computing, and company representatives say it will get people’s eyes off their phones and back onto the world around them.
At CES, among the bigger, brighter TVs, mock smart homes that seem to know more about you than you do, and all the Alexa- and Google-Assistant-enabled devices eager to talk to you, are a few products that defy categorization. Some of these new products grabbed my attention because they involve truly innovative technology. Some are just clever and cheap enough to catch on, and some are a little too wild to find a big market—but it’s still impressive when a developer realizes an extreme dream.
So, as CES 2020 retreats into history, here is my top 10 list of CES gadgets that at least got my attention, if not a spot on my shopping list. There is no way to rank these in order of importance, so I’ll list them roughly by size, from small to big. (The largest products demonstrated at CES, like John Deere’s AI-powered crop sprayer, Brunswick’s futuristic speedboat, or Hyundai’s flying taxi developed in partnership with Uber Elevate, can’t be called gadgets, so they didn’t make this roundup.)
As far as I know, the current state of the art in indoor mosquito management is frantically trying to determine where that buzzing noise is coming from so that you can whack the damn bug before it lands and you lose track of it.
This “system” rarely works, but at CES this week, we found something better: Israeli startup Bzigo (the name of both the company and the product), which makes a mosquito detection and tracking system that combines an IR camera and laser designator with computer vision algorithms that follow mosquitoes as they fly and tell you exactly where they land to help you smash them. It’s not quite as deadly as the backyard star wars system, but it’s a lot more practical, because you’ll be able to buy one.
Bzigo’s visual tracking system can reliably spot mosquitoes at distances of up to 8 meters. A single near-IR (850 nm) camera with a pair of IR illuminators and a wide-angle lens can spot mosquitoes over an entire room, day or night. Once a bug is detected, an eye-safe laser will follow it until it lands and then draw a box around it so you can attack with your implement of choice.
At maximum range, you run into plenty of situations where the apparent size of a mosquito can be less than a single pixel. Bzigo’s AI relies on a mosquito’s motion rather than an identifiable image of the bug itself, and tracking those potentially intermittent and far-off pixel traces requires four 1GHz cores running at 100% continuously (all on-device). That’s a lot of oomph, but the result is that false positives are down around 1%, and 90% of landings are detected. This is not to say that the system can only detect 90% of bugs—since mosquitoes take off and land frequently, they’re almost always detected after just a few flight cycles. It’s taken Bzigo four years to reach this level of accuracy and precision with detection and tracking, and it’s impressive.
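That 90 percent figure compounds quickly across flight cycles. A back-of-envelope sketch, assuming each landing is an independent detection opportunity (my assumption, not something Bzigo has stated):

```python
# If each landing is detected with probability 0.9 and landings are
# independent (an assumption), the chance a mosquito evades detection
# drops geometrically with every flight cycle.
def miss_probability(per_landing_detect=0.9, landings=3):
    """Probability that every one of `landings` landings goes undetected."""
    return (1.0 - per_landing_detect) ** landings

for n in (1, 2, 3):
    print(f"after {n} landing(s): miss chance {miss_probability(landings=n):.1%}")
```

By the third landing the miss probability is down to one in a thousand, which is why a 90 percent per-landing rate is effectively near-certain detection in practice.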
The super obvious missing feature is that this system only points at mosquitoes, as opposed to actually dealing with them in a direct (and lethal) way. You could argue that it’s the detection and tracking that’s the hard part (and it certainly is for humans), and that automated bug destruction is a lower priority, and you’d probably be right.
Or at least, Bzigo would agree with you, because that’s the approach they’ve taken. However, there are plans for a Bzigo V2, which adds a dedicated mosquito killing feature. If you guessed that said feature would involve replacing the laser designator with a much higher powered laser bug zapper, you’d be wrong, because we’re told that the V2 will rely on a custom nano-drone to destroy its winged foes at close range.
Bzigo has raised one round of funding, and they’re currently looking to raise a bit more to fund manufacturing of the device. Once that comes through, the first version of the system should be available in 12-14 months for about $170.
A useful, though somewhat eerie, technology will start guiding passengers through airline terminals later this year. Developed by Misapplied Sciences based in Redmond, Wash., and being rolled out by Delta Air Lines, Parallel Reality technology will replace the typical information-packed airport display screen that shows all departing flights with individual messages customized for each traveler. The real magic? All of those messages will appear simultaneously on a single large screen, but you’ll just see the one intended for you.
In 2009, the number of television sets per U.S. household averaged 2.6; by 2015, that dropped to 2.3. While the official U.S. government numbers haven’t been updated since, indications are that the average is quickly dropping, as more and more people use their phones, tablets, or laptops for solo entertainment, only moving to a big screen for group viewing, if at all.
“We want that main spot,” said Larson. So do the other TV manufacturers, of course.
Maybe we won’t see a breakthrough new technology at CES 2020. But it’s nice to see consumer electronics companies thinking about our pain points.
LG, kicking off CES press day, was all about AI, or, as the company brands it, ThinQ. Its long-term vision was a grandiose, familiar one in which all the objects that use electricity in your house will talk amongst themselves to make your life perfect.
LG’s nearer-term applications of AI to household appliances were more interesting. For one, the company promised that this year’s models of its robot vacuum, the R9, will learn from mistakes—when it gets stuck in a tight corner or under a cabinet, say, and you have to retrieve it, it won’t go there again (kind of like my cat).
In another useful application of AI, LG plans to introduce washing machines that will detect the type of fabrics in the pile of clothes you shove in, automatically setting the appropriate wash cycle settings. (I reached out to the company for information on sensors and other details, and will update when I get an answer.)
While CES doesn’t officially start until Tuesday, announcements and teasers always begin flowing long before the show. From sifting through those materials, I’ve started to spot a few trends.
CES 2020, which kicks off on Tuesday, will be full of Internet of Things (IoT) devices made smarter than ever by artificial intelligence, TVs and home appliances that beg you to converse with them, and wearables that tell you more about yourself than you likely want to know.
But the best part of CES is the moment you spot a tiny treasure—that little gadget that is absolutely useful, and perfectly designed. Some tiny treasures are tucked into an array of gear from major manufacturers, and some are the sole products of new companies betting their bank accounts on a CES launch. There’s also joy—or at least entertainment value—in trying out a product that has automated something that really doesn’t need to be automated.
“Okay, Google. Turn the volume up to max.”
Imagine if this voice command was applied to your Google Home system without you hearing it. A group of researchers in Japan have shown that this is possible, by using strategically placed speakers that emit ultrasound to hack voice-assisted devices.
Voice assistance systems such as Siri, Google Assistant, and Amazon Alexa have grown in popularity in recent years, but they remain vulnerable to several different forms of hacking—attacking either the software or hardware that the systems rely on.
“An effective countermeasure against these attacks is to fix the vulnerability,” explains Ryo Iijima, a researcher at Waseda University. “In contrast, our new Audio Hotspot Attack technique leverages a physical phenomenon [that occurs as ultrasound waves travel] in the air, which cannot be changed in nature.”
His team’s new hacking technique, described in a study published in the 19 November edition of IEEE Transactions on Emerging Topics in Computing, relies on parametric speakers. That type of speaker is capable of emitting ultrasound (inaudible to the human ear) in a directional manner. Placed at the right location and directed towards the voice assistance system, these speakers can emit ultrasound waves containing the secret voice command, which only becomes audible at certain distances.
The command becomes audible only at a distance because of how ultrasound propagates: air is slightly nonlinear, and as the amplitude-modulated ultrasonic carrier travels, that nonlinearity gradually demodulates it, reproducing the embedded voice command as ordinary audible sound in close proximity to the targeted voice-assisted system. What’s more, the researchers used an amplifier to ensure that the volume of the voice command is normal by the time it reaches its target.
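For illustration, the signal such a speaker emits can be sketched as an audio waveform amplitude-modulated onto an ultrasonic carrier. The demodulation in air is not modeled here, and the carrier frequency, stand-in tone, and modulation depth are arbitrary choices for the sketch, not values from the study:

```python
# Sketch of a parametric-speaker drive signal: an audio tone
# amplitude-modulated onto an ultrasonic carrier. Air's nonlinearity
# (not modeled) later recovers the audio near the target.
import math

CARRIER_HZ = 40_000     # ultrasonic carrier (inaudible); arbitrary choice
AUDIO_HZ = 1_000        # stand-in for the voice command
SAMPLE_RATE = 192_000   # must exceed twice the carrier frequency

def modulated_sample(n, depth=0.8):
    """One sample of the AM signal: (1 + depth*audio) * carrier."""
    t = n / SAMPLE_RATE
    audio = math.sin(2 * math.pi * AUDIO_HZ * t)
    carrier = math.sin(2 * math.pi * CARRIER_HZ * t)
    return (1 + depth * audio) * carrier

# Generate 10 ms of signal; the envelope swings between 0.2 and 1.8.
samples = [modulated_sample(n) for n in range(SAMPLE_RATE // 100)]
print(f"peak amplitude ~{max(abs(s) for s in samples):.2f}")
```

All of the emitted energy sits around 40 kHz (carrier plus sidebands), which is why a bystander between the speaker and the target hears nothing even though the command is riding on the beam.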
The attack could be carried out with a single parametric speaker directed at the voice assistance system, or two speakers whose sound waves “cross” right at the location of the system. The first approach yields a stronger voice command, while the latter approach offers more precision and a lower chance of being heard by someone standing close to the target.
The researchers tested their ultrasound approach with 18 volunteers under two different scenarios: a standard-size room and a long hallway. For their experiments, they attempted to impose their ultrasound commands onto two smart speakers, Amazon Echo and Google Home, from several distances in 0.5-meter increments. A smartphone was used to generate malicious voice commands from the parametric loudspeakers.
An attack was considered successful if three consecutive voice commands were accepted at a given distance. The results suggest that attacks from 3.5 meters are the most successful, but the hallway experiments show that this technique is effective from distances as far as 12 meters. Google Home was more vulnerable to attack than Amazon Echo.
“We knew the theory of directional sound beams before we started our experiments,” says Iijima. “But what was surprising to us after we performed the experiments was that such directional sound beams were actually non-recognizable by the participants of our experiments, while [the sound] successfully activated all speakers.”
In this study, the voiceprint technology required to authenticate a user was deactivated, so a generic, computer-generated voice was able to activate the systems and provide commands. However, the research team says they’ve done preliminary experiments with recorded voice clips of legitimate users, finding that these could be transmitted inaudibly via ultrasound and be used to activate the voice assistance systems.
A key limitation of this approach is the fact that any objects between the parametric speakers and the system being hacked would interfere with the transmission of the sound waves. “One possible method of overcoming this limitation would be to install parametric loudspeakers on a ceiling, thus creating a ‘sound shower,’” Iijima explains.
The PlayStation Portable (PSP) was the labradoodle of the electronic games industry, an odd crossbreed, muscular and fun, but also kind of pricey. The PSP, a combination game system and media player, wasn’t the most popular handheld system, but it was a crackling cauldron of surprisingly advanced tech for its time.
Handheld game players had been around for decades. The first ones were dedicated to playing a single game; a popular early example was Mattel’s Football, introduced in 1977. Handheld players never went away entirely, but mostly languished until Nintendo came up with a spectacularly winning formula in 2001. The company had already established its credentials with larger console systems, and when it introduced the Game Boy Advance handheld, it already had not just popular games but popular game franchises. The company successfully migrated its Mario, Zelda, and Pokémon series to the Game Boy Advance, which became hugely popular as a result.
In the console market, Sony and Microsoft had been chipping away at Nintendo’s dominance for years with PlayStation and Xbox, respectively. But as late as 2003, nobody had managed to provide any real competition to Nintendo in handheld systems.
If anyone could give Nintendo a run for its money there, it was probably Sony. In 1995, Sony had pushed into the U.S. and European console markets with the original PlayStation by offering games that made use of superior graphics, coupled with outstanding multimedia capabilities. (For example, subsequent models of the PlayStation would have the ability to play Blu-ray Discs.) It could use the same playbook for a handheld system. The best time to catch a well-established competitor is when the competitor is transitioning from one generation of technology to the next. In 2003, Nintendo was known to be preparing its follow-on to the Game Boy Advance. That year, Sony announced the PSP.
Nintendo got its next portable, the DS, to the market in November 2004. The PSP debuted in Japan in December of 2004; distribution to North America began a few months later. The Nintendo DS embodied several innovations, the most notable of which was a second LCD screen (this one a touch screen) that worked in tandem with the first. But where the DS was basically just a good handheld game system, the PSP was a new hybrid: a combination game system and media player.
And it was a beast. Designed by a team led by Shinichi Ogasawara at Sony, the PSP debuted as the most powerful handheld on the market, with remarkable graphics capabilities. It was based on a pair of MIPS R4000 microprocessors from MIPS Technologies, which ran at a then-impressive 333 megahertz. One functioned as the central processing unit and the other as a media processor. There was also a dedicated graphics processor, which Sony called the Graphics Core, that ran at 166 MHz. It could handle 24-bit color graphics, about eight bits better than most comparable units at the time. It also had one of the largest screens in the handheld-gaming world, a 480- by 272-pixel thin-film-transistor LCD that measured 4.3 inches on the diagonal. The original version had 32 megabytes of onboard memory, considered inadequate by some, so Sony doubled it in subsequent models. It had a slot for a Memory Stick Duo, a Sony proprietary flash-memory format, for additional storage (the Memory Stick Duo was soon renamed Memory Stick Pro Duo). It had stereo speakers built in, a headphone jack, and an embedded microphone.
For a handheld in that time period, it had an immense array of connectivity options, including Wi-Fi, Bluetooth, IrDA (an infrared-based protocol), and USB. The system software included a clunky but usable browser for Web surfing. It could connect to Sony’s PlayStation. Starting in 2006, it came with access to the PlayStation Network, a portal for games, video, and music that Sony launched that year. The network let users buy games and get instant access to them, and it also let them play against each other online.
The PSP could store and play music, in either the popular MP3 format or Sony’s proprietary ATRAC system. But it turned out that music would always be a subsidiary capability, partly because in those days Apple owned the music-player market with its iPod player and iTunes software. Also, where other game consoles generally used game cartridges, the PSP used a Sony-proprietary optical format called Universal Media Disc (UMD). Discs were sold containing a game or a movie, but UMD was not a recordable format. So users had to buy prerecorded media on UMDs or put their own tunes and video on a Memory Stick Pro Duo, which turned out to be a process that was annoyingly clunky for music and exasperatingly difficult for video.
In sporadic efforts to exploit the PSP’s impressive media-player features, Sony kept adding new potential uses. In 2008, the company produced a GPS-enabled accessory that turned the PSP into a navigation device for driving. In 2009, Sony secured rights to distribute electronic versions of some of the best-selling comic books around the world. The Digital Comics Reader never really took off (Sony shut it down in 2012), but it was emblematic of the varied and creative uses Sony envisioned for the PSP platform.
Sony had its own game franchises to exploit, including the Final Fantasy, God of War, and Grand Theft Auto series, all of which sustained the platform’s popularity. We should also pause here to pay tribute to a quirky, mostly acclaimed fantasy series called Jak and Daxter that had a small but die-hard following (including this author). The point is that Sony did attract third-party game developers, which was critical, but Nintendo was consistently more successful at that, giving it the edge in the number of titles available.
In the final analysis, the technical dazzle of the PSP was undercut somewhat by the proprietary formats, notably the UMD, which Sony devised specifically for the PSP and which was never used on any other device. The company stopped manufacturing the PSP in 2014 and the UMD not long after; by then the PSP’s successor, the PlayStation Vita, had been around for 2 years. Sony ended production of the Vita in March 2019.
Astoundingly, Sony still produces memory sticks, but its format was long ago eclipsed by the SD and microSD card formats. Recalling the old Betamax misfire, Digital Audio Tape, MiniDiscs, and the forlorn ATRAC audio-compression algorithm, it’s hard to escape the conclusion that Sony has been plagued by really bad decisions in the realm of formats.
At its introduction, the PSP was priced at US $249 in the United States, almost $50 more than the Nintendo DS. It was not an unreasonable price, given all the relatively advanced tech that went into a PSP. Even so, PSP accessories, such as the expensive Memory Stick Pro Duo cards, could make the price disparity greater.
Despite all the drawbacks (higher cost, fewer titles, proprietary formats), Sony still sold over 80 million PSPs in the nine years it was in production, from late 2004 to 2014. The total sales figure was only a bit more than half of what the Nintendo DS sold between 2004 and 2013. But it was a pretty good run for a powerful device that offered a very compelling preview of the same sorts of things people would be doing with smartphones nearly a decade in the future.
The replacement of analog media with digital formats began with recorded music in the early 1980s, and then progressed inexorably through recorded video, still photography, broadcast radio, and broadcast television. In Europe, digital broadcast radio began as a small research project in 1981, at the Institut für Rundfunktechnik, in Munich. At roughly the same time, Japan began exploring digital television. The contrast between the two ostensibly similar projects is important: When consumer and broadcast companies in various countries began pursuing digital-TV technology, developing standards became as much a political process as it was a technological one. Radio, however, escaped that politicization, in large part because regional markets were content to adopt their own standards.
That was a plus early on, because it let developers sidestep the long and contentious standards wars that bogged down other technologies. But it was probably a net negative in the long run because it left global markets fragmented. Europe stuck with its own digital-radio system, Digital Audio Broadcasting (DAB), which European stations began using in 1995, while North American countries elected to pursue an entirely different technology. This American system, called HD Radio, is a proprietary technology created by iBiquity Digital Corp. The Federal Communications Commission selected HD Radio as the standard for the United States in 2002; to this day, any company that wants to market an HD Radio set or operate a radio station that broadcasts in the HD format has to pay fees to DTS Inc., which acquired iBiquity in 2015.
There were many good reasons for making broadcast radio digital. Most of them stem from one giant advantage: spectral efficiency. Any modulated radio signal has sidebands, swaths of spectrum that are higher and lower than the carrier frequency that are unavoidably created by the modulation process. These sidebands are particularly pronounced for FM. They reduce an FM signal’s spectral efficiency, which is defined as the amount of information the signal can carry for a given amount of bandwidth. Because of data compression and other techniques, digital radio signals can have several times the spectral efficiency of analog FM ones. In both DAB and HD Radio, this spectacularly higher spectral efficiency means a huge amount of extra bandwidth. It’s used to provide not just higher audio quality and supplementary program data, but also, in some cases, a couple of additional high-quality, stereo radio channels, all within the bandwidth formerly occupied by a single station’s lower-quality analog signal.
The first radio station to adopt DAB was NRK Klassisk, in Norway, which began digital broadcast operations on 1 June 1995. (Twenty-two years later, Norway would become the first country to cease all analog radio broadcasting in favor of digital.) Several other DAB stations, in England, Germany, and Sweden, followed within a few months. It nevertheless took several years for DAB to catch on. Part of the reason for the lag was the early radio sets themselves, which were clunky and expensive. In fact, it would be years before a company called Pure Digital would kick-start the DAB market with a compact, high-performance, and yet inexpensive radio called the Evoke-1.
The story starts with an English company with a different name: Imagination Technologies. It was then, as now, a developer of integrated circuits for video and audio processing. In 2000, it began selling a digital tuner IC but couldn’t find any buyers among the big radio manufacturers. The IC’s biggest advocate inside the company was CEO Hossein Yassaie, an Iranian-born engineer who had earned a Ph.D. at the University of Birmingham, in England, in the application of charge-coupled devices in sonar. He helped design and build a complete radio receiver that incorporated the chip to demonstrate its capabilities.
That radio, which bore the VideoLogic label, looked like almost every other rack stereo component at the time. It was a black rectangular box with a faceplate that was short and wide. The faceplate shared the same simple, elegant styling common to most audio equipment of the day.
In 2001, Imagination Technologies’ radio operation, which by this point was also called VideoLogic, introduced what the company claims was the world’s first portable digital radio. This one carried yet another new brand name: Pure. From a visual standpoint, the Pure DRX-601EX was very different from its predecessor, with a retro-tinged style. It stood upright, like an old-school lunchbox, and it had a blonde-oak finish. The triangular faceplate had a chrome finish evocative of a manta ray and took up only part of the front surface. Much of the front of the radio was a grille with rakishly slanted openings covering the speakers. The curved handle on top was solid, but nonetheless a visual reference to the plastic straps on the inexpensive transistor radios of the 1960s. It was striking visually, and it sounded great, but, at £499, it was not cheap.
What does all this have to do with the Evoke-1? We’re getting there. Later that same year, 2001, VideoLogic got involved in a project with the BBC, which had been operating several DAB radio channels since 1995 and was eager to increase its audiences for them. The Digital Radio Development Bureau (DRDB), a trade organization that promoted DAB and was partly funded by the BBC, seized on the 100th anniversary of Marconi’s first wireless transmission as a hook. With that in mind, the DRDB worked with VideoLogic to create a special commemorative edition DAB radio that it would sell for only £99. The radio hit the market late in 2001, and sold out rapidly. There was clearly growing demand for digital radio in the United Kingdom.
Emboldened by the brisk sales, in 2002, VideoLogic introduced the Pure Evoke-1. It shared the DRX-601EX’s distinctive combination of blonde wood and brushed chrome, but with this model, the company seemed to tip the retro/modern balance toward the former. There was more brushed chrome on the face, and the single speaker was covered with a perforated screen reminiscent of the grilles on ’60s transistor radios. The company also sold a separate speaker that could be plugged in to provide true stereo sound.
A year later it was still the only DAB radio on the market available for less than £100, according to a review in Radio Now, which praised the Evoke-1 for sounding “amazing” for a portable. “The radio is pretty sensitive—if there’s a hint of signal the Evoke-1 will find it through its telescopic aerial,” the review went on. The Evoke-1 ran only on mains power (the subsequent Evoke-2 would have built-in stereo speakers and also run on batteries).
The availability of the affordable Evoke-1 radios helped DAB finally take off in the United Kingdom. In 2003, the company claimed it was selling more than twice as many Evoke-1s as any competitive portable radio. In the ensuing years, hundreds of stations in Europe would switch to DAB, expanding the market. There are now over 40 countries with radio stations that conform to either the DAB standard, or its follow-on, called DAB+. The company said its cumulative DAB radio sales exceeded 5 million units as of 2014.
Along the way, Imagination’s radio unit ditched the name VideoLogic and began calling itself Pure. Pure was successful selling its DAB radio ICs to other radio manufacturers, but eventually sales of its own Evoke radios began to decline. The Pure operation started losing money in 2009. In 2016, shortly after Yassaie stepped down as CEO, Imagination Technologies sold Pure to an Austrian investment operation called AVenture for a mere £2.6 million. Pure not only continues to make DAB+ radios, it claims it is still one of the leading sellers in the category.
And Yassaie? “In recognition of his services to technology and innovation,” he was knighted by Elizabeth II, Queen of the United Kingdom, on New Year’s Day in 2013. At the time, Imagination Technologies had market capitalization of £1 billion and was employing more than 1,300 people.
U.S. radio stations, by the way, began adopting HD Radio in 2003. In contrast to Europe, digital radio in the United States has been largely ignored by consumers, despite the existence of thousands of HD stations in the country. Proponents have attempted to popularize it by getting HD Radio receivers factory-installed in cars, with some success. This year, the percentage of cars with factory-installed HD Radio reached 50 percent. However, by way of comparison, satellite radio (Sirius XM) is installed in 75 percent of cars sold in the United States.
Walkie-talkies have been around since the 1940s, but until the late 1990s the available models were generally either powerful units used by the military and law enforcement or children’s toys with severely limited range. That all changed in 1996 with the advent in the United States of the Family Radio Service, a personal-radio system using spectrum in the ultrahigh-frequency band. Motorola was reportedly first to market with FRS walkie-talkies in 1997, and it was the leading seller for at least the next three years—a brief but intense heyday for walkie-talkies as a popular consumer product. Moto’s iconic T250 Talkabouts were the most popular model during that stretch.
Portable two-way radios were invented in the late 1930s, almost simultaneously by engineers working separately in Canada and in the United States. Like so many technological innovations, walkie-talkies were first employed in the military, in this case by the Allies in World War II, starting around 1943. The very first versions of the technology were already referred to as “walkie-talkies.” They earned the “walkie” half of the nickname inasmuch as the electronics, based on vacuum-tube circuits powered by carbon-zinc or lead-acid batteries, were just small enough and light enough to fit in a backpack. The most important of these early models was the SCR-300 [PDF], introduced in 1943 by the Galvin Manufacturing Co. (the precursor of Motorola) for the U.S. Armed Forces. It weighed either 14.5 or 17 kilograms (32 or 38 pounds), depending on the battery it was using. Galvin began shipping smaller, handheld models a year later, though to distinguish them from the backpack models these were referred to as “handie-talkies.” The introductory appellation was the one that stuck, though.
Running through the history of walkie-talkies are decades-long efforts in the United States and other countries to find a suitable frequency band where using them wouldn’t interfere with other radio equipment. Starting in 1960, the U.S. Federal Communications Commission required a license for commercial use of mobile radio equipment used on land (as opposed to at sea). The FCC would later designate a category called General Mobile Radio Service (GMRS). Public-safety and emergency-response agencies soon began relying on GMRS radios, including tabletop sets and walkie-talkies, which also became useful for marine communications and some commercial applications in mining and the like.
Around the same time, the FCC also allocated radio spectrum for unlicensed walkie-talkie use. This spectrum was close to Citizens Band frequencies, and as both consumer walkie-talkies and CB radios grew in popularity in the 1970s, they interfered with each other more and more often, according to a brief account published by walkie-talkie manufacturer Midland. To stop walkie-talkies from interfering with CB radio, the FCC moved walkie-talkie usage to 49 megahertz, a band also open to unlicensed use. By the late 1980s and early ’90s, more and more companies were using those unlicensed bands for yet other radio-based products such as cordless phones and baby monitors, and interference among all those products—including walkie-talkies—again became a problem, according to the Midland account.
The countdown to FRS began in 1994, when Radio Shack (a subsidiary of Tandy Corp.) brought a proposal to the FCC that would once again change the frequency for unlicensed land-mobile radios, along with implementing several other ideas aimed at expanding their use. “Working with Tandy Corp., Motorola was a leading advocate for the creation of the Family Radio Service in the U.S. and was primarily responsible for developing the technical and operational rules ultimately adopted by the Federal Communications Commission,” a spokesman for Motorola Solutions told IEEE Spectrum in an email.
Two years later, in establishing FRS, the FCC set aside a total of 14 channels in two groups, one clustered slightly above 462.5 MHz and the other slightly above 467.5 MHz. Other countries soon followed suit. The European Union in 1997 approved something called Private Mobile Radio, or PMR446 (as its frequency band is 446 MHz). Australia, China, and many other countries similarly cleared spectrum for unlicensed land-mobile radio use.
The bands vary from country to country, but they’re all in the ultrahigh-frequency range. For compact mobile radios, there are several advantages to UHF. One is that the relatively high frequencies mean lots of available bandwidth, which translates into channels that can be well separated and with little interference from adjacent ones. Also, short wavelengths mean small antennas.
FRS in the United States originally permitted a maximum power output of half a watt. Though manufacturers would often claim ranges of 40 kilometers (25 miles) or more, these distances were wildly inflated. Under real-world conditions, a half-watt of power typically translates to a range of perhaps 1–4 km, depending on various conditions including the presence or absence of obstructions. By contrast, GMRS radio output can be as high as 5 watts, enabling those radios to reach 30–50 km consistently, again depending on local conditions.
FRS radios use narrow-band frequency modulation, with channels spaced at 25-kilohertz intervals. Since 2017, all FRS channels have coincided with ones used by GMRS radios. Though interference between the two is possible, users of unlicensed FRS equipment and licensed GMRS gear have coexisted mostly peacefully, with the FCC occasionally tinkering with the rules to keep the peace.
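The original 14-channel plan can be reconstructed from those numbers. The base frequencies below (462.5625 MHz for channels 1–7 and 467.5625 MHz for channels 8–14) are the standard published FRS assignments, used here as assumptions alongside the 25-kHz spacing mentioned above:

```python
# A minimal sketch of the original 14-channel FRS plan: two groups of
# seven channels, each stepped at 25 kHz. The base frequencies are the
# standard FRS assignments, assumed here for illustration.

SPACING_MHZ = 0.025  # 25 kHz channel spacing, expressed in MHz

def frs_channels():
    """Return {channel_number: frequency_in_MHz} for the original 14 channels."""
    channels = {}
    for i in range(7):  # channels 1-7, lower group near 462.5 MHz
        channels[i + 1] = 462.5625 + i * SPACING_MHZ
    for i in range(7):  # channels 8-14, upper group near 467.5 MHz
        channels[i + 8] = 467.5625 + i * SPACING_MHZ
    return channels

for ch, mhz in frs_channels().items():
    print(f"Channel {ch:2d}: {mhz:.4f} MHz")
```

Each group spans just 150 kHz, which shows how little UHF spectrum the whole service needed.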
The combination of frequency modulation and the ultrahigh-frequency band made FRS walkie-talkies immensely more reliable and clear sounding than any personal radios that came before. Even today, under some circumstances, FRS walkie-talkie communications can be clearer and more reliable than cellphones.
Motorola’s participation in defining the rules that would govern the new FRS category helped it get to the market first, in 1997, according to an article in the Los Angeles Times in 2000. That L.A. Times piece reported that Motorola had been the sales leader in the walkie-talkie market ever since. Other manufacturers of FRS radios at the time included Audiovox, Cobra, Icom, Kenwood, Midland, and Radio Shack.
Prices of FRS sets could range from $40 or $50 to several hundred dollars for a pair, and many manufacturers also sold them in packs of four, for family use. Historical sources indicate that a pair of T250s sold for under $100; $85 was a commonly cited price. That would make them far more expensive than the toy walkie-talkies that were common before FRS, but considerably cheaper than the most expensive two-way handheld radios available at the time.
Motorola’s T250s had superb industrial design. They were squat, about 17 centimeters long including the antenna, and also light, at about 200 grams (just over 7 ounces). The most popular color was bright yellow, and the case was edged with rubbery black grips. It turned out they were as rugged as they looked, which appealed to the outdoor workers and adventurers who needed them and also to city dwellers who longed for outdoor adventure.
T250s had jacks for earphones and external microphones (the Talkabout Remote Speaker Microphone was sold separately) that allowed users to operate the devices hands-free in something called VOX mode. T250s had a backlit LCD screen, and they ran on three AA batteries. Motorola followed up the T250 with the T280, a version that had a rechargeable nickel-metal-hydride battery.
That same L.A. Times article also mentions that the demand for FRS radios had taken off in 1999. Another article reported that available models grew from 14 in 1997 to 73 in 1999. In general, the rise of cellular telephony helped amplify demand for two-way wireless. At the time, cellular telephony wasn’t really new, but its popularity wasn’t yet universal, even in developed countries. Many people considered walkie-talkies a viable alternative to cellphones. There were obvious drawbacks: notably, the short range and the occasional lack of privacy associated with walkie-talkies—everyone using the same channel could hear everyone else’s communications, if they were close enough to each other. And yet some people regarded these as acceptable trade-offs for the convenience and the lack of subscription fees. And because cellular coverage was far from ubiquitous, many used walkie-talkies as adjuncts to their cell service, for example in remote areas or when traveling.
And manufacturers did find ways of adding a measure of privacy. Though FRS radios before 2017 were limited to 14 channels, some models, including the T250, could in effect expand the number with a feature sometimes referred to as “privacy codes.” They were part of a squelching scheme that filtered out signals from other users of the same channel who weren’t using the code. Multiplying the number of channels by the number of privacy codes on the T250 yielded 532 combinations; in practice, if a pair of T250 users wanted to keep their conversation private, they were likely to be able to do so.
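The squelch scheme is simple enough to model. In the toy sketch below, a receiver passes audio only when both the channel and the code match its own settings; the counts (14 channels, 38 codes) follow the article, and the rest is an illustrative assumption. Note that the code only filters what a receiver plays back; it does not encrypt the transmission, so a radio with squelch disabled still hears everything on the channel.

```python
# A toy model of the "privacy code" squelch described above. Assumed
# setup: 14 FRS channels and 38 sub-audible codes, as on the T250.

CHANNELS = 14
PRIVACY_CODES = 38

def receiver_hears(rx_channel, rx_code, tx_channel, tx_code):
    """Squelch logic: audio passes only on a full channel+code match."""
    return rx_channel == tx_channel and rx_code == tx_code

# Total distinct channel/code combinations available to a pair of users.
combinations = CHANNELS * PRIVACY_CODES
print(combinations)  # 14 x 38 = 532, as cited above

# A user on channel 5, code 12 does not hear channel 5, code 7 traffic...
assert not receiver_hears(5, 12, 5, 7)
# ...but does hear a partner on the same channel and code.
assert receiver_hears(5, 12, 5, 12)
```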
Other features identified in an article in The New York Times published in 2000 included notification tones when someone else was sending, channel scanners that would identify the busiest or least-used channels, and rechargeable batteries, as well as folding antennas, altimeters, FM radio, and GPS.
The late ’90s heyday of walkie-talkies was relatively brief. Cellular telephony got more popular and more technologically advanced in the 2000s, and that drastically reduced both the usefulness of and the necessity for a separate wireless two-way communications medium. Nevertheless, in 2017, the FCC gave FRS a bit of a boost when it added more channels, bringing the total to 22, and increased, from 0.5 W to 2 W, the maximum permitted power for channels 1 to 7 and 15 to 22. The extra power could mean ranges of perhaps 4 km or more, depending on conditions. As part of the same rules revision, the FCC also made the FRS and GMRS bands entirely overlapping. Only GMRS’s license requirement and the permitted power levels now differentiate the two.
Today, people still buy walkie-talkies for various reasons, among them the fact that cell coverage is not now and never will be truly ubiquitous, especially in remote areas on land and at sea. Motorola Solutions continues to produce Talkabout walkie-talkies—the product line now consists of a dozen different models, ranging in price from $35 to $120 a pair. Market information is lacking, surprisingly so. But the Talkabout series is often acclaimed as the most successful line of walkie-talkies ever.