Tag Archives: Gadgets/Audio/Video

MicroLED Displays Could Show Up in Products as Soon as 2020

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/consumer-electronics/audiovideo/microled-displays-could-show-up-in-products-as-soon-as-2020

Companies have demonstrated striking prototypes. Now they have to master mass production

One of the most striking things about the prototype microLED display that Silicon Valley startup Mojo Vision unveiled in June was its size. At half a millimeter across, it’s barely bigger than a single pixel from the microLED TV prototype Samsung showed off in 2018. That both use versions of the same technology is remarkable, and it portends big potential for screens made of superefficient and bright micrometer-scale gallium nitride LEDs. Impressive prototypes have proliferated during the past year, and now that companies are turning to the hard work of scaling up their manufacturing processes, displays could appear in some products as soon as late next year.

“We’re seeing really good progress on all fronts, and we’re seeing more and more companies coming out with prototypes,” says Eric Virey, an analyst who follows the nascent industry for Yole Développement.

The driving force behind microLED displays remains a combination of brightness and efficiency that LCD and OLED technology can’t come close to. One demo of a smartwatch-size display by Silicon Valley–based Glo shines at 4,000 nits (candelas per square meter) while consuming less than 1 watt. An equivalent LCD display would burn out in seconds trying to meet half that brightness.
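To put those numbers in perspective, here is a back-of-the-envelope conversion from luminance to luminous flux. It assumes a Lambertian emitter and a panel area of roughly 15 square centimeters (about a 1.5-inch square; Glo has not published exact dimensions), so treat it as an illustration rather than a specification:

    \Phi = \pi L A \approx \pi \times 4000\,\tfrac{\mathrm{cd}}{\mathrm{m}^2} \times 1.5\times10^{-3}\,\mathrm{m}^2 \approx 19\ \mathrm{lm},
    \qquad \eta \approx \frac{19\ \mathrm{lm}}{1\ \mathrm{W}} \approx 19\ \mathrm{lm/W}.

Even under these rough assumptions, sustaining 4,000 nits on roughly a watt is something a backlit LCD, which loses most of its light in polarizers and color filters, cannot match.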

The companies involved broadly fit into two categories. Some are making monolithic displays, where the gallium nitride pixels are made as a complete array on a chip and a separate silicon backplane controls those pixels. And others are using “pick and place” technology to transfer individual LEDs or multi-microLED pixels into place on a thin-film-transistor (TFT) backplane. The former is suited to microdisplays for applications like augmented reality and head-up displays. The latter is a better fit for larger displays.

For those in the first camp, a pathway to a high-throughput, high-yield technology that bonds the backplane to the microLED array is key. The United Kingdom’s Plessey Semiconductors recently demonstrated a throughput-boosting approach by bonding a wafer full of Jasper Display Corp.’s silicon CMOS backplanes to a wafer of its microLED arrays.

New York City’s Lumiode is founded on the idea that such bonding steps aren’t necessary. “When you have to bond two things together, yield is limited by how that bonding happens,” says Vincent Lee, the startup’s CEO.

Instead Lumiode has been developing a process that allows it to build a TFT array on top of a premade GaN microLED array. That has involved developing low-temperature manufacturing processes gentle enough not to damage or degrade the microLEDs. Much of the work this year has been translating that process to a low-volume foundry for production, says Lee.

For Glo, the work has been more about making the microLED fit the current-delivering capabilities of today’s commercial backplanes, like those driving smartphone displays. “Our device design is focused on low current—two orders of magnitude lower than solid-state lighting,” on the order of nanoamps or microamps, explains Glo CEO Fariba Danesh. “This year is the first year people are talking about current. We’ve been talking about that for five years.”

Glo takes those microLEDs and places them on either a CMOS backplane for microdisplays or a TFT backplane for larger displays, using the same technology regardless of resolution or display size. The company doesn’t talk about its pick-and-place technology, but it is key to commercial products. “Our transfer yields are right now high enough to make some parts; now we are focused on making thousands and then millions of zero-defect panels,” says Danesh.

Others are looking to simplify production by changing what gets picked up and placed down. X-Celeprint’s scheme is to place an integrated pixel chip that contains both CMOS driver circuits and red, green, and blue microLEDs. It’s a multistep process, but it means that the display backplane now needs to be only a simple-to-manufacture network of wires instead of silicon circuitry. Engineers at CEA-Leti, in Grenoble, France, recently demoed a way to simplify that scheme by transferring the microLEDs to the CMOS drivers all at once in a wafer-to-wafer bonding process.

“It’s really too early to pick a winner and know which technology is best,” says Yole’s Virey. “Some of the prototypes are quite impressive, but they are not perfect yet. A lot of additional work is required to go from a prototype to a commercial display. You need to improve your cost, you need to improve your yield, and you need to make an absolutely perfect display each time.”

This article appears in the August 2019 print issue as “MicroLED Displays Expected in 2020.”

Delhi Rolls Out a Massive Network of Surveillance Cameras

Post Syndicated from Payal Dhar original https://spectrum.ieee.org/tech-talk/consumer-electronics/audiovideo/delhi-rolls-out-a-massive-network-of-closedcircuit-tvs-to-fight-crime

The state government says closed-circuit TVs will help fight crime, but digital liberties activists are concerned about the project’s lack of transparency

In India, the government of Delhi is rolling out an ambitious video surveillance program as a crime-prevention measure. Technicians will install more than a quarter million closed-circuit TV (CCTV) cameras near residential and commercial properties across the city, and in schools. A central monitoring system is expected to take care of behind-the-scenes logistics, though authorities have not shared details on how the feeds will be monitored.

After delays due to political and legal wrangles, the installations began on 7 and 8 July. The first cameras to go up in a residential area were installed in Laxmi Bai Nagar, at a housing society for government employees, and at the upmarket Pandara Road in New Delhi. When the rollout is complete, there will be an average of 4,000 cameras in each of Delhi’s 70 assembly constituencies, for a total of around 280,000 cameras.

In early 2020, the National Capital Territory of Delhi (usually just called ‘Delhi’), which includes New Delhi, the capital of India, will vote to elect a new state assembly. Lowering the crime rate is a key election issue for the incumbent Aam Aadmi Party (literally, Common Man’s [sic] Party). The party has promised that the CCTV cameras will deter premeditated crime and foster a semblance of order among the general public.

June 1878: Muybridge Photographs a Galloping Horse

Post Syndicated from Vaclav Smil original https://spectrum.ieee.org/consumer-electronics/audiovideo/june-1878-muybridge-photographs-a-galloping-horse

Shutter speed rose from a thousandth of a second in 1878 to a millionth of a billionth of a second in the ’90s. Today, that’s considered slow

Eadweard Muybridge (1830–1904), an English photographer, established his American fame in 1867 by taking a mobile studio to Yosemite Valley and producing large silver prints of its stunning vistas. Five years later he was hired by Leland Stanford, then the president of the Central Pacific Railroad, formerly the governor of California and latterly the founder of the eponymous university in Palo Alto. Stanford—who was also a horse breeder—challenged Muybridge to settle the old dispute about whether all four of a horse’s legs are off the ground at one time during a gallop.

Muybridge found it difficult to prove the point. In 1872 he took (and then lost) a single image of a trotting horse with all hooves aloft. But he persevered, and his eventual solution was to capture moving objects with cameras capable of a shutter speed as brief as 1/1,000 of a second.

The conclusive experiment took place 141 years ago, on 19 June 1878, at Stanford’s Palo Alto farm. Muybridge lined up thread-triggered glass-plate cameras along the track, used a white-sheet background for the best contrast, and copied the resulting images as simple silhouettes on a disc rotating in a zoopraxiscope, a device he invented in order to display a rapid series of stills to convey motion. Sallie Gardner, the horse Stanford had provided for the test, clearly had all four hooves off the ground. But the airborne moment did not take place as portrayed in famous paintings, perhaps most notably Théodore Géricault’s 1821 Derby at Epsom, now hanging in the Louvre, which shows the animal’s legs extended, away from its body. Instead, it occurred when the horse’s legs were beneath its body, just prior to the moment the horse pushed off with its hind legs.

This work led to Muybridge’s magnum opus, which he prepared for the University of Pennsylvania. Starting in 1883, he began to make an extensive series depicting animal and human locomotion. Its creation relied on 24 cameras fixed in parallel to the 36-meter-long track and two portable sets of 12 batteries at each end. The track had a marked background, and animals or people activated the shutters by breaking stretched strings.

The final product was a book with 781 plates, published in 1887. This compendium showed not only running domestic animals (dogs and cats, cows and pigs) but also a bison, a deer, an elephant, and a tiger, as well as a running ostrich and a flying parrot. Human sequences depicted various runs and also ascents, descents, lifts, throws, wrestling, dances, a child crawling, and a woman pouring a bucket of water over another woman.

Muybridge’s 1,000 frames a second soon became 10,000. By 1940, the patented design of a rotating mirror camera raised the rate to 1 million per second. In 1999, Ahmed Zewail received the Nobel Prize in Chemistry for developing a spectrograph that could capture the transition states of chemical reactions on a scale of femtoseconds—that is, 10⁻¹⁵ second, or one-millionth of one-billionth of a second.

Today, we can use intense, ultrafast laser pulses to capture events separated by mere attoseconds, or 10⁻¹⁸ second. This time resolution makes it possible to see what has been until recently hidden from any direct experimental access: the motions of electrons on the atomic scale.
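To put that progression in numbers, a short calculation (a sketch in Python, using only the time resolutions and dates cited above; the 1940 entry treats 1 million frames per second as roughly microsecond resolution) shows how far things have come:

    import math

    # Time resolutions cited in this article, in seconds
    milestones = {
        1878: 1e-3,    # Muybridge's thread-triggered shutters, about 1/1,000 s
        1940: 1e-6,    # rotating-mirror camera at 1 million frames per second
        1999: 1e-15,   # Zewail's femtosecond spectroscopy
        2019: 1e-18,   # attosecond laser pulses
    }

    t0 = milestones[1878]
    for year, resolution in milestones.items():
        gain = math.log10(t0 / resolution)   # orders of magnitude beyond 1878
        print(f"{year}: {resolution:.0e} s  ({gain:.0f} orders of magnitude beyond 1878)")

Fifteen orders of magnitude in 141 years works out to roughly one order of magnitude per decade.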

Many examples can be given to illustrate the extraordinary scientific and engineering progress we have made during the past 141 years, but this contrast is as impressive as any other advance I can think of—from settling the dispute about airborne horse hooves to observing flitting electrons.

This article appears in the June 2019 print issue as “June 1878: Muybridge’s Galloping Horse.”

Hearables Will Monitor Your Brain and Body to Augment Your Life

Post Syndicated from Poppy Crum original https://spectrum.ieee.org/consumer-electronics/audiovideo/hearables-will-monitor-your-brain-and-body-to-augment-your-life

Devices tucked inside your ears will make technology more personal than ever before

The eyes, it’s been said, are windows to the soul. I’d argue that the real portals are the ears.

Consider that, at this very moment, a cacophony of biological conversations is blasting through dime-size patches of skin just inside and outside the openings to your ear canals. There, blood is coursing through your veins, its pressure rising and falling as you react to stress and excitement, its levels of oxygen changing in response to the air around you and the way your body uses the air you breathe. There, too, we can detect the electrical signals that zip through the cortex as it responds to the sensory information around us. And in that patch of skin itself, changing electrical conductivity signals moments of anticipation and emotional intensity.

The ear is like a biological equivalent of a USB port. It is unparalleled not only as a point for “writing” to the brain, as happens when our earbuds transmit the sounds of our favorite music, but also for “reading” from the brain. Soon, wearable devices that tuck into our ears—I call them hearables—will monitor our biological signals to reveal when we are emotionally stressed and when our brains are being overtaxed. When we are struggling to hear or understand, these devices will proactively help us focus on the sounds we want to hear. They’ll also reduce the sounds that cause us stress, and even connect to other devices around us, like thermostats and lighting controls, to let us feel more at ease in our surroundings. They will be a technology that is truly empathetic—a goal I have been working toward as chief scientist at Dolby Laboratories and an adjunct professor at Stanford University.

How will future generations of hearables make life better? I’ve envisioned many scenarios, but here are four to get things started.

Scenario 1: Say you’re trying to follow a basketball game on TV while cooking in your kitchen. You’re having trouble following the action, and your hearables know there’s a problem because they’ve detected an increase in your mental stress, based on changes in your blood pressure and brain waves. They can figure out exactly where you’re trying to direct your attention by pairing that stress increase with variations in the electrical signals created in your ear as you move your eyes. Then the hearables will automatically increase the volume of sounds coming from that direction. That’s a pretty simple fix that could make you a little more relaxed when you finally get to the dinner table.
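None of the signal processing in that scenario is exotic. Here is a minimal sketch of the control loop, assuming the hard parts (a stress flag derived from blood pressure and EEG, and a gaze azimuth estimated from eye-movement signals) are already available; every name and threshold below is a hypothetical placeholder, not a description of any shipping product:

    import numpy as np

    def directional_gain(mic_az_deg, gaze_az_deg, stressed, base_gain=1.0,
                         boost_db=6.0, beam_width_deg=30.0):
        """Boost sounds arriving from the direction of gaze when the wearer is stressed.

        mic_az_deg  : arrival azimuths (degrees) estimated for each sound source
        gaze_az_deg : gaze direction inferred from eye-movement (EOG) signals
        stressed    : True when blood-pressure and brain-wave features indicate rising load
        """
        mic_az = np.asarray(mic_az_deg, dtype=float)
        if not stressed:
            return np.full(mic_az.shape, base_gain)
        # Angular distance between each source and the gaze direction
        diff = np.abs((mic_az - gaze_az_deg + 180.0) % 360.0 - 180.0)
        boost = 10 ** (boost_db / 20.0)           # convert dB to a linear amplitude factor
        return np.where(diff <= beam_width_deg, base_gain * boost, base_gain)

    # Example: TV at 10 degrees, range hood at 150 degrees, wearer looking at the TV
    print(directional_gain([10, 150], gaze_az_deg=5, stressed=True))

The interesting engineering is in producing those two inputs reliably, not in the gain logic itself.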

Scenario 2: You find yourself at a popular new restaurant, where loud music and reverberant acoustics make it difficult to hold conversations (even if you don’t have hearing loss). Your hearables are monitoring your mental effort, again by tracking your brain waves, determining when you are struggling to hear. They then appropriately adjust the signal-to-noise ratio and directionality of their built-in mics to make it easier for you to understand what people nearby are saying. These hearables can distinguish your friends from the patrons you wish to ignore, based on audio fingerprints that the device previously collected.

They can even figure out exactly whom you are trying to hear by tracking your attention, even if you can’t see the person directly. We’ve all been at a party where we heard our names in a conversation across the room and wanted to be able to teleport into the conversation. Soon we’ll be able to do just that.
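The “audio fingerprint” matching in that scenario is, at bottom, speaker identification. A minimal sketch, assuming each friend’s voice has already been reduced to a fixed-length embedding vector during enrollment (the embeddings and threshold below are made up for illustration), is a nearest-neighbor comparison:

    import numpy as np

    def identify_speaker(segment_embedding, enrolled, threshold=0.75):
        """Match a voice embedding against enrolled "audio fingerprints".

        segment_embedding : embedding vector for the current speech segment
        enrolled          : dict mapping a name to that person's stored embedding
        Returns the best-matching name, or None if nobody clears the threshold.
        """
        seg = segment_embedding / np.linalg.norm(segment_embedding)
        best_name, best_score = None, threshold
        for name, emb in enrolled.items():
            score = float(np.dot(seg, emb / np.linalg.norm(emb)))  # cosine similarity
            if score > best_score:
                best_name, best_score = name, score
        return best_name

    # Toy example with made-up 4-dimensional "fingerprints"
    enrolled = {"Ana": np.array([0.9, 0.1, 0.0, 0.4]),
                "Raj": np.array([0.1, 0.8, 0.5, 0.0])}
    print(identify_speaker(np.array([0.85, 0.15, 0.05, 0.35]), enrolled))  # -> Ana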

Scenario 3: You’re driving your car, with your partner next to you and your two children in the back seat. You’re tired from hours of driving and the children are getting noisy, which makes it hard for you to pay attention to the directions coming from your smartphone. And then your daughter announces she needs to go potty—now. Your hearables detect the increasingly noisy unrest from the children in the back seat, as well as the mental stress it’s putting on you. The devices increase the smartphone volume while also instructing the car to adjust seat temperature and airflow—inspired by previously tracked biometrics—and perhaps start softly playing your favorite music. They even understand your child’s “I need to go potty” as a bathroom request, analyze it to determine its urgency, and decide whether the navigation app should immediately search for the closest rest stop, or look for one near a restaurant—because they also heard your other daughter say she’s hungry.

Scenario 4: You’ve been wearing a hearable for a few years now. Recently, the device has been detecting specific changes in the spectral quality and patterns of sounds when you speak. After tracking this trend for several months, your hearable suggests that you schedule an appointment with your physician because these changes can correlate with heart disease.

Nothing in these scenarios is beyond what we’ll be able to accomplish in the next five years. For all of them, the necessary hardware or software is available now or well on its way in laboratories.

Start with the earpieces themselves. Wireless earbuds such as Apple’s AirPods, which partially occlude the ear canal, and fully occlusive ones, such as Nuheara’s IQbuds or Bose’s Sleepbuds, show how advances in miniaturization and battery technology have enabled small, lightweight form factors that weren’t possible just a decade ago. Nonetheless, small form factors mean limits on battery size, and battery life is still a make-or-break feature that will require innovation to power a large number of sensors and machine-learning algorithms for days at a time, without recharging. Some near-term solutions will better optimize computation efficiency by distributing processing among the hearable, a user’s mobile, and the cloud. But look out for some transformative innovation in energy storage over the next few years. A personal favorite from the past year was Widex’s debut of a fuel cell for in-ear devices capable of reaching a full day’s charge in seconds.

As for sensors, we already have ones that can monitor heart rate optically, as done today by many wearable fitness trackers, and others that can measure heart rate activity electrically, as done today by the Apple Watch Series 4. We have sensors that monitor blood oxygenation, familiar to many from the finger-clip monitor used in doctors’ offices but already migrating into fitness trackers. We have rings, watches, and patches that track physical and emotional stress by measuring galvanic skin response; blood pressure monitors that look like bracelets; headbands that track brain waves; and patches that monitor attention and drowsiness by tracking eye movement. All of these devices are steadily shrinking and getting more reliable every day, on a trajectory that will soon make them appropriate for use in hearables. Indeed, positioning these kinds of monitors in the ear, where blood vessels and nerves run close to the surface, can give them a sensitivity advantage compared with their placement in wrist-worn or other types of wearables.

There are still quite a few challenges in getting the ergonomics of hearables right. The main one is contact—that is, how designers can ensure adequate continuous contact between an earbud and the skin of the outer ear canal. Such contact is essential if the devices are going to do any kind of accurate biological sensing. Natural behaviors like sneezing, chewing, or yawning can briefly break contact or, at a minimum, change the impedance of the earbud-skin connection. That can give a hearable’s algorithms an incomplete or inaccurate assessment of the wearer’s mental or physical states. I don’t think this will be a huge obstacle. Ultimately, the solutions will depend as much on insightful algorithms that take into account the variability caused by movement as they do on hardware design.

Speaking of software, the abilities of AI-based virtual assistants have, of course, blossomed in recent years. Your smart speaker is much more useful than it was even six months ago. And by no means has its ability to understand your commands finished improving. In coming years, it will become adept at anticipating your needs and wants, and this capability will transfer directly to hearables.

Today’s virtual assistants, such as Amazon’s Alexa and Apple’s Siri, rely on the cloud for the powerful processing needed to respond to requests. But artificial neural network chips coming soon from IBM, Mythic, and other companies will often allow such intensive processing to be carried out in a hearable itself, eliminating the need for an Internet connection and allowing near-instantaneous reaction times. Occasionally the hearable will have to resort to cloud processing because of the complexity of an algorithm. In these cases, fifth-generation (5G) cellular, which promises median speeds in the hundreds of megabits per second and latencies in single-digit milliseconds, will potentially reduce the latency to as little as tens of milliseconds.
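A rough sketch of how a hearable’s runtime might split work between the earpiece and the cloud, using only the latency figures mentioned above (the task names, budgets, and on-device latency are assumptions for illustration):

    # Rough latency budget, in milliseconds, under the assumptions in the text
    ON_DEVICE_MS = 5      # neural-network accelerator in the earpiece (assumed)
    CLOUD_5G_MS = 30      # 5G round trip plus server-side inference (assumed)

    # Hypothetical tasks: (name, latency budget in ms, fits on the device?)
    TASKS = [
        ("beam-steering gain update", 20, True),
        ("wake-word detection",       50, True),
        ("full speech transcription", 300, False),   # too large for the earpiece
    ]

    for name, budget_ms, fits_locally in TASKS:
        if fits_locally and ON_DEVICE_MS <= budget_ms:
            where = "run on device"
        elif CLOUD_5G_MS <= budget_ms:
            where = "offload over 5G"
        else:
            where = "degrade gracefully (skip or approximate)"
        print(f"{name}: {where}")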

These hearables won’t have to be as tiny as earbuds that fit fully inside the entrances to the ear canals. Today, some people wear their Apple AirPods for several hours every day, and it wasn’t so long ago that Bluetooth earpieces for making voice calls were fashion accessories.

In terms of form factor, designers are already starting to move beyond the classic in-ear monitor. Bose recently introduced Bose Frames, which have directional speakers integrated into a sunglasses frame. Such a frame could house many hearable features in a design most of us are already comfortable wearing for extended periods. Most likely, designers will come up with other options as well. The point is that with the right balance of form, battery life, and user benefits, an AI-powered hearable can become something that some people will think nothing about wearing for hours at a time, and perhaps most of the day.

What might we expect from early offerings? Much of the advanced research in hearables right now is focusing on cognitive control of a hearing aid. The point is to distinguish where the sounds people are paying attention to are coming from—independently of the position of their heads or where their eyes are focused—and determine whether their brains are working unusually hard, most likely because they’re struggling to hear someone. Today, hearing aids generally just amplify all sounds, making them unpleasant for users in noisy environments. The most expensive hearing aids today do have some smarts—some use machine learning along with GPS mapping to determine which volume and noise reduction settings are best for a certain location, applying those when the wearer enters that area.

This kind of device will be attractive to pretty much all of us, not just people struggling with some degree of hearing loss. The sounds and demands of our environments are constantly changing and introducing different types of competing noise, reverberant acoustics, and attention distractors. A device that helps us create a “cone of silence” (remember the 1960s TV comedy “Get Smart”?) or gives us superhuman hearing and the ability to direct our attention to any point in a room will transform how we interact with one another and our environments.

Within five years, a new wave of smart hearing aids will be able to recognize stress, both current and anticipatory. These intelligent devices will do this by collecting and combining several kinds of physiological data and then using deep-learning tools to tune the analysis to individuals, getting better and better at spotting and predicting rising stress levels. The data they use will most likely include pulse rate, gathered using optical or electrical sensors, given that a rising heart rate and shifts in heart rate variability are basic indicators of stress.
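Heart rate variability is one concrete ingredient of that analysis. The sketch below uses synthetic beat-to-beat intervals and RMSSD, a common variability measure; a real device would use longer windows, motion-artifact rejection, and a per-user baseline:

    import numpy as np

    def heart_metrics(rr_intervals_ms):
        """Compute mean heart rate and RMSSD heart rate variability.

        rr_intervals_ms : successive beat-to-beat (RR) intervals in milliseconds,
                          as derived from an optical or electrical pulse sensor.
        """
        rr = np.asarray(rr_intervals_ms, dtype=float)
        heart_rate_bpm = 60_000.0 / rr.mean()
        rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))   # root mean square of successive differences
        return heart_rate_bpm, rmssd

    # Relaxed: slower, more variable beats.  Stressed: faster, more uniform beats.
    relaxed = [820, 860, 790, 845, 810, 875]
    stressed = [610, 605, 615, 600, 612, 608]
    for label, rr in [("relaxed", relaxed), ("stressed", stressed)]:
        bpm, rmssd = heart_metrics(rr)
        print(f"{label}: {bpm:.0f} bpm, RMSSD {rmssd:.0f} ms")

Note how the “stressed” series shows a faster pulse and much lower variability, which is the pattern a stress detector would be trained to flag.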

These hearables will likely also use miniature electrodes, placed on their surfaces, to sense the weak electric fields around the brain—what we sometimes refer to as brain waves. Future hearables will use software to translate fluctuations in these fields at different frequencies into electroencephalograms (EEGs) with millisecond resolution. Decades of research have helped scientists draw insights into a person’s state of mind from changes in EEGs. For example, devices often look for waves in the frequency range between 7.5 and 14 hertz, known as alpha waves (the exact boundaries are debatable), to track stress and relaxation. Fluctuations in the beta waves—between approximately 14 and 30 Hz—can indicate increased or reduced anxiety. And the interaction between the alpha and beta ranges is a good overall marker of stress.
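As an illustration of how those band powers might be extracted, here is a sketch that applies Welch’s method to a single synthetic EEG channel and computes the alpha-to-beta power ratio, using the band edges quoted above (a real hearable would face far noisier in-ear electrodes and would calibrate per user):

    import numpy as np
    from scipy.signal import welch

    def band_power(freqs, psd, lo, hi):
        """Integrate the power spectral density over one frequency band."""
        mask = (freqs >= lo) & (freqs < hi)
        return psd[mask].sum() * (freqs[1] - freqs[0])

    fs = 250                                   # sampling rate in Hz
    t = np.arange(0, 10, 1 / fs)               # 10 seconds of single-channel EEG
    rng = np.random.default_rng(0)
    # Synthetic signal: strong 10 Hz alpha rhythm, weaker 20 Hz beta, plus noise
    eeg = (20 * np.sin(2 * np.pi * 10 * t)
           + 6 * np.sin(2 * np.pi * 20 * t)
           + rng.normal(0, 5, t.size))

    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
    alpha = band_power(freqs, psd, 7.5, 14)    # relaxation-related band
    beta = band_power(freqs, psd, 14, 30)      # anxiety/arousal-related band
    print(f"alpha/beta ratio: {alpha / beta:.2f}")   # higher tends to mean more relaxed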

These hearables may also measure sweat, both its quantity and composition. We all know we tend to sweat more when we’re under stress. Sensors that measure variations in the conductance of our skin due to changes in our sweat glands—known as the galvanic skin response—are starting to show up in wrist wearables and will be easily adapted to the ear. (You may be familiar with this technology for its role in polygraph machines.) Microfluidic devices that gather sweat from the skin can also help measure the rate of sweating. And researchers recently demonstrated that a sensor can detect cortisol—a steroid hormone released when a person is under stress—in sweat.

These hearables will combine this physiological picture of user stress with the sounds picked up by their microphones in order to understand the context in which the user is operating. They will then automatically adjust their settings to the environment and the situation, amplifying the sounds the user most cares about while screening out other sounds that could interfere with comprehension.

A team from Columbia University, Hofstra University, and the Neurological Institute recently took the idea of decoding the brain’s electrical activity to help reduce stress a step further. The researchers demonstrated the effectiveness of using brain-wave detection with machine learning to help people cope with multiple people talking simultaneously, devising an algorithm that can reliably identify the person a listener is most interested in hearing. The experiment involved recording brain waves while a research subject listened to someone speaking. Then the researchers mixed the sounds of the target speaker with a random interfering speaker. While the subject struggled to follow the story being narrated by the target speaker in the mix of sound, the researchers compared the current brain-wave signals with those of the previous recording to identify the target of attention. Using an algorithm to separate the voices into separate channels, the system was then able to amplify the voice of the target speaker. While the researchers used implanted electrodes to conduct this experiment, a version of this technology could migrate into a noninvasive brain-monitoring hearable that increases the volume of a target speaker’s voice and dampens other sounds.
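The team’s system relies on implanted electrodes and trained neural networks, but the decision rule at its heart can be sketched simply: reconstruct an estimate of the attended speech envelope from the neural signals, correlate it with the envelope of each separated voice, and boost the best match. Everything below (the linear decoder, the synthetic envelopes, the gain values) is a stand-in for illustration, not the researchers’ code:

    import numpy as np

    def attended_speaker(eeg, decoder, speaker_envelopes):
        """Pick the speaker whose envelope best matches the one reconstructed from EEG.

        eeg               : array of shape (samples, channels) of neural data
        decoder           : (channels,) weights mapping EEG to an envelope estimate
                            (in practice learned beforehand from training data)
        speaker_envelopes : dict of name -> envelope array of shape (samples,)
        """
        reconstructed = eeg @ decoder
        scores = {name: np.corrcoef(reconstructed, env)[0, 1]
                  for name, env in speaker_envelopes.items()}
        return max(scores, key=scores.get), scores

    def apply_gains(separated, target, boost=2.0, duck=0.5):
        """Amplify the attended channel and attenuate the rest before remixing."""
        return sum((boost if name == target else duck) * audio
                   for name, audio in separated.items())

    # Toy demonstration with synthetic data
    rng = np.random.default_rng(1)
    n, channels = 1000, 8
    env_a, env_b = rng.random(n), rng.random(n)          # two speech envelopes
    decoder = rng.standard_normal(channels)
    # Fake EEG that (noisily) tracks speaker A's envelope through the decoder
    eeg = (np.outer(env_a, decoder) / np.dot(decoder, decoder)
           + 0.1 * rng.standard_normal((n, channels)))
    target, scores = attended_speaker(eeg, decoder, {"A": env_a, "B": env_b})
    remixed = apply_gains({"A": env_a, "B": env_b}, target)
    print(target, {name: round(score, 2) for name, score in scores.items()})  # expect A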

Tomorrow’s hearables could also improve our physical as well as mental well-being. For example, a hearable could diagnose and treat tinnitus, the so-called ringing in the ears often caused by the loss of sensory cells in the cochlea or damage to neural cells along the route from the cochlea to the brain. To diagnose the problem, the hearable could play a sound that the wearer adjusts to match the tone of the ringing. Then, to treat the condition, the hearable can take advantage of its proximity to a vagus nerve branch to train the brain to stop the neural activity that caused the annoying ringing sound in the first place.

Vagus nerve applications like this are in the early research phase, but they hold much promise for hearables. The vagus nerves are among a dozen major cranial nerve pairs. They run from the brain to the stomach, one on each side of the body, with branches that go through the skin near the nerve that carries the sensory information from the inner ear to the brain. Doctors stimulate these nerves electrically to treat epilepsy and depression, and researchers are testing such stimulation for the treatment of heart problems, digestive disorders, inflammation, and other mental and physical maladies.

In the case of tinnitus, researchers are stimulating the vagus nerve to boost the activity of neurotransmitters, which are helpful in learning, memory, and bootstrapping the brain to essentially rewire itself when necessary, a phenomenon called neuroplasticity. Such stimulation through hearables could also be used to address post-traumatic stress disorder (PTSD), to treat addiction, or to enhance learning of physical movements.

Privacy and security are two big interrelated challenges facing developers of hearables on the path to widespread adoption. While there are many benefits that may come from having wearables constantly monitoring our internal states and our reactions to our surroundings, people may be reluctant to submit to that kind of constant scrutiny. Businesses, too, will have concerns—for example, that hearables could be hacked to eavesdrop on meetings.

But for a hearable to provide maximum value, it must spend as much time as possible monitoring its owner’s daily life. Otherwise, the machine-learning systems that support it won’t have the detailed history and constant fresh supply of information that any AI device needs to keep improving.

There are also some thorny legal issues to consider. A couple of years ago, prosecutors in Arkansas subpoenaed Amazon to release information collected by an Echo device that was in a home where a murder occurred. Will people and businesses shy away from using smart hearables because the data they collect could one day be used against them?

Various strategies already exist to address such concerns. For example, to protect the voiceprints that voice-based biometric systems use for authentication, some systems dice the audio data into chunks that are encrypted with keys that change constantly. Using this technique, a breach involving one chunk won’t give a hacker access to the rest of that user’s audio data. The technique also ensures that a hacker won’t be able to obtain enough audio data to create a virtual copy of the user’s voice to fool a bank’s voice-authentication system. Additionally, developments in cryptography that allow algorithms to work with fully encrypted data without decrypting it will most likely be important for the security of future hearables.
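As a rough illustration of that chunk-by-chunk approach, here is a sketch built on the widely used Python cryptography package’s Fernet recipe; in practice the per-chunk keys would live in secure storage or a key service rather than alongside the ciphertext, and chunk sizing, key rotation policy, and authentication of the whole stream all need real engineering:

    from cryptography.fernet import Fernet

    def encrypt_stream(chunks):
        """Encrypt each audio chunk under its own freshly generated key.

        chunks : iterable of bytes objects, each a short slice of audio.
        Returns a list of (key, token) pairs; compromising one key exposes
        only that chunk, not the rest of the recording.
        """
        protected = []
        for chunk in chunks:
            key = Fernet.generate_key()        # new key per chunk
            token = Fernet(key).encrypt(chunk)
            protected.append((key, token))
        return protected

    def decrypt_stream(protected):
        return b"".join(Fernet(key).decrypt(token) for key, token in protected)

    # Toy example with fake "audio" bytes
    audio_chunks = [b"chunk-0 pcm data", b"chunk-1 pcm data", b"chunk-2 pcm data"]
    protected = encrypt_stream(audio_chunks)
    assert decrypt_stream(protected) == b"".join(audio_chunks)
    print(f"{len(protected)} chunks, each under its own key")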

Another, perhaps thornier, challenge involves mental health. Because they can monitor our internal states, tomorrow’s hearables will learn things about us that we may not know—or even be prepared to know.

For example, research by IBM found that an algorithm could predict the likelihood of the onset of psychosis within five years and diagnose schizophrenia with up to 83 percent accuracy simply by analyzing certain components of someone’s speech. But what would a hearable—perhaps one bought simply to improve hearing in restaurants and keep an eye on the user’s blood pressure—do with that kind of information? A hearable that listens to its owner for weeks, months, or years will have deep insight into that person’s mind. If it concludes that someone needs help, should it alert a family member, physician, or even law enforcement? Clearly, there are ethical questions to answer. Early knowledge and intervention are enormous benefits, but they will come at a cost: Either we’ll have to be willing to give up our privacy or invest in a better infrastructure to protect it. Some questions raised by AI-powered hearables aren’t easily answered.

I am confident that we can meet these challenges. The benefits of hearables will far outweigh the negatives—and that’s a strong motivation for doing the work needed to sort out the issues of privacy and security. When we do, hearables will constantly and silently assess and anticipate our needs and state of mind while helping us cope with the world around us. They will be our true life partners.

This article appears in the May 2019 print issue as “Here Come the Hearables.”

About the Author

Poppy Crum is chief scientist at Dolby Laboratories and an adjunct professor at Stanford University.