Tag Archives: sensors/imagers

Ultra-quick Cameras Reveal How to Catch a Light Pulse Mid-Flight

Post Syndicated from Michael Dumiak original https://spectrum.ieee.org/tech-talk/sensors/imagers/super-fast-cameras-capture-light-pulse-midflight

The footage is something between the ghostlike and commonplace: a tiny laser pulse tracing through a box pattern, as if moving through a neon storefront sign. But this pulse is more than meets the eye, more than a pattern for the eye to follow, and there’s no good metaphor for it. It is quite simply and starkly a complete burst of light—a pulse train with both a front and a back—captured in a still image, mid-flight.

Electrical engineers and optics experts at the Advanced Quantum Architecture (AQUA) Laboratory in Neuchâtel, Switzerland, made this footage last year, mid-pandemic, using a single-photon avalanche diode camera, or SPAD. Its solid-state photodetectors are capable of very high-precision measurements of time and therefore distance, even as its single pixels are struck by individual light particles, or photons.

Edoardo Charbon and his colleagues at the Swiss Ecole Polytechnique Fédérale de Lausanne, working with camera maker Canon, were able in late 2019 to develop a SPAD array at megapixel size in a small camera they called Mega X. It can resolve one million pixels and process this information very quickly. 

The Charbon-led AQUA group—at that point comprising nanotechnology prizewinner Kazuhiro Morimoto, Ming-Lo Wu, and Andrei Ardelean—synchronized Mega X to a femtosecond laser. They fired the laser through an aerosol of water droplets, made using dry ice procured at a party shop. 

The photons hit the water droplets, and some of them scatter toward the camera. With its megapixel sensor, the camera captures 24,000 frames per second, with exposure times as short as 3.8 nanoseconds.
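A quick back-of-the-envelope check, using only the figures quoted above, shows why exposures that short matter; the short Python sketch below is for illustration and is not the researchers' own code.

```python
# Back-of-the-envelope check on the Mega X figures quoted above:
# 24,000 frames per second and 3.8-nanosecond exposures.
C = 299_792_458.0        # speed of light in vacuum, m/s

frame_rate_hz = 24_000
exposure_s = 3.8e-9

frame_period_us = 1e6 / frame_rate_hz    # time between frames, in microseconds
light_path_m = C * exposure_s            # distance light covers during one exposure

print(f"frame period: {frame_period_us:.1f} microseconds")
print(f"light travels about {light_path_m:.2f} m during one 3.8 ns exposure")
```

Even in 3.8 nanoseconds, light covers more than a meter, so any slower shutter would smear the pulse across the entire scene.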

The sort of photodetector used in Mega X has been in development for several decades. Indeed, SPAD imagers can be found in smartphone cameras, industrial robots, and lab spectroscopy. The kind of footage caught in the Neuchâtel lab is called light-in-flight. MIT’s Ramesh Raskar in 2010 and 2011 used a streak camera—a kind of camera used in chemistry applications—to produce 2D images and build films of light in flight, in what he called femtophotography.

The development of light-in-flight imaging goes back at least to the late 1970s, when Nils Abramson at Stockholm’s Royal Institute of Technology used holography to record light wavefronts. Genevieve Gariepy, Daniele Faccio and other researchers then in Edinburgh in 2015 used a SPAD to show footage of light in flight. The first light-in-flight images took many hours to construct, but the Mega X camera can do it in a few minutes. 

Martin Zurek, a freelance photographer and consultant in Bavaria (he does 3D laser scanning and drone measurements for architectural and landscape projects) recalls many long hours working on femtosecond spectroscopy for his physics doctorate in Munich in the late 1990s. When Zurek watches the AQUA light footage, he’s impressed by the resolution in location and time, which opens up dimensions for the image. “The most interesting thing is how short you can make light pulses,” he says. “You’re observing something fundamental, a fundamental physical property or process. That should astound us every day.”

A potential use for megapixel SPAD cameras would be for faster and more detailed light-ranging 3D scans of landscapes and large objects in mapping, facsimile production, and image recording. For example, megapixel SPAD technology could greatly enhance LiDAR imaging and modeling of the kind done by the Factum Foundation in their studies of the tomb of the Egyptian Pharaoh Seti I.

Faccio, now at the University of Glasgow, is using SPAD imaging to obtain better time-resolved fluorescence microscopy images of cell metabolism. As with Raskar’s MIT group, Faccio hopes to apply the technology to human body imaging.

The AQUA researchers were able to observe an astrophysical effect in their footage called superluminal motion, which is an illusion akin to the Doppler effect. Only in this case light appears to the observer to speed up—which it can’t really do, already traveling as it does at the speed of light.
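The illusion follows a standard relation from astrophysics: a source moving at speed v at an angle theta to the line of sight appears to cross the field of view at v sin(theta) / (1 - (v/c) cos(theta)). The minimal Python sketch below illustrates that relation; the speed and angles are illustrative and are not values from the AQUA experiment.

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def apparent_speed(v: float, theta_deg: float) -> float:
    """Apparent transverse speed of a source moving at speed v
    at angle theta (degrees) to the observer's line of sight."""
    theta = math.radians(theta_deg)
    return v * math.sin(theta) / (1.0 - (v / C) * math.cos(theta))

# A pulse aimed nearly at the camera (small angle) appears faster than light;
# one moving away (large angle) appears slower.  0.99c keeps the math finite.
for theta_deg in (5, 30, 90, 150):
    ratio = apparent_speed(0.99 * C, theta_deg) / C
    print(f"theta = {theta_deg:>3} deg: apparent speed = {ratio:.2f} c")
```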

Charbon’s thoughts are more earthbound. “This is just like a conventional camera, except that in every single pixel you can see every single photon,” he says. “That blew me away. It’s why I got into this research in the first place.”

Atomically Precise Sensors Could Detect Another Earth

Post Syndicated from Jason Wright original https://spectrum.ieee.org/sensors/imagers/atomically-precise-sensors-could-detect-another-earth

Gazing into the dark and seemingly endless night sky above, people have long wondered: Is there another world like ours out there, somewhere? Thanks to new sensors that we and other astronomers are developing, our generation may be the first to get an affirmative answer—and the earliest hints of another Earth could come as soon as this year.

Astronomers have discovered thousands of exoplanets so far, and almost two dozen of them are roughly the size of our own planet and orbit within the so-called habitable zone of their stars, where water might exist on the surface in liquid form. But none of those planets has been confirmed to be rocky, like Earth, and to circle a star like the sun. Still, there is every reason to expect that astronomers will yet detect such a planet in a nearby portion of the galaxy.

So why haven’t they found it yet? We can sum up the difficulty in three words: resolution, contrast, and mass. Imagine trying to spot Earth from hundreds of light-years away. You would need a giant telescope to resolve such a tiny blue dot sitting a mere 150 million kilometers (0.000016 light-year) from the sun. And Earth, at less than a billionth the brightness of the sun, would be hopelessly lost in the glare.

For the same reasons, current observatories—and even the next generation of space telescopes now being built—have no chance of snapping a photo of our exoplanetary twin. Even when such imaging does become possible, it will allow us to measure an exoplanet’s size and orbital period, but not its mass, without which we cannot determine whether it is rocky like Earth or largely gaseous like Jupiter.

For all these reasons, astronomers will detect another Earth first by exploiting a tool that’s been used to detect exoplanets since the mid-1990s: spectroscopy. The tug of an orbiting exoplanet makes its star “wobble” ever so slightly, inducing a correspondingly slight and slow shift in the spectrum of the star’s light. For a doppelgänger of Earth, that fluctuation is so subtle that we have to look for a displacement measuring just a few atoms across in the patterns of the rainbow of starlight falling on a silicon detector in the heart of a superstable vacuum chamber.

Our group at Pennsylvania State University, part of a team funded by NASA and the U.S. National Science Foundation (NSF), has constructed an instrument called NEID (short for NN-explore Exoplanet Investigations with Doppler spectroscopy), for deployment in Arizona to search the skies of the Northern Hemisphere. (The name NEID—pronounced “NOO-id”—derives from an American Indian word meaning “to see.”) A Yale University team has built a second instrument, called EXPRES (for EXtreme PREcision Spectrometer), which will also operate in Arizona. Meanwhile, a third team led by the University of Geneva and the European Southern Observatory is commissioning an instrument called ESPRESSO (Echelle SPectrograph for Rocky Exoplanet and Stable Spectroscopic Observations), in Chile, to hunt in the southern skies.

All three groups are integrating Nobel Prize–winning technologies in novel ways to create cutting-edge digital spectrographs of femtometer-scale precision. Their shared goal is to achieve the most precise measurements of stellar motions ever attempted. The race to discover the elusive “Earth 2.0” is afoot.

Using spectroscopy to pick up an exoplanet-induced wobble is an old idea, and the basic principle is easy to understand. As Earth circles the sun in a huge orbit, it pulls the sun, which is more than 300,000 times as massive, into a far smaller “counter orbit,” in which the sun moves at a mere 10 centimeters per second, about the speed of a box turtle.

Even such slow motions cause the wavelengths of sunlight to shift—by less than one part per billion—toward shorter, bluer wavelengths in the direction of its motion and toward longer, redder wavelengths in the other direction. Exoplanets induce similar Doppler shifts in the light from their host stars. The heavier the planet, the bigger the Doppler shift, making the planet easier to detect.
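The numbers behind those two claims are easy to reproduce. Here is a short Python sketch using round textbook values, not anything specific to a particular instrument:

```python
# Rough numbers behind the "10 centimeters per second" and
# "one part per billion" figures above, using round textbook values.
C = 299_792_458.0              # speed of light, m/s
EARTH_ORBITAL_SPEED = 29_780   # Earth's mean speed around the sun, m/s
SUN_TO_EARTH_MASS = 333_000    # the sun is roughly 333,000 times Earth's mass

# Momentum balance: the sun's reflex speed is Earth's speed scaled by the mass ratio.
reflex_speed = EARTH_ORBITAL_SPEED / SUN_TO_EARTH_MASS   # ~0.09 m/s

# Non-relativistic Doppler shift: delta_lambda / lambda = v / c.
fractional_shift = reflex_speed / C                      # ~3e-10

# For a spectral line at 550 nanometers, that fraction is a sub-femtometer shift.
shift_fm = 550e-9 * fractional_shift * 1e15

print(f"sun's reflex speed: {reflex_speed * 100:.0f} cm/s")
print(f"fractional Doppler shift: {fractional_shift:.1e}")
print(f"shift of a 550 nm line: {shift_fm:.2f} femtometers")
```

That sub-femtometer wavelength shift is what the instrument teams mean when they talk about femtometer-scale precision.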

In 1992, radio astronomers studying the odd objects called pulsars used the timing of their radio signals to infer that these stellar remnants hosted exoplanets. In 1995, Michel Mayor and Didier Queloz of the University of Geneva were using the Doppler wobbles in the light from ordinary stars to search for orbiting companions and found a giant planet orbiting the star 51 Pegasi every four days.

This unexpected discovery, for which Mayor and Queloz won the 2019 Nobel Prize in Physics, opened the floodgates. The “Doppler wobble” method has since been used to detect more than 800 exoplanets, virtually all of them heavier than Earth.

So far, almost all of the known exoplanets that are the size and mass of Earth or smaller were discovered using a different technique. As these planets orbit, they fortuitously pass between their star and our telescopes, causing the apparent brightness of the star to dim very slightly for a time. Such transits, as astronomers call them, give away the planet’s diameter—and together with its wobble they will reveal the planet’s mass.
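The division of labor is straightforward: the transit depth fixes the planet’s size relative to its star, the wobble fixes its mass, and the two together give a bulk density that separates rock from gas. The Python sketch below works through the standard textbook relations for an Earth twin orbiting a sun twin, assuming a circular, edge-on orbit; it is an illustration, not any survey team’s pipeline.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
R_SUN = 6.957e8        # m
M_EARTH = 5.972e24     # kg
R_EARTH = 6.371e6      # m
P = 3.156e7            # orbital period, s (one year)

# Transit: the fractional dimming gives the planet-to-star radius ratio.
transit_depth = (R_EARTH / R_SUN) ** 2        # ~8e-5, about 80 parts per million

# Doppler wobble: the radial-velocity semi-amplitude gives the planet's mass
# (circular, edge-on orbit assumed).
K = (2 * math.pi * G / P) ** (1 / 3) * M_EARTH / (M_SUN + M_EARTH) ** (2 / 3)

# Size plus mass gives bulk density, which separates rocky from gaseous worlds.
density = M_EARTH / (4 / 3 * math.pi * R_EARTH ** 3)   # kg/m^3

print(f"transit depth: {transit_depth * 1e6:.0f} ppm")
print(f"stellar wobble: {K * 100:.0f} cm/s")
print(f"bulk density: {density / 1000:.1f} g/cm^3 (rock ~3-5, gas giants ~0.7-1.7)")
```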

Since the first transiting exoplanet was discovered in 1999, the Kepler space telescope and Transiting Exoplanet Survey Satellite (TESS) have found thousands more, and that number continues to grow. Most of these bodies are unlike any in our solar system. Some are giants, like Jupiter, orbiting so close to their host stars that they make the day side of Mercury look chilly. Others are gassy planets like Neptune or Uranus but much smaller, or rocky planets like Earth but much larger.

Through statistical analysis of the bounteous results, astronomers infer that our own Milky Way galaxy must be home to billions of rocky planets like Earth that orbit inside the habitable zones of their host stars. A study published recently by scientists who worked on the Kepler mission estimated that at least 7 percent—and probably more like half—of all sunlike stars in the Milky Way host potentially habitable worlds. That is why we are so confident that, with sufficiently sensitive instruments, it should be possible to detect other Earths.

But to do so, astronomers will have to measure the mass, size, and orbital period of the planet, which means measuring both its Doppler wobble and its transit. The trove of data from Kepler and TESS—much of it yet to be analyzed—has the transit angle covered. Now it’s up to us and other astronomers to deliver higher-precision measurements of Doppler wobbles.

NEID, our contestant in the race to find Earth 2.0, improves the precision of Doppler velocimetry by making advances on two fronts simultaneously: greater stability and better calibration. Our design builds on work done in the 2000s by European astronomers who built the High Accuracy Radial Velocity Planet Searcher (HARPS), which controlled temperature, vibration, and pressure so well that it faithfully tracked the subtle Doppler shifts of a star’s light over years without sacrificing precision.

In 2010, we and other experts in this field gathered at Penn State to compare notes and discuss our aspirations for the future. HARPS had been producing blockbuster discoveries since 2002; U.S. efforts seemed to be lagging behind. The most recent decadal review by the U.S. National Academy of Sciences had recommended an “aggressive program” to regain the lead.

Astronomers at the workshop drafted a letter to NASA and the NSF that led the two agencies to commission a new spectrograph, to be installed at the 3.5-meter WIYN telescope at Kitt Peak National Observatory, in Arizona. (“WIYN” is an acronym derived from the names of the four founding institutions.) NEID would be built at Penn State by an international consortium.

Like HARPS before it, NEID combines new technologies with novel methods to achieve unprecedented stability during long-term observations. Any tiny change in the instrument—whether in temperature or pressure, in the way starlight illuminates its optics, or even in the nearly imperceptible expansion of the instrument’s aluminum structure as it ages—can cause the dispersed starlight to creep across the detector. That could look indistinguishable from a Doppler shift and corrupt our observations. Because the wobbles we aim to measure show up as wavelength shifts of mere femtometers—and as displacements on the sensor comparable to the size of the atoms that make up the detector—we have to hold everything incredibly steady.

That starts with exquisite thermal control of the instrument, which is as big as a car. It has required us to design cutting-edge digital detectors and sophisticated laser systems, tied to atomic clocks. These systems act as the ruler by which to measure the amplitudes of the stellar wobbles.

Yet we know we cannot build a perfectly stable instrument. So we rely on calibration to take us the rest of the way. Previous instruments used specialized lamps containing thorium or uranium, which give off light at hundreds of distinct, well-known wavelengths—yardsticks for calibrating the spectral detectors. But these lamps change with age.

NEID instead relies on a laser frequency comb: a Nobel Prize–winning technology that generates laser light at hundreds of evenly spaced frequency peaks. The precision of the comb, limited only by our best atomic clocks, is better than a few parts per quadrillion. By using the comb to regularly calibrate the instrument, we can account for any residual instabilities.
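Every tooth of such a comb sits at a frequency f_n = f_ceo + n × f_rep, where the repetition rate f_rep and the offset f_ceo are both locked to an atomic clock, so the wavelength of each tooth is known essentially exactly. The short sketch below shows that bookkeeping; the repetition rate and offset are illustrative values, not NEID’s actual settings.

```python
# How a frequency comb serves as a wavelength ruler: tooth n sits at
# f_n = f_ceo + n * f_rep, with both frequencies locked to an atomic clock.
# The values below are illustrative, not NEID's actual settings.
C = 299_792_458.0      # speed of light in vacuum, m/s
f_rep = 20e9           # comb tooth spacing, Hz (20 GHz is typical of an "astrocomb")
f_ceo = 6.0e9          # carrier-envelope offset frequency, Hz

def tooth_wavelength_nm(n: int) -> float:
    """Vacuum wavelength of comb tooth number n, in nanometers."""
    f_n = f_ceo + n * f_rep
    return C / f_n * 1e9

# Neighboring teeth near the visible band: thousands of evenly spaced,
# precisely known calibration lines fall across the spectrograph's detector.
for n in (27_000, 27_001, 27_002):
    print(f"tooth {n}: {tooth_wavelength_nm(n):.6f} nm")
```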

Somewhere out there, another Earth is orbiting its star, which wobbles slowly in sympathy around their shared center of gravity. Over the course of a year or so, the star’s spectrum shifts ever so slightly toward the red, then toward the blue. As that starlight reaches Kitt Peak, the telescope brings it to a sharp focus at the tip of a glass optical fiber, which guides the light down into the bowels of the observatory through a conduit into a custom-designed room housing the NEID instrument.

The focus spot must strike the fiber in exactly the same way with every observation; any variations can translate into slightly different illumination patterns on the detector, mimicking stellar Doppler shifts. The quarter-century-old WIYN telescope was never designed for such work, so to ensure consistency our colleagues at the University of Wisconsin–Madison built an interface that monitors the focus and position of the stellar image hundreds of times a second and rapidly adjusts to hold the focal point steady.

Down in the insulated ground-floor room, a powerful heating-and-cooling system keeps a vacuum chamber bathed in air at a constant temperature of 20 °C. NEID’s spectrograph components, held at less than one-millionth of the standard atmospheric pressure inside the vacuum chamber, are protected from even tiny changes in pressure by a combination of pumps and “getters”—including 2 liters of cryogenic charcoal—to which stray gas molecules stick.

A network of more than 75 high-precision thermometers actively controls 30 heaters placed around the instrument’s outer surface to compensate for any unexpected thermal fluctuations. With no moving parts or motors that might generate heat and disrupt this delicate thermal balance, the system is able to hold NEID’s core optical components at 26.850 °C, plus or minus 0.001 °C.
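In spirit, each heater zone runs a feedback loop like the one sketched below. This is purely a conceptual illustration: apart from the setpoint, every number here (gains, readings, heater behavior) is invented, and NEID’s actual control system is far more elaborate.

```python
# Conceptual sketch of closed-loop heater trimming around a fixed setpoint.
# Gains and readings are invented for illustration; this is not NEID's controller.
SETPOINT_C = 26.850

class HeaterLoop:
    def __init__(self, kp: float = 50.0, ki: float = 1.0):
        self.kp, self.ki = kp, ki     # proportional and integral gains
        self.integral = 0.0

    def update(self, measured_c: float) -> float:
        """Return heater drive (0 = off, 1 = full power) from a temperature reading."""
        error = SETPOINT_C - measured_c
        self.integral += error
        drive = self.kp * error + self.ki * self.integral
        return min(max(drive, 0.0), 1.0)  # heaters can only add heat, up to full power

# One loop per heater zone, each fed by the average of its nearby thermometers.
loop = HeaterLoop()
for reading_c in (26.845, 26.848, 26.851, 26.852):
    print(f"T = {reading_c:.3f} C -> heater drive {loop.update(reading_c):.2f}")
```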

As the starlight emerges from the optical fiber, it strikes a parabolic collimating mirror and then a reflective diffraction grating that is 800 millimeters long. The grating splits the light into more than 100 thin colored lines. A large glass prism and a system of four lenses then spread the lines across a silicon-based, 80-megapixel charge-coupled device. Within that CCD sensor, photons from the distant star are converted into electrons that are counted, one at a time, by supersensitive amplifiers.

Heat from the amplifiers makes the detector itself expand and contract in ways that are nearly impossible to calibrate. We had to devise some way to keep that operating heat as constant and manageable as possible.

Electrons within the CCD accumulate in pixels, which are actually micrometer-size wells of electric potential, created by voltages applied to minuscule electrodes on one side of the silicon substrate. The usual approach is to hold these voltages constant while the shutter is open and the instrument collects stellar photons. At the end of the observation, manipulation of the voltages shuffles collected electrons over to the readout amplifiers. But such a technique generates enough heat—a few hundredths of a watt—to cripple a system like NEID.

Our team devised an alternative operating method that prevents this problem. We manipulate the CCD voltages during the collection of stellar photons, jiggling the pixels slightly without sacrificing their integrity. The result is a constant heat load from the detector rather than transient pulses.

Minor imperfections in the CCD detector present us with a separate engineering challenge. State-of-the-art commercial CCDs have high pixel counts, low noise, and impeccable light sensitivity. These sensors nevertheless contain tiny variations in the sizes and locations of the millions of individual pixels and in how efficiently electrons move across the detector array. Those subtle flaws induce smearing effects that can be substantially larger than the femtometer-size Doppler shifts we aim to detect. The atomic ruler provided by the laser frequency comb allows us to address this issue by measuring the imperfections of our sensor—and appropriately calibrating its output—at an unprecedented level of precision.

We also have to correct for the motion of the telescope itself as it whirls around Earth’s axis and around the sun at tens of kilometers per second—motion that creates apparent Doppler shifts hundreds of thousands of times as great as the ~10 cm/s wobbles caused by an exoplanet. Fortunately, NASA’s Jet Propulsion Laboratory has for decades been measuring Earth’s velocity through space to much better precision than we can measure it with our spectrograph. Our software combines that data with sensitive measurements of Earth’s rotation to correct for the motion of the telescope.
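The sketch below shows what that correction step looks like using the open-source astropy library rather than NEID’s own pipeline (which leans on the JPL ephemerides mentioned above); the target coordinates and observation time are illustrative.

```python
# Barycentric radial-velocity correction with astropy (illustrative values).
import astropy.units as u
from astropy.coordinates import SkyCoord, EarthLocation
from astropy.time import Time

kitt_peak = EarthLocation.from_geodetic(lon=-111.600 * u.deg,
                                        lat=31.958 * u.deg,
                                        height=2096 * u.m)
star = SkyCoord(ra=344.3665 * u.deg, dec=20.7689 * u.deg)   # roughly 51 Pegasi
obstime = Time("2021-03-01T04:00:00", scale="utc")

# Velocity of the observatory along the line of sight, to be removed from the
# measured Doppler shift before any exoplanet wobble can be seen.
v_corr = star.radial_velocity_correction(kind="barycentric",
                                         obstime=obstime, location=kitt_peak)
print(v_corr.to(u.km / u.s))
```

Over a year that correction swings by tens of kilometers per second, which is why it has to be known to exquisite accuracy before a 10-cm/s wobble can be teased out.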

In January 2020, NEID achieved its “first light” with observations of 51 Pegasi. Since then, we have refined the calibration and tuning of the instrument, which in recent months has approached our design target of measuring Doppler wobbles as slow as 33 cm/s—a big step toward the ultimate goal of 10-cm/s sensitivity. With commissioning underway, the instrument is now regularly measuring some of the nearest, brightest sunlike stars for planets.

Meanwhile, in Chile’s Atacama Desert, the most sophisticated instrument of this kind yet built, ESPRESSO, has been analyzing Doppler shifts in starlight gathered and combined from the four 8.2-meter telescopes of the Very Large Telescope. ESPRESSO is a technological marvel that achieves unprecedentedly precise observations of very faint and distant stars by using two arms that house separate optics and CCD detectors optimized for the red and blue halves of the visible spectrum.

ESPRESSO has already demonstrated measurements that are precise to about 25 cm/s, with still better sensitivity expected as the multiyear commissioning process continues. Astronomers have used the instrument to confirm the existence of a supersize rocky planet in a tight orbit around Proxima Centauri, the star nearest the sun, and to observe the extremely hot planet WASP-76b, where molten iron presumably rains from the skies.

At the 4.3-meter Lowell Discovery Telescope near Flagstaff, Ariz., EXPRES has recently demonstrated its own ability to measure stellar motions to much better than 50 cm/s. Several other Doppler velocimeters, to be installed on other telescopes, are either in the final stages of construction or are coming on line to join the effort. The variety of designs will help astronomers observe stars of differing kinds and magnitudes.

Since that 2010 meeting, our community of instrument builders has quickly learned which design solutions work well and which have turned out to be more trouble than they are worth. Perhaps when we convene at the next such meeting in California, one group will have already found Earth 2.0. There’ll be no telling, though, whether it harbors alien life—and if so, whether the aliens have also spied us.

This article appears in the March 2021 print issue as “How We’ll Find Another Earth.”

About the Author

Jason T. Wright is a professor of astronomy and astrophysics at Pennsylvania State University. Cullen Blake is an associate professor in the department of physics and astronomy at the University of Pennsylvania.

Ultrasonic Holograms: Who Knew Acoustics Could Go 3D?

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/sensors/imagers/acoustic-holograms-and-3d-ultrasound

Although its origins trace back to 1985, acoustic holography as a field has been hampered by rampant noise created by widespread reflection and scattering. (Lasers tamp this problem down somewhat for conventional optical holograms; ultrasound as yet offers no such technological quick fix.) 

But in a study published last month in IEEE Sensors Journal, a group of researchers report an improved method for creating acoustic holograms. While the advance won’t lead to treatment with acoustic holograms in the immediate future, the improved technique yields higher sensitivity and a better focusing effect than previous acoustic hologram methods.

Manipulating matter with sound opens up a number of intriguing possibilities, including medical applications. Ultrasound can penetrate human tissue and is already used for medical imaging. But more precise manipulation and imaging of human tissue using 3D holographic ultrasound could lead to completely new therapies—including targeted neuromodulation using sound.

The nature of sound itself poses the first hurdle to be overcome. “The medical application of acoustic holograms is limited owing to the sound reflection and scattering at the acoustic holographic surface and its internal attenuation,” explains Chunlong Fei, an associate professor at Xidian University who is involved in the study.

To address these issues, his team created their acoustic hologram via a “lens” consisting of a disc with a hole at its center. They placed a 1 MHz ultrasound transducer in water and used the outer part of the transducer surface to create the hologram. Because of the hole at the center of the lens, the center of the transducer generates and receives sound waves with less reflection and scattering.

Next, the researchers compared their new disc approach to more conventional acoustic hologram techniques. They performed this A vs. B comparison via ultrasound holographic images of several thin needles, 1.25 millimeters in diameter or less.

“The most notable feature of the hole-hologram we proposed is that it has high sensitivity and maintains good focusing effect [thanks to the] holographic lens,” says Fei. He notes that these features will lead to less scattering and propagation loss than what occurs with traditional acoustic holograms.

Fei envisions several ways in which this approach could one day be applied medically, for example by complementing existing medical imaging probes to achieve better resolution, or for applications such as nerve regulation or noninvasive brain stimulation. However, the current setup, which operates in water, would need to be modified for medical settings, and several next steps remain in characterizing and manipulating the hologram, says Fei.

The varied design improvements Fei’s team hopes to develop match the equally eclectic possible applications of ultrasonic hologram technology. In the future, Fei says they hope acoustic holographic devices might achieve super-resolution imaging, particle trapping, selective biological tissue heating—and even find new applications in personalized medicine.

3D Fingerprint Sensors Get Under Your Skin

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/sensors/imagers/3d-fingerprint-sensor-id-subcutaneous


Many people already use fingerprint recognition technology to access their phones, but one group of researchers wants to take this skin-deep concept further. In a study published January 18 in IEEE Sensors Journal, they described a technique that maps out not only the unique pattern of a person’s fingerprint but also the blood vessels underneath. The approach adds another layer of security to identity authentication.

Typically, a fingerprint scan only accounts for 2D information that captures a person’s unique fingerprint pattern of ridges and valleys, but this 2D information could easily be replicated.

“Compared with the existing 2D fingerprint recognition technologies, 3D recognition that captures the finger vessel pattern within a user’s finger will be ideal for preventing spoofing attacks and be much more secure,” explains Xiaoning Jing, a distinguished professor at North Carolina State University and co-author of the study.

To develop a more secure approach using 3D recognition, Jing and his colleagues created a device that relies on ultrasound pulses. When a finger is placed upon the system, triggering a pressure sensor, a high-frequency pulsed ultrasonic wave is emitted. The amplitudes of reflecting soundwaves can then be used to determine both the fingerprint and blood vessel patterns of the person.
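The pulse-echo bookkeeping behind that idea looks roughly like the sketch below: the timing of each echo gives depth, and its amplitude depends on the acoustic-impedance mismatch at the interface it bounced off. The impedance values are textbook approximations, not figures from the study.

```python
# Sketch of the pulse-echo principle: echo delay gives depth, echo amplitude
# depends on the impedance mismatch at each interface.  Values are approximate.
SPEED_OF_SOUND_TISSUE = 1540.0   # m/s, typical of soft tissue

def reflection_coefficient(z1: float, z2: float) -> float:
    """Fraction of pressure amplitude reflected at an interface (normal incidence)."""
    return (z2 - z1) / (z2 + z1)

def echo_depth_mm(round_trip_time_s: float) -> float:
    """Depth of a reflector, from the round-trip time of its echo."""
    return SPEED_OF_SOUND_TISSUE * round_trip_time_s / 2 * 1000

Z_TISSUE = 1.6e6   # acoustic impedance, rayl (approximate)
Z_BLOOD = 1.66e6
Z_AIR = 430.0

print(f"tissue/air interface:   R = {reflection_coefficient(Z_TISSUE, Z_AIR):+.2f}")
print(f"tissue/blood interface: R = {reflection_coefficient(Z_TISSUE, Z_BLOOD):+.2f}")
print(f"echo arriving after 1.3 microseconds: ~{echo_depth_mm(1.3e-6):.1f} mm deep")
```

An air-filled valley reflects nearly the whole pulse while a ridge in contact with the sensor reflects far less, which is what maps the surface fingerprint; fainter echoes arriving later come back from the blood vessels below.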

In an experiment, the device was tested using an artificial finger created from polydimethylsiloxane, which has an acoustic impedance similar to that of human tissue. Bovine blood was added to vessels constructed in the artificial finger. Through this setup, the researchers were able to obtain electronic images of both the fingerprint and blood vessel patterns with resolutions of 500 × 500 dots per inch, which they say is sufficient for commercial applications.

Intriguingly, while the blood vessel features beneath the ridges of the artificial finger could be determined, this was not the case for 40 percent of the blood vessels that lay underneath the valleys of the fingerprint. Jing explains that this is because the high-frequency acoustic waves cannot propagate through the tiny spaces confined within the valleys of the fingerprint. Nevertheless, he notes that enough of the blood vessels throughout the finger can be distinguished to make this approach worthwhile, and that data interpolation or other advanced processing techniques could be used to reconstruct the undetected portions of the vessels.

Chang Peng, a postdoctoral research scholar at North Carolina State University and co-author of the study, sees this approach as widely applicable. “We envision this 3D fingerprint recognition approach can be adopted as a highly secure bio-recognition technique for broad applications including consumer electronics, law enforcement, banking and finance, as well as smart homes,” he says, noting that the group is seeking a patent and looking for industry partners to help commercialize the technology.

Notably, the current setup, which uses a single ultrasound transducer, takes an hour or more to acquire an image. To improve on this, the researchers plan to explore how an array of ultrasonic fingerprint sensors performs compared with the single sensor used in this study. Then they aim to test the device with real human fingers, comparing the security and robustness of their technique against commercial optical and capacitive fingerprint-recognition techniques.

Sony Builds AI Into a CMOS Image Sensor

Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/view-from-the-valley/sensors/imagers/sony-builds-ai-into-a-cmos-image-sensor

Sony today announced that it has developed and is distributing smart image sensors. These devices use machine learning to process captured images on the sensor itself. They can then select only relevant images, or parts of images, to send on to cloud-based systems or local hubs.

This technology, says Mark Hanson, vice president of technology and business innovation for Sony Corp. of America, means practically zero latency between the image capture and its processing; low power consumption enabling IoT devices to run for months on a single battery; enhanced privacy; and far lower costs than smart cameras that use traditional image sensors and separate processors.

Sony’s San Jose laboratory developed prototype products using these sensors to demonstrate to future customers. The chips themselves were designed at Sony’s Atsugi, Japan, technology center. Hanson says that while other organizations have similar technology in development, Sony is the first to ship devices to customers.

Sony builds these chips by thinning and then bonding two wafers—one containing chips with light-sensing pixels and one containing signal processing circuitry and memory. This type of design is only possible because Sony is using a back-illuminated image sensor. In standard CMOS image sensors, the electronic traces that gather signals from the photodetectors are laid on top of the detectors. This makes them easy to manufacture, but sacrifices efficiency, because the traces block some of the incoming light. Back-illuminated devices put the readout circuitry and the interconnects under the photodetectors, adding to the cost of manufacture.

“We originally went to backside illumination so we could get more pixels on our device,” says Hanson. “That was the catalyst to enable us to add circuitry; then the question was what were the applications you could get by doing that.”

Hanson indicated that the initial applications for the technology will be in security, particularly in large retail situations requiring many cameras to cover a store. In this case, the amount of data being collected quickly becomes overwhelming, so processing the images at the edge would simplify that and cost much less, he says. The sensors could, for example, be programmed to spot people, carve out the section of the image containing the person, and only send that on for further processing. Or, he indicated, they could simply send metadata instead of the image itself, say, the number of people entering a building. The smart sensors can also track objects from frame to frame as a video is captured, for example, packages in a grocery store moving from cart to self-checkout register to bag.
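In outline, that edge workflow looks something like the sketch below: run detection on the sensor, then transmit either cropped regions or bare metadata instead of full frames. The detector and uplink functions here are hypothetical placeholders, not Sony’s API.

```python
# Illustrative edge-processing flow: detect on-sensor, then ship crops or metadata.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    bbox: tuple   # (x, y, width, height) in pixels

def detect_people(frame):
    """Hypothetical stand-in for the on-sensor machine-learning model."""
    return [Detection("person", (120, 40, 64, 128))]

def send(message):
    """Hypothetical stand-in for the uplink to a hub or cloud service."""
    print("sent:", message)

def process_frame(frame):
    people = [d for d in detect_people(frame) if d.label == "person"]
    if not people:
        return                                       # nothing relevant: send nothing
    for d in people:                                 # option 1: ship only the person crops
        send({"kind": "crop", "bbox": d.bbox})
    send({"kind": "count", "people": len(people)})   # option 2: ship metadata only

process_frame(frame=None)   # placeholder; a real sensor would supply pixel data
```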

Remote surveillance, with the devices running on battery power, is another application that’s getting a lot of interest, he says, along with manufacturing companies that mix robots and people looking to use image sensors to improve safety. Consumer gadgets that use the technology will come later, but he expects developers to begin experimenting with samples in the meantime.

Mirror Arrays Make Augmented Reality More Realistic

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/sensors/imagers/mirrors-augmented-reality-more-realistic


In the world of augmented reality (AR), real and virtual images are combined to create immersive environments for users. While the technology is advancing, it has remained challenging to make virtual images appear more “solid” in front of real-world objects, an effect that’s essential to depth perception.

Now, a team of researchers has developed a compact AR system that, using an array of miniature mirrors that switch positions tens of thousands of times per second, can create this elusive effect. They describe their new system in a study published February 13 in IEEE Transactions on Visualization and Computer Graphics.

Occlusion occurs when light from objects in the foreground blocks the light from objects at farther distances. Commercialized AR systems have limited occlusion because they tend to rely on a single spatial light modulator (SLM) to create the virtual images, while allowing natural light from real-world objects to also be perceived by the user.

“As can be seen with many commercial devices such as the Microsoft HoloLens or Magic Leap, not blocking high levels of incoming light [from real-world objects] means that virtual content becomes so transparent it becomes difficult to even see,” explains Brooke Krajancich, a researcher at Stanford University. This lack of occlusion can interfere with the user’s depth perception, which would be problematic in tasks requiring precision, such as AR-assisted surgery.

To achieve better occlusion, some researchers have been exploring the possibility of a second SLM that controls the incoming light of real-world objects. However, incorporating two SLMs into one system involves a lot of hardware, making it bulky. Instead, Krajancich and her colleagues developed a new design that combines virtual projection and light-blocking abilities into one element.

Their design relies on a dense array of miniature mirrors that can be individually flipped between two states—one that allows light through and one that reflects light—at a rate of up to tens of thousands of times per second.

“Our system uses these mirrors to switch between a see-through state, which allows the user to observe a small part of the real world, and a reflective state, where the same mirror blocks light from the scene in favor of an [artificial] light source,” explains Krajancich. The system computes the optimal arrangement for the mirrors and adjusts accordingly.
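Conceptually, the per-mirror decision reduces to a mask: reflect (show virtual light and block the scene) wherever virtual content is present and nearer than the real world, and stay see-through everywhere else. The toy sketch below illustrates that logic; the array size and depths are invented, and the real system’s optimization is considerably more involved.

```python
# Toy occlusion mask for a mirror array: True = reflective, False = see-through.
import numpy as np

H, W = 4, 6
virtual_alpha = np.zeros((H, W))
virtual_alpha[1:3, 2:5] = 1.0            # a small virtual object
virtual_depth = np.full((H, W), 0.5)     # virtual object rendered 0.5 m from the eye
real_depth = np.full((H, W), 2.0)        # real scene mostly 2 m away...
real_depth[:, :3] = 0.3                  # ...except a real object 0.3 m away on the left

# Reflect only where virtual content exists AND sits nearer than the real scene.
mirror_reflective = (virtual_alpha > 0) & (virtual_depth < real_depth)
print(mirror_reflective.astype(int))
```

In the printout, the column where the nearer real object overlaps the virtual one stays see-through, so the real object correctly hides that sliver of virtual content.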

Krajancich notes some trade-offs with this approach, including challenges in rendering colors properly. It also requires a lot of computing power, which may translate into higher power consumption than other AR systems. While commercialization of this system is a possibility in the future, she says, the approach is still in the early research stages.

Prophesee’s Event-Based Camera Reaches High Resolution

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/sensors/imagers/prophesees-eventbased-camera-reaches-high-resolution

There’s something inherently inefficient about the way video captures motion today. Cameras capture frame after frame at regular intervals, but most of the pixels in those frames don’t change from one to the other, and whatever is moving in those frames is only captured episodically.

Event-based cameras work differently; their pixels only react if they detect a change in the amount of light falling on them. They capture motion better than any other camera, while generating only a small amount of data and burning little power.

Paris-based startup Prophesee has been developing and selling event-based cameras since 2016, but the applications for their chips were limited. That’s because the circuitry surrounding the light-sensing element took up so much space that the imagers had a fairly low resolution. In a partnership announced this week at the IEEE International Solid-State Circuits Conference in San Francisco, Prophesee used Sony technology to put that circuitry on a separate chip that sits behind the pixels.

“Using the Sony process, which is probably the most advanced process, we managed to shrink the pixel pitch down to 4.86 micrometers” from their previous 15 micrometers, says Luca Verre, the company’s cofounder and CEO.

The resulting 1280 x 720 HD event-based imager is suitable for a much wider range of applications. “We want to enter the space of smart home cameras and smart infrastructure, [simultaneous localization and mapping] solutions for AR/VR, 3D sensing for drones or industrial robots,” he says.

The company is also looking to enter the automotive market, where imagers need a high dynamic range to deal with the big differences between day and night driving. “This is where our technology excels,” he says.

Besides the photodiode, each pixel requires circuits to change the diode’s current into a logarithmic voltage and determine if there’s been an increase or decrease in luminosity. It’s that circuitry that Sony’s technology puts on a separate chip that sits behind the pixels and is linked to them by a dense array of copper connections. Previously, the photodiode made up only 25 percent of the area of the pixel, now it’s 77 percent.

When a pixel detects a change (an event), all that is output is the location of the pixel, the polarity of the change, and a 1-microsecond-resolution time stamp. The imager consumes 32 milliwatts to register 100,000 events per second and ramps up to just 73 milliwatts at 300 million events per second. A system that dynamically compresses the event data allows the chip to sustain a rate of more than 1 billion events per second.
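In software terms, the per-pixel logic described in the last two paragraphs amounts to the sketch below: track the logarithm of each pixel’s brightness and emit an (x, y, polarity, timestamp) record whenever it changes by more than a contrast threshold. The real chip does this with analog circuitry inside every pixel and never assembles frames at all; the threshold value here is illustrative.

```python
# Minimal model of event generation: a pixel fires when its log intensity moves
# more than THRESHOLD away from the level at its last event.
import numpy as np

THRESHOLD = 0.15   # illustrative log-intensity contrast step

def events_from_frame(log_ref, frame, t_us):
    """Return (x, y, polarity, timestamp) events and the updated reference levels."""
    log_now = np.log(frame + 1e-6)
    diff = log_now - log_ref
    ys, xs = np.nonzero(np.abs(diff) >= THRESHOLD)
    events = [(int(x), int(y), 1 if diff[y, x] > 0 else -1, t_us)
              for y, x in zip(ys, xs)]
    log_ref = log_ref.copy()
    log_ref[ys, xs] = log_now[ys, xs]    # only the pixels that fired reset their reference
    return events, log_ref

# Tiny demo: one pixel brightens, one darkens, the rest stay quiet and emit nothing.
ref = np.log(np.full((2, 3), 100.0))
frame = np.full((2, 3), 100.0)
frame[0, 1] = 130.0
frame[1, 2] = 75.0
events, ref = events_from_frame(ref, frame, t_us=42)
print(events)   # -> [(1, 0, 1, 42), (2, 1, -1, 42)]
```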