Tag Archives: sensors

Meet Surfside’s Disaster-Data Forensic Sleuths

Post Syndicated from Mark Harris original https://spectrum.ieee.org/how-the-navy-seals-of-disaster-data-collection-are-helping-the-surfside-investigation

When Champlain Towers South, a 12-story beachfront condominium in the Miami suburb of Surfside, partially collapsed on the night of June 24, it was a disaster that no one expected. But one team, at least, was prepared for it.

Since 2018, the National Science Foundation’s Natural Hazards Reconnaissance Facility (known as the RAPID) has provided instrumentation, software, training, and support for research before, during, and after about 75 natural hazard and disaster events. Almost overnight, it can dispatch everything from iPads and cameras to a flock of sensor-packed drones or even a robotic hydrographic survey boat—as well as the expertise to use them.

In contrast to the public safety professionals who’ve deployed drones to determine the scope of the collapse and seek out survivors, the RAPID group looks for evidence of how and why buildings fall in the first place. And RAPID’s mission is equally time-critical.

“We are set up to rapidly enter a disaster zone and very quickly collect the data before it’s cleaned up as part of rescue and recovery,” says RAPID director Joe Wartman, a professor of Civil and Environmental Engineering at the University of Washington in Seattle, where RAPID is based. “In a sense, we’re the Navy SEALs of disaster data collection. We’re the only facility in the world that does this work.”

At least 98 people are now thought to have died in the Surfside disaster. Once its scale became clear, the RAPID team knew requests for assistance would quickly follow. “For major events like a big hurricane, we receive many more requests for RAPID than we can support,” says Joy Pauschke, program director for the Natural Hazards Engineering Research Infrastructure (NHERI) facility, of which RAPID is a part. “It’s up to us to go through and figure out which ones are addressing fundamental research questions, and to understand the situational awareness of accessing and operating at the location.”

Within days, the National Institute of Standards and Technology (NIST) announced that it would be carrying out a full technical investigation of the tower’s collapse—only its fifth such investigation since it was authorized to do so following the 9/11 attacks. NIST asked RAPID to deploy to Surfside to monitor a neighboring building for any signs as to whether it, too, might fall.

The team quickly gathered a lidar surveying system, drones, seismometers and accelerometers, and caught a red-eye flight to Miami. Even as they were in the air, however, everything changed. Tropical Storm Elsa was bearing down on the Florida coast, threatening the remaining structure. For the safety of rescuers and potential survivors alike, the stricken building would have to be fully demolished before the storm made landfall.

“By the time our team got on the ground, our mission had changed from simply monitoring, to trying to capture a full, three-dimensional digital model of the building before it was imploded the next day,” says Wartman.

NIST and National Science Foundation staff members discuss imaging of the Champlain Towers South site using lidar, which uses pulsed laser light to measure distances to objects, creating a 3D representation of the site. NIST

RAPID team members went straight from the airport to the site, where they set up the lidar system amid unstable piles of rubble. “If we had two weeks to do this, we could have let the unit go for hours and scanned at the ultimate resolution,” says Wartman. “But we were racing to capture all of this before the building came down, so we made the decision to do half-hour scans, giving just enough detail to get key structural components.”

Working during the rescue teams’ break times, RAPID engineers managed to complete seven scans detailed enough to measure the location and spacing of the building’s metal rebar and the color and thickness of its concrete slabs, as well as to trace individual cracks. The scans also recorded fallen columns and slabs nearby. Because lidar is an active sensor—rather than a camera that passively records an image—the RAPID team could even work in low light.

After the structure was demolished on the evening of July 4, the RAPID team remained on site to scan the original debris pile. “The initially collapsed portion had a lot of information in it, in terms of the geometries of where things landed,” says Jeff Berman, RAPID’s Operations Director and also a UW engineering professor. “These were uncovered and documented as the rescue effort continued, layer by layer.”

The RAPID team also flew a drone to collect image data, and instrumented Champlain Towers North, a near-twin of the collapsed building, with accelerometers and a seismometer. “These will measure incoming vibrations from heavy equipment on the excavation site and measure the building’s vibration in response,” says Berman. “The motivation is to inform the analysis of the south tower because its structural characteristics, natural period, and distribution of mass are likely to be similar.”
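Turning those accelerometer records into an estimate of the building’s natural period is a standard spectral-analysis exercise. The sketch below is a minimal, hypothetical illustration of that step, not the RAPID team’s processing code; the sampling rate and the synthetic signal are assumptions.

```python
# Minimal, hypothetical sketch: estimate a building's fundamental frequency
# from a roof-level accelerometer record using Welch's power spectral density.
# The sampling rate and the synthetic signal below are assumptions.
import numpy as np
from scipy.signal import welch

fs = 100.0                          # assumed sampling rate, Hz
t = np.arange(0, 600, 1 / fs)       # ten minutes of ambient vibration

# Stand-in record: a lightly damped 0.8 Hz mode buried in broadband noise.
accel = 0.02 * np.sin(2 * np.pi * 0.8 * t) + 0.05 * np.random.randn(t.size)

# Welch's method with ~40 s segments gives enough frequency resolution
# to pick out low-frequency structural modes.
freqs, psd = welch(accel, fs=fs, nperseg=4096)

f0 = freqs[np.argmax(psd)]          # dominant spectral peak
print(f"Estimated fundamental frequency: {f0:.2f} Hz "
      f"(natural period {1 / f0:.2f} s)")
```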

After 10 days in Surfside, the RAPID engineers packed to return to Seattle—much lighter than when they arrived. With only three full-time staff, RAPID cannot stay at even a major disaster for long. “We can’t go everywhere all the time,” says Berman. “So we left equipment with NIST staff who we had trained, and they were able to continue running scans until the excavation of the collapsed portion of the building was at basement grade.”

By this point, the RAPID team had stitched together its lidar and drone scans into a single, photo-realistic 3D model of the fallen building, and handed it over to NIST.
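Stitching ground-based lidar scans and drone imagery into one model typically involves registering overlapping point clouds against each other. Here is a minimal sketch of that registration step using the open-source Open3D library; the file names, voxel size, and distance threshold are placeholders rather than details of the RAPID workflow.

```python
# Hypothetical sketch: align two overlapping point clouds (for example, a
# ground lidar scan and a drone-derived cloud) with point-to-point ICP in
# Open3D. File names, voxel size, and threshold are illustrative assumptions.
import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("drone_cloud.ply")   # placeholder path
target = o3d.io.read_point_cloud("lidar_scan.ply")    # placeholder path

# Downsample to speed up registration on large scans.
source_ds = source.voxel_down_sample(voxel_size=0.05)
target_ds = target.voxel_down_sample(voxel_size=0.05)

threshold = 0.2                      # max correspondence distance, meters
init = np.identity(4)                # assume the scans are roughly pre-aligned

result = o3d.pipelines.registration.registration_icp(
    source_ds, target_ds, threshold, init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

# Apply the recovered rigid transform and merge into a single model.
source.transform(result.transformation)
merged = source + target
o3d.io.write_point_cloud("merged_model.ply", merged)
```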

“What NIST is going to be spending its time on for the next year or two is trying to simulate the collapse to figure out what caused it,” says Wartman. “They’ll develop finite element models with a physics base, and use our 3D geometry to validate those models, to help understand where things landed and what portions did or didn’t collapse.”

The lidars and drones, meanwhile, have just arrived back at RAPID’s Seattle HQ. They will now be cleaned, checked, recharged and put carefully back into their hard cases, ready to be dispatched again when the next unexpected disaster strikes.

COVID-19 Forced Us All to Experiment. What Have We Learned?

Post Syndicated from Michele Acuto original https://spectrum.ieee.org/sensors/imagers/covid-19-forced-us-all-to-experiment-what-have-we-learned

LIFE IS A HARD SCHOOL: First it gives us the test and only then the lesson. Indeed, throughout history humanity has learned much from disasters, wars, financial ruin—and pandemics. A scholarly literature has documented this process in fields as diverse as engineering, risk reduction, management, and urban studies. And it’s already clear that the COVID-19 pandemic has sped up the arrival of the future along several dimensions. Remote working has become the new status quo in many sectors. Teaching, medical consulting, and court cases are expected to stay partly online. Delivery of goods to the consumer’s door has supplanted many a retail storefront, and there are early signs that such deliveries will increasingly be conducted by autonomous vehicles.

On top of the damage it has wreaked on human lives, the pandemic has brought increased costs to individuals and businesses alike. At the same time, however, we can already measure solid improvements in productivity and innovation: Since February 2020, some 60 percent of firms in the United Kingdom and in Spain have adopted new digital technologies, and 40 percent of U.K. firms have invested in new digital capabilities. New businesses came into being at a faster rate in the United States than in previous years.

We propose to build on this foundation and find a way to learn not just from crises but even during the crisis itself. We argue for this position not just in the context of the COVID-19 pandemic but also toward the ultimate goal of improving our ability to handle things we can’t foresee—that is, to become more resilient.

To find the upside of emergencies, we first looked at the economic effects of a tidy little crisis, a two-day strike that partially disrupted service of the London Underground in 2014. We discovered that the roughly 5 percent of commuters who were forced to reconsider their commute ended up finding better routes, which they continued to use after service was restored. In terms of travel time, the strike produced a net benefit to the system, because the one-off time costs of the strike were smaller than the enduring benefits to this minority of commuters.
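The arithmetic behind that conclusion is straightforward: the strike imposes a one-off delay on everyone, while the minority who switch routes keep saving time long afterward. The toy calculation below uses entirely hypothetical numbers, not the study’s data, to show how quickly the enduring savings can outweigh the one-off cost.

```python
# Toy cost-benefit arithmetic with hypothetical numbers (not the study's data):
# a short disruption imposes a one-off time cost, while a minority of
# commuters keep a small daily saving afterward.
commuters = 1_000_000            # assumed network population
strike_delay_min = 20            # assumed extra minutes per commuter per day
strike_days = 2

switchers = 0.05 * commuters     # roughly 5 percent find a better route
daily_saving_min = 2             # assumed minutes saved per switcher per day
working_days_per_year = 250

one_off_cost = commuters * strike_delay_min * strike_days
annual_benefit = switchers * daily_saving_min * working_days_per_year
payback_years = one_off_cost / annual_benefit

print(f"One-off cost:   {one_off_cost / 1e6:.0f} million minutes")
print(f"Annual benefit: {annual_benefit / 1e6:.1f} million minutes")
print(f"The disruption pays for itself in about {payback_years:.1f} years")
```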

Why had commuters not done their homework beforehand, finding the optimal route without pressure? After all, their search costs would have been quite low, but the benefits from permanently improving their commute might well have been large. Here, the answer seems to be that commuters were stuck in established yet inefficient habits; they needed a shock to prod them into making their discovery.

A similar effect followed the eruption of a long-dormant Icelandic volcano in 1973. For younger people, having their house destroyed led to an increase of 3.6 years of education and an 83 percent increase in lifetime earnings, due to their increased probability of migrating away from their destroyed town. The shock helped them overcome a situation of being stuck in a location with a limited set of potential occupations, to which they may not have been well suited.

As economists and social scientists, we draw two fundamental insights from these examples of forced experimentation. First, the costs and benefits of a significant disruption are unlikely to fall equally on all those affected, not least at the generational level. Second, to ensure that better ways of doing things are discovered, we need policies to help the experiment’s likely losers get a share of the benefits.

Because large shocks are rare, research on their consequences tends to draw from history. For example, economic historians have argued that the Black Death plague may have contributed to the destruction of the feudal system in Western Europe by increasing the bargaining power of laborers, who were more in demand. The Great Fire of London in 1666 cleared the way, literally, for major building and planning reforms, including the prohibition of new wooden buildings, the construction of wider roads and better sewers, and the invention of fire insurance.

History also illustrates that good data is often a prerequisite for learning from a crisis. John Snow’s 1854 Broad Street map of cholera contagion in London was not only instrumental in identifying lessons learned—the most important being that cholera was transmitted via the water supply—but also in improving policymaking during the crisis. He convinced the authorities to remove the handle from the pump of a particular water source that had been implicated in the spread of the disease, thereby halting that spread.

Four distinct channels lead to the benefits that may come during a disruption to our normal lives.

Habit disruption occurs when a shock forces agents to reconsider their behavior, so that at least some of them can discover better alternatives. London commuters found better routes, and Icelandic young people got more schooling and found better places to live.

Selection involves the destruction of weaker firms so that only the more productive ones survive. Resources then move from the weaker to stronger entities, and average productivity increases. For example, when China entered world markets as a major exporter of industrial products, production from less productive firms in Mexico was reduced or ceased altogether, thus diverting resources to more productive uses.

Weakening of inertia occurs when a shock frees a system from the grip of forces that have until now kept it in stasis. This model of a system that’s stuck is sometimes called path dependence, as it involves a way of doing things that evolved along a particular path, under the influence of economic or technological factors.

The classic example of path dependence is the establishment of the conventional QWERTY keyboard standard on typewriters in the late 19th century and computers thereafter. All people learn how to type on existing keyboards, so even a superior keyboard design can never gain a foothold. Another example is cities that persist in their original sites even though the economic reasons for founding them there no longer apply. Many towns and cities founded in France during the Roman Empire remain right where the Romans left them, even though the Romans made little use of navigable rivers and the coastal trade north of the Mediterranean that became important in later centuries. These cities have been held in place by the man-made and social structures that grew up around them, such as aqueducts and dioceses. In Britain, however, the nearly complete collapse of urban life after the departure of the Roman legions allowed that country to build new cities in places better suited to medieval trade.

Coordination can play a role when a shock resets a playing field to such an extent that a system governed by opposing forces can settle at a new equilibrium point. Before the Great Boston Fire of 1872, the value of much real estate had been held down by the presence of crumbling buildings nearby. After the fire, many buildings were reconstructed simultaneously, encouraging investment on neighboring lots. Some economists argue that the fire created more wealth than it destroyed.

The ongoing pandemic has set off a scramble among economists to access and analyze data. Although some people have considered this unseemly, even opportunistic, we social scientists can’t run placebo-controlled experiments to see how a change in one thing affects another, and so we must exploit for this purpose any shock to a system that comes our way.

What really matters is that the necessary data be gathered and preserved long enough for us to run it through our models, once those models are ready. We ourselves had to scramble to secure data regarding commuting behavior following the London metro strike; normally, such data gets destroyed after 8 weeks. In our case, thanks to Transport for London, we managed to get it anonymized and released for analysis.

In recent years, there has been growing concern over the use of data and the potential for “data pollution,” where an abundance of data storage and its subsequent use or misuse might work against the public interest. Examples include the use of Facebook’s data around the 2016 U.S. presidential election, the way that online sellers use location data to discriminate on price, and how data from Strava’s fitness app accidentally revealed the sites of U.S. military bases.

Given such concerns, many countries have introduced more stringent data-protection legislation, such as the EU General Data Protection Regulation (GDPR). Since this legislation was introduced, a number of companies have faced heavy fines, including British Airways, which in 2018 was fined £183 million for poor security arrangements following a cyberattack. Most organizations delete data after a certain period. Nevertheless, Article 89 of the GDPR allows them to retain data “for scientific or historical research purposes or statistical purposes” in “the public interest.” We argue that data-retention policies should take into account the higher value of data gathered during the current pandemic.

The presence of detailed data is already paying off in the effort to contain the COVID-19 pandemic. Consider the Gauteng City-Region Observatory in Johannesburg, which in March 2020 began to provide governmental authorities at every level with baseline information on the 12-million-strong urban region. The observatory did so fast enough to allow for crucial learning while the crisis was still unfolding.

The observatory’s data had been gathered during its annual “quality of life” survey, now in its 10th year of operation, allowing it to quantify the risks involved in household crowding, shared sanitation facilities, and other circumstances. This information has been cross-indexed with broader health-vulnerability factors, like access to electronic communication, health care, and public transport, as well as with data on preexisting health conditions, such as the incidence of asthma, heart disease, and diabetes.

This type of baseline management, or “baselining,” approach could give these data systems more resilience when faced with the next crisis, whatever it may be—another pandemic, a different natural disaster, or an unexpected major infrastructural fault. For instance, the University of Melbourne conducted on-the-spot modeling of how the pandemic began to unfold during the 2020 lockdowns in Australia, which helped state decision-makers suppress the virus in real time.

When we do find innovations through forced experimentation, how likely are those innovations to be adopted? People may well revert to old habits, and anyone who might reasonably expect to lose because of the change will certainly resist it. One might wonder whether many businesses that thrived while their employees worked off-site might nonetheless insist on people returning to the central office, where managers can be seen to manage, and thereby retain their jobs. We can also expect that those who own assets few people will want to use anymore will argue for government regulations to support those assets. Examples include public transport infrastructure—say, the subways of New York City—and retail and office space.

One of the most famous examples of resistance to technological advancements is the Luddites, a group of skilled weavers and artisans in early 19th-century England who led a six-year rebellion smashing mechanized looms. They rightly feared a large drop in their wages and their own obsolescence. It took 12,000 troops to suppress the Luddites, but their example was followed by other “machine breaking” rebellions, riots, and strikes throughout much of England’s industrial revolution.

Resistance to change can also come from the highest levels. One explanation for the low levels of economic development in Russia and Austria-Hungary during the 19th century was the ruling class’s resistance to new technology and to institutional reform. It was not that the leaders weren’t aware of the economic benefits of such measures, but rather that they feared losing a grip on power and were content to retain a large share of a small pie.

Clearly, it’s important to account for the effects that any innovation has on those who stand to lose from it. One way to do so is to commit to sharing any gains broadly, so that no one loses. Such a plan can disarm opposition before it arises. One example where this strategy has been successfully employed is the Montreal Protocol on Substances That Deplete the Ozone Layer. It included a number of measures to share the gains from rules that preserve the ozone layer, including payments to compensate those countries without readily available substitutes who would otherwise have suffered losses. The Montreal Protocol and its successor treaties have been highly effective in meeting their environmental objectives.

COVID-19 winners and losers are already apparent. In 2020, economic analysis of social distancing in the United States showed that as many as 1.7 million lives might be saved by the practice. However, about 90 percent of the life-years saved would accrue to people older than 50, while younger individuals could reasonably be expected to bear an equal (or perhaps greater) share of the costs of distancing and lockdowns. It seems wise, then, to compensate younger people for complying with the rules on social distancing, both for reasons of fairness and to discourage civil disobedience.

We know from stock prices and spending data that some sectors and firms have suffered disproportionately during the pandemic, especially those holding stranded assets that must be written off, such as shopping malls, many of which have lost much of their business, perhaps permanently. We can expect similar outcomes for human capital. There are ways to compensate these parties as well, such as cash transfers linked to retraining or reinvestment.

There will almost certainly be winners and losers as a result of the multitude of forced experiments occurring in workplaces. Some people can more easily adapt to new technologies, some are better suited to working from home or in new settings, and some businesses will benefit from less physical interaction and more online communication.

Consider that the push toward online learning that the pandemic has provided may cost some schools their entire business: Why would students wish to listen to online lectures from their own professors when they could instead be listening to the superstars of their field?  Such changes could deliver large productivity payoffs, but they will certainly have distributional consequences, likely benefitting the established universities, whose online platforms may now cater to a bigger market. We know from the history of the Black Death that if they’re big enough, shocks have the power to bend or even break institutions. Thus, if we want them to survive, we need to ensure that our institutions are flexible.

To manage the transition to a world with more resilient institutions, we need high-quality data, of all types and from various sources, including measures of individual human productivity, education, innovation, health, and well-being. There seems little doubt that pandemic-era data, even when it’s of the most ordinary sort, will remain more valuable to society than that gathered in normal times. If we can learn the lessons of COVID-19, we will emerge from the challenge more resilient and better prepared for whatever may come next.

Editor’s note: The views expressed are the authors’ own and should not be attributed to the International Monetary Fund, its executive board, or its management.

About the Author

Ferdinand Rauch is an economist at the University of Oxford, Shaun Larcom is an economist at the University of Cambridge, and Tim Willems is at the International Monetary Fund. Michele Acuto is a professor of global urban politics at the University of Melbourne.

This article appears in the August 2021 print issue as “What We Learned From the Pandemic.”

Tracking Plastics in the Ocean By Satellite

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/sensors/remote-sensing/scientists-track-the-flow-of-microplastics-in-the-ocean-from-space


It’s well known that microplastics have infiltrated much of the world’s oceans, where these persistent pieces of garbage threaten the livelihood of marine life. But truly quantifying the extent of this problem globally remains a challenge.

Now, a pair of researchers are proposing a novel approach to tracking these minute scraps of plastic in the world’s oceans—remotely from space. They describe their new satellite-based approach in a study published this month in IEEE Transactions on Geoscience and Remote Sensing, which hints at seasonal fluxes of microplastic concentrations in the oceans.

The problem of plastic persisting in environments is only growing. The annual global production of plastic has been increasing steadily every year since the 1950s, reaching 359 million metric tons in 2018. “Microplastic pollution is a looming threat to humans and our environment, the extent of which is not well-defined through traditional sampling alone,” explains Madeline Evans, a Research Assistant with the Department of Climate and Space Sciences and Engineering, University of Michigan. “In order to work towards better solutions, we need to know more about the problem itself.”

Evans was an undergraduate student at the University of Michigan when she began working with a professor of climate and space science at the institution, Christopher Ruf. The two have been collaborating over the past couple of years to develop and validate their approach, which uses bistatic radar to track microplastics in the ocean, using NASA’s Cyclone Global Navigation Satellite System (CYGNSS).

The underlying concept behind their approach hinges on how the presence of microplastics alters the surface of the ocean. “The presence of microplastics and surfactants in the ocean cause the ocean surface to be less responsive to roughening by the wind,” says Ruf.

Therefore, Ruf and Evans explored whether CYGNSS measurements of the ocean’s surface roughness that deviated from predicted values, given local wind speeds, could be used to confirm the presence of microplastics. Indeed, they validated their approach by comparing it to other models for predicting microplastics. But whereas existing models provide a static snapshot of the extent and scope of microplastic pollution, this new approach using CYGNSS can be used to understand microplastic concentrations in real time.
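Conceptually, the detection boils down to an anomaly calculation: a wind model predicts how rough the sea surface should be, and grid cells where the measured roughness falls well short of that prediction are flagged as possible microplastic or surfactant signals. The sketch below illustrates the idea with an invented roughness model and invented numbers; it is not the authors’ retrieval algorithm.

```python
# Conceptual sketch of the anomaly idea: compare measured ocean-surface
# roughness with the roughness a wind model predicts, and flag cells where
# the surface is anomalously smooth. The model and numbers are invented
# for illustration and are not the authors' retrieval algorithm.
import numpy as np

def predicted_roughness(wind_speed_ms):
    """Stand-in wind-driven roughness model (arbitrary units)."""
    return 0.08 * wind_speed_ms        # assume roughness grows ~linearly

wind_speed = np.array([4.0, 6.0, 8.0, 5.0])        # m/s, per grid cell
measured = np.array([0.31, 0.47, 0.40, 0.39])      # observed roughness

anomaly = predicted_roughness(wind_speed) - measured
suspect = anomaly > 0.15        # threshold chosen arbitrarily here

for i, flag in enumerate(suspect):
    status = "possible microplastic/surfactant signal" if flag else "normal"
    print(f"cell {i}: anomaly={anomaly[i]:+.2f} -> {status}")
```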

Using their approach to analyze global patterns, the researchers found that microplastic concentrations in the North Indian Ocean tend to be highest in late winter/early spring and lowest in early summer. These fluxes coincide with the monsoon season, prompting the authors to suggest that the outflow patterns of rivers into the ocean and the diluting power of the rain may influence microplastic concentrations, although more research is needed to confirm this. They also detected a strong seasonal pattern in the Great Pacific Garbage Patch, where concentrations of microplastics are highest in summer and lowest in winter.

“The seasonal shift in our measurements surprised me the most,” says Evans. “Before working on this project, I conceptualized the Great Pacific Garbage Patch as a consistent, mostly static mass. I was surprised to witness these large accumulations of microplastics behaving in such a dynamic way.”

Ruf emphasizes the novelty of being able to track microplastic concentrations in real time. “Time lapse measurements of microplastic concentration have never been possible before, so we are able to see for the first time some of its dynamic behavior, like the seasonal dependence and the outflow from major rivers into the ocean,” he says.

However, he notes that not being able to validate the model with direct sampling of microplastics remains a major limitation. Ruf and Evans are now serving on a global task force dedicated to remote sensing of marine litter and debris, which aims to improve these measurements in the future.

“It is very exciting to be involved in something this new and important,” says Ruf. “[This work] will hopefully lead to better awareness of the problem of ocean microplastics and other marine debris and litter, and maybe someday to changes in industrial practices and lifestyle choices to help reduce the problem.”

Ultra-quick Cameras Reveal How to Catch a Light Pulse Mid-Flight

Post Syndicated from Michael Dumiak original https://spectrum.ieee.org/tech-talk/sensors/imagers/super-fast-cameras-capture-light-pulse-midflight

The footage is something between the ghostlike and commonplace: a tiny laser pulse tracing through a box pattern, as if moving through a neon storefront sign. But this pulse is more than meets the eye, more than a pattern for the eye to follow, and there’s no good metaphor for it. It is quite simply and starkly a complete burst of light—a pulse train with both a front and a back—captured in a still image, mid-flight.

Electrical engineers and optics experts at the Advanced Quantum Architecture (AQUA) Laboratory in Neuchâtel, Switzerland made this footage last year, mid-pandemic, using a single-photon avalanche diode camera, or SPAD. Its solid-state photodetectors are capable of very high-precision measurements of time and therefore distance, even as its single pixels are struck by individual light particles, or photons.

Edoardo Charbon and his colleagues at the Swiss Ecole Polytechnique Fédérale de Lausanne, working with camera maker Canon, were able in late 2019 to develop a SPAD array at megapixel size in a small camera they called Mega X. It can resolve one million pixels and process this information very quickly. 

The Charbon-led AQUA group—at that point comprising nanotechnology prizewinner Kazuhiro Morimoto, Ming-Lo Wu and Andrei Ardelean—synchronized Mega X to a femtosecond laser. They fire the laser through an aerosol of water vapor, made using dry ice procured at a party shop. 

The photons hit the water droplets, and some of them scatter toward the camera. With a sensor able to realize a megapixel resolution, the camera can capture 24,000 frames per second with exposure times as fast as 3.8 nanoseconds.
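Those figures are worth a quick sanity check: light covers only about a meter during the shortest exposure, which is what makes it possible to freeze a pulse within a single frame. The back-of-the-envelope arithmetic, using only the frame rate and exposure time quoted above:

```python
# Back-of-the-envelope check of the published figures: how far light travels
# during one 3.8-nanosecond exposure, and between successive frames at
# 24,000 frames per second.
c = 299_792_458.0                 # speed of light in vacuum, m/s

exposure_s = 3.8e-9               # shortest exposure quoted for Mega X
frame_interval_s = 1 / 24_000     # time between frames at 24,000 fps

print(f"Distance light covers per exposure: {c * exposure_s:.2f} m")
print(f"Distance light covers between frames: {c * frame_interval_s / 1e3:.1f} km")
```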

The photodetectors used in Mega X have been in development for several decades. Indeed, SPAD imagers can be found in smartphone cameras, industrial robots, and lab spectroscopy. The kind of footage caught in the Neuchâtel lab is called light-in-flight. MIT’s Ramesh Raskar in 2010 and 2011 used a streak camera—a kind of camera used in chemistry applications—to produce 2D images and build films of light in flight, in what he called femtophotography.

The development of light-in-flight imaging goes back at least to the late 1970s, when Nils Abramson at Stockholm’s Royal Institute of Technology used holography to record light wavefronts. Genevieve Gariepy, Daniele Faccio and other researchers then in Edinburgh in 2015 used a SPAD to show footage of light in flight. The first light-in-flight images took many hours to construct, but the Mega X camera can do it in a few minutes. 

Martin Zurek, a freelance photographer and consultant in Bavaria (he does 3D laser scanning and drone measurements for architectural and landscape projects), recalls many long hours working on femtosecond spectroscopy for his physics doctorate in Munich in the late 1990s. When Zurek watches the AQUA light footage, he’s impressed by the resolution in location and time, which opens up dimensions for the image. “The most interesting thing is how short you can make light pulses,” he says. “You’re observing something fundamental, a fundamental physical property or process. That should astound us every day.”

A potential use for megapixel SPAD cameras would be for faster and more detailed light-ranging 3D scans of landscapes and large objects in mapping, facsimile production, and image recording. For example, megapixel SPAD technology could greatly enhance LiDAR imaging and modeling of the kind done by the Factum Foundation in their studies of the tomb of the Egyptian Pharaoh Seti I.

Faccio, now at the University of Glasgow, is using SPAD imaging to obtain better time-resolved fluorescence microscopy images of cell metabolism. As with Raskar’s MIT group, Faccio hopes to apply the technology to human body imaging.

The AQUA researchers were able to observe an astrophysical effect in their footage called superluminal motion, which is an illusion akin to the Doppler effect. Only in this case light appears to the observer to speed up—which it can’t really do, already traveling as it does at the speed of light.

Charbon’s thoughts are more earthbound. “This is just like a conventional camera, except that in every single pixel you can see every single photon,” he says. “That blew me away. It’s why I got into this research in the first place.”

Atomically Precise Sensors Could Detect Another Earth

Post Syndicated from Jason Wright original https://spectrum.ieee.org/sensors/imagers/atomically-precise-sensors-could-detect-another-earth

Gazing into the dark and seemingly endless night sky above, people have long wondered: Is there another world like ours out there, somewhere? Thanks to new sensors that we and other astronomers are developing, our generation may be the first to get an affirmative answer—and the earliest hints of another Earth could come as soon as this year.

Astronomers have discovered thousands of exoplanets so far, and almost two dozen of them are roughly the size of our own planet and orbiting within the so-called habitable zone of their stars, where water might exist on the surface in liquid form. But none of those planets has been confirmed to be rocky, like Earth, and to circle a star like the sun. Still, there is every reason to expect that astronomers will yet detect such a planet in a nearby portion of the galaxy.

So why haven’t they found it yet? We can sum up the difficulty in three words: resolution, contrast, and mass. Imagine trying to spot Earth from hundreds of light-years away. You would need a giant telescope to resolve such a tiny blue dot sitting a mere 150 million kilometers (0.000016 light-year) from the sun. And Earth, at less than a billionth the brightness of the sun, would be hopelessly lost in the glare.

For the same reasons, current observatories—and even the next generation of space telescopes now being built—have no chance of snapping a photo of our exoplanetary twin. Even when such imaging does become possible, it will allow us to measure an exoplanet’s size and orbital period, but not its mass, without which we cannot determine whether it is rocky like Earth or largely gaseous like Jupiter.

For all these reasons, astronomers will detect another Earth first by exploiting a tool that’s been used to detect exoplanets since the mid-1990s: spectroscopy. The tug of an orbiting exoplanet makes its star “wobble” ever so slightly, inducing a correspondingly slight and slow shift in the spectrum of the star’s light. For a doppelgänger of Earth, that fluctuation is so subtle that we have to look for a displacement measuring just a few atoms across in the patterns of the rainbow of starlight falling on a silicon detector in the heart of a superstable vacuum chamber.

Our group at Pennsylvania State University, part of a team funded by NASA and the U.S. National Science Foundation (NSF), has constructed an instrument called NEID (short for NN-explore Exoplanet Investigations with Doppler spectroscopy), for deployment in Arizona to search the skies of the Northern Hemisphere. (The name NEID—pronounced “NOO-id”—derives from an American Indian word meaning “to see.”) A Yale University team has built a second instrument, called EXPRES (for EXtreme PREcision Spectrometer), which will also operate in Arizona. Meanwhile, a third team led by the University of Geneva and the European Southern Observatory is commissioning an instrument called ESPRESSO (Echelle SPectrograph for Rocky Exoplanet and Stable Spectroscopic Observations), in Chile, to hunt in the southern skies.

All three groups are integrating Nobel Prize–winning technologies in novel ways to create cutting-edge digital spectrographs of femtometer-scale precision. Their shared goal is to achieve the most precise measurements of stellar motions ever attempted. The race to discover the elusive “Earth 2.0” is afoot.

Using spectroscopy to pick up an exoplanet-induced wobble is an old idea, and the basic principle is easy to understand. As Earth circles the sun in a huge orbit, it pulls the sun, which is more than 300,000 times as massive, into a far smaller “counter orbit,” in which the sun moves at a mere 10 centimeters per second, about the speed of a box turtle.

Even such slow motions cause the wavelengths of sunlight to shift—by less than one part per billion—toward shorter, bluer wavelengths in the direction of its motion and toward longer, redder wavelengths in the other direction. Exoplanets induce similar Doppler shifts in the light from their host stars. The heavier the planet, the bigger the Doppler shift, making the planet easier to detect.
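The numbers make clear how demanding this is. Applying the Doppler relation Δλ/λ = v/c to a 10-centimeter-per-second wobble shifts a 500-nanometer line by only a fraction of a femtometer in wavelength. The rough calculation below uses those two figures from the article; the spectrograph dispersion and pixel size used to translate the shift into motion on the detector are assumed, typical values for illustration only.

```python
# Rough scale of the signal being chased. The 10 cm/s reflex velocity and the
# 500 nm reference wavelength come from the article; the dispersion and pixel
# size are assumed, typical values used only to illustrate the detector-scale
# displacement.
c = 299_792_458.0            # speed of light, m/s
v = 0.10                     # stellar reflex velocity for an Earth twin, m/s
lam = 500e-9                 # reference wavelength, m

dlam = lam * v / c           # non-relativistic Doppler shift
print(f"Fractional shift:  {v / c:.2e}")
print(f"Wavelength shift:  {dlam * 1e15:.2f} femtometers")

dispersion_m_per_px = 1.7e-12   # assumed: ~0.0017 nm of wavelength per pixel
pixel_size_m = 15e-6            # assumed: 15-micrometer pixels
shift_on_ccd = dlam / dispersion_m_per_px * pixel_size_m
print(f"Line motion on the CCD: {shift_on_ccd * 1e9:.1f} nanometers")
```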

In 1992, radio astronomers observing odd objects called pulsars used the timing of the radio signals they emit to infer that they hosted exoplanets. In 1995, Michel Mayor and Didier Queloz of the University of Geneva were using the Doppler wobbles in the light from ordinary stars to search for orbiting companions and found a giant planet orbiting the star 51 Pegasi every four days.

This unexpected discovery, for which Mayor and Queloz won the 2019 Nobel Prize in Physics, opened the floodgates. The “Doppler wobble” method has since been used to detect more than 800 exoplanets, virtually all of them heavier than Earth.

So far, almost all of the known exoplanets that are the size and mass of Earth or smaller were discovered using a different technique. As these planets orbit, they fortuitously pass between their star and our telescopes, causing the apparent brightness of the star to dim very slightly for a time. Such transits, as astronomers call them, give away the planet’s diameter—and together with its wobble they will reveal the planet’s mass.
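The planet’s diameter follows from the depth of the dip: the fractional dimming equals the square of the planet-to-star radius ratio. A quick worked example with standard published radii shows why Earth-size transits are such shallow signals.

```python
# Transit depth = (planet radius / star radius)^2, using standard published
# radii. The point is only to show the scale of an Earth-size dip.
r_earth = 6.371e6        # m
r_jupiter = 6.9911e7     # m
r_sun = 6.957e8          # m

for name, r in [("Earth", r_earth), ("Jupiter", r_jupiter)]:
    depth = (r / r_sun) ** 2
    print(f"{name:8s} transit depth: {depth:.2e} ({depth * 1e6:.0f} ppm)")
```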

Since the first transiting exoplanet was discovered in 1999, the Kepler space telescope and Transiting Exoplanet Survey Satellite (TESS) have found thousands more, and that number continues to grow. Most of these bodies are unlike any in our solar system. Some are giants, like Jupiter, orbiting so close to their host stars that they make the day side of Mercury look chilly. Others are gassy planets like Neptune or Uranus but much smaller, or rocky planets like Earth but much larger.

Through statistical analysis of the bounteous results, astronomers infer that our own Milky Way galaxy must be home to billions of rocky planets like Earth that orbit inside the habitable zones of their host stars. A study published recently by scientists who worked on the Kepler mission estimated that at least 7 percent—and probably more like half—of all sunlike stars in the Milky Way host potentially habitable worlds. That is why we are so confident that, with sufficiently sensitive instruments, it should be possible to detect other Earths.

But to do so, astronomers will have to measure the mass, size, and orbital period of the planet, which means measuring both its Doppler wobble and its transit. The trove of data from Kepler and TESS—much of it yet to be analyzed—has the transit angle covered. Now it’s up to us and other astronomers to deliver higher-precision measurements of Doppler wobbles.

NEID, our contestant in the race to find Earth 2.0, improves the precision of Doppler velocimetry by making advances on two fronts simultaneously: greater stability and better calibration. Our design builds on work done in the 2000s by European astronomers who built the High Accuracy Radial Velocity Planet Searcher (HARPS), which controlled temperature, vibration, and pressure so well that it faithfully tracked the subtle Doppler shifts of a star’s light over years without sacrificing precision.

In 2010, we and other experts in this field gathered at Penn State to compare notes and discuss our aspirations for the future. HARPS had been producing blockbuster discoveries since 2002; U.S. efforts seemed to be lagging behind. The most recent decadal review by the U.S. National Academy of Sciences had recommended an “aggressive program” to regain the lead.

Astronomers at the workshop drafted a letter to NASA and the NSF that led the agencies to commission a new spectrograph, to be installed at the 3.5-meter WIYN telescope at Kitt Peak National Observatory, in Arizona. (“WIYN” is an acronym derived from the names of the four founding institutions.) NEID would be built at Penn State by an international consortium.

Like HARPS before it, NEID combines new technologies with novel methods to achieve unprecedented stability during long-term observations. Any tiny change in the instrument—whether in temperature or pressure, in the way starlight illuminates its optics, or even in the nearly imperceptible expansion of the instrument’s aluminum structure as it ages—can cause the dispersed starlight to creep across the detector. That could look indistinguishable from a Doppler shift and corrupt our observations. Because the wobbles we aim to measure show up as shifts of mere femtometers on the sensor—comparable to the size of the atoms that make up the detector—we have to hold everything incredibly steady.

That starts with exquisite thermal control of the instrument, which is as big as a car. It has required us to design cutting-edge digital detectors and sophisticated laser systems, tied to atomic clocks. These systems act as the ruler by which to measure the amplitudes of the stellar wobbles.

Yet we know we cannot build a perfectly stable instrument. So we rely on calibration to take us the rest of the way. Previous instruments used specialized lamps containing thorium or uranium, which give off light at hundreds of distinct, well-known wavelengths—yardsticks for calibrating the spectral detectors. But these lamps change with age.

NEID instead relies on a laser frequency comb: a Nobel Prize–winning technology that generates laser light at hundreds of evenly spaced frequency peaks. The precision of the comb, limited only by our best atomic clocks, is better than a few parts per quadrillion. By using the comb to regularly calibrate the instrument, we can account for any residual instabilities.
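Each comb line sits at an exactly known optical frequency, an integer multiple of the repetition rate plus a fixed offset, both referenced to an atomic clock. Recording which detector pixel each line lands on therefore yields a wavelength solution for the spectrum. The sketch below illustrates that idea with invented comb parameters and a toy linear fit; it is not NEID’s calibration pipeline.

```python
# Toy wavelength calibration against a laser frequency comb. The comb
# parameters, pixel centroids, and linear fit are invented for illustration;
# a real pipeline uses far more lines and a higher-order, two-dimensional
# wavelength solution.
import numpy as np

c = 299_792_458.0          # m/s
f_rep = 25e9               # assumed repetition rate (line spacing), Hz
f_offset = 6.19e9          # assumed offset frequency, Hz

# Exact frequencies (and wavelengths) of five comb modes near 500 nm.
n = np.arange(23_980, 23_985)
freqs = f_offset + n * f_rep
wavelengths = c / freqs                     # meters, known to clock precision

# Synthetic pixel centroids where those lines fall on the CCD.
pixels = np.array([1001.3, 1013.4, 1025.6, 1037.7, 1049.9])

coeffs = np.polyfit(pixels, wavelengths, deg=1)   # pixel -> wavelength map
wavelength_of = np.poly1d(coeffs)

print(f"Wavelength at pixel 1025.0: {wavelength_of(1025.0) * 1e9:.6f} nm")
```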

Somewhere out there, another Earth is orbiting its star, which wobbles slowly in sympathy around their shared center of gravity. Over the course of a year or so, the star’s spectrum shifts ever so slightly toward the red, then toward the blue. As that starlight reaches Kitt Peak, the telescope brings it to a sharp focus at the tip of a glass optical fiber, which guides the light down into the bowels of the observatory through a conduit into a custom-designed room housing the NEID instrument.

The focus spot must strike the fiber in exactly the same way with every observation; any variations can translate into slightly different illumination patterns on the detector, mimicking stellar Doppler shifts. The quarter-century-old WIYN telescope was never designed for such work, so to ensure consistency our colleagues at the University of Wisconsin–Madison built an interface that monitors the focus and position of the stellar image hundreds of times a second and rapidly adjusts to hold the focal point steady.

Down in the insulated ground-floor room, a powerful heating-and-cooling system keeps a vacuum chamber bathed in air at a constant temperature of 20 °C. NEID’s spectrograph components, held at less than one-millionth of the standard atmospheric pressure inside the vacuum chamber, are protected from even tiny changes in pressure by a combination of pumps and “getters”—including 2 liters of cryogenic charcoal—to which stray gas molecules stick.

A network of more than 75 high-precision thermometers actively controls 30 heaters placed around the instrument’s outer surface to compensate for any unexpected thermal fluctuations. With no moving parts or motors that might generate heat and disrupt this delicate thermal balance, the system is able to hold NEID’s core optical components to exactly 26.850 °C, plus or minus 0.001 °C.

As the starlight emerges from the optical fiber, it strikes a parabolic collimating mirror and then a reflective diffraction grating that is 800 millimeters long. The grating splits the light into more than 100 thin colored lines. A large glass prism and a system of four lenses then spread the lines across a silicon-based, 80-megapixel charge-coupled device. Within that CCD sensor, photons from the distant star are converted into electrons that are counted, one at a time, by supersensitive amplifiers.

Heat from the amplifiers makes the detector itself expand and contract in ways that are nearly impossible to calibrate. We had to devise some way to keep that operating heat as constant and manageable as possible.

Electrons within the CCD accumulate in pixels, which are actually micrometer-size wells of electric potential, created by voltages applied to minuscule electrodes on one side of the silicon substrate. The usual approach is to hold these voltages constant while the shutter is open and the instrument collects stellar photons. At the end of the observation, manipulation of the voltages shuffles collected electrons over to the readout amplifiers. But such a technique generates enough heat—a few hundredths of a watt—to cripple a system like NEID.

Our team devised an alternative operating method that prevents this problem. We manipulate the CCD voltages during the collection of stellar photons, jiggling the pixels slightly without sacrificing their integrity. The result is a constant heat load from the detector rather than transient pulses.

Minor imperfections in the CCD detector present us with a separate engineering challenge. State-of-the-art commercial CCDs have high pixel counts, low noise, and impeccable light sensitivity. These sensors nevertheless contain tiny variations in the sizes and locations of the millions of individual pixels and in how efficiently electrons move across the detector array. Those subtle flaws induce smearing effects that can be substantially larger than the femtometer-size Doppler shifts we aim to detect. The atomic ruler provided by the laser frequency comb allows us to address this issue by measuring the imperfections of our sensor—and appropriately calibrating its output—at an unprecedented level of precision.

We also have to correct for the motion of the telescope itself as it whirls around Earth’s axis and around the sun at tens of kilometers per second—motion that creates apparent Doppler shifts hundreds of thousands of times as great as the ~10 cm/s wobbles caused by an exoplanet. Fortunately, NASA’s Jet Propulsion Laboratory has for decades been measuring Earth’s velocity through space to much better precision than we can measure it with our spectrograph. Our software combines that data with sensitive measurements of Earth’s rotation to correct for the motion of the telescope.
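The Astropy library packages this kind of barycentric correction: given the observatory’s location, the time of observation, and the star’s coordinates, it returns the velocity of the observatory projected toward the star, which is then removed from the measured value. A minimal sketch, with a placeholder timestamp and 51 Pegasi as the example target:

```python
# Minimal sketch of a barycentric velocity correction with Astropy. The
# observation time is a placeholder and 51 Pegasi stands in as the target;
# Kitt Peak's coordinates are entered directly so the example runs offline.
import astropy.units as u
from astropy.coordinates import EarthLocation, SkyCoord
from astropy.time import Time

kitt_peak = EarthLocation.from_geodetic(
    lon=-111.5967 * u.deg, lat=31.9583 * u.deg, height=2096 * u.m)
target = SkyCoord(ra=344.3665 * u.deg, dec=20.7689 * u.deg)   # 51 Pegasi
obstime = Time("2021-01-15T03:00:00", scale="utc")            # placeholder

# Velocity of the observatory projected toward the star.
v_bary = target.radial_velocity_correction(
    kind="barycentric", obstime=obstime, location=kitt_peak)

measured_rv = 1234.56 * u.m / u.s       # hypothetical instrument reading
corrected_rv = measured_rv + v_bary     # simple additive form; precision work
                                        # uses the full relativistic expression
print(corrected_rv.to(u.m / u.s))
```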

In January 2020, NEID achieved its “first light” with observations of 51 Pegasi. Since then, we have refined the calibration and tuning of the instrument, which in recent months has approached our design target of measuring Doppler wobbles as slow as 33 cm/s—a big step toward the ultimate goal of 10-cm/s sensitivity. With commissioning underway, the instrument is now regularly measuring some of the nearest, brightest sunlike stars for planets.

Meanwhile, in Chile’s Atacama Desert, the most sophisticated instrument of this kind yet built, ESPRESSO, has been analyzing Doppler shifts in starlight gathered and combined from the four 8.2-meter telescopes of the Very Large Telescope. ESPRESSO is a technological marvel that achieves unprecedentedly precise observations of very faint and distant stars by using two arms that house separate optics and CCD detectors optimized for the red and blue halves of the visible spectrum.

ESPRESSO has already demonstrated measurements that are precise to about 25 cm/s, with still better sensitivity expected as the multiyear commissioning process continues. Astronomers have used the instrument to confirm the existence of a supersize rocky planet in a tight orbit around Proxima Centauri, the star nearest the sun, and to observe the extremely hot planet WASP-76b, where molten iron presumably rains from the skies.

At the 4.3-meter Lowell Discovery Telescope near Flagstaff, Ariz., EXPRES has recently demonstrated its own ability to measure stellar motions to much better than 50 cm/s. Several other Doppler velocimeters, to be installed on other telescopes, are either in the final stages of construction or are coming on line to join the effort. The variety of designs will help astronomers observe stars of differing kinds and magnitudes.

Since that 2010 meeting, our community of instrument builders has quickly learned which design solutions work well and which have turned out to be more trouble than they are worth. Perhaps when we convene at the next such meeting in California, one group will have already found Earth 2.0. There’ll be no telling, though, whether it harbors alien life—and if so, whether the aliens have also spied us.

This article appears in the March 2021 print issue as “How We’ll Find Another Earth.”

About the Author

Jason T. Wright is a professor of astronomy and astrophysics at Pennsylvania State University. Cullen Blake is an associate professor in the department of physics and astronomy at the University of Pennsylvania.

Ultrasonic Holograms: Who Knew Acoustics Could Go 3D?

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/sensors/imagers/acoustic-holograms-and-3d-ultrasound

Although its origins trace back to 1985, acoustic holography as a field has been hampered by rampant noise created by widespread reflection and scattering. (Lasers tamp this problem down somewhat for conventional optical holograms; ultrasound as yet offers no such technological quick fix.) 

But in a recent study published last month in IEEE Sensors Journal, a group of researchers report an improved method for creating acoustic holograms. While the advance won’t lead to treatment with acoustic holograms in the immediate future, the improved technique yields higher sensitivity and a better focusing effect than previous acoustic hologram methods.

There are a number of intriguing possibilities that come with manipulation using sound, including medical applications. Ultrasound can penetrate human tissues and is already used for medical imaging. But more precise manipulation and imaging of human tissues using 3D holographic ultrasound  could lead to completely new therapies—including targeted neuromodulation using sound.

The nature of sound itself poses the first hurdle to be overcome. “The medical application of acoustic holograms is limited owing to the sound reflection and scattering at the acoustic holographic surface and its internal attenuation,” explains Chunlong Fei, an associate professor at Xidian University who is involved in the study.

To address these issues, his team created their acoustic hologram via a “lens” consisting of a disc with a hole at its center. They placed a 1 MHz ultrasound transducer in water and used the outer part of the transducer surface to create the hologram. Because of the hole at the lens’s center, the center of the transducer can generate and receive sound waves with less reflection and scattering.

Next, the researchers compared their new disc approach to more conventional acoustic hologram techniques. They performed this A vs. B comparison via ultrasound holographic images of several thin needles, 1.25 millimeters in diameter or less.

“The most notable feature of the hole-hologram we proposed is that it has high sensitivity and maintains good focusing effect [thanks to the] holographic lens,” says Fei. He notes that these features will lead to less scattering and propagation loss than what occurs with traditional acoustic holograms.

Fei envisions several different ways in which this approach could one day be applied medically, for example by complementing existing medical imaging probes to achieve better resolution, or for applications such as nerve regulation or non-invasive brain stimulation. However, the current setup, which operates in water, would need to be modified to suit medical settings, and several next steps remain in characterizing and manipulating the hologram, says Fei.

The varied design improvements Fei’s team hopes to develop match the equally eclectic possible applications of ultrasonic hologram technology. In the future, Fei says they hope acoustic holographic devices might achieve super-resolution imaging, particle trapping, selective biological tissue heating—and even find new applications in personalized medicine.

3D Fingerprint Sensors Get Under Your Skin

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/sensors/imagers/3d-fingerprint-sensor-id-subcutaneous


Many people already use fingerprint recognition technology to access their phones, but one group of researchers wants to take this skin-deep concept further. In a study published January 18 in IEEE Sensors Journal, they described a technique that maps out not only the unique pattern of a person’s fingerprint, but the blood vessels underneath. The approach adds another layer of security to identity authentication.

Typically, a fingerprint scan only accounts for 2D information that captures a person’s unique fingerprint pattern of ridges and valleys, but this 2D information could easily be replicated.

“Compared with the existing 2D fingerprint recognition technologies, 3D recognition that captures the finger vessel pattern within a user’s finger will be ideal for preventing spoofing attacks and be much more secure,” explains Xiaoning Jing, a distinguished professor at North Carolina State University and co-author of the study.

To develop a more secure approach using 3D recognition, Jing and his colleagues created a device that relies on ultrasound pulses. When a finger is placed upon the system, triggering a pressure sensor, a high-frequency pulsed ultrasonic wave is emitted. The amplitudes of reflecting soundwaves can then be used to determine both the fingerprint and blood vessel patterns of the person.
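The depth information in such a system comes from pulse-echo timing: each reflection’s round-trip delay, halved and multiplied by the speed of sound in tissue, locates the reflecting structure, while the echo amplitude helps distinguish a fingerprint ridge from a vessel wall. A brief sketch of that time-of-flight arithmetic, with invented echo delays rather than values from the paper:

```python
# Pulse-echo ranging arithmetic, as used in ultrasonic fingerprint imaging.
# The echo delays below are invented for illustration; only the conversion
# from round-trip time to depth is the point.
SPEED_OF_SOUND_TISSUE = 1540.0       # m/s, typical soft-tissue value

def echo_depth(round_trip_s):
    """Depth of a reflector from its round-trip echo delay."""
    return SPEED_OF_SOUND_TISSUE * round_trip_s / 2

# Hypothetical echoes from one scan position: skin surface, then a vessel.
echoes_us = [0.26, 1.95]             # microseconds
for t_us in echoes_us:
    d = echo_depth(t_us * 1e-6)
    print(f"echo at {t_us:.2f} us -> reflector depth {d * 1e3:.2f} mm")
```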

In an experiment, the device was tested using an artificial finger created from polydimethylsiloxane, which has an acoustic impedance similar to that of human tissue. Bovine blood was added to vessels constructed in the artificial finger. Through this setup, the researchers were able to obtain electronic images of both the fingerprint and blood vessel patterns with resolutions of 500 × 500 dots per inch, which they say is sufficient for commercial applications.

Intriguingly, while the blood vessel features beneath the ridges of the artificial finger could be determined, this was not the case for 40% of the blood vessels that lay underneath the valleys of the fingerprint. Jing explains that this is because the high-frequency acoustic waves cannot propagate through the tiny spaces confined within the valleys of the fingerprint. Nevertheless, he notes that enough of the blood vessels throughout the finger can be distinguished to make the approach worthwhile, and that data interpolation or other advanced processing techniques could be used to reconstruct the undetected portions of vessels.

Chang Peng, a postdoctoral research scholar at North Carolina State University and co-author of the study, sees this approach as widely applicable. “We envision this 3D fingerprint recognition approach can be adopted as a highly secure bio-recognition technique for broad applications including consumer electronics, law enforcement, banking and finance, as well as smart homes,” he says, noting that the group is seeking a patent and looking for industry partners to help commercialize the technology.

Notably, the current setup, which uses a single ultrasound transducer, takes an hour or more to acquire an image. To improve on this, the researchers plan to explore how an array of ultrasonic fingerprint sensors performs compared with the single sensor used in this study. They then aim to test the device with real human fingers, comparing the security and robustness of their technique with those of commercial optical and capacitive fingerprint-recognition techniques.

Light-driven sonar could survey the oceans from the air

Post Syndicated from Prachi Patel original https://spectrum.ieee.org/tech-talk/sensors/remote-sensing/lightdriven-sonar-could-survey-the-oceans-from-the-air

Sonar, which measures the time it takes for sound waves to bounce off objects and travel back to a receiver, is the best way to visualize underwater terrain or inspect marine-based structures. Sonar systems, though, have to be deployed on ships or buoys, making them slow and limiting the area they can cover.

However, engineers at Stanford University have developed a new hybrid technique combining light and sound. Aircraft, they suggest, could use this combined laser/sonar technology to sweep the ocean surface for high-resolution images of submerged objects. The proof-of-concept airborne sonar system, presented recently in the journal IEEE Access, could make it easier and faster to find sunken wrecks, investigate marine habitats, and spot enemy submarines.

“Our system could be on a drone, airplane or helicopter,” says Amin Arbabian, an electrical engineering professor at Stanford University. “It could be deployed rapidly…and cover larger areas.”

Airborne radar and lidar are used to map the Earth’s surface at high resolution. Both can penetrate clouds and forest cover, making them especially useful in the air and on the ground. But peering into water from the air is a different challenge. Sound, radio, and light waves all quickly lose their energy when traveling from air into water and back. This attenuation is even worse in turbid water, Arbabian says.

So he and his students combined the two modalities—laser and sonar. Their system relies on the well-known photoacoustic effect, which turns pulses of light into sound. “When you shine a pulse of light on an object it heats up and expands and that leads to a sound wave because it moves molecules of air around the object,” he says.

The group’s new photoacoustic sonar system begins by shooting laser pulses at the water surface. Water absorbs most of the energy, creating ultrasound waves that move through it much like conventional sonar. These waves bounce off objects, and some of the reflected waves go back out from the water into the air.

At this point, the acoustic echoes lose a tremendous amount of energy as they cross that water-air barrier and then travel through the air. Here is where another critical part of the team’s design comes in.

To detect the weak acoustic waves in air, the team uses an ultra-sensitive microelectromechanical device with the mouthful name of an air-coupled capacitive micromachined ultrasonic transducer (CMUT). These devices are simple capacitors with a thin plate that vibrates when hit by ultrasound waves, causing a detectable change in capacitance. They are known to be efficient at detecting sound waves in air, and Arbabian has been investigating the use of CMUT sensors for remote ultrasound imaging. Special software processes the detected ultrasound signals to reconstruct a high-resolution 3D image of the underwater object.
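The geometry lends itself to a simple back-of-the-envelope calculation: sound is generated at the water surface, makes a round trip through the water to the target, and then one more leg through the air to the detector. The sketch below works through that arithmetic with textbook sound speeds; it is an illustration of the principle, not the Stanford group's reconstruction software.

```python
# Back-of-the-envelope sketch (not the Stanford group's code) of converting
# an echo delay into object depth when the ultrasound is generated at the
# water surface and detected by a CMUT in the air above it.

C_WATER = 1480.0  # m/s, speed of sound in water
C_AIR = 343.0     # m/s, speed of sound in air

def object_depth(echo_delay_s, detector_height_m):
    """Depth of a reflector below the surface, given the total delay from the
    laser pulse (which creates sound at the surface) to detection in air."""
    # Round trip in water plus a one-way leg through the air to the detector.
    water_time = echo_delay_s - detector_height_m / C_AIR
    return C_WATER * water_time / 2.0

# At the tank-experiment scale: detector 10 cm above water, target ~20 cm deep.
print(object_depth(echo_delay_s=5.6e-4, detector_height_m=0.10))  # ~0.2 m
```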

The researchers tested the system by imaging metal bars of different heights and diameters placed in a large, 25-centimeter-deep fish tank filled with clear water. The CMUT detector was positioned 10 centimeters above the water surface.

The system should work in murky water, Arbabian says, although they haven’t tested that yet. Next up, they plan to image objects placed in a swimming pool, for which they will have to use more powerful laser sources that work for deeper water. They also want to improve the system so it works with waves, which distort signals and make detection and image reconstruction much harder. “This proof of concept is to show that you can see through the air-water interface,” Arbabian says. “That’s the hardest part of this problem. Once we can prove it works it can scale up to greater depths and larger objects.”

Accelerating the journey to autonomous for carmakers

Post Syndicated from ARM Automotive original https://spectrum.ieee.org/sensors/automotive-sensors/accelerating-the-journey-to-autonomous-for-carmakers

A child runs into the street, stands frozen and terrified as she sees a car speeding toward her. What’s safer? Relying on a car with autonomous capabilities or a human driver to brake in time?

In that situation, it takes the average driver 1.5 seconds to react and hit the brakes; it takes a car equipped with vision systems, radar, and lidar just 0.5 seconds. That 3x faster response time can mean the difference between life and death, and it’s one of many reasons the automotive industry is accelerating its journey toward autonomy. Whether it’s safety, efficiency, comfort, navigation or the more efficient manufacture of an autonomous car, automotive companies are increasingly embracing autonomous technologies.
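As a rough illustration of what that one-second difference buys (the 50 km/h speed below is an assumption, not a figure from this article), the distance covered before braking even begins works out as follows:

```python
# Quick worked example: distance traveled during the reaction time,
# before the brakes are applied. Speed is illustrative.
def reaction_distance_m(speed_kmh, reaction_time_s):
    return speed_kmh / 3.6 * reaction_time_s  # convert km/h to m/s

for label, t in [("human driver", 1.5), ("automated system", 0.5)]:
    print(label, round(reaction_distance_m(50, t), 1), "m before braking starts")
# At 50 km/h: ~20.8 m for the human vs. ~6.9 m for the automated system.
```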

But this wave of interest in autonomous technologies does not come without its challenges. Disparate systems and a dizzying range of software and hardware choices can make charting a path to autonomy difficult. For example, many current solutions for automated driving are based on bulky, high-power, costly standalone chips. Others require proprietary solutions, which severely constrain design flexibility. To make automated driving suitable for widespread commercial use, however, Tier 1 suppliers and OEMs are looking for more power-efficient, cost-effective solutions that can be mass-produced.

To embrace the vision, we need to understand that autonomous workloads are fundamentally different from traditional workloads and confront the challenges that complexity presents. The incorporation of AI and machine learning means the workload is far greater in magnitude than, for example, the narrow operating scope of contemporary ADAS (advanced driver-assistance systems).

Grappling with complexity

Autonomous workloads are also more complex than traditional ones because of the overlapping and interconnected nature of systems in industrial applications. There may be a machine vision system with its own set of data and algorithms that needs to integrate with robot vehicles, which are alerted by the vision system that a task has been completed and pickup is required.

Integration may also be required for other points along the supply chain. For example, vehicles rolling down the manufacturing line will benefit from just-in-time parts arrivals thanks to the more integrated, near real-time connection with the supply chain.

These technologies also need to support features that help achieve both ASIL B and ASIL D safety requirements, as defined by ISO 26262.

New thinking, new technology

All of this requires a fundamental reconsideration of design, validation and configuration as well as embracing emerging technologies and methodologies.

As we evolve toward increasingly autonomous deployment, we need new hardware and software systems that will replace some of the commercial offerings being used today – and a fundamental shift from homogeneous to heterogeneous systems. These new systems must not only deliver flexibility, but also help reduce power, size, cost and thermal load while retaining the performance needed to run autonomous workloads. And such systems will benefit from a flourishing ecosystem coalescing around these new design approaches, offering customers the choice and flexibility to unleash their innovations.

Our mission at Arm is to deliver the underlying technologies and nurture an ecosystem to enable industries such as automotive to accelerate their journey to autonomy and reap the rewards of delivering new, transformative products to their customers.

In this market deep dive we outline the technology challenges and the hardware and software solutions that can propel the automotive industry forward in its journey to a safer, more efficient and more prosperous future.

IoT Makes Fire Detection Systems Smarter

Post Syndicated from Brian Horowitz original https://spectrum.ieee.org/tech-talk/sensors/remote-sensing/how-iot-makes-fire-detection-systems-smarter

For years, first responders relied on paper maps to reach a fire in an apartment building or office. Incomplete information would delay firefighters from arriving at an emergency, and false alarms would set them on the wrong path altogether. Dispatchers in 911 centers might be told the problem was a smoke detector when it was actually a sprinkler switch.

“It gets to the point where you don’t even trust the data,” said Dick Bauer, fire chief for the Killingworth Volunteer Fire Company in Killingworth, Connecticut.

Now cloud computing, mobile apps, edge computing and IoT gateways are giving fire safety personnel visibility into where an emergency is and how best to reach it.

Remote monitoring and diagnostic capabilities of an IoT system help firefighters know where to position personnel and trucks in advance, according to Bauer. An IoT system tells fire personnel the locations of a smoke detector going off, a heat detector sending signals or a water flow switch being activated.

“You can see a map of the building with the actual location identified where the fire really is, and you can actually watch it spread if you have enough sensors,” said Bill Curtis, analyst in residence, IoT, at Moor Insights & Strategy.

IoT will make systems in commercial buildings work together like Amazon’s Alexa controls lights, thermostats and audio/video (AV) equipment in a home, Curtis said. An IoT system could shut down an HVAC system or put elevators in fire mode if smoke is blowing around a building, he suggested. A mobile app populated with sensor data can provide visibility into emergency systems and how to control specific locations in a building. It provides a holistic view of sensors, controls and fire panels.

Firefighters speeding to the scene will know what floor the fire is on and which sensors the emergency triggered. They’ll also learn how many people are in the building, and which entrance to use when they get there, Curtis explained.

“The more sensors and different types of sensors means earlier detection and greater resolution as well as greater precision on exactly where the fire is and how it is moving,” he said.

How IoT Fire Detection Works

Companies such as BehrTech and Honeywell offer IoT connectivity systems that provide situational awareness when fighting fires. BehrTech’s MyThings wireless IoT platform provides disaster warnings to guard against forest fires. It lets emergency personnel monitor the weather as well as atmospheric and seismic data.

On 20 Oct., Honeywell introduced a cloud platform for its Connected Life Safety Services (CLSS) that allows first responders to access data on a fire system before they get to an emergency. It’s now possible to evaluate the condition of devices and get essential data about an emergency in real time using a mobile app.

The CLSS cloud platform connects to an IoT gateway at a central station, which collects data from sensors around a building. CLSS transmits data on the building location that generated the alarm to fire departments. It also provides a history of detector signals over the previous 24 hours and indicates whether the smoke detector had previously triggered a false alarm, says Sameer Agrawal, general manager of software and services at Honeywell.
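To make the data flow concrete, here is a hypothetical example of the kind of structured alert record such a gateway might forward; the field names and values are invented for illustration and are not Honeywell's actual CLSS schema.

```python
# Hypothetical alert record; field names are invented for illustration
# and do not reflect Honeywell's actual CLSS schema.
import json
from datetime import datetime, timezone

alert = {
    "building_id": "HQ-TOWER-2",
    "device_type": "smoke_detector",
    "device_location": {"floor": 7, "zone": "east corridor"},
    "event": "alarm",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "history_24h": {"signals": 3, "prior_false_alarm": True},
}
print(json.dumps(alert, indent=2))
```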

Agrawal said smart fire IoT platforms like CLSS indicate precisely where an emergency is occurring and will enable firefighters to take the right equipment to the correct location.

“When the dispatch sends a fire truck, the Computer Aided Dispatch (CAD) system will provide an access code that the officer in the truck can punch into an application; that will bring up a 2D model of the building and place the exact location of the alarm,” Agrawal said. “You’re able to track your crews that way, so this really is the kind of information that’s going to make their jobs so much safer and more efficient and take all the guesswork out of it.”

IoT Fire Safety Systems in the Future

Curtis suggests that, as more emergency systems become interconnected in the future, building managers and workers should get access to these dashboards in addition to firefighters.

“Why not show the building occupants where the fire is so they can avoid it?” Curtis says.

In addition, smart fire detection systems will use artificial intelligence (AI) to detect false alarms and provide contextual information on how to prevent them—and prevent people from being thrown out of their hotel beds unnecessarily at 3 a.m., Agrawal said.

“When next-generation AI comes into play,” Agrawal says, “we start understanding more information about, you know, why was it a false alarm or what could have been done differently.”

An AI-equipped detection system will present a score to a facility manager indicating whether there’s a need to call the fire department. Information on the cause of an event and on how first responders handled past emergencies will help the software come up with the score.

What’s more, the algorithms will help detect anomalies in the data from multiple sensors. These anomalies can include a sensor malfunction, a security breach, or a reading that’s “unreasonable,” says Curtis.
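One simple way to flag an "unreasonable" reading is a rolling z-score check against a sensor's own recent history. The sketch below shows that idea only; it is not the algorithm any particular vendor uses.

```python
# Minimal sketch of anomaly flagging via a z-score against recent history.
import statistics

def is_anomalous(history, new_reading, threshold=3.0):
    """Flag a reading that sits more than `threshold` standard deviations
    from the recent mean of the same sensor."""
    if len(history) < 10:          # not enough context yet
        return False
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history) or 1e-9  # avoid division by zero
    return abs(new_reading - mean) / stdev > threshold

recent_temps = [21.0, 21.2, 20.9, 21.1, 21.0, 21.3, 21.1, 20.8, 21.2, 21.0]
print(is_anomalous(recent_temps, 35.4))  # True: likely a fault or a real event
```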

Get More From Your Automotive Electronics Test Investment

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/get_more_from_your_automotive_electronics_test_investment

Keysight Automotive

Explore the powerful software behind Keysight’s high-precision hardware and discover how to meet emerging automotive electronics test requirements, while driving higher ROI from your existing hardware. Let Keysight help you cross the finish line ahead of your competition. Jump-start your automotive innovation today with a complimentary software trial.

Cops Tap Smart Streetlights Sparking Controversy and Legislation

Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/view-from-the-valley/sensors/remote-sensing/cops-smart-street-lights

When San Diego started installing its smart streetlights in 2017, city managers envisioned that the data they gathered would help improve city operations—like selecting streets for bicycle lanes, identifying dangerous intersections for special attention, and figuring out where the city needed more parking. They thought they might spark some tech startups to develop apps that would guide the visually impaired, point drivers to parking places, and send joggers down the quietest routes. And they also envisioned cost savings from reduced energy thanks to the vastly higher efficiencies of the LED lights in comparison with the sodium-vapor lights they replaced.

Instead a $30 million project that had looked like it would put San Diego on the map as one of the “smartest” cities in the U.S. has mired it in battles over the way these systems are being used by law enforcement. Meanwhile, the independent apps that had seemed so promising two years ago have failed to materialize, and even the idea that the technology would pay for itself as energy costs came down didn’t work out as hoped.

San Diego got its smart “CityIQ” streetlights from GE Current, a company originally started as a subsidiary of General Electric but acquired last year by private-equity firm American Industrial Partners. Some 3300 of them have been installed to date and 1000 more have been received but not installed. As part of the deal, the city contracted with Current to run the cloud-based analytics of sensor data on its CityIQ platform. As part of that contract, the cloud operator, rather than the city, owns any algorithms derived from the data. In an additional shuffle, American Industrial Partners sold off the CityIQ platform in May to Ubicuia, a Florida manufacturer of streetlight sensors and software, but kept the LED-lighting side of the operation.

San Diego was the first city to fully embrace the CityIQ technology, though Atlanta and Portland did run pilot tests of the technology. San Diego financed the smart lights—and 14,000 other basic LED lights—with a plan that spread the payments out over 13 years, in such a way that the energy savings from replacing the old sodium-vapor lighting would cover the cost and then some.

The CityIQ streetlights are packed with technology. Inside is an Intel Atom processor, half a terabyte of memory, Bluetooth and Wi-Fi radios, two 1080p video cameras, two acoustical sensors, and environmental sensors that monitor temperature, pressure, humidity, vibration, and magnetic fields. Much of the data is processed on the node—a textbook example of “edge processing.” That typically includes the processing of the digital video: machine-vision algorithms running on the streetlight itself count cars or bicycles, say, or extract the average speed of vehicles, and then transmit that information to the cloud. This data is managed under contract, initially by GE Current, and the data manager owns any analytics or algorithms derived from processed data.
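The edge-processing pattern is straightforward to sketch: the vision model runs on the node, and only derived counts ever leave the streetlight. In the sketch below, detect_vehicles is a stand-in for whatever on-node model the platform actually runs; none of this is CityIQ's real code.

```python
# Illustrative edge-processing loop: process video on the node, transmit
# only aggregate counts, never the raw imagery.
import time

def detect_vehicles(frame):
    """Placeholder for an on-node machine-vision model; returns a count."""
    raise NotImplementedError

def edge_loop(camera, publish, interval_s=60):
    counts = []
    window_start = time.time()
    for frame in camera:                       # iterate over captured frames
        counts.append(detect_vehicles(frame))  # processing stays on the node
        if time.time() - window_start >= interval_s:
            # Only the aggregate leaves the streetlight, not the video.
            publish({"vehicles_per_min": sum(counts) / max(len(counts), 1),
                     "window_s": interval_s})
            counts, window_start = [], time.time()
```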

Initially, at least, the data was expected to be used exclusively for civic analysis and planning and public convenience.

“There was an expectation when we launched this that it was not a law-enforcement system,” says Erik Caldwell, deputy chief operating officer for the City of San Diego. “That’s true; that’s not the primary purpose,” he adds. “The primary purpose is to gather data to better run the city.”

But in August 2018, everything changed. That’s when, while investigating a murder in San Diego’s Gaslamp Quarter, a police officer looked up and saw one of the new smart streetlights. He realized the streetlight’s video cameras had a perfect view of the crime scene—one unavailable from the various security cameras in the area.

“We had never seen a video from any of these cameras before,” says Jeffrey Jordon, a captain with the San Diego Police Department. “But we realized the camera was exactly where the crime scene was.”

The police department reached out to San Diego’s environmental services department, the organization responsible for the lights, and asked if video were available. It turned out that the video was still stored on the light—it is deleted after five days—and Current was able to pull it up from the light to its cloud servers, and then forward it to the police department. 

“It was clear at that point that some of the video could help solve crimes, so we had an obligation to turn that over when there was a major crime,” Caldwell says.

“The data sits at the sensors unless we have a specific request, when a violent crime or fatal accident occurs. It is very surgical. We only use it in a reactive way,” Jordon states. The system, he stresses, is not used for surveillance.

In this case, it turned out, the video exonerated the person who had been arrested for the murder.

At this point, Caldwell admits that he and his colleagues in the city administration made a mistake.

“We could have done a better job of communicating with the public that there had been a change in the use of the data,” he concedes. “One could have argued that we should have thought this through from the beginning.”

Informal discussions between city managers and police department officials started a month or two later, Caldwell recalls. A lot of things had to be sorted out. There needed to be policies and procedures around when streetlight data would be used by the police. And the process of getting the data to the police had to be streamlined. As it was, it was very cumbersome, involving looking up a streetlight number on a map, contacting the environmental services department with that number, which would in turn contact Current, who would then extract and forward the video. It was an essentially manual process with so many people in the loop that it insufficiently safeguarded the chain of custody required for any evidence to hold up in court.

Come October, the police department had a draft policy in place, limiting the use of the data to serious crimes, though what that set of crimes encompassed wasn’t fully defined.

By mid-February of 2019, the data retrieval and chain-of-custody issues were resolved. The police department had started using Genetec Clearance, a digital-evidence management system, Jordon explains. San Diego Police now have a direct visual interface to the smart-streetlight network, showing where the lights are and if they are operational at any time. When a crime occurs, police officers can use that tool to directly extract the video from a particular streetlight. Data not extracted is still erased after five days.

It’s hard to say exactly when the use of the streetlight video started bubbling up into the public consciousness. Between March and September 2019 the city and the police department held more than a dozen community meetings, explaining the capabilities of the streetlights and collecting feedback on current and proposed future uses for the data. In June 2019 the police department released information on its use of video data to solve crimes—99 at that point, over the 10-month period beginning in August 2018.

In August of 2019, Genevieve Jones-Wright, then legal director of the Partnership for the Advancement of New Americans, said she and other leaders of community organizations heard about a tour the city was offering as part of this outreach effort. Representatives of several organizations made a plan to take the tour and attend future community meetings, soon forming a coalition called Trust SD, for Transparent and Responsible Use of Surveillance Technology San Diego. The group, with Jones-Wright as its spokesperson, pushes for clear and transparent policies around the program.

In late September 2019, TrustSD had some 20 community organizations on board (the group now encompasses 30 organizations). That month, Jones-Wright wrote to the City Council on behalf of the group, asking for an immediate moratorium on the use, installation, and acquisition of smart streetlights until safeguards were put in place to mitigate impacts on civil liberties; a city ordinance that protects privacy and ensures oversight; and public records identifying how streetlight data had been, and was being, used.

In March 2019, the police department adopted a formal policy around the use of streetlight data. It stated that video and audio may be accessed exclusively for law-enforcement purposes with the police department as custodian of the records; the city’s sustainability department (home of the streetlight program) does not have access to that crime-related data. The policy also pledges that streetlight data will not be used to discriminate against any group or to invade the privacy of individuals.

Jones-Wright and her coalition argued that the lack of specific penalties for misuse of the data was unacceptable. Since September 2019 they have been pushing for a change to the city’s municipal code, that is, to codify the policy into a law enforceable by fines and other measures if violated.

But the streetlight controversy didn’t really explode until early in 2020, when another release of data from the police department indicated that the number of times video from the street lights had been used was up to 175 in the first year and a half of police department use, all in investigations of “serious” crimes. The list included murders, sexual assaults, and kidnappings—but it also included vandalism and illegal dumping, which caused activists to question the city’s definition of “serious.”

Says Jones-Wright, “When you have a system with no oversight, we can’t tell if you are operating within the confines of rules you created for yourself. When you see vandalism and illegal dumping on the list—these are not serious crimes—so you have to ask why they tapped into the surveillance footage for that, and could the reason be the class of the victim. We are concerned with the hierarchy of justice here and creating tiers of victimhood.”

The police department’s Jordon points out that the dumping incident involved a truckload of concrete that blocked vehicles from entering and exiting a parking garage used by FBI employees, and therefore qualified as a serious situation.

Local media outlets reported extensively on the controversy. The city attorney submitted a policy to the council, and the council held a vote on whether to ratify the proposed policy in January. It was rejected, not because of any specific objections to its content, but because some lawmakers feared that a policy would no longer be enough to satisfy the public’s concerns. What was needed, they said, was an actual ordinance, with penalties for those who violated it, one that would go beyond streetlights to cover all use of surveillance technology by the city.

San Diegans may yet get their ordinance. In mid-July 2020, a committee of the City Council approved two proposed ordinances related to streetlight data. One would set up a process to govern all surveillance technologies in the city, including oversight, auditing, and reporting; data collection and sharing; and access, protection, and retention of data. The other would create a privacy advisory commission of technical experts and community members to review proposals for surveillance technology use. These ordinances still need to go to the full City Council for a vote.

The police department, Jordon said, would welcome clarity. “People are of course concerned right now about things taking place in Hong Kong and China [with the use of cameras] that give them pause. From our standpoint, we have tried to be great stewards of the technology,” he says.

“There is no reason under the sun that the City of San Diego and our law enforcement agencies should continue to operate without rules that govern the use of surveillance technology—where there is no transparency, oversight or accountability—no matter what the benefits are,” says Jones-Wright. “So, I am very happy that we are one step closer to having the ordinances that TRUST SD helped to write on the books in San Diego.”

While the city and community organizations figure out how to regulate the streetlights, their use continues. To date, San Diego police have tapped streetlight video data nearly 400 times, including this past June, during investigations of incidents of felony vandalism and looting during Black Lives Matter protests.

Meanwhile, some of the promised upside of the technology hasn’t worked out as expected.

Software developers who had initially expressed interest in using streetlight data to create consumer-oriented tools for ordinary citizens have yet to get an app on the market.

Says Caldwell, “When we initially launched the program, there was the hope that San Diego’s innovation economy community would find all sorts of interesting use cases for the data, and develop applications, and create a technology ecosystem around mobility and other solutions. That hasn’t been borne out yet; we have had a lot of conversations with companies looking at it, but it hasn’t turned into a smashing success yet.”

And the planned energy savings, intended to generate cash to pay for the expensive fixtures, were chewed up by increases in electric rates.

For better or worse, San Diego did pave the way for other municipalities looking into smart city technology. Take Carlsbad, a smaller city just north of San Diego. It is in the process of acquiring its own sensor-laden streetlights; these, however, will not incorporate cameras. David Graham, who managed the streetlight acquisition program as deputy chief operating officer for San Diego and is now chief innovation officer for the City of Carlsbad, did not respond to Spectrum’s requests for comment but indicated in an interview with the Voice of San Diego that the cameras are not necessary to count cars and pedestrians and other methods will be used for that function. And Carlsbad’s City Council has indicated that it intends to be proactive in establishing clear policies around the use of streetlight data.

“The policies and laws should have been in place before the street lamps were in the ground, instead of legislation catching up to technological capacity,” says Morgan Currie, a lecturer in data and society at the University of Edinburgh.

“It is a clear case of function creep. The lights weren’t designed as surveillance tools, rather, it’s a classic example of how data collection systems are easily retooled as surveillance systems, of how the capacities of the smart city to do good things can also increase state and police control.”

Toshiba’s Light Sensor Paves the Way for Cheap Lidar

Post Syndicated from John Boyd original https://spectrum.ieee.org/cars-that-think/sensors/automotive-sensors/toshibas-light-sensor-highresolution-lidar

The introduction of fully autonomous cars has slowed to a crawl. Nevertheless, the introduction of technologies such as rearview cameras and automatic self-parking systems is helping the auto industry make incremental progress toward Level 4 autonomy while boosting driver-assist features along the way.

To that end, Toshiba has developed a compact, highly efficient silicon photo-multiplier (SiPM) that enables non-coaxial Lidar to employ off-the-shelf camera lenses to lower costs and help bring about solid-state, high-resolution Lidar.

Automotive Lidar (light detection and ranging) typically uses spinning lasers to scan a vehicle’s environment 360 degrees by bouncing laser pulses off surrounding objects and measuring the return time of the reflected light to calculate their distances and shapes. The resulting point-cloud map can be used in combination with still images, radar data and GPS to create a virtual 3D map of the area the vehicle is traveling through.
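The distance measurement itself rests on simple time-of-flight arithmetic: light travels out and back, so range is half the round-trip time multiplied by the speed of light. A minimal illustration:

```python
# Basic time-of-flight arithmetic behind any pulsed lidar.
C = 299_792_458.0  # speed of light, m/s

def lidar_range_m(round_trip_time_s):
    # The pulse travels to the target and back, so divide by two.
    return C * round_trip_time_s / 2.0

# A target at roughly 200 m (the range Toshiba reports) returns its echo
# after about 1.33 microseconds.
print(lidar_range_m(1.334e-6))  # ~200 m
```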

However, high-end Lidar systems can be expensive, costing $80,000 or more, though cheaper versions are also available. The current leader in the field is Velodyne, whose lasers mechanically rotate in a tower mounted atop a vehicle’s roof.

Solid-state Lidar systems have been announced in the past several years but have yet to challenge the mechanical variety. Now, Toshiba hopes to advance their cause with its SiPM: a solid-state light sensor employing single-photon avalanche diode (SPAD) technology. The Toshiba SiPM contains multiple SPADs, each controlled by an active quenching circuit (AQC). When a SPAD detects a photon, its cathode voltage drops; the AQC then resets the voltage to its initial value, readying the SPAD for the next photon.

“Typical SiPM recovery time is 10 to 20 nanoseconds,” says Tuan Thanh Ta, Toshiba’s project leader for the technology. “We’ve made it 2 to 4 times faster by using this forced or active quenching method.”

The increased efficiency means Toshiba has been able to use far fewer light sensing cells—down from 48 to just 2—to produce a device measuring 25 μm x 90 μm, much smaller, the company says, than standard devices measuring 100 μm x 100 μm. The small size of these sensors has allowed Toshiba to create a dense two-dimensional array for high sensitivity, a requisite for long-range scanning. 

But such high-resolution data would require impractically large multichannel readout circuitry comprising separate analog-to-digital converters (ADCs) for long-distance scanning and time-to-digital converters (TDCs) for short distances. Toshiba has overcome this problem by realizing both ADC and TDC functions in a single circuit. The result is an 80 percent reduction in size (down to 50 μm by 60 μm) over conventional dual data converter chips.

“Our field trials using the SiPM with a prototype Lidar demonstrated the system’s effectiveness up to a distance of 200 meters while maintaining high resolution—which is necessary for Level 4 autonomous driving,” says Ta, who hails from Vietnam. “This is roughly quadruple the capability of solid-state Lidar currently on the market.”

“Factors like detection range are important, especially in high-speed environments like highways,” says Michael Milford, an interdisciplinary researcher and Deputy Director, Queensland University of Technology (QUT) Center for Robotics in Australia. “So I can see that these [Toshiba trial results] are important properties when it comes to commercial relevance.” 

And as Toshiba’s SiPM employs a two-dimensional array—unlike the one-dimensional photon receivers used in coaxial Lidar systems—Ta points out its 2D aspect ratio corresponds to that of light sensors used in commercial cameras. Consequently, off-the-shelf standard, telephoto and wide-angle lenses can be used for specific applications, helping to further reduce costs. 

By comparison, coaxial Lidar systems use the same optical path for sending and receiving the light, and so require a costly customized lens to transmit and then collect the received light and send it to the 1D photon receivers, Ta explains.

But as Milford points out, “If Level 4+ on-road autonomous driving becomes a reality, it’s unlikely we’ll have a wide range of solutions and sensor configurations. So this flexibility [of Toshiba’s sensor] is perhaps more relevant for other domains like drones, and off-road autonomous vehicles, where there is more variability, and use of existing hardware is more important.”

Meanwhile, Toshiba is working to improve the performance quality of its SiPM. “We aim to introduce practical applications in fiscal 2022,” says Akihide Sai, a Senior Research Scientist at Toshiba overseeing the SiPM project. Though he declines to answer whether Toshiba is working with automakers or will produce its own solid-state Lidar system, he says Toshiba will make the SiPM device available to other companies. 

He adds, “We also see it being used in applications such as automated navigation of drones and in robots used for infrastructure monitoring, as well as in applications used in factory automation.”

But QUT’s Milford makes this telling point. “Many of the advances in autonomous vehicle technology and sensing, especially with regards to cost, bulk, and power usage, will become most relevant only when the overall challenge of Level 4+ driving is solved. They aren’t themselves the critical missing pieces for enabling this to happen.”

Can Sensors That Detect Coronavirus in the Air Help Economies Reopen Safely?

Post Syndicated from Emily Waltz original https://spectrum.ieee.org/the-human-os/sensors/chemical-sensors/devices-monitor-coronavirus-in-the-air

As businesses scramble to find ways to make workers and customers feel safe about entering enclosed spaces during a pandemic, several companies have proposed a solution: COVID-19 air monitoring devices. 

These devices suck in large quantities of air and trap aerosolized virus particles and anything else that’s present. The contents are then tested for the presence of the novel coronavirus, also known as SARS-CoV-2, which causes COVID-19. 

Several companies in the air quality and diagnostics sectors have quickly developed this sort of tech, with various iterations available on the market. The devices can be used anywhere, such as office buildings, airplanes, hospitals, schools and nursing homes, these companies say. 

But the devices don’t deliver results in real time—they don’t beep to alert people nearby that the virus has been detected. Instead, the collected samples must be sent to a lab to be analyzed, typically with a method called PCR, or polymerase chain reaction. 

This process takes hours. Add to that the logistics of physically transporting the samples to a lab and it could be a day or more before results are available. Still, there’s value in day-old air quality information, say developers of this type of technology. 

“It’s not solving everything about COVID-19,” says Milan Patel, CEO of PathogenDx, a DNA-based testing company. But it does enable businesses to spot the presence of the virus without relying on people to self-report, and brings peace of mind to everyone involved, he says. “If you’re going into a building, wouldn’t it be great to know that they’re doing environmental monitoring?” Patel says.

PathogenDx this year developed an airborne SARS-CoV-2 detection system by combining its DNA testing capability with an air sampler from Bertin Instruments. The gooseneck-shaped instrument uses a cyclonic vortex that draws in a high volume of air and traps any particles inside in a liquid. Once the sample is collected, it must be sent to a PathogenDx lab, where it goes through a two-step PCR process. This amplifies the virus’s genetic code so that it can be detected. Adding a second step to the process improves the test’s sensitivity, Patel says. (PCR is also the gold standard for general human testing of COVID-19.)

Patel says he envisions the device proving particularly useful on airplanes, in large office buildings and health care facilities. On an airplane, for example, if the device picks up the presence of the virus during a flight, the airline can let passengers on that plane know that they were potentially exposed, he says. Or if the test comes back negative for the flight, the airline can “know that they didn’t just infect 267 passengers,” says Patel. 

In large office buildings, daily air sampling can give building managers a tool for early detection of the virus. As soon as the tests start coming back positive, the office managers could ask employees to work from home for a couple of weeks. Hospitals could use the device to track trends, identify trouble spots, and alert patients and staff of exposures.

Considering that many carriers of the virus don’t know they have it, or may be reluctant to report positive test results to every business they’ve visited, air monitoring could alert people to potential exposures in a way that contact tracing can’t. 

Other companies globally are putting forth their iterations on SARS-CoV-2 air monitoring. Sartorius, in Göttingen, Germany, says its device was used to analyze the air in two hospitals in Wuhan, China. (The results: the concentration of the virus in isolation wards and ventilated patient rooms was very low, but it was higher in the toilet areas used by patients.)

Assured Bio Labs in Oak Ridge, Tennessee markets its air monitoring device as a way to help the American workforce get back to business. InnovaPrep in Missouri offers an air sampling kit called the Bobcat, and Eurofins Scientific in Luxembourg touts labs worldwide that can analyze such samples.

But none of the commercially available tests can offer real-time results. That’s something that Jing Wang and Guangyu Qiu at the Swiss Federal Institute of Technology (ETH Zurich) and Swiss Federal Laboratories for Materials Science and Technology, or Empa, are working on.

They’ve come up with a plasmonic photothermal biosensor that can detect the presence of SARS-CoV-2 without the need for PCR. Qiu, a sensor engineer and postdoc at ETH Zurich and Empa, says that with some more work, the device could provide results within 15 minutes to an hour. “We’re trying to simplify it to a lab on a chip,” says Qiu. 

The device combines an optical sensor and a photothermal component that harness localized surface plasmon resonance sensing transduction and the plasmonic photothermal effect. But before the device can be tested in the real world, the researchers must find a way, on-board the device, to separate the virus’s genetic material from its membrane. Qiu says he hopes to resolve this and have a prototype ready to test by the end of the year.

New Electronic Nose Sniffs Out Perfectly Ripe Peaches for Harvest

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/sensors/chemical-sensors/new-electronic-nose-sniffs-out-perfectly-ripe-peaches-for-harvest

Have you ever tried to guess the ripeness of a peach by its smell? Farmers with a well-trained nose may be able to detect the unique combination of alcohols, esters, ketones, and aldehydes, but even an expert may struggle to know when the fruit is perfect for the picking. To help with harvesting, scientists have been developing electronic noses for sniffing out the ripest and most succulent peaches. In a recent study, one such e-nose achieved an accuracy of more than 98 percent.

Sergio Luiz Stevan Jr. and colleagues at Federal University of Technology – Paraná and State University of Ponta Grossa, in Brazil, developed the new e-nose system. Stevan notes that even within a single, large orchard, fruit on one tree may ripen at different times than fruit on another tree, thanks to microclimates of varying ventilation, rain, soil, and other factors. Farmers can inspect the fruit and make their best guess at the prime time to harvest, but risk losing money if they choose incorrectly.

Fortunately, peaches emit vaporous molecules, called volatile organic compounds, or VOCs. “We know that volatile organic compounds vary in quantity and type, depending on the different phases of fruit growth,” explains Stevan. “Thus, the electronic noses are an [option], since they allow the online monitoring of the VOCs generated by the culture.”

The e-nose system created by his team has a set of gas sensors sensitive to particular VOCs. The measurements are digitized and pre-processed in a microcontroller. Next, a pattern recognition algorithm is used to classify each unique combination of VOC molecules associated with three stages of peach ripening (immature, ripe, over-ripe). The data is stored internally on an SD memory card and transmitted via Bluetooth or USB to a computer for analysis.
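The classification stage can be sketched with any standard pattern-recognition model; the study does not necessarily use the one below, and the sensor readings here are made up, but it shows how a vector of VOC sensor readings maps to a ripeness class.

```python
# Illustration of the classification stage: map gas-sensor readings to a
# ripeness class. A k-nearest-neighbors model is used purely as an example;
# the training data below is hypothetical.
from sklearn.neighbors import KNeighborsClassifier

# Each row is one "sniff": readings from the e-nose's gas sensors.
X_train = [
    [0.12, 0.30, 0.05, 0.20],
    [0.45, 0.52, 0.31, 0.40],
    [0.80, 0.75, 0.66, 0.71],
    [0.10, 0.28, 0.07, 0.18],
    [0.47, 0.55, 0.29, 0.43],
    [0.83, 0.78, 0.70, 0.69],
]
y_train = ["immature", "ripe", "over-ripe"] * 2

clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print(clf.predict([[0.44, 0.50, 0.33, 0.41]]))  # -> ['ripe']
```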

The system is also equipped with a ventilation mechanism that draws in air from the surrounding environment at a constant rate. The passing air is subjected to a set level of humidity and temperature to ensure consistent measurements. The idea, Stevan says, is to deploy several of these “noses” across an orchard to create a sensing network. 

He notes several advantages of this system over existing ripeness-sensing approaches, including that it is online, conducts real-time continuous analyses in an open environment, and does not require direct handling of the fruit. “It is different from the other [approaches] present in the literature, which are generally carried out in the laboratory or warehouses, post-harvest or during storage,” he says. The e-nose system is described in a study published June 4 in IEEE Sensors Journal.

While the study shows that the e-nose system already has a high rate of accuracy at more than 98 percent, the researchers are continuing to work on its components, focusing in particular on improving the tool’s flow analysis. They have filed for a patent and are exploring the prospect of commercialization.

For those who prefer their fruits and grains in drinkable form, there is additional good news. Stevan says in the past his team has developed a similar e-nose for beer, to analyze both alcohol content and aromas. Now they are working on an e-nose for wine, as well as a variety of other fruits.

Monitoring bees with a Raspberry Pi and BeeMonitor

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/monitoring-bees-with-a-raspberry-pi-and-beemonitor/

Keeping an eye on bee life cycles is a brilliant example of how Raspberry Pi sensors help us understand the world around us, says Rosie Hattersley

The setup featuring an Arduino, RF receiver, USB cable and Raspberry Pi

Getting to design and build things for a living sounds like a dream job, especially if it also involves Raspberry Pi and wildlife. Glyn Hudson has always enjoyed making things and set up a company manufacturing open-source energy monitoring tools shortly after graduating from university. With access to several hives at his keen apiarist parents’ garden in Snowdonia, Glyn set up BeeMonitor using some of the tools he used at work to track the beehives’ inhabitants.

Glyn bent down in front of a hive checking the original BeeMonitor setup

Glyn checking the original BeeMonitor setup

“The aim of the project was to put together a system to monitor the health of a bee colony by monitoring the temperature and humidity inside and outside the hive over multiple years,” explains Glyn. “Bees need all the help and love they can get at the moment and without them pollinating our plants, we’d struggle to grow crops. They maintain a 34°C core brood temperature (±0.5°C) even when the ambient temperature drops below freezing. Maintaining this temperature when a brood is present is a key indicator of colony health.”

Wi-Fi not spot

BeeMonitor has been tracking the hives’ population since 2012 and is one of the earliest examples of a Raspberry Pi project. Glyn built most of the parts for BeeMonitor himself. Open-source software developed for the OpenEnergyMonitor project provides a data-logging and graphing platform that can be viewed online.

Spectators in protective suits watching staff monitor the beehive

BeeMonitor complete with solar panel to power it. The Snowdonia bees produce 12 to 15 kg of honey per year

The hives were too far from the house for Wi-Fi to reach, so Glyn used a low-power RF sensor connected to an Arduino, which was placed inside the hive to take readings. These were received by a Raspberry Pi connected to the internet.

Diagram showing what information BeeMonitor is trying to establish

At first, there was both a DS18B20 temperature sensor and a DHT22 humidity sensor inside the beehive, along with the Arduino (setup info can be found here). Data from these was saved to an SD card, the obvious drawback being that this didn’t display real-time data readings. In his initial setup, Glyn also had to extract and analyse the CSV data himself. “This was very time-consuming but did result in some interesting data,” he says.

Sensor-y overload

Almost as soon as BeeMonitor was running successfully, Glyn realised he wanted to make the data live on the internet. This would enable him to view live beehive data from anywhere and also allow other people to engage with the data.

“This is when Raspberry Pi came into its own,” he says. He also decided to drop the DHT22 humidity sensor. “It used a lot of power and the bees didn’t like it – they kept covering the sensor in wax! Oddly, the bees don’t seem to mind the DS18B20 temperature sensor, presumably since it’s a round metal object compared to the plastic grille of the DHT22,” notes Glyn.

Bees interacting with the temperature probe

Unlike the humidity sensor, the bees don’t seem to mind the temperature probe

The system has been running for eight years with minimal intervention and is powered by an old car battery and a small solar PV panel. Running costs are negligible: “Raspberry Pi is perfect for getting projects like this up and running quickly and reliably using very little power,” says Glyn. He chose it because of the community behind the hardware. “That was one of Raspberry Pi’s greatest assets and what attracted me to the platform, as well as the competitive price point!” The whole setup cost him about £50.

Glyn tells us we could set up a basic monitor using Raspberry Pi, a DS18B20 temperature sensor, a battery pack, and a solar panel.
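For anyone wanting to try something similar, a minimal temperature logger takes only a few lines of Python, assuming the Raspberry Pi's 1-Wire interface is enabled (for example with dtoverlay=w1-gpio in /boot/config.txt) and a DS18B20 is wired to it. This is a generic sketch, not Glyn's code.

```python
# Minimal DS18B20 reader using the Raspberry Pi's 1-Wire sysfs interface.
# Assumes the w1-gpio and w1-therm kernel modules are loaded.
import glob
import time

def read_ds18b20_celsius():
    # DS18B20 devices appear under /sys/bus/w1/devices/ with a "28-" prefix.
    device_file = glob.glob("/sys/bus/w1/devices/28-*/w1_slave")[0]
    with open(device_file) as f:
        lines = f.readlines()
    if not lines[0].strip().endswith("YES"):   # CRC check failed
        return None
    temp_raw = lines[1].split("t=")[-1]        # e.g. "t=34125" -> 34.125 °C
    return int(temp_raw) / 1000.0

while True:
    print(read_ds18b20_celsius())
    time.sleep(60)                             # one reading per minute
```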

The post Monitoring bees with a Raspberry Pi and BeeMonitor appeared first on Raspberry Pi.

Demystifying the Standards for Automotive Ethernet

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/demystifying_the_standards_for_automotive_ethernet

As Ethernet celebrates its 40th anniversary, this webinar will discuss its deployment for automotive applications and the standards related to its implementation. It will explain the role of the Institute of Electrical and Electronics Engineers (IEEE), OPEN Alliance, and the Ethernet Alliance in governing the implementation.  

Key learnings:

  • Understand the role of IEEE, OPEN Alliance, and the Ethernet Alliance in the development of automotive Ethernet 
  • Determine the relevant automotive Ethernet specifications for PHY developers, original equipment manufacturers, and Tier 1 companies  
  • Review the role of automotive Ethernet relative to other emerging high-speed digital automotive standards 

Please contact [email protected] to request PDH certificate code.

Sony Builds AI Into a CMOS Image Sensor

Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/view-from-the-valley/sensors/imagers/sony-builds-ai-into-a-cmos-image-sensor

Sony today announced that it has developed and is distributing smart image sensors. These devices use machine learning to process captured images on the sensor itself. They can then select only relevant images, or parts of images, to send on to cloud-based systems or local hubs.

This technology, says Mark Hanson, vice president of technology and business innovation for Sony Corp. of America, means practically zero latency between the image capture and its processing; low power consumption enabling IoT devices to run for months on a single battery; enhanced privacy; and far lower costs than smart cameras that use traditional image sensors and separate processors.

Sony’s San Jose laboratory developed prototype products using these sensors to demonstrate to future customers. The chips themselves were designed at Sony’s Atsugi, Japan, technology center. Hanson says that while other organizations have similar technology in development, Sony is the first to ship devices to customers.

Sony builds these chips by thinning and then bonding two wafers—one containing chips with light-sensing pixels and one containing signal processing circuitry and memory. This type of design is only possible because Sony is using a back-illuminated image sensor. In standard CMOS image sensors, the electronic traces that gather signals from the photodetectors are laid on top of the detectors. This makes them easy to manufacture, but sacrifices efficiency, because the traces block some of the incoming light. Back-illuminated devices put the readout circuitry and the interconnects under the photodetectors, adding to the cost of manufacture.

“We originally went to backside illumination so we could get more pixels on our device,” says Hanson. “That was the catalyst to enable us to add circuitry; then the question was what were the applications you could get by doing that.”

Hanson indicated that the initial applications for the technology will be in security, particularly in large retail situations requiring many cameras to cover a store. In this case, the amount of data being collected quickly becomes overwhelming, so processing the images at the edge would simplify that and cost much less, he says. The sensors could, for example, be programmed to spot people, carve out the section of the image containing the person, and only send that on for further processing. Or, he indicated, they could simply send metadata instead of the image itself, say, the number of people entering a building. The smart sensors can also track objects from frame to frame as a video is captured, for example, packages in a grocery store moving from cart to self-checkout register to bag.
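The "send metadata, not imagery" idea can be sketched in a few lines. In the sketch below, detect_people stands in for the model running on the sensor itself; the function names are illustrative and are not Sony's API.

```python
# Illustrative on-sensor logic: forward either cropped regions of interest
# or metadata alone, and send nothing when no one is in the frame.
def detect_people(frame):
    """Placeholder for the on-sensor model; returns a list of (x, y, w, h) boxes."""
    raise NotImplementedError

def process_frame(frame, send, metadata_only=True):
    boxes = detect_people(frame)
    if not boxes:
        return                                   # nothing relevant: send nothing
    if metadata_only:
        send({"people_count": len(boxes)})       # e.g. people entering a building
    else:
        # Forward only the cropped regions that contain a person
        # (frame assumed to be a NumPy-style array indexed [rows, cols]).
        send([frame[y:y + h, x:x + w] for (x, y, w, h) in boxes])
```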

Remote surveillance, with the devices running on battery power, is another application that’s getting a lot of interest, he says, along with manufacturing companies that mix robots and people looking to use image sensors to improve safety. Consumer gadgets that use the technology will come later, but he expects developers to begin experimenting with samples.

Learn the Latest Standards and Test Requirements for Automotive Radar

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/learn_the_latest_standards_and_test_requirements_for_automotive_radar

Automotive radar is a key enabler for autonomous driving technology, and it is one of the key areas for new modulation technologies. Unlike wireless communication technologies, automotive radar has no regulations or definitions for modulation type, signal pattern, and duty cycle.

This webinar addresses which standards apply to radar measurements and how to measure different radar signals using Keysight’s automotive radar test solution.

Key learnings:

  • How to set the right parameters for different chirp rates and duty cycle radar signals for power measurements.  
  • Learn how to test the Rx interference of your automotive radar.  
  • Understand how to use Keysight’s automotive test solution to measure radar signals. 

Please contact [email protected] to request PDH certificate code.