Sonar, which measures the time it takes for sound waves to bounce off objects and travel back to a receiver, is the best way to visualize underwater terrain or inspect marine-based structures. Sonar systems, though, have to be deployed on ships or buoys, making them slow and limiting the area they can cover.
However, engineers at Stanford University have developed a new hybrid technique combining light and sound. Aircraft, they suggest, could use this combined laser/sonar technology to sweep the ocean surface for high-resolution images of submerged objects. The proof-of-concept airborne sonar system, presented recently in the journal IEEE Access, could make it easier and faster to find sunken wrecks, investigate marine habitats, and spot enemy submarines.
“Our system could be on a drone, airplane or helicopter,” says Amin Arbabian, an electrical engineering professor at Stanford University. “It could be deployed rapidly…and cover larger areas.”
Airborne radar and lidar are used to map the Earth’s surface at high resolution. Both can penetrate clouds and forest cover, making them especially useful in the air and on the ground. But peering into water from the air is a different challenge. Sound, radio, and light waves all quickly lose their energy when traveling from air into water and back. This attenuation is even worse in turbid water, Arbabian says.
So he and his students combined the two modalities—laser and sonar. Their system relies on the well-known photoacoustic effect, which turns pulses of light into sound. “When you shine a pulse of light on an object it heats up and expands and that leads to a sound wave because it moves molecules of air around the object,” he says.
The group’s new photoacoustic sonar system begins by shooting laser pulses at the water surface. Water absorbs most of the energy, creating ultrasound waves that move through it much like conventional sonar. These waves bounce off objects, and some of the reflected waves go back out from the water into the air.
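The ranging principle here is the same as in conventional sonar: distance follows from the round-trip travel time of the sound. A minimal sketch, using an assumed speed of sound in water (roughly 1,480 m/s in clear water; not the Stanford team's code):

```python
# Illustrative sketch: estimating the depth of a reflector from the
# round-trip time of an ultrasound echo travelling through water.
SPEED_OF_SOUND_WATER = 1480.0  # m/s, approximate for clear water

def depth_from_echo(round_trip_time_s: float) -> float:
    """The echo travels down and back, so one-way distance is half the path."""
    return SPEED_OF_SOUND_WATER * round_trip_time_s / 2.0

# An echo returning after 0.2 ms implies a reflector about 0.148 m deep.
print(depth_from_echo(0.2e-3))
```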
At this point, the acoustic echoes lose a tremendous amount of energy as they cross that water-air barrier and then travel through the air. Here is where another critical part of the team’s design comes in.
To detect the weak acoustic waves in air, the team uses an ultra-sensitive microelectromechanical device with the mouthful name of an air-coupled capacitive micromachined ultrasonic transducer (CMUT). These devices are simple capacitors with a thin plate that vibrates when hit by ultrasound waves, causing a detectable change in capacitance. They are known to be efficient at detecting sound waves in air, and Arbabian has been investigating the use of CMUT sensors for remote ultrasound imaging. Special software processes the detected ultrasound signals to reconstruct a high-resolution 3D image of the underwater object.
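The sensing principle, at its simplest, is that of an ideal parallel-plate capacitor: an incoming pressure wave deflects the thin plate, shrinking the gap and raising the capacitance. The sketch below uses hypothetical dimensions chosen purely for illustration, not real CMUT specifications:

```python
# Hypothetical numbers, for illustration only: how a vibrating plate changes
# a capacitor's value, the principle a CMUT uses to detect ultrasound in air.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance(area_m2: float, gap_m: float) -> float:
    """Ideal parallel-plate capacitance C = eps0 * A / d."""
    return EPS0 * area_m2 / gap_m

area = 1e-8          # a 100 um x 100 um plate (assumed)
rest_gap = 1e-6      # 1 um gap at rest (assumed)
deflection = 1e-9    # 1 nm deflection from an incoming pressure wave (assumed)

c_rest = plate_capacitance(area, rest_gap)
c_deflected = plate_capacitance(area, rest_gap - deflection)
print(c_deflected - c_rest)  # a tiny but detectable increase, in farads
```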
The researchers tested the system by imaging metal bars of different heights and diameters placed in a 25-cm-deep fish tank filled with clear water. The CMUT detector was 10 cm above the water surface.
The system should work in murky water, Arbabian says, although they haven’t tested that yet. Next, the team plans to image objects placed in a swimming pool, which will require more powerful laser sources that work at greater depths. They also want to improve the system so it works in the presence of surface waves, which distort signals and make detection and image reconstruction much harder. “This proof of concept is to show that you can see through the air-water interface,” Arbabian says. “That’s the hardest part of this problem. Once we can prove it works, it can scale up to greater depths and larger objects.”
A child runs into the street, stands frozen and terrified as she sees a car speeding toward her. What’s safer? Relying on a car with autonomous capabilities or a human driver to brake in time?
In that situation, it takes the average driver 1.5 seconds to react and hit the brakes; a car equipped with vision systems, radar and lidar takes just 0.5 seconds. That 3x-faster response time can mean the difference between life and death, and it’s one of many reasons the automotive industry is accelerating its journey toward autonomy. Whether the goal is safety, efficiency, comfort, navigation or the more efficient manufacture of an autonomous car, automotive companies are increasingly embracing autonomous technologies.
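The practical effect of that one-second gap is easy to quantify: it is pure distance travelled before braking even begins. A quick back-of-the-envelope calculation (speed chosen for illustration):

```python
def reaction_distance(speed_kmh: float, reaction_time_s: float) -> float:
    """Distance covered at constant speed before the brakes are applied."""
    return speed_kmh / 3.6 * reaction_time_s  # km/h -> m/s, times seconds

# At 50 km/h (an assumed urban speed):
human = reaction_distance(50, 1.5)    # about 20.8 m
machine = reaction_distance(50, 0.5)  # about 6.9 m
print(round(human - machine, 1))      # ~13.9 m of extra travel for the human
```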
But this wave of interest in autonomous technologies does not come without its challenges. Disparate systems and a dizzying range of software and hardware choices can make charting a path to autonomy difficult. For example, many current solutions for automated driving are based on bulky, high-power, costly standalone chips. Others require proprietary solutions, which severely constrain design flexibility. To make automated driving suitable for widespread commercial use, Tier 1 suppliers and OEMs are looking for more power-efficient, cost-effective solutions suitable for mass production.
To embrace the vision, we need to understand that autonomous workloads are fundamentally different from traditional workloads and confront the challenges that complexity presents. The incorporation of AI and machine learning means the workload is far greater in magnitude than, for example, the narrow scope of operation of contemporary advanced driver-assistance systems (ADAS).
Grappling with complexity
Autonomous workloads are also more complex than traditional ones because of the overlapping and interconnected nature of systems in industrial applications. There may be a machine vision system, with its own set of data and algorithms, that needs to integrate with robot vehicles the vision system alerts when a task has been completed and pickup is required.
Integration may also be required for other points along the supply chain. For example, vehicles rolling down the manufacturing line will benefit from just-in-time parts arrivals thanks to the more integrated, near real-time connection with the supply chain.
These technologies also need to support features that help meet the safety requirements of ISO 26262 at both ASIL B and ASIL D.
New thinking, new technology
All of this requires a fundamental reconsideration of design, validation and configuration as well as embracing emerging technologies and methodologies.
As we evolve toward increasingly autonomous deployment, we need new hardware and software systems that will replace some of the commercial offerings being used today, and a fundamental shift from homogeneous to heterogeneous systems. These new systems must not only deliver flexibility, but also help reduce power consumption, size, cost and thermal output while retaining the performance needed to run autonomous workloads. And such systems will benefit from a flourishing ecosystem coalescing around these new design approaches, offering customers the choice and flexibility to unleash their innovations.
Our mission at Arm is to deliver the underlying technologies and nurture an ecosystem to enable industries such as automotive to accelerate their journey to autonomy and reap the rewards of delivering new, transformative products to their customers.
In this market deep dive we outline the technology challenges and the hardware and software solutions that can propel the automotive industry forward in its journey to a safer, more efficient and more prosperous future.
For years, first responders relied on paper maps to reach a fire in an apartment building or office. Incomplete information would delay firefighters from arriving at an emergency, and false alarms would set them on the wrong path altogether. Dispatchers in 911 centers would receive erroneous information on a problem with a smoke detector rather than a sprinkler switch.
Now cloud computing, mobile apps, edge computing and IoT gateways will enable fire safety personnel to gain visibility into how to reach an emergency.
Remote monitoring and diagnostic capabilities of an IoT system help firefighters know where to position personnel and trucks in advance, according to Bauer. An IoT system tells fire personnel the locations of a smoke detector going off, a heat detector sending signals or a water flow switch being activated.
“You can see a map of the building with the actual location identified where the fire really is, and you can actually watch it spread if you have enough sensors,” said Bill Curtis, analyst in residence, IoT, at Moor Insights & Strategy.
IoT will make systems in commercial buildings work together like Amazon’s Alexa controls lights, thermostats and audio/video (AV) equipment in a home, Curtis said. An IoT system could shut down an HVAC system or put elevators in fire mode if smoke is blowing around a building, he suggested. A mobile app populated with sensor data can provide visibility into emergency systems and how to control specific locations in a building. It provides a holistic view of sensors, controls and fire panels.
Firefighters speeding to the scene will know what floor the fire is on and which sensors the emergency triggered. They’ll also learn how many people are in the building, and which entrance to use when they get there, Curtis explained.
“The more sensors and different types of sensors means earlier detection and greater resolution as well as greater precision on exactly where the fire is and how it is moving,” he said.
How IoT Fire Detection Works
Companies such as BehrTech and Honeywell offer IoT connectivity systems that provide situational awareness when fighting fires. BehrTech’s MyThings wireless IoT platform provides disaster warnings to guard against forest fires. It lets emergency personnel monitor the weather as well as atmospheric and seismic data.
On 20 Oct., Honeywell introduced a cloud platform for its Connected Life Safety Services (CLSS) that allows first responders to access data on a fire system before they get to an emergency. It’s now possible to evaluate the condition of devices and get essential data about an emergency in real time using a mobile app.
The CLSS cloud platform connects to an IoT gateway at a central station, which collects data from sensors around a building. CLSS transmits data on the building location that generated the alarm to fire departments. It also provides a history of detector signals over the previous 24 hours and indicates whether the smoke detector had previously triggered a false alarm, says Sameer Agrawal, general manager of software and services at Honeywell.
Agrawal said smart fire IoT platforms like CLSS indicate precisely where an emergency is occurring and will enable firefighters to take the right equipment to the correct location.
“When the dispatch sends a fire truck, the Computer Aided Dispatch (CAD) system will provide an access code that the officer in the truck can punch into an application; that will bring up a 2D model of the building and place the exact location of the alarm,” Agrawal said. “You’re able to track your crews that way, so this really is the kind of information that’s going to make their jobs so much safer and more efficient and take all the guesswork out of it.”
IoT Fire Safety Systems in the Future
Curtis suggests that, as more emergency systems become interconnected in the future, building managers and workers should get access to these dashboards in addition to firefighters.
“Why not show the building occupants where the fire is so they can avoid it?” Curtis says.
In addition, smart fire detection systems will use artificial intelligence (AI) to detect false alarms and provide contextual information on how to prevent them—and prevent people from being thrown out of their hotel beds unnecessarily at 3 a.m., Agrawal said.
“When next-generation AI comes into play,” Agrawal says, “we start understanding more information about, you know, why was it a false alarm or what could have been done differently.”
An AI-equipped detection system will present a score to a facility manager indicating whether there’s a need to call the fire department. Information on the cause of an event and how first responders responded to past emergencies will help the software come up with the score.
What’s more, the algorithms will help detect anomalies in the data from multiple sensors. These anomalies can include a sensor malfunction, a security breach, or a reading that’s “unreasonable,” says Curtis.
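One of the simplest anomaly checks of this kind is a z-score test against recent readings; real systems fuse many sensors and use far richer models, but the sketch below illustrates the idea:

```python
# Minimal illustration of flagging an "unreasonable" sensor reading:
# compare it against the mean and spread of recent history.
from statistics import mean, stdev

def is_anomalous(history, reading, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations from recent history."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return reading != mu
    return abs(reading - mu) / sigma > threshold

temps = [21.0, 21.4, 20.8, 21.1, 21.3, 20.9]  # normal room temperatures, deg C
print(is_anomalous(temps, 21.2))  # False: within the normal range
print(is_anomalous(temps, 48.0))  # True: plausibly fire, or a faulty sensor
```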
When San Diego started installing its smart streetlights in 2017, city managers envisioned that the data they gathered would help improve city operations—like selecting streets for bicycle lanes, identifying dangerous intersections for special attention, and figuring out where the city needed more parking. They thought they might spark some tech startups to develop apps that would guide the visually impaired, point drivers to parking places, and send joggers down the quietest routes. And they also envisioned cost savings from reduced energy thanks to the vastly higher efficiencies of the LED lights in comparison with the sodium-vapor lights they replaced.
Instead a $30 million project that had looked like it would put San Diego on the map as one of the “smartest” cities in the U.S. has mired it in battles over the way these systems are being used by law enforcement. Meanwhile, the independent apps that had seemed so promising two years ago have failed to materialize, and even the idea that the technology would pay for itself as energy costs came down didn’t work out as hoped.
San Diego got its smart “CityIQ” streetlights from GE Current, a company originally started as a subsidiary of General Electric but acquired last year by private-equity firm American Industrial Partners. Some 3300 of them have been installed to date and 1000 more have been received but not installed. As part of the deal, the city contracted with Current to run the cloud-based analytics of sensor data on its CityIQ platform. As part of that contract, the cloud operator, rather than the city, owns any algorithms derived from the data. In an additional shuffle, American Industrial Partners sold off the CityIQ platform in May to Ubicuia, a Florida manufacturer of streetlight sensors and software, but kept the LED-lighting side of the operation.
San Diego was the first city to fully embrace the CityIQ technology, though Atlanta and Portland did run pilot tests of it. San Diego financed the smart lights—and 14,000 other basic LED lights—with a plan that spread the payments out over 13 years, in such a way that the energy savings from replacing the sodium-vapor lighting would cover the cost and then some.
The CityIQ streetlights are packed with technology. Inside is an Intel Atom processor, half a terabyte of memory, Bluetooth and Wi-Fi radios, two 1080p video cameras, two acoustical sensors, and environmental sensors that monitor temperature, pressure, humidity, vibration, and magnetic fields. Much of the data is processed on the node—a textbook example of “edge processing.” That typically includes the processing of the digital video: machine-vision algorithms running on the streetlight itself count cars or bicycles, say, or extract the average speed of vehicles, and then transmit that information to the cloud. This data is managed under contract, initially by GE Current, and the data manager owns any analytics or algorithms derived from processed data.
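The edge-processing pattern described above can be sketched in a few lines: raw per-frame detections stay on the node, and only compact aggregates go to the cloud. This is a toy illustration with the machine-vision step mocked out, not CityIQ's actual pipeline:

```python
# Toy sketch of on-node aggregation: the node sees per-frame detections
# and transmits only a small summary for the interval.
from statistics import mean

def summarize_interval(frame_detections):
    """frame_detections: list of (vehicle_count, avg_speed_kmh or None) per frame."""
    counts = [c for c, _ in frame_detections]
    speeds = [s for _, s in frame_detections if s is not None]
    return {
        "vehicles_seen": max(counts),  # simplistic aggregate for illustration
        "mean_speed_kmh": round(mean(speeds), 1) if speeds else None,
    }

frames = [(2, 31.0), (3, 28.5), (3, 30.2), (1, None)]
print(summarize_interval(frames))
```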
Initially, at least, the data was expected to be used exclusively for civic analysis and planning and public convenience.
“There was an expectation when we launched this that it was not a law-enforcement system,” says Erik Caldwell, deputy chief operating officer for the City of San Diego. “That’s true; that’s not the primary purpose,” he adds. “The primary purpose is to gather data to better run the city.”
But in August 2018, everything changed. That’s when, while investigating a murder in San Diego’s Gaslamp Quarter, a police officer looked up and saw one of the new smart streetlights. He realized the streetlight’s video cameras had a perfect view of the crime scene—one unavailable from the various security cameras in the area.
“We had never seen a video from any of these cameras before,” says Jeffrey Jordon, a captain with the San Diego Police Department. “But we realized the camera was exactly where the crime scene was.”
The police department reached out to San Diego’s environmental services department, the organization responsible for the lights, and asked if video were available. It turned out that the video was still stored on the light—it is deleted after five days—and Current was able to pull it up from the light to its cloud servers, and then forward it to the police department.
“It was clear at that point that some of the video could help solve crimes, so we had an obligation to turn that over when there was a major crime,” Caldwell says.
“The data sits at the sensors unless we have a specific request, when a violent crime or fatal accident occurs. It is very surgical. We only use it in a reactive way,” Jordon states, not as surveillance.
In this case, it turned out, the video exonerated the person who had been arrested for the murder.
At this point, Caldwell admits that he and his colleagues in the city administration made a mistake.
“We could have done a better job of communicating with the public that there had been a change in the use of the data,” he concedes. “One could have argued that we should have thought this through from the beginning.”
Informal discussions between city managers and police department officials started a month or two later, Caldwell recalls. A lot of things had to be sorted out. There needed to be policies and procedures around when streetlight data would be used by the police. And the process of getting the data to the police had to be streamlined. As it was, it was very cumbersome, involving looking up a streetlight number on a map, contacting the environmental services department with that number, which would in turn contact Current, who would then extract and forward the video. It was an essentially manual process with so many people in the loop that it insufficiently safeguarded the chain of custody required for any evidence to hold up in court.
Come October, the police department had a draft policy in place, limiting the use of the data to serious crimes, though what that set of crimes encompassed wasn’t fully defined.
By mid-February of 2019, the data retrieval and chain-of-custody issues were resolved. The police department had started using Genetec Clearance, a digital-evidence management system, Jordon explains. San Diego Police now have a direct visual interface to the smart-streetlight network, showing where the lights are and if they are operational at any time. When a crime occurs, police officers can use that tool to directly extract the video from a particular streetlight. Data not extracted is still erased after five days.
It’s hard to say exactly when the use of the streetlight video started bubbling up into the public consciousness. Between March and September 2019 the city and the police department held more than a dozen community meetings, explaining the capabilities of the streetlights and collecting feedback on current and proposed future uses for the data. In June 2019 the police department released information on its use of video data to solve crimes—99 at that point, over the 10-month period beginning in August 2018.
In August of 2019, Genevieve Jones-Wright, then legal director of the Partnership for the Advancement of New Americans, said she and other leaders of community organizations heard about a tour the city was offering as part of this outreach effort. Representatives of several organizations made a plan to take the tour and attend future community meetings, soon forming a coalition called Trust SD, for Transparent and Responsible Use of Surveillance Technology San Diego. The group, with Jones-Wright as its spokesperson, pushes for clear and transparent policies around the program.
By late September 2019, Trust SD had some 20 community organizations on board (the group now encompasses 30 organizations). That month, Jones-Wright wrote to the City Council on behalf of the group, asking for an immediate moratorium on the use, installation, and acquisition of smart streetlights until safeguards were put in place to mitigate impacts on civil liberties, a city ordinance that protects privacy and ensures oversight, and public records identifying how streetlight data had been, and was being, used.
In March 2019, the police department adopted a formal policy around the use of streetlight data. It stated that video and audio may be accessed exclusively for law-enforcement purposes with the police department as custodian of the records; the city’s sustainability department (home of the streetlight program) does not have access to that crime-related data. The policy also pledges that streetlight data will not be used to discriminate against any group or to invade the privacy of individuals.
Jones-Wright and her coalition argued that the lack of specific penalties for misuse of the data was unacceptable. Since September 2019 they have been pushing for a change to the city’s municipal code, that is, to codify the policy into a law enforceable by fines and other measures if violated.
But the streetlight controversy didn’t really explode until early in 2020, when another release of data from the police department indicated that the number of times video from the street lights had been used was up to 175 in the first year and a half of police department use, all in investigations of “serious” crimes. The list included murders, sexual assaults, and kidnappings—but it also included vandalism and illegal dumping, which caused activists to question the city’s definition of “serious.”
Says Jones-Wright, “When you have a system with no oversight, we can’t tell if you are operating in the confines of rules you created for yourself. When you see vandalism and illegal dumping on the list—these are not serious crimes—you have to ask why they tapped into the surveillance footage for that, and could the reason be the class of the victim. We are concerned with the hierarchy of justice here and creating tiers of victimhood.”
The police department’s Jordon points out that the dumping incident involved a truckload of concrete that blocked vehicles from entering and exiting a parking garage used by FBI employees, and therefore qualified as a serious situation.
Local media outlets reported extensively on the controversy. The city attorney submitted a policy to the council, and the council held a vote on whether to ratify the proposed policy in January. It was rejected, not because of any specific objections to its content, but because some lawmakers feared that a policy would no longer be enough to satisfy the public’s concerns. What was needed, they said, was an actual ordinance, with penalties for those who violated it, one that would go beyond streetlights to cover all use of surveillance technology by the city.
San Diegans may yet get their ordinance. In mid-July 2020, a committee of the City Council approved two proposed ordinances related to streetlight data. One would set up a process to govern all surveillance technologies in the city, including oversight, auditing, and reporting; data collection and sharing; and access, protection, and retention of data. The other would create a privacy advisory commission of technical experts and community members to review proposals for surveillance technology use. These ordinances still need to go to the full City Council for a vote.
The police department, Jordon said, would welcome clarity. “People are of course concerned right now about things taking place in Hong Kong and China [with the use of cameras] that give them pause. From our standpoint, we have tried to be great stewards of the technology,” he says.
“There is no reason under the sun that the City of San Diego and our law enforcement agencies should continue to operate without rules that govern the use of surveillance technology—where there is no transparency, oversight or accountability—no matter what the benefits are,” says Jones-Wright. “So, I am very happy that we are one step closer to having the ordinances that TRUST SD helped to write on the books in San Diego.”
While the city and community organizations figure out how to regulate the streetlights, their use continues. To date, San Diego police have tapped streetlight video data nearly 400 times, including this past June, during investigations of incidents of felony vandalism and looting during Black Lives Matter protests.
Meanwhile, some of the promised upside of the technology hasn’t worked out as expected.
Software developers who had initially expressed interest in using streetlight data to create consumer-oriented tools for ordinary citizens have yet to get an app on the market.
Says Caldwell, “When we initially launched the program, there was the hope that San Diego’s innovation economy community would find all sorts of interesting use cases for the data, and develop applications, and create a technology ecosystem around mobility and other solutions. That hasn’t borne out yet. We have had a lot of conversations with companies looking at it, but it hasn’t turned into a smashing success yet.”
And the planned energy savings, intended to generate cash to pay for the expensive fixtures, were chewed up by increases in electric rates.
For better or worse, San Diego did pave the way for other municipalities looking into smart city technology. Take Carlsbad, a smaller city just north of San Diego. It is in the process of acquiring its own sensor-laden streetlights; these, however, will not incorporate cameras. David Graham, who managed the streetlight acquisition program as deputy chief operating officer for San Diego and is now chief innovation officer for the City of Carlsbad, did not respond to Spectrum’s requests for comment, but indicated in an interview with the Voice of San Diego that the cameras are not necessary to count cars and pedestrians and that other methods will be used for that function. And Carlsbad’s City Council has indicated that it intends to be proactive in establishing clear policies around the use of streetlight data.
“The policies and laws should have been in place before the street lamps were in the ground, instead of legislation catching up to technological capacity,” says Morgan Currie, a lecturer in data and society at the University of Edinburgh.
“It is a clear case of function creep. The lights weren’t designed as surveillance tools, rather, it’s a classic example of how data collection systems are easily retooled as surveillance systems, of how the capacities of the smart city to do good things can also increase state and police control.”
The introduction of fully autonomous cars has slowed to a crawl. Nevertheless, technologies such as rearview cameras and automatic self-parking systems are helping the auto industry make incremental progress toward Level 4 autonomy while boosting driver-assist features along the way.
To that end, Toshiba has developed a compact, highly efficient silicon photo-multiplier (SiPM) that enables non-coaxial Lidar to employ off-the-shelf camera lenses to lower costs and help bring about solid-state, high-resolution Lidar.
Automotive Lidar (light detection and ranging) typically uses spinning lasers to scan a vehicle’s environment 360 degrees, bouncing laser pulses off surrounding objects and measuring the return time of the reflected light to calculate their distances and shapes. The resulting point-cloud map can be used in combination with still images, radar data and GPS to create a virtual 3D map of the area the vehicle is traveling through.
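Each point in that cloud comes from one laser return: the round-trip time gives the range, and the beam's pointing angles place the point in 3D. A simplified, illustrative conversion (spherical to Cartesian coordinates, not any vendor's actual code):

```python
# Illustrative only: turning one lidar return (round-trip time plus beam
# angles) into a 3D point, the basic step behind a point-cloud map.
import math

C = 299_792_458.0  # speed of light, m/s

def lidar_point(round_trip_s, azimuth_deg, elevation_deg):
    r = C * round_trip_s / 2.0  # one-way range: light travels out and back
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    return (r * math.cos(el) * math.cos(az),
            r * math.cos(el) * math.sin(az),
            r * math.sin(el))

# A pulse returning after ~1.33 microseconds implies a target ~200 m away.
x, y, z = lidar_point(1.334e-6, 30.0, 0.0)
print((x**2 + y**2 + z**2) ** 0.5)
```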
However, high-end Lidar systems can be expensive, costing $80,000 or more, though cheaper versions are also available. The current leader in the field is Velodyne, whose lasers mechanically rotate in a tower mounted atop a vehicle’s roof.
Solid-state Lidar systems have been announced in the past several years but have yet to challenge the mechanical variety. Now, Toshiba hopes to advance their cause with its SiPM: a solid-state light sensor employing single-photon avalanche diode (SPAD) technology. The Toshiba SiPM contains multiple SPADs, each controlled by an active quenching circuit (AQC). When a SPAD detects a photon, its cathode voltage drops; the AQC then resets the SPAD voltage to its initial value.
“Typical SiPM recovery time is 10 to 20 nanoseconds,” says Tuan Thanh Ta, Toshiba’s project leader for the technology. “We’ve made it 2 to 4 times faster by using this forced or active quenching method.”
The increased efficiency means Toshiba has been able to use far fewer light sensing cells—down from 48 to just 2—to produce a device measuring 25 μm x 90 μm, much smaller, the company says, than standard devices measuring 100 μm x 100 μm. The small size of these sensors has allowed Toshiba to create a dense two-dimensional array for high sensitivity, a requisite for long-range scanning.
But such high-resolution data would normally require impractically large multichannel readout circuitry comprising separate analog-to-digital converters (ADCs) for long-distance scanning and time-to-digital converters (TDCs) for short distances. Toshiba has overcome this problem by realizing both ADC and TDC functions in a single circuit. The result is an 80 percent reduction in size (down to 50 μm by 60 μm) over conventional dual data converter chips.
“Our field trials using the SiPM with a prototype Lidar demonstrated the system’s effectiveness up to a distance of 200 meters while maintaining high resolution—which is necessary for Level 4 autonomous driving,” says Ta, who hails from Vietnam. “This is roughly quadruple the capability of solid-state Lidar currently on the market.”
“Factors like detection range are important, especially in high-speed environments like highways,” says Michael Milford, an interdisciplinary researcher and Deputy Director, Queensland University of Technology (QUT) Center for Robotics in Australia. “So I can see that these [Toshiba trial results] are important properties when it comes to commercial relevance.”
And as Toshiba’s SiPM employs a two-dimensional array—unlike the one-dimensional photon receivers used in coaxial Lidar systems—Ta points out its 2D aspect ratio corresponds to that of light sensors used in commercial cameras. Consequently, off-the-shelf standard, telephoto and wide-angle lenses can be used for specific applications, helping to further reduce costs.
By comparison, coaxial Lidar systems use the same optical path for sending and receiving the light, and so require a costly customized lens to transmit the beam and then collect the received light and send it to the 1D photon receivers, Ta explains.
But as Milford points out, “If Level 4+ on-road autonomous driving becomes a reality, it’s unlikely we’ll have a wide range of solutions and sensor configurations. So this flexibility [of Toshiba’s sensor] is perhaps more relevant for other domains like drones, and off-road autonomous vehicles, where there is more variability, and use of existing hardware is more important.”
Meanwhile, Toshiba is working to improve the performance quality of its SiPM. “We aim to introduce practical applications in fiscal 2022,” says Akihide Sai, a Senior Research Scientist at Toshiba overseeing the SiPM project. Though he declines to answer whether Toshiba is working with automakers or will produce its own solid-state Lidar system, he says Toshiba will make the SiPM device available to other companies.
He adds, “We also see it being used in applications such as automated navigation of drones and in robots used for infrastructure monitoring, as well as in applications used in factory automation.”
But QUT’s Milford makes this telling point. “Many of the advances in autonomous vehicle technology and sensing, especially with regards to cost, bulk, and power usage, will become most relevant only when the overall challenge of Level 4+ driving is solved. They aren’t themselves the critical missing pieces for enabling this to happen.”
As businesses scramble to find ways to make workers and customers feel safe about entering enclosed spaces during a pandemic, several companies have proposed a solution: COVID-19 air monitoring devices.
These devices suck in large quantities of air and trap aerosolized virus particles and anything else that’s present. The contents are then tested for the presence of the novel coronavirus, also known as SARS-CoV-2, which causes COVID-19.
Several companies in the air quality and diagnostics sectors have quickly developed this sort of tech, with various iterations available on the market. The devices can be used in almost any enclosed setting, such as office buildings, airplanes, hospitals, schools, and nursing homes, these companies say.
But the devices don’t deliver results in real time—they don’t beep to alert people nearby that the virus has been detected. Instead, the collected samples must be sent to a lab to be analyzed, typically with a method called PCR, or polymerase chain reaction.
This process takes hours. Add to that the logistics of physically transporting the samples to a lab and it could be a day or more before results are available. Still, there’s value in day-old air quality information, say developers of this type of technology.
“It’s not solving everything about COVID-19,” says Milan Patel, CEO of PathogenDx, a DNA-based testing company. But it does enable businesses to spot the presence of the virus without relying on people to self-report, and brings peace of mind to everyone involved, he says. “If you’re going into a building, wouldn’t it be great to know that they’re doing environmental monitoring?” Patel says.
Patel says he envisions the device proving particularly useful on airplanes, in large office buildings and health care facilities. On an airplane, for example, if the device picks up the presence of the virus during a flight, the airline can let passengers on that plane know that they were potentially exposed, he says. Or if the test comes back negative for the flight, the airline can “know that they didn’t just infect 267 passengers,” says Patel.
In large office buildings, daily air sampling can give building managers a tool for early detection of the virus. As soon as the tests start coming back positive, the office managers could ask employees to work from home for a couple of weeks. Hospitals could use the device to track trends, identify trouble spots, and alert patients and staff of exposures.
Considering that many carriers of the virus don’t know they have it, or may be reluctant to report positive test results to every business they’ve visited, air monitoring could alert people to potential exposures in a way that contact tracing can’t.
Other companies globally are putting forth their iterations on SARS-CoV-2 air monitoring. Sartorius in Göttingen, Germany says its device was used to analyze the air in two hospitals in Wuhan, China. (Results: The concentration of the virus in isolation wards and ventilated patient rooms was very low, but was higher in the toilet areas used by the patients.)
Assured Bio Labs in Oak Ridge, Tennessee markets its air monitoring device as a way to help the American workforce get back to business. InnovaPrep in Missouri offers an air sampling kit called the Bobcat, and Eurofins Scientific in Luxembourg touts labs worldwide that can analyze such samples.
But none of the commercially available tests can offer real-time results. That’s something that Jing Wang and Guangyu Qiu at the Swiss Federal Institute of Technology (ETH Zurich) and Swiss Federal Laboratories for Materials Science and Technology, or Empa, are working on.
They’ve come up with a plasmonic photothermal biosensor that can detect the presence of SARS-CoV-2 without the need for PCR. Qiu, a sensor engineer and postdoc at ETH Zurich and Empa, says that with some more work, the device could provide results within 15 minutes to an hour. “We’re trying to simplify it to a lab on a chip,” says Qiu.
Have you ever tried to guess the ripeness of a peach by its smell? Farmers with a well-trained nose may be able to detect the unique combination of alcohols, esters, ketones, and aldehydes, but even an expert may struggle to know when the fruit is perfect for the picking. To help with harvesting, scientists have been developing electronic noses for sniffing out the ripest and most succulent peaches. In a recent study, one such e-nose achieved more than 98 percent accuracy.
Sergio Luiz Stevan Jr. and colleagues at Federal University of Technology – Paraná and State University of Ponta Grossa, in Brazil, developed the new e-nose system. Stevan notes that even within a single, large orchard, fruit on one tree may ripen at different times than fruit on another tree, thanks to microclimates of varying ventilation, rain, soil, and other factors. Farmers can inspect the fruit and make their best guess at the prime time to harvest, but risk losing money if they choose incorrectly.
Fortunately, peaches emit vaporous molecules, called volatile organic compounds, or VOCs. “We know that volatile organic compounds vary in quantity and type, depending on the different phases of fruit growth,” explains Stevan. “Thus, the electronic noses are an [option], since they allow the online monitoring of the VOCs generated by the culture.”
The e-nose system created by his team has a set of gas sensors sensitive to particular VOCs. The measurements are digitized and pre-processed in a microcontroller. Next, a pattern recognition algorithm is used to classify each unique combination of VOC molecules associated with three stages of peach ripening (immature, ripe, over-ripe). The data is stored internally on an SD memory card and transmitted via Bluetooth or USB to a computer for analysis.
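The classification step can be sketched in a few lines. The sensor values, stage centroids, and nearest-centroid rule below are illustrative assumptions for the sake of the example, not the team's actual algorithm.

```python
# Hypothetical sketch of an e-nose classification step: map a vector of
# gas-sensor readings to one of three ripeness stages. Sensor values and
# centroids are made up; a real system would learn them from training data.
import math

# Assumed mean sensor responses (arbitrary units) for each ripeness stage.
CENTROIDS = {
    "immature":  [0.2, 0.1, 0.3],
    "ripe":      [0.6, 0.5, 0.4],
    "over-ripe": [0.9, 0.8, 0.7],
}

def classify(reading):
    """Return the ripeness stage whose centroid is closest to the reading."""
    return min(
        CENTROIDS,
        key=lambda stage: math.dist(reading, CENTROIDS[stage]),
    )

print(classify([0.58, 0.52, 0.41]))  # -> ripe
```

In a deployed network, each "nose" would run this kind of pattern matching on its microcontroller and log the result to the SD card before transmitting.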
The system is also equipped with a ventilation mechanism that draws in air from the surrounding environment at a constant rate. The passing air is subjected to a set level of humidity and temperature to ensure consistent measurements. The idea, Stevan says, is to deploy several of these “noses” across an orchard to create a sensing network.
He notes several advantages of this system over existing ripeness-sensing approaches, including that it is online, conducts real-time continuous analyses in an open environment, and does not require direct handling of the fruit. “It is different from the other [approaches] present in the literature, which are generally carried out in the laboratory or warehouses, post-harvest or during storage,” he says. The e-nose system is described in a study published June 4 in IEEE Sensors Journal.
While the study shows that the e-nose system already has a high rate of accuracy at more than 98 percent, the researchers are continuing to work on its components, focusing in particular on improving the tool’s flow analysis. They have filed for a patent and are exploring the prospect of commercialization.
For those who prefer their fruits and grains in drinkable form, there is additional good news. Stevan says in the past his team has developed a similar e-nose for beer, to analyze both alcohol content and aromas. Now they are working on an e-nose for wine, as well as a variety of other fruits.
Keeping an eye on bee life cycles is a brilliant example of how Raspberry Pi sensors help us understand the world around us, says Rosie Hattersley
The setup featuring an Arduino, RF receiver, USB cable and Raspberry Pi
Getting to design and build things for a living sounds like a dream job, especially if it also involves Raspberry Pi and wildlife. Glyn Hudson has always enjoyed making things and set up a company manufacturing open-source energy monitoring tools shortly after graduating from university. With access to several hives at his keen apiarist parents’ garden in Snowdonia, Glyn set up BeeMonitor using some of the tools he used at work to track the beehives’ inhabitants.
Glyn checking the original BeeMonitor setup
“The aim of the project was to put together a system to monitor the health of a bee colony by monitoring the temperature and humidity inside and outside the hive over multiple years,” explains Glyn. “Bees need all the help and love they can get at the moment and without them pollinating our plants, we’d struggle to grow crops. They maintain a 34 °C core brood temperature (±0.5 °C) even when the ambient temperature drops below freezing. Maintaining this temperature when a brood is present is a key indicator of colony health.”
Wi-Fi not spot
BeeMonitor has been tracking the hives’ population since 2012 and is one of the earliest examples of a Raspberry Pi project. Glyn built most of the parts for BeeMonitor himself. Open-source software developed for the OpenEnergyMonitor project provides a data-logging and graphing platform that can be viewed online.
BeeMonitor complete with solar panel to power it. The Snowdonia bees produce 12 to 15 kg of honey per year
The hives were too far from the house for Wi-Fi to reach, so Glyn used a low-power RF sensor connected to an Arduino placed inside the hive to take readings. These were received by a Raspberry Pi connected to the internet.
Diagram showing what information BeeMonitor is trying to establish
At first, there was both a DS18B20 temperature sensor and a DHT22 humidity sensor inside the beehive, along with the Arduino (setup info can be found here). Data from these was saved to an SD card, the obvious drawback being that this didn’t display real-time data readings. In his initial setup, Glyn also had to extract and analyse the CSV data himself. “This was very time-consuming but did result in some interesting data,” he says.
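The kind of offline analysis Glyn describes might look something like this minimal sketch; the column names and readings are hypothetical.

```python
# Minimal sketch of analysing logged hive data from a CSV file, of the
# sort Glyn extracted from the SD card. The columns and values here are
# hypothetical examples, not his actual log format.
import csv
import io

# Stand-in for a CSV file read off the SD card.
log = io.StringIO(
    "timestamp,hive_temp_c,ambient_temp_c\n"
    "2012-06-01T00:00,34.2,11.5\n"
    "2012-06-01T01:00,34.0,10.9\n"
    "2012-06-01T02:00,33.8,10.2\n"
)

rows = list(csv.DictReader(log))
hive = [float(r["hive_temp_c"]) for r in rows]
print(f"mean hive temp: {sum(hive)/len(hive):.1f} C")  # -> mean hive temp: 34.0 C
```

A steady reading near 34 °C, despite a cold ambient temperature, is exactly the sign of a healthy brood that the project looks for.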
Almost as soon as BeeMonitor was running successfully, Glyn realised he wanted to make the data live on the internet. This would enable him to view live beehive data from anywhere and also allow other people to engage in the data.
“This is when Raspberry Pi came into its own,” he says. He also decided to drop the DHT22 humidity sensor. “It used a lot of power and the bees didn’t like it – they kept covering the sensor in wax! Oddly, the bees don’t seem to mind the DS18B20 temperature sensor, presumably since it’s a round metal object compared to the plastic grille of the DHT22,” notes Glyn.
Unlike the humidity sensor, the bees don’t seem to mind the temperature probe
The system has been running for eight years with minimal intervention and is powered by an old car battery and a small solar PV panel. Running costs are negligible: “Raspberry Pi is perfect for getting projects like this up and running quickly and reliably using very little power,” says Glyn. He chose it because of the community behind the hardware. “That was one of Raspberry Pi’s greatest assets and what attracted me to the platform, as well as the competitive price point!” The whole setup cost him about £50.
Glyn tells us we could set up a basic monitor using Raspberry Pi, a DS18B20 temperature sensor, a battery pack, and a solar panel.
As Ethernet celebrates its 40th anniversary, this webinar will discuss its deployment for automotive applications and the standards related to its implementation. It will explain the role of the Institute of Electrical and Electronics Engineers (IEEE), OPEN Alliance, and the Ethernet Alliance in governing the implementation.
Understand the role of IEEE, OPEN Alliance, and the Ethernet Alliance in the development of automotive Ethernet
Determine the relevant automotive Ethernet specifications for PHY developers, original equipment manufacturers, and Tier 1 companies
Review the role of automotive Ethernet relative to other emerging high-speed digital automotive standards
Sony today announced that it has developed and is distributing smart image sensors. These devices use machine learning to process captured images on the sensor itself. They can then select only relevant images, or parts of images, to send on to cloud-based systems or local hubs.
This technology, says Mark Hanson, vice president of technology and business innovation for Sony Corp. of America, means practically zero latency between the image capture and its processing; low power consumption enabling IoT devices to run for months on a single battery; enhanced privacy; and far lower costs than smart cameras that use traditional image sensors and separate processors.
Sony’s San Jose laboratory developed prototype products using these sensors to demonstrate to future customers. The chips themselves were designed at Sony’s Atsugi, Japan, technology center. Hanson says that while other organizations have similar technology in development, Sony is the first to ship devices to customers.
Sony builds these chips by thinning and then bonding two wafers—one containing chips with light-sensing pixels and one containing signal processing circuitry and memory. This type of design is only possible because Sony is using a back-illuminated image sensor. In standard CMOS image sensors, the electronic traces that gather signals from the photodetectors are laid on top of the detectors. This makes them easy to manufacture, but sacrifices efficiency, because the traces block some of the incoming light. Back-illuminated devices put the readout circuitry and the interconnects under the photodetectors, adding to the cost of manufacture.
“We originally went to backside illumination so we could get more pixels on our device,” says Hanson. “That was the catalyst to enable us to add circuitry; then the question was what were the applications you could get by doing that.”
Hanson indicated that the initial applications for the technology will be in security, particularly in large retail situations requiring many cameras to cover a store. In this case, the amount of data being collected quickly becomes overwhelming, so processing the images at the edge would simplify that and cost much less, he says. The sensors could, for example, be programmed to spot people, carve out the section of the image containing the person, and only send that on for further processing. Or, he indicated, they could simply send metadata instead of the image itself, say, the number of people entering a building. The smart sensors can also track objects from frame to frame as a video is captured, for example, packages in a grocery store moving from cart to self-checkout register to bag.
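The person-spotting behavior Hanson describes can be illustrated with a toy sketch (our construction, not Sony's API): the on-sensor model produces detections, and only the relevant crop or metadata leaves the chip.

```python
# Illustrative sketch of edge filtering on a smart sensor: instead of
# streaming whole frames, emit only metadata (and bounding boxes) for
# frames that contain people. Function and field names are hypothetical.
def process_frame(detections, frame_id):
    """detections: list of (label, bounding_box) pairs from on-sensor ML."""
    people = [box for label, box in detections if label == "person"]
    if not people:
        return None  # nothing relevant in this frame: send nothing at all
    # Send compact metadata; a real deployment might send cropped pixels.
    return {"frame": frame_id, "people": len(people), "boxes": people}

msg = process_frame([("person", (10, 20, 64, 128)), ("cart", (0, 0, 40, 40))], 7)
print(msg)  # -> {'frame': 7, 'people': 1, 'boxes': [(10, 20, 64, 128)]}
```

The bandwidth saving comes from the `None` branch: in a store covered by dozens of cameras, most frames at most moments contain nothing worth transmitting.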
Remote surveillance, with the devices running on battery power, is another application attracting a lot of interest, he says, as are manufacturers that mix robots and people and want to use image sensors to improve safety. Consumer gadgets that use the technology will come later, but he expects developers to begin experimenting with samples.
Automotive radar is a key enabler for autonomous driving technology, and it is one of the key areas for new modulation technologies. Unlike wireless communication technologies, there are no regulations and definitions for modulation type, signal pattern, and duty cycle for automotive radar.
This webinar addresses which standards apply to radar measurements and how to measure different radar signals using Keysight’s automotive radar test solution.
How to set the right parameters for different chirp rates and duty cycle radar signals for power measurements.
Learn how to test the Rx interference of your automotive radar.
Understand how to use Keysight’s automotive test solution to measure radar signals.
The race to bring self-driving cars to showrooms may have a new leader: Volvo Cars said it will partner with Silicon Valley-based lidar manufacturer Luminar to deliver genuine hands-free, eyes-off-the-road highway driving beginning in 2022. That so-called “Level 3” capability would be an industry first, as companies from Tesla to Uber thus far have failed to meet lofty promises to make autonomous driving a reality.
Sweden’s Volvo, owned by Geely Holdings of China, said that models based on its upcoming SPA2 platform (for “Scalable Product Architecture”) will be hardware-enabled for Luminar’s roof-mounted lidar system. That includes the upcoming Polestar 3 from Volvo’s new electric-car division, and a range of Volvo-branded cars and SUVs. Henrik Green, chief technology officer for Volvo, said the optional “Highway Pilot” system would allow full autonomous driving, but only “when the car determines it is safe to do so.”
“At that point, your Volvo takes responsibility for the driving and you can relax, take your eyes off the road and your hands off the wheel,” Green said. “Over time, updates over-the-air will expand the areas in which the car can drive itself. For us, a safe introduction of autonomy is a gradual introduction.”
Luminar’s lidar system scans a car’s surroundings in real time, firing millions of pulses of light to create a virtual 3D map without relying on GPS or a network connection. Most experts agree that lidar is a critical linchpin of any truly autonomous car. A high-profile skeptic is Elon Musk, who has no plans to employ lidar in his Teslas, and scoffs at the technology as redundant and unnecessary.
Austin Russell, founder and chief executive of Luminar, disagrees.
“If cameras could do everything you can do with lidar, great. But if you really want to get in the autonomous game, this is a clear requirement.”
The 25-year-old Russell said his company’s high-performance lidar sensors, operating at 1550 nanometers and 64 lines of resolution, can spot even small and low-reflective objects—black cars, animals, a child darting across a road—at a range beyond 200 meters, and up to 500 meters for larger, brighter objects. The high-signal, low-noise system can deliver camera-level vision at 200 meters even in rain, fog or snow, he said. That range, Russell said, is critical to give cars the data and time to solve edge cases and make confident decisions, even when hurtling at freeway speeds.
“Right now, you don’t have reliable interpretation,” with camera, GPS or radar systems, Russell said. “You can guess what’s ahead of you 99 percent of the time, but the problem is that last one percent. And a 99-percent pedestrian detection rate doesn’t cut it, not remotely close.”
In a Zoom interview from Palo Alto, Russell held up two prototype versions, a roof-mounted unit roughly the size of a TV cable box, and a smaller unit that would mount on bumpers. The elegant systems incorporate a single laser and receiver, rather than the bulky, expensive, stacked arrays of lower-performance systems. Luminar built all its components and software in-house, Russell said, and is already on its eighth generation of ASIC chip controllers.
The roof-mounted unit can deliver 120 degrees of forward vision, Russell said, plenty for Volvo’s application that combines lidar with cameras, radar, backup GPS and electrically controlled steering and braking. For future robotaxi applications with no human pilot aboard, cars would combine three to five lidar sensors to deliver full 360-degree vision. The unit will also integrate with Volvo’s Advanced Driver Assistance Systems (ADAS), improving such features as automated emergency braking, which global automakers have agreed to make standard in virtually all vehicles by 2022.
The range and performance, Russell said, is key to solving the conundrums that have frustrated autonomous developers.
The industry has hit a wall, unable to make the technical leap to Level 3, with cars so secure in their abilities that owners could perform tasks such as texting, reading or even napping. Last week, Audi officially quit its longstanding claims that its new A8 sedan would do just that, with its Traffic Jam Pilot system. Musk is luring consumers to pay $7,000 up-front for “Full Self-Driving” features that have yet to materialize, and that he claims would allow them to use their Teslas as money-making robotaxis.
Current Level 2 systems, including Tesla’s Autopilot and Cadillac’s SuperCruise, often disengage when they can’t safely process their surroundings. Those disengagements, or even their possibility, demand that drivers continue to pay attention to the road at all times. And when systems do work effectively, drivers can fall out of the loop and be unable to quickly retake control. Russell acknowledged that those limitations leave current systems feeling like a parlor trick: If a driver must still pay full attention to the road, why not just keep driving the old-fashioned way?
“Assuming perfect use, it’s fine as a novelty or convenience,” Russell said. “But it’s easy to get lulled into a sense of safety. Even when you’re really trying to maintain attention, to jump back into the game is very difficult.”
Ideally, Russell said, a true Level 3 system would operate for years and hundreds of thousands of miles and never disengage without generous advance warning.
“From our perspective, spontaneous handoffs that require human intervention are not safe,” he said. “It can’t be, ‘Take over in 500 milliseconds or I’m going to run into a wall.’”
One revolution of Level 3, he said, is that drivers would actually begin to recover valuable time for productivity, rest or relaxation. The other is safety, and the elusive dream of sharply reducing roadway crashes that kill 1.35 million people a year around the world.
Volvo cautioned that its Highway Pilot, whose price is up in the air, would initially roll out in small volumes, and steadily expand across its lineup. Volvo’s announcement included an agreement to possibly increase its minority stake in Luminar. But Luminar said it is also working with 12 of the world’s top 15 automakers in various stages of lidar development. And Russell said that, whichever manufacturer makes them, lidar-based safety and self-driving systems will eventually become an industry standard.
“Driving adoption throughout the larger industry is the right move,” he said. “That’s how you save the most lives and create the most value.”
Remote sensors play a valuable role in collecting data—but recharging these devices while they are scattered over vast and isolated areas can be tedious. A new system is designed to make the charging process easier by using unmanned aerial vehicles (UAVs) to deliver power using radio waves during a flyby. A specialized antenna on the sensor harvests the signals and converts them into electricity. The design is described in a study published 23 March in IEEE Sensors Letters.
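As a rough illustration of the physics involved, the free-space Friis equation bounds how much RF power such a flyby could deliver. Every number below (frequency, antenna gains, transmit power, distance) is an assumption chosen for the example, not a figure from the study.

```python
# Back-of-the-envelope RF power-delivery estimate using the Friis
# free-space equation: P_rx = P_tx * G_tx * G_rx * (lambda / (4*pi*d))^2.
# All parameter values are illustrative assumptions.
import math

def friis_received_w(p_tx_w, g_tx, g_rx, freq_hz, dist_m):
    wavelength = 3e8 / freq_hz  # speed of light / frequency
    return p_tx_w * g_tx * g_rx * (wavelength / (4 * math.pi * dist_m)) ** 2

# 1 W transmitter at 915 MHz, modest antenna gains, 5 m flyby distance.
p_rx = friis_received_w(1.0, 6.0, 2.0, 915e6, 5.0)
print(f"received power: {p_rx * 1e6:.0f} microwatts")  # -> received power: 327 microwatts
```

Hundreds of microwatts is little, but it is in the range where a harvesting antenna and capacitor can usefully top up a low-duty-cycle sensor during a pass.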
In the world of augmented reality (AR), real and virtual images are combined to create immersive environments for users. While the technology is advancing, it has remained challenging to make virtual images appear more “solid” in front of real-world objects, an effect that’s essential for depth perception.
Now, a team of researchers has developed a compact AR system that, using an array of miniature mirrors that switch positions tens of thousands of times per second, can create this elusive effect. They describe their new system in a study published February 13 in IEEE Transactions on Visualization and Computer Graphics.
Occlusion occurs when light from objects in the foreground blocks the light from objects at farther distances. Commercialized AR systems have limited occlusion because they tend to rely on a single spatial light modulator (SLM) to create the virtual images, while allowing natural light from real-world objects to also be perceived by the user.
“As can be seen with many commercial devices such as the Microsoft HoloLens or Magic Leap, not blocking high levels of incoming light [from real-world objects] means that virtual content becomes so transparent it becomes difficult to even see,” explains Brooke Krajancich, a researcher at Stanford University. This lack of occlusion can interfere with the user’s depth perception, which would be problematic in tasks requiring precision, such as AR-assisted surgery.
To achieve better occlusion, some researchers have been exploring the possibility of a second SLM that controls the incoming light of real-world objects. However, incorporating two SLMs into one system involves a lot of hardware, making it bulky. Instead, Krajancich and her colleagues developed a new design that combines virtual projection and light-blocking abilities into one element.
Their design relies on a dense array of miniature mirrors that can be individually flipped between two states—one that allows light through and one that reflects light—at a rate of up to tens of thousands of times per second.
“Our system uses these mirrors to switch between a see-through state, which allows the user to observe a small part of the real world, and a reflective state, where the same mirror blocks light from the scene in favor of an [artificial] light source,” explains Krajancich. The system computes the optimal arrangement for the mirrors and adjusts accordingly.
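A toy numerical model (our construction, not the paper's code) makes the per-pixel choice concrete: each mirror either passes the real scene through or substitutes the artificial light source.

```python
# Toy per-pixel occlusion model for a 1D row of four "mirrors":
# mask = 1 means the mirror is reflective (show the virtual pixel),
# mask = 0 means see-through (show the real-world pixel).
# All luminance values are arbitrary illustrative numbers.
real    = [0.9, 0.8, 0.7, 0.6]   # real-scene luminance per pixel
virtual = [0.0, 1.0, 1.0, 0.0]   # virtual-content luminance per pixel
mask    = [0,   1,   1,   0]     # mirror states chosen by the system

composite = [v if m else r for r, v, m in zip(real, virtual, mask)]
print(composite)  # -> [0.9, 1.0, 1.0, 0.6]
```

Because the mirrors flip tens of thousands of times per second, the real system can also time-multiplex each mirror to approximate intermediate brightness levels rather than a hard on/off choice.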
Krajancich notes some trade-offs with this approach, including some challenges in rendering colors properly. It also requires a lot of computing power, and therefore may require higher power consumption than other AR systems. While commercialization of this system is a possibility in the future, she says, the approach is still in the early research stages.
As useful as GPS is, it’s not a system that we can rely on all the time. There are many situations in which GPS might not be available, such as when you’re inside a building, driving through a tunnel, on a battlefield, in a mine, underwater, or in space.
Systems that need to track location, like your cell phone, an autonomous car, or a submarine, often use what are called Inertial Measurement Units (a combination of accelerometers and gyroscopes) to estimate position through dead reckoning by tracking changes in acceleration and rotation. The accuracy of these location estimates depends on how accurate the sensors in those IMUs are. Unfortunately, gyroscopes that can do a good job of tracking rotation over long periods of time are too big and too expensive for most commercial or consumer-grade systems.
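A toy dead-reckoning example shows why sensor quality matters so much: double-integrating acceleration turns even a tiny constant bias (assumed here for illustration) into position error that grows quadratically with time.

```python
# Toy 1D dead reckoning: integrate acceleration to velocity, then
# velocity to position. A small constant accelerometer bias (an assumed
# value, not a real sensor spec) produces large drift within a minute.
def dead_reckon(accels, dt):
    vel = pos = 0.0
    for a in accels:
        vel += a * dt
        pos += vel * dt
    return pos

dt = 0.01                        # 100 Hz samples
true_accel = [0.0] * 6000        # truly stationary for 60 seconds
biased     = [0.005] * 6000      # same, with a 0.005 m/s^2 sensor bias

print(dead_reckon(true_accel, dt))                        # -> 0.0
print(f"{dead_reckon(biased, dt):.1f} m drift in 60 s")   # -> 9.0 m drift in 60 s
```

Gyroscope error compounds the same way through the orientation estimate, which is why long-duration IMU-only navigation demands far better sensors than a phone carries.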
Researchers at the University of Michigan, led by Khalil Najafi and funded by DARPA, have presented a paper at the 7th IEEE International Symposium on Inertial Sensors & Systems on a new type of gyroscope called a precision shell integrating (PSI) gyroscope, which the authors say is “10,000 times more accurate but only 10 times more expensive than gyroscopes used in your typical cell phone.” It offers the performance of much larger gyroscopes at one thousandth the cost, meaning that high-precision location awareness, indoor navigation, and long-term autonomy may soon be much more achievable for mobile devices and underground robots.
The power of computing has profoundly influenced our lives and we place a high degree of trust in our electronic systems every day. In applications areas such as automotive, the consequences of a system failing or going wrong can be serious and even potentially life-threatening. Trust in automotive systems is crucial and only possible because of functional safety: A system’s ability to detect, diagnose and safely mitigate the occurrence of any fault.
The application of safety is critical in the automotive industry, but how do we then balance safety and innovation in this rapidly transforming industry?
Arm Flexible Access lowers the barriers to rapid innovation and opens the doors to leading technology with access to a wide range of Arm IP, support, tools, and training. Arm Flexible Access adds safety technologies and features to make it easier for developers in the automotive industry and other safety-related industries to create SoCs and attain certification for industry safety standards.
Arm Flexible Access also enables more experimentation for those designing innovative automotive systems, which may involve self-driving capabilities or electrification of powertrain, allowing developers to find the most efficient way of balancing functionality with the level of safety required. Different architectures and mixtures of IP can be accessed, evaluated, optimized, and redesigned to achieve the best solution, which can then be taken to certification more easily.
There’s something inherently inefficient about the way video captures motion today. Cameras capture frame after frame at regular intervals, but most of the pixels in those frames don’t change from one to the other, and whatever is moving in those frames is only captured episodically.
Event-based cameras work differently; their pixels only react if they detect a change in the amount of light falling on them. They capture motion better than any other camera, while generating only a small amount of data and burning little power.
Paris-based startup Prophesee has been developing and selling event-based cameras since 2016, but the applications for their chips were limited. That’s because the circuitry surrounding the light-sensing element took up so much space that the imagers had a fairly low resolution. In a partnership announced this week at the IEEE International Solid-State Circuits Conference in San Francisco, Prophesee used Sony technology to put that circuitry on a separate chip that sits behind the pixels.
“Using the Sony process, which is probably the most advanced process, we managed to shrink the pixel pitch down to 4.86 micrometers” from their previous 15 micrometers, says Luca Verre, the company’s cofounder and CEO.
The resulting 1280 x 720 HD event-based imager is suitable for a much wider range of applications. “We want to enter the space of smart home cameras and smart infrastructure, [simultaneous localization and mapping] solutions for AR/VR, 3D sensing for drones or industrial robots,” he says.
The company is also looking to enter the automotive market, where imagers need a high dynamic range to deal with the big differences between day and night driving. “This is where our technology excels,” he says.
Besides the photodiode, each pixel requires circuits to convert the diode’s current into a logarithmic voltage and to determine whether there’s been an increase or decrease in luminosity. It’s that circuitry that Sony’s technology puts on a separate chip behind the pixels, linked to them by a dense array of copper connections. Previously, the photodiode made up only 25 percent of the pixel’s area; now it’s 77 percent.
When a pixel detects a change (an event), all that is output is the location of the pixel, the polarity of the change, and a 1-microsecond-resolution time stamp. The imager consumes 32 milliwatts to register 100,000 events per second and ramps up to just 73 milliwatts at 300 million events per second. A system that dynamically compresses the event data allows the chip to sustain a rate of more than 1 billion events per second.
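To make that data format concrete, here is a small sketch that consumes events of the form the article describes; the tuple layout is our assumption, since the real sensor uses a compressed on-chip encoding.

```python
# Sketch of consuming an event stream: each event is (x, y, polarity,
# timestamp_us), where polarity is +1 for a brightness increase and -1
# for a decrease. The tuple format is a hypothetical stand-in for the
# sensor's actual compressed output.
from collections import Counter

events = [
    (640, 360, +1, 1_000),   # brightness rose at this pixel at t = 1000 us
    (641, 360, -1, 1_005),
    (640, 361, +1, 1_012),
]

def accumulate(events):
    """Sum polarities per pixel to form a sparse change image."""
    img = Counter()
    for x, y, pol, t_us in events:
        img[(x, y)] += pol
    return img

print(accumulate(events)[(640, 360)])  # -> 1
```

Only pixels that changed appear in the output, which is why a mostly static scene generates almost no data and almost no power draw.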