Tag Archives: sensors

Get More From Your Automotive Electronics Test Investment

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/get_more_from_your_automotive_electronics_test_investment


Explore the powerful software behind Keysight’s high-precision hardware and discover how to meet emerging automotive electronics test requirements, while driving higher ROI from your existing hardware. Let Keysight help you cross the finish line ahead of your competition. Jump-start your automotive innovation today with a complimentary software trial.

Cops Tap Smart Streetlights Sparking Controversy and Legislation

Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/view-from-the-valley/sensors/remote-sensing/cops-smart-street-lights

When San Diego started installing its smart streetlights in 2017, city managers envisioned that the data they gathered would help improve city operations—like selecting streets for bicycle lanes, identifying dangerous intersections for special attention, and figuring out where the city needed more parking. They thought they might spark some tech startups to develop apps that would guide the visually impaired, point drivers to parking places, and send joggers down the quietest routes. And they also envisioned cost savings from reduced energy use, thanks to the vastly higher efficiency of the LED lights in comparison with the sodium-vapor lights they replaced.

Instead, a $30 million project that had looked like it would put San Diego on the map as one of the “smartest” cities in the U.S. has mired it in battles over the way these systems are being used by law enforcement. Meanwhile, the independent apps that had seemed so promising two years ago have failed to materialize, and even the idea that the technology would pay for itself as energy costs came down didn’t work out as hoped.

San Diego got its smart “CityIQ” streetlights from GE Current, a company originally started as a subsidiary of General Electric but acquired last year by private-equity firm American Industrial Partners. Some 3300 of them have been installed to date and 1000 more have been received but not installed. As part of the deal, the city contracted with Current to run the cloud-based analytics of sensor data on its CityIQ platform. As part of that contract, the cloud operator, rather than the city, owns any algorithms derived from the data. In an additional shuffle, American Industrial Partners sold off the CityIQ platform in May to Ubicuia, a Florida manufacturer of streetlight sensors and software, but kept the LED-lighting side of the operation.

San Diego was the first city to fully embrace the CityIQ technology, though Atlanta and Portland did run pilot tests of the technology. San Diego financed the smart lights—and 14,000 other basic LED lights—with a plan that spread the payments out over 13 years, in such a way that the energy savings from replacing the old sodium-vapor lighting would cover the cost and then some.

The CityIQ streetlights are packed with technology. Inside is an Intel Atom processor, half a terabyte of memory, Bluetooth and Wi-Fi radios, two 1080p video cameras, two acoustical sensors, and environmental sensors that monitor temperature, pressure, humidity, vibration, and magnetic fields. Much of the data is processed on the node—a textbook example of “edge processing.” That typically includes the processing of the digital video: machine-vision algorithms running on the streetlight itself count cars or bicycles, say, or extract the average speed of vehicles, and then transmit that information to the cloud. This data is managed under contract, initially by GE Current, and the data manager owns any analytics or algorithms derived from processed data.
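The pattern is simple to picture in code. The sketch below is purely illustrative—the detector and publishing functions are hypothetical placeholders, not CityIQ’s actual software—but it shows the edge-processing idea: detections are reduced to aggregate statistics on the node, and only those statistics leave the streetlight.

```python
# Illustrative sketch of on-node ("edge") processing as described above.
# detect_vehicles and publish are hypothetical placeholders, not the
# CityIQ implementation.
import json
import time

def summarize_frame(frame, detect_vehicles):
    """Reduce one video frame to aggregate, non-identifying statistics."""
    detections = detect_vehicles(frame)          # e.g. list of dicts with label and speed
    speeds = [d["speed_kph"] for d in detections if "speed_kph" in d]
    return {
        "timestamp": time.time(),
        "vehicle_count": len(detections),
        "avg_speed_kph": round(sum(speeds) / len(speeds), 1) if speeds else None,
    }

def report(summary, publish):
    """Send only the summary to the cloud; the raw video stays on the node."""
    publish(json.dumps(summary))
```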

Initially, at least, the data was expected to be used exclusively for civic analysis and planning and public convenience.

“There was an expectation when we launched this that it was not a law-enforcement system,” says Erik Caldwell, deputy chief operating officer for the City of San Diego. “That’s true; that’s not the primary purpose,” he adds. “The primary purpose is to gather data to better run the city.”

But in August 2018, everything changed. That’s when, while investigating a murder in San Diego’s Gaslamp Quarter, a police officer looked up and saw one of the new smart streetlights. He realized the streetlight’s video cameras had a perfect view of the crime scene—one unavailable from the various security cameras in the area.

“We had never seen a video from any of these cameras before,” says Jeffrey Jordon, a captain with the San Diego Police Department. “But we realized the camera was exactly where the crime scene was.”

The police department reached out to San Diego’s environmental services department, the organization responsible for the lights, and asked if video were available. It turned out that the video was still stored on the light—it is deleted after five days—and Current was able to pull it up from the light to its cloud servers, and then forward it to the police department. 

“It was clear at that point that some of the video could help solve crimes, so we had an obligation to turn that over when there was a major crime,” Caldwell says.

“The data sits at the sensors unless we have a specific request, when a violent crime or fatal accident occurs. It is very surgical. We only use it in a reactive way,” Jordon says, stressing that it is not used as a surveillance tool.

In this case, it turned out, the video exonerated the person who had been arrested for the murder.

At this point, Caldwell admits that he and his colleagues in the city administration made a mistake.

“We could have done a better job of communicating with the public that there had been a change in the use of the data,” he concedes. “One could have argued that we should have thought this through from the beginning.”

Informal discussions between city managers and police department officials started a month or two later, Caldwell recalls. A lot of things had to be sorted out. There needed to be policies and procedures around when streetlight data would be used by the police. And the process of getting the data to the police had to be streamlined. As it stood, the process was cumbersome: an officer had to look up a streetlight number on a map and send it to the environmental services department, which would in turn contact Current, which would then extract and forward the video. It was an essentially manual process with so many people in the loop that it could not adequately safeguard the chain of custody required for evidence to hold up in court.

Come October, the police department had a draft policy in place, limiting the use of the data to serious crimes, though what that set of crimes encompassed wasn’t fully defined.

By mid-February of 2019, the data retrieval and chain-of-custody issues were resolved. The police department had started using Genetec Clearance, a digital-evidence management system, Jordon explains. San Diego Police now have a direct visual interface to the smart-streetlight network, showing where the lights are and if they are operational at any time. When a crime occurs, police officers can use that tool to directly extract the video from a particular streetlight. Data not extracted is still erased after five days.

It’s hard to say exactly when the use of the streetlight video started bubbling up into the public consciousness. Between March and September 2019 the city and the police department held more than a dozen community meetings, explaining the capabilities of the streetlights and collecting feedback on current and proposed future uses for the data. In June 2019 the police department released information on its use of video data to solve crimes—99 at that point, over the 10-month period beginning in August 2018.

In August of 2019, Genevieve Jones-Wright, then legal director of the Partnership for the Advancement of New Americans, said she and other leaders of community organizations heard about a tour the city was offering as part of this outreach effort. Representatives of several organizations made a plan to take the tour and attend future community meetings, soon forming a coalition called Trust SD, for Transparent and Responsible Use of Surveillance Technology San Diego. The group, with Jones-Wright as its spokesperson, pushes for clear and transparent policies around the program.

In late September 2019, TrustSD had some 20 community organizations on board (the group now encompasses 30 organizations). That month, Jones-Wright wrote to the City Council on behalf of the group, asking for an immediate moratorium on the use, installation, and acquisition of smart streetlights until safeguards were put in place to mitigate impacts on civil liberties; a city ordinance that protects privacy and ensures oversight; and public records identifying how streetlight data had been, and was being, used.

In March 2019, the police department adopted a formal policy around the use of streetlight data. It stated that video and audio may be accessed exclusively for law-enforcement purposes with the police department as custodian of the records; the city’s sustainability department (home of the streetlight program) does not have access to that crime-related data. The policy also pledges that streetlight data will not be used to discriminate against any group or to invade the privacy of individuals.

Jones-Wright and her coalition argued that the lack of specific penalties for misuse of the data was unacceptable. Since September 2019 they have been pushing for a change to the city’s municipal code that would codify the policy into a law, enforceable by fines and other measures if violated.

But the streetlight controversy didn’t really explode until early in 2020, when another release of data from the police department indicated that the number of times video from the street lights had been used was up to 175 in the first year and a half of police department use, all in investigations of “serious” crimes. The list included murders, sexual assaults, and kidnappings—but it also included vandalism and illegal dumping, which caused activists to question the city’s definition of “serious.”

Says Jones-Wright, “When you have a system with no oversight, we can’t tell if you are operating within the confines of rules you created for yourself. When you see vandalism and illegal dumping on the list—these are not serious crimes—so you have to ask why they tapped into the surveillance footage for that, and could the reason be the class of the victim. We are concerned with the hierarchy of justice here and creating tiers of victimhood.”

The police department’s Jordon points out that the dumping incident involved a truckload of concrete that blocked vehicles from entering and exiting a parking garage used by FBI employees, and therefore qualified as a serious situation.

Local media outlets reported extensively on the controversy. The city attorney submitted a policy to the council, and the council held a vote on whether to ratify the proposed policy in January. It was rejected, not because of any specific objections to its content, but because some lawmakers feared that a policy would no longer be enough to satisfy the public’s concerns. What was needed, they said, was an actual ordinance, with penalties for those who violated it, one that would go beyond streetlights to cover all use of surveillance technology by the city.

San Diegans may yet get their ordinance. In mid-July, 2020, a committee of the City Council approved two proposed ordinances related to streetlight data. One would set up a process to govern all surveillance technologies in the city, including oversight, auditing, and reporting; data collection and sharing; and access, protection, and retention of data. The other would create a privacy advisory commission of technical experts and community members to review proposals for surveillance technology use. These ordinances still need to go to the full City Council for a vote.

The police department, Jordon said, would welcome clarity. “People are of course concerned right now about things taking place in Hong Kong and China [with the use of cameras] that give them pause. From our standpoint, we have tried to be great stewards of the technology,” he says.

“There is no reason under the sun that the City of San Diego and our law enforcement agencies should continue to operate without rules that govern the use of surveillance technology—where there is no transparency, oversight or accountability—no matter what the benefits are,” says Jones-Wright. “So, I am very happy that we are one step closer to having the ordinances that TRUST SD helped to write on the books in San Diego.”

While the city and community organizations figure out how to regulate the streetlights, their use continues. To date, San Diego police have tapped streetlight video data nearly 400 times, including this past June, during investigations of incidents of felony vandalism and looting during Black Lives Matter protests.

Meanwhile, some of the promised upside of the technology hasn’t worked out as expected.

Software developers who had initially expressed interest in using streetlight data to create consumer-oriented tools for ordinary citizens have yet to get an app on the market.

Says Caldwell, “When we initially launched the program, there was the hope that San Diego’s innovation economy community would find all sorts of interesting use cases for the data, and develop applications, and create a technology ecosystem around mobility and other solutions. That hasn’t been borne out yet. We have had a lot of conversations with companies looking at it, but it hasn’t turned into a smashing success yet.”

And the planned energy savings, intended to generate cash to pay for the expensive fixtures, were chewed up by increases in electric rates.

For better or worse, San Diego did pave the way for other municipalities looking into smart city technology. Take Carlsbad, a smaller city just north of San Diego. It is in the process of acquiring its own sensor-laden streetlights; these, however, will not incorporate cameras. David Graham, who managed the streetlight acquisition program as deputy chief operating officer for San Diego and is now chief innovation officer for the City of Carlsbad, did not respond to Spectrum’s requests for comment, but he indicated in an interview with the Voice of San Diego that cameras are not necessary to count cars and pedestrians and that other methods will be used for that function. And Carlsbad’s City Council has indicated that it intends to be proactive in establishing clear policies around the use of streetlight data.

“The policies and laws should have been in place before the street lamps were in the ground, instead of legislation catching up to technological capacity,” says Morgan Currie, a lecturer in data and society at the University of Edinburgh.

“It is a clear case of function creep. The lights weren’t designed as surveillance tools, rather, it’s a classic example of how data collection systems are easily retooled as surveillance systems, of how the capacities of the smart city to do good things can also increase state and police control.”

Toshiba’s Light Sensor Paves the Way for Cheap Lidar

Post Syndicated from John Boyd original https://spectrum.ieee.org/cars-that-think/sensors/automotive-sensors/toshibas-light-sensor-highresolution-lidar

The introduction of fully autonomous cars has slowed to a crawl. Nevertheless, technologies such as rearview cameras and automatic self-parking systems are helping the auto industry make incremental progress towards Level 4 autonomy while boosting driver-assist features along the way.

To that end, Toshiba has developed a compact, highly efficient silicon photo-multiplier (SiPM) that enables non-coaxial Lidar to employ off-the-shelf camera lenses to lower costs and help bring about solid-state, high-resolution Lidar.

Automotive Lidar (light detection and ranging) typically uses spinning lasers to scan a vehicle’s environment 360 degrees by bouncing laser pulses off surrounding objects and measuring the return time for the reflected light to calculate their distances and shapes. The resulting point-cloud map can be used in combination with still images, radar data and GPS to create a virtual 3D map of the area the vehicle is traveling through.
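The round-trip timing at the heart of that measurement reduces to a simple relation—range = c × t / 2, since the pulse covers the distance twice. A minimal sketch:

```python
# Minimal illustration of the time-of-flight relation used by lidar:
# the pulse travels to the target and back, so range = c * t / 2.

C = 299_792_458.0  # speed of light, in meters per second

def range_from_round_trip(t_seconds: float) -> float:
    """Return target distance in meters for a measured round-trip time."""
    return C * t_seconds / 2.0

# A return detected 1.33 microseconds after the pulse left corresponds
# to a target roughly 200 meters away.
print(range_from_round_trip(1.33e-6))  # ~199.4 m
```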

However, high-end Lidar systems can be expensive, costing $80,000 or more, though cheaper versions are also available. The current leader in the field is Velodyne, whose lasers mechanically rotate in a tower mounted atop of a vehicle’s roof.

Solid-state Lidar systems have been announced in the past several years but have yet to challenge the mechanical variety. Now, Toshiba hopes to advance their cause with its SiPM: a solid-state light sensor employing single-photon avalanche diode (SPAD) technology. The Toshiba SiPM contains multiple SPADs, each controlled by an active quenching circuit (AQC). When a SPAD detects a photon, its cathode voltage drops; the AQC then resets the voltage to its initial value so the cell is ready to detect the next photon.

“Typical SiPM recovery time is 10 to 20 nanoseconds,” says Tuan Thanh Ta, Toshiba’s project leader for the technology. “We’ve made it 2 to 4 times faster by using this forced or active quenching method.”

The increased efficiency means Toshiba has been able to use far fewer light sensing cells—down from 48 to just 2—to produce a device measuring 25 μm x 90 μm, much smaller, the company says, than standard devices measuring 100 μm x 100 μm. The small size of these sensors has allowed Toshiba to create a dense two-dimensional array for high sensitivity, a requisite for long-range scanning. 

But such high-resolution data would require impractically large multichannel readout circuitry, comprising separate analog-to-digital converters (ADCs) for long-distance scanning and time-to-digital converters (TDCs) for short distances. Toshiba has overcome this problem by realizing both ADC and TDC functions in a single circuit. The result is an 80 percent reduction in size (down to 50 μm by 60 μm) over conventional dual data converter chips.

“Our field trials using the SiPM with a prototype Lidar demonstrated the system’s effectiveness up to a distance of 200 meters while maintaining high resolution—which is necessary for Level 4 autonomous driving,” says Ta, who hails from Vietnam. “This is roughly quadruple the capability of solid-state Lidar currently on the market.”

“Factors like detection range are important, especially in high-speed environments like highways,” says Michael Milford, an interdisciplinary researcher and Deputy Director, Queensland University of Technology (QUT) Center for Robotics in Australia. “So I can see that these [Toshiba trial results] are important properties when it comes to commercial relevance.” 

And as Toshiba’s SiPM employs a two-dimensional array—unlike the one-dimensional photon receivers used in coaxial Lidar systems—Ta points out its 2D aspect ratio corresponds to that of light sensors used in commercial cameras. Consequently, off-the-shelf standard, telephoto and wide-angle lenses can be used for specific applications, helping to further reduce costs. 

By comparison, coaxial Lidar systems use the same optical path for sending and receiving the light, and so require a costly customized lens to transmit and then collect the received light and send it to the 1D photon receivers, Ta explains.

But as Milford points out, “If Level 4+ on-road autonomous driving becomes a reality, it’s unlikely we’ll have a wide range of solutions and sensor configurations. So this flexibility [of Toshiba’s sensor] is perhaps more relevant for other domains like drones, and off-road autonomous vehicles, where there is more variability, and use of existing hardware is more important.”

Meanwhile, Toshiba is working to improve the performance quality of its SiPM. “We aim to introduce practical applications in fiscal 2022,” says Akihide Sai, a Senior Research Scientist at Toshiba overseeing the SiPM project. Though he declines to answer whether Toshiba is working with automakers or will produce its own solid-state Lidar system, he says Toshiba will make the SiPM device available to other companies. 

He adds, “We also see it being used in applications such as automated navigation of drones and in robots used for infrastructure monitoring, as well as in applications used in factory automation.”

But QUT’s Milford makes this telling point. “Many of the advances in autonomous vehicle technology and sensing, especially with regards to cost, bulk, and power usage, will become most relevant only when the overall challenge of Level 4+ driving is solved. They aren’t themselves the critical missing pieces for enabling this to happen.”

Can Sensors That Detect Coronavirus in the Air Help Economies Reopen Safely?

Post Syndicated from Emily Waltz original https://spectrum.ieee.org/the-human-os/sensors/chemical-sensors/devices-monitor-coronavirus-in-the-air


As businesses scramble to find ways to make workers and customers feel safe about entering enclosed spaces during a pandemic, several companies have proposed a solution: COVID-19 air monitoring devices. 

These devices suck in large quantities of air and trap aerosolized virus particles and anything else that’s present. The contents are then tested for the presence of the novel coronavirus, also known as SARS-CoV-2, which causes COVID-19. 

Several companies in the air quality and diagnostics sectors have quickly developed this sort of tech, with various iterations available on the market. The devices can be used anywhere, such as office buildings, airplanes, hospitals, schools and nursing homes, these companies say. 

But the devices don’t deliver results in real time—they don’t beep to alert people nearby that the virus has been detected. Instead, the collected samples must be sent to a lab to be analyzed, typically with a method called PCR, or polymerase chain reaction. 

This process takes hours. Add to that the logistics of physically transporting the samples to a lab and it could be a day or more before results are available. Still, there’s value in day-old air quality information, say developers of this type of technology. 

“It’s not solving everything about COVID-19,” says Milan Patel, CEO of PathogenDx, a DNA-based testing company. But it does enable businesses to spot the presence of the virus without relying on people to self-report, and brings peace of mind to everyone involved, he says. “If you’re going into a building, wouldn’t it be great to know that they’re doing environmental monitoring?” Patel says.

PathogenDx this year developed an airborne SARS-CoV-2 detection system by combining its DNA testing capability with an air sampler from Bertin Instruments. The gooseneck-shaped instrument uses a cyclonic vortex that draws in a high volume of air and traps any particles inside in a liquid. Once the sample is collected, it must be sent to a PathogenDx lab, where it goes through a two-step PCR process. This amplifies the virus’s genetic code so that it can be detected. Adding a second step to the process improves the test’s sensitivity, Patel says. (PCR is also the gold standard for general human testing for COVID-19.)

Patel says he envisions the device proving particularly useful on airplanes, in large office buildings and health care facilities. On an airplane, for example, if the device picks up the presence of the virus during a flight, the airline can let passengers on that plane know that they were potentially exposed, he says. Or if the test comes back negative for the flight, the airline can “know that they didn’t just infect 267 passengers,” says Patel. 

In large office buildings, daily air sampling can give building managers a tool for early detection of the virus. As soon as the tests start coming back positive, the office managers could ask employees to work from home for a couple of weeks. Hospitals could use the device to track trends, identify trouble spots, and alert patients and staff of exposures.

Considering that many carriers of the virus don’t know they have it, or may be reluctant to report positive test results to every business they’ve visited, air monitoring could alert people to potential exposures in a way that contact tracing can’t. 

Other companies globally are putting forth their iterations on SARS-CoV-2 air monitoring. Sartorius in Göttingen, Germany, says its device was used to analyze the air in two hospitals in Wuhan, China. (Results: The concentration of the virus in isolation wards and ventilated patient rooms was very low, but was higher in the toilet areas used by the patients.)

Assured Bio Labs in Oak Ridge, Tennessee markets its air monitoring device as a way to help the American workforce get back to business. InnovaPrep in Missouri offers an air sampling kit called the Bobcat, and Eurofins Scientific in Luxembourg touts labs worldwide that can analyze such samples.

But none of the commercially available tests can offer real-time results. That’s something that Jing Wang and Guangyu Qiu at the Swiss Federal Institute of Technology (ETH Zurich) and Swiss Federal Laboratories for Materials Science and Technology, or Empa, are working on.

They’ve come up with a plasmonic photothermal biosensor that can detect the presence of SARS-CoV-2 without the need for PCR. Qiu, a sensor engineer and postdoc at ETH Zurich and Empa, says that with some more work, the device could provide results within 15 minutes to an hour. “We’re trying to simplify it to a lab on a chip,” says Qiu. 

The device combines an optical sensor and a photothermal component that harness localized surface plasmon resonance sensing transduction and the plasmonic photothermal effect. But before the device can be tested in the real world, the researchers must find a way, on-board the device, to separate the virus’s genetic material from its membrane. Qiu says he hopes to resolve this and have a prototype ready to test by the end of the year.

New Electronic Nose Sniffs Out Perfectly Ripe Peaches for Harvest

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/sensors/chemical-sensors/new-electronic-nose-sniffs-out-perfectly-ripe-peaches-for-harvest

Have you ever tried to guess the ripeness of a peach by its smell? Farmers with a well-trained nose may be able to detect the unique combination of alcohols, esters, ketones, and aldehydes, but even an expert may struggle to know when the fruit is perfect for the picking. To help with harvesting, scientists have been developing electronic noses for sniffing out the ripest and most succulent peaches. In a recent study, one such e-nose exceeded 98 percent accuracy.

Sergio Luiz Stevan Jr. and colleagues at Federal University of Technology – Paraná and State University of Ponta Grossa, in Brazil, developed the new e-nose system. Stevan notes that even within a single, large orchard, fruit on one tree may ripen at different times than fruit on another tree, thanks to microclimates of varying ventilation, rain, soil, and other factors. Farmers can inspect the fruit and make their best guess at the prime time to harvest, but risk losing money if they choose incorrectly.

Fortunately, peaches emit vaporous molecules, called volatile organic compounds, or VOCs. “We know that volatile organic compounds vary in quantity and type, depending on the different phases of fruit growth,” explains Stevan. “Thus, the electronic noses are an [option], since they allow the online monitoring of the VOCs generated by the culture.”

The e-nose system created by his team has a set of gas sensors sensitive to particular VOCs. The measurements are digitized and pre-processed in a microcontroller. Next, a pattern recognition algorithm is used to classify each unique combination of VOC molecules associated with three stages of peach ripening (immature, ripe, over-ripe). The data is stored internally on an SD memory card and transmitted via Bluetooth or USB to a computer for analysis.
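As a rough illustration of that pipeline—the sensor-reading function and classifier below are hypothetical stand-ins, not the authors’ code—the flow from raw gas-sensor readings to a ripeness label might look like this:

```python
# Hypothetical sketch of the e-nose flow described above: sample the gas
# sensors, pre-process the readings, classify the VOC pattern into one of
# three ripeness stages, and record the result.
RIPENESS_STAGES = ("immature", "ripe", "over-ripe")

def preprocess(raw_readings, clean_air_baseline):
    """Normalize each sensor channel against its clean-air baseline."""
    return [r / b for r, b in zip(raw_readings, clean_air_baseline)]

def classify(features, model):
    """'model' stands in for a pattern-recognition classifier trained
    offline on labeled orchard samples (e.g. a scikit-learn estimator)."""
    return RIPENESS_STAGES[model.predict([features])[0]]

def sample_once(read_sensors, clean_air_baseline, model, log):
    features = preprocess(read_sensors(), clean_air_baseline)
    stage = classify(features, model)
    log({"features": features, "stage": stage})  # store to SD card / send over Bluetooth
    return stage
```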

The system is also equipped with a ventilation mechanism that draws in air from the surrounding environment at a constant rate. The passing air is subjected to a set level of humidity and temperature to ensure consistent measurements. The idea, Stevan says, is to deploy several of these “noses” across an orchard to create a sensing network. 

He notes several advantages of this system over existing ripeness-sensing approaches, including that it is online, conducts real-time continuous analyses in an open environment, and does not require direct handling of the fruit. “It is different from the other [approaches] present in the literature, which are generally carried out in the laboratory or warehouses, post-harvest or during storage,” he says. The e-nose system is described in a study published June 4 in IEEE Sensors Journal.

While the study shows that the e-nose system already has a high rate of accuracy at more than 98 percent, the researchers are continuing to work on its components, focusing in particular on improving the tool’s flow analysis. They have filed for a patent and are exploring the prospect of commercialization.

For those who prefer their fruits and grains in drinkable form, there is additional good news. Stevan says in the past his team has developed a similar e-nose for beer, to analyze both alcohol content and aromas. Now they are working on an e-nose for wine, as well as a variety of other fruits.

Monitoring bees with a Raspberry Pi and BeeMonitor

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/monitoring-bees-with-a-raspberry-pi-and-beemonitor/

Keeping an eye on bee life cycles is a brilliant example of how Raspberry Pi sensors help us understand the world around us, says Rosie Hattersley

The setup featuring an Arduino, RF receiver, USB cable and Raspberry Pi

Getting to design and build things for a living sounds like a dream job, especially if it also involves Raspberry Pi and wildlife. Glyn Hudson has always enjoyed making things and set up a company manufacturing open-source energy monitoring tools shortly after graduating from university. With access to several hives at his keen apiarist parents’ garden in Snowdonia, Glyn set up BeeMonitor using some of the tools he used at work to track the beehives’ inhabitants.

Glyn checking the original BeeMonitor setup

“The aim of the project was to put together a system to monitor the health of a bee colony by monitoring the temperature and humidity inside and outside the hive over multiple years,” explains Glyn. “Bees need all the help and love they can get at the moment and without them pollinating our plants, we’d struggle to grow crops. They maintain a 34°C core brood temperature (±0.5°C) even when the ambient temperature drops below freezing. Maintaining this temperature when a brood is present is a key indicator of colony health.”

Wi-Fi not spot

BeeMonitor has been tracking the hives’ population since 2012 and is one of the earliest examples of a Raspberry Pi project. Glyn built most of the parts for BeeMonitor himself. Open-source software developed for the OpenEnergyMonitor project provides a data-logging and graphing platform that can be viewed online.

Spectators in protective suits watching staff monitor the beehive

BeeMonitor complete with solar panel to power it. The Snowdonia bees produce 12 to 15 kg of honey per year

The hives were too far from the house for WiFi to reach, so Glyn used a low-power RF sensor connected to an Arduino which was placed inside the hive to take readings. These were received by a Raspberry Pi connected to the internet.

Diagram showing what information BeeMonitor is trying to establish

At first, there was both a DS18B20 temperature sensor and a DHT22 humidity sensor inside the beehive, along with the Arduino (setup info can be found here). Data from these was saved to an SD card, the obvious drawback being that this didn’t display real-time data readings. In his initial setup, Glyn also had to extract and analyse the CSV data himself. “This was very time-consuming but did result in some interesting data,” he says.
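A few lines of Python can absorb exactly that kind of chore. The sketch below assumes a simple “timestamp,temperature” CSV layout, which is an assumption about the log format rather than Glyn’s actual file:

```python
# Hypothetical analysis of a "timestamp,temperature_C" CSV from the SD card.
# The two-column layout is an assumption, not Glyn's actual file format.
import csv
from collections import defaultdict

def daily_mean_temperatures(path):
    """Return the mean hive temperature for each day in the log."""
    totals, counts = defaultdict(float), defaultdict(int)
    with open(path, newline="") as f:
        for timestamp, temp in csv.reader(f):
            day = timestamp[:10]              # "YYYY-MM-DD" prefix of the timestamp
            totals[day] += float(temp)
            counts[day] += 1
    return {day: totals[day] / counts[day] for day in totals}
```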

Sensor-y overload

Almost as soon as BeeMonitor was running successfully, Glyn realised he wanted to make the data live on the internet. This would enable him to view live beehive data from anywhere and also allow other people to engage in the data.

“This is when Raspberry Pi came into its own,” he says. He also decided to drop the DHT22 humidity sensor. “It used a lot of power and the bees didn’t like it – they kept covering the sensor in wax! Oddly, the bees don’t seem to mind the DS18B20 temperature sensor, presumably since it’s a round metal object compared to the plastic grille of the DHT22,” notes Glyn.

Unlike the humidity sensor, the bees don’t seem to mind the temperature probe

The system has been running for eight years with minimal intervention and is powered by an old car battery and a small solar PV panel. Running costs are negligible: “Raspberry Pi is perfect for getting projects like this up and running quickly and reliably using very little power,” says Glyn. He chose it because of the community behind the hardware. “That was one of Raspberry Pi’s greatest assets and what attracted me to the platform, as well as the competitive price point!” The whole setup cost him about £50.

Glyn tells us we could set up a basic monitor using Raspberry Pi, a DS18B20 temperature sensor, a battery pack, and a solar panel.
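For anyone wanting to try the same thing, a minimal sketch might look like the following. It assumes the Raspberry Pi’s 1-Wire interface is enabled (dtoverlay=w1-gpio in /boot/config.txt) and a DS18B20 is wired to the default GPIO; the logging interval is just an illustrative choice.

```python
# Minimal sketch of a basic hive monitor: read a DS18B20 over the
# Raspberry Pi's 1-Wire interface and print a timestamped reading.
import glob
import time

def read_ds18b20():
    """Return the first DS18B20 reading in degrees Celsius, or None on a CRC error."""
    device = glob.glob("/sys/bus/w1/devices/28-*/w1_slave")[0]
    with open(device) as f:
        lines = f.read().splitlines()
    if not lines[0].strip().endswith("YES"):   # first line ends in YES when the CRC is good
        return None
    millidegrees = int(lines[1].split("t=")[-1])
    return millidegrees / 1000.0

while True:
    temp = read_ds18b20()
    if temp is not None:
        print(f"{time.strftime('%Y-%m-%d %H:%M:%S')},{temp:.1f}")
    time.sleep(60)  # one reading per minute is plenty for a brood-temperature log
```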

The post Monitoring bees with a Raspberry Pi and BeeMonitor appeared first on Raspberry Pi.

Demystifying the Standards for Automotive Ethernet

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/demystifying_the_standards_for_automotive_ethernet

As Ethernet celebrates its 40th anniversary, this webinar will discuss its deployment for automotive applications and the standards related to its implementation. It will explain the role of the Institute of Electrical and Electronics Engineers (IEEE), OPEN Alliance, and the Ethernet Alliance in governing the implementation.  

Key learnings:

  • Understand the role of IEEE, OPEN Alliance, and the Ethernet Alliance in the development of automotive Ethernet 
  • Determine the relevant automotive Ethernet specifications for PHY developers, original equipment manufacturers, and Tier 1 companies  
  • Review the role of automotive Ethernet relative to other emerging high-speed digital automotive standards 

Please contact [email protected] to request a PDH certificate code.

Sony Builds AI Into a CMOS Image Sensor

Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/view-from-the-valley/sensors/imagers/sony-builds-ai-into-a-cmos-image-sensor

Sony today announced that it has developed and is distributing smart image sensors. These devices use machine learning to process captured images on the sensor itself. They can then select only relevant images, or parts of images, to send on to cloud-based systems or local hubs.

This technology, says Mark Hanson, vice president of technology and business innovation for Sony Corp. of America, means practically zero latency between the image capture and its processing; low power consumption enabling IoT devices to run for months on a single battery; enhanced privacy; and far lower costs than smart cameras that use traditional image sensors and separate processors.

Sony’s San Jose laboratory developed prototype products using these sensors to demonstrate to future customers. The chips themselves were designed at Sony’s Atsugi, Japan, technology center. Hanson says that while other organizations have similar technology in development, Sony is the first to ship devices to customers.

Sony builds these chips by thinning and then bonding two wafers—one containing chips with light-sensing pixels and one containing signal processing circuitry and memory. This type of design is only possible because Sony is using a back-illuminated image sensor. In standard CMOS image sensors, the electronic traces that gather signals from the photodetectors are laid on top of the detectors. This makes them easy to manufacture, but sacrifices efficiency, because the traces block some of the incoming light. Back-illuminated devices put the readout circuitry and the interconnects under the photodetectors, adding to the cost of manufacture.

“We originally went to backside illumination so we could get more pixels on our device,” says Hanson. “That was the catalyst to enable us to add circuitry; then the question was what were the applications you could get by doing that.”

Hanson indicated that the initial applications for the technology will be in security, particularly in large retail situations requiring many cameras to cover a store. In this case, the amount of data being collected quickly becomes overwhelming, so processing the images at the edge would simplify that and cost much less, he says. The sensors could, for example, be programmed to spot people, carve out the section of the image containing the person, and only send that on for further processing. Or, he indicated, they could simply send metadata instead of the image itself, say, the number of people entering a building. The smart sensors can also track objects from frame to frame as a video is captured, for example, packages in a grocery store moving from cart to self-checkout register to bag.
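A hypothetical sketch of those two output modes follows; the on-sensor detector and transport functions are placeholders for illustration, not Sony’s API.

```python
# Illustration of the two output modes described above: forward only the
# image region containing a person, or forward metadata alone.
# detect_people and send are placeholders; frame is a 2-D image array.
def process_capture(frame, detect_people, send):
    boxes = detect_people(frame)               # on-sensor detections: list of (x, y, w, h)
    if not boxes:
        return                                 # nothing relevant, so nothing is transmitted
    x, y, w, h = boxes[0]
    send({"kind": "crop", "pixels": frame[y:y + h, x:x + w]})   # the person crop only
    send({"kind": "metadata", "people": len(boxes)})            # or just a count
```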

Remote surveillance, with the devices running on battery power, is another application that’s getting a lot of interest, he says, along with manufacturing companies that mix robots and people looking to use image sensors to improve safety. Consumer gadgets that use the technology will come later, but he expects developers to begin experimenting with samples.

Learn the Latest Standards and Test Requirements for Automotive Radar

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/learn_the_latest_standards_and_test_requirements_for_automotive_radar

Automotive radar is a key enabler for autonomous driving technology, and it is one of the key areas for new modulation technologies. Unlike wireless communication technologies, there are no regulations and definitions for modulation type, signal pattern, and duty cycle for automotive radar.  

This webinar addresses which standards apply to radar measurements and how to measure different radar signals using Keysight’s automotive radar test solution.

Key learnings:

  • How to set the right parameters for different chirp rates and duty cycle radar signals for power measurements.  
  • Learn how to test the Rx interference of your automotive radar.  
  • Understand how to use Keysight’s automotive test solution to measure radar signals. 

Please contact [email protected] to request a PDH certificate code.

Volvo and Lidar-maker Luminar to Deliver Hands-free Driving by 2022

Post Syndicated from Lawrence Ulrich original https://spectrum.ieee.org/cars-that-think/sensors/automotive-sensors/volvo-and-lidarmaker-luminar-to-deliver-handsfree-driving-by-2022

The race to bring self-driving cars to showrooms may have a new leader: Volvo Cars said it will partner with Silicon Valley-based lidar manufacturer Luminar to deliver genuine hands-free, eyes-off-the-road highway driving beginning in 2022. That so-called “Level 3” capability would be an industry first, as companies from Tesla to Uber thus far have failed to meet lofty promises to make autonomous driving a reality. 

Sweden’s Volvo, owned by Geely Holdings of China, said that models based on its upcoming SPA2 platform (for “Scalable Product Architecture”) will be hardware-enabled for Luminar’s roof-mounted lidar system. That includes the upcoming Polestar 3 from Volvo’s new electric-car division, and a range of Volvo-branded cars and SUVs. Henrik Green, chief technology officer for Volvo, said the optional “Highway Pilot” system would allow full autonomous driving, but only “when the car determines it is safe to do so.” 

“At that point, your Volvo takes responsibility for the driving and you can relax, take your eyes off the road and your hands off the wheel,” Green said. “Over time, updates over-the-air will expand the areas in which the car can drive itself. For us, a safe introduction of autonomy is a gradual introduction.” 

Luminar’s lidar system scans a car’s surroundings in real time, firing millions of pulses of light to create a virtual 3D map without relying on GPS or a network connection. Most experts agree that lidar is a critical linchpin of any truly autonomous car. A high-profile skeptic is Elon Musk, who has no plans to employ lidar in his Teslas, and scoffs at the technology as redundant and unnecessary.

Austin Russell, founder and chief executive of Luminar, disagrees. 

“If cameras could do everything you can do with lidar, great. But if you really want to get in the autonomous game, this is a clear requirement.” 

The 25-year-old Russell said his company’s high-performance lidar sensors, operating at 1550 nanometers and 64 lines of resolution, can spot even small and low-reflective objects—black cars, animals, a child darting across a road—at a range beyond 200 meters, and up to 500 meters for larger, brighter objects. The high-signal, low-noise system can deliver camera-level vision at 200 meters even in rain, fog or snow, he said. That range, Russell said, is critical to give cars the data and time to solve edge cases and make confident decisions, even when hurtling at freeway speeds.  

“Right now, you don’t have reliable interpretation,” with camera, GPS or radar systems, Russell said. “You can guess what’s ahead of you 99 percent of the time, but the problem is that last one percent. And a 99-percent pedestrian detection rate doesn’t cut it, not remotely close.” 

In a Zoom interview from Palo Alto, Russell held up two prototype versions, a roof-mounted unit roughly the size of a TV cable box, and a smaller unit that would mount on bumpers. The elegant systems incorporate a single laser and receiver, rather than the bulky, expensive, stacked arrays of lower-performance systems. Luminar built all its components and software in-house, Russell said, and is already on its eighth generation of ASIC chip controllers. 

The roof-mounted unit can deliver 120 degrees of forward vision, Russell said, plenty for Volvo’s application that combines lidar with cameras, radar, backup GPS and electrically controlled steering and braking. For future robotaxi applications with no human pilot aboard, cars would combine three to five lidar sensors to deliver full 360-degree vision. The unit will also integrate with Volvo’s Advanced Driver Assistance Systems (ADAS), improving such features as automated emergency braking, which global automakers have agreed to make standard in virtually all vehicles by 2022. 

The range and performance, Russell said, is key to solving the conundrums that have frustrated autonomous developers. 

The industry has hit a wall, unable to make the technical leap to Level 3, with cars so secure in their abilities that owners could perform tasks such as texting, reading or even napping. Last week, Audi officially abandoned its longstanding claim that its new A8 sedan would do just that with its Traffic Jam Pilot system. Musk is luring consumers to pay $7,000 up-front for “Full Self-Driving” features that have yet to materialize, and that he claims would allow them to use their Teslas as money-making robotaxis.

Current Level 2 systems, including Tesla’s Autopilot and Cadillac’s SuperCruise, often disengage when they can’t safely process their surroundings. Those disengagements, or even their possibility, demand that drivers continue to pay attention to the road at all times. And when systems do work effectively, drivers can fall out of the loop and be unable to quickly retake control. Russell acknowledged that those limitations leave current systems feeling like a parlor trick: If a driver must still pay full attention to the road, why not just keep driving the old-fashioned way? 

“Assuming perfect use, it’s fine as a novelty or convenience,” Russell said. “But it’s easy to get lulled into a sense of safety. Even when you’re really trying to maintain attention, to jump back into the game is very difficult.” 

Ideally, Russell said, a true Level 3 system would operate for years and hundreds or thousands of miles and never disengage without generous advance warning. 

“From our perspective, spontaneous handoffs that require human intervention are not safe,” he said. “It can’t be, ‘Take over in 500 milliseconds or I’m going to run into a wall.’” 

One revolution of Level 3, he said, is that drivers would actually begin to recover valuable time for productivity, rest or relaxation. The other is safety, and the elusive dream of sharply reducing roadway crashes that kill 1.35 million people a year around the world. 

Volvo cautioned that its Highway Pilot, whose price is up in the air, would initially roll out in small volumes, and steadily expand across its lineup. Volvo’s announcement included an agreement to possibly increase its minority stake in Luminar. But Luminar said it is also working with 12 of the world’s top 15 automakers in various stages of lidar development. And Russell said that, whichever manufacturer makes them, lidar-based safety and self-driving systems will eventually become an industry standard. 

“Driving adoption throughout the larger industry is the right move,” he said. “That’s how you save the most lives and create the most value.” 

Drones Use Radio Waves to Recharge Sensors While in Flight

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/sensors/remote-sensing/uavs-prove-usefuldelivering-remote-power-charging-services


Remote sensors play a valuable role in collecting data—but recharging these devices while they are scattered over vast and isolated areas can be tedious. A new system is designed to make the charging process easier by using unmanned aerial vehicles (UAVs) to deliver power using radio waves during a flyby. A specialized antenna on the sensor harvests the signals and converts them into electricity. The design is described in a study published 23 March in IEEE Sensors Letters.

Mirror Arrays Make Augmented Reality More Realistic

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/sensors/imagers/mirrors-augmented-reality-more-realistic


In the world of augmented reality (AR), real and virtual images are combined to create immersive environments for users. While the technology is advancing, it has remained challenging to make virtual images appear more “solid” in front of real-world objects, an effect that’s essential for achieving depth perception.

Now, a team of researchers has developed a compact AR system that, using an array of miniature mirrors that switch positions tens of thousands of times per second, can create this elusive effect. They describe their new system in a study published February 13 in IEEE Transactions on Visualization and Computer Graphics.

Occlusion occurs when light from objects in the foreground blocks the light from objects at farther distances. Commercialized AR systems have limited occlusion because they tend to rely on a single spatial light modulator (SLM) to create the virtual images, while allowing natural light from real-world objects to also be perceived by the user.

“As can be seen with many commercial devices such as the Microsoft HoloLens or Magic Leap, not blocking high levels of incoming light [from real-world objects] means that virtual content becomes so transparent it becomes difficult to even see,” explains Brooke Krajancich, a researcher at Stanford University. This lack of occlusion can interfere with the user’s depth perception, which would be problematic in tasks requiring precision, such as AR-assisted surgery.

To achieve better occlusion, some researchers have been exploring the possibility of a second SLM that controls the incoming light of real-world objects. However, incorporating two SLMs into one system involves a lot of hardware, making it bulky. Instead, Krajancich and her colleagues developed a new design that combines virtual projection and light-blocking abilities into one element.

Their design relies on a dense array of miniature mirrors that can be individually flipped between two states—one that allows light through and one that reflects light—at a rate of up to tens of thousands of times per second.

“Our system uses these mirrors to switch between a see-through state, which allows the user to observe a small part of the real world, and a reflective state, where the same mirror blocks light from the scene in favor of an [artificial] light source,” explains Krajancich. The system computes the optimal arrangement for the mirrors and adjusts accordingly.

Krajancich notes some trade-offs with this approach, including some challenges in rendering colors properly. It also requires a lot of computing power, and therefore may require higher power consumption than other AR systems. While commercialization of this system is a possibility in the future, she says, the approach is still in the early research stages.

New Gyroscope Design Will Help Autonomous Cars and Robots Map the World

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/tech-talk/sensors/automotive-sensors/new-gyroscope-design-will-help-autonomous-cars-robots-map-world

As useful as GPS is, it’s not a system that we can rely on all the time. There are many situations in which GPS might not be available, such as when you’re inside a building, driving through a tunnel, on a battlefield, in a mine, underwater, or in space.

Systems that need to track location, like your cell phone, an autonomous car, or a submarine, often use what are called Inertial Measurement Units (a combination of accelerometers and gyroscopes) to estimate position through dead reckoning by tracking changes in acceleration and rotation. The accuracy of these location estimates depends on how accurate the sensors in those IMUs are. Unfortunately, gyroscopes that can do a good job of tracking rotation over long periods of time are too big and too expensive for most commercial or consumer-grade systems.
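In simplified planar form, that dead-reckoning idea amounts to integrating the gyro’s yaw rate into a heading and the (gravity-compensated) forward acceleration into speed and position; any sensor bias is integrated right along with the signal, which is why gyroscope quality matters so much. A minimal sketch, not any particular IMU’s filter:

```python
# Simplified planar dead-reckoning sketch: integrate yaw rate into heading
# and forward acceleration into speed and position. Bias in either sensor
# accumulates without bound, which is why gyroscope accuracy matters.
import math

def dead_reckon(samples, dt):
    """samples: iterable of (yaw_rate_rad_s, forward_accel_m_s2) at interval dt."""
    heading, speed, x, y = 0.0, 0.0, 0.0, 0.0
    for yaw_rate, accel in samples:
        heading += yaw_rate * dt
        speed += accel * dt
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
    return x, y, heading
```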

Researchers at the University of Michigan, led by Khalil Najafi and funded by DARPA, have presented a paper at the 7th IEEE International Symposium on Inertial Sensors & Systems on a new type of gyroscope called a precision shell integrating (PSI) gyroscope, which the authors say is “10,000 times more accurate but only 10 times more expensive than gyroscopes used in your typical cell phone.” It offers the performance of much larger gyroscopes at one thousandth the cost, meaning that high-precision location awareness, indoor navigation, and long-term autonomy may soon be much more achievable for mobile devices and underground robots.

Arm Flexible Access for Automotive Applications

Post Syndicated from ARM original https://spectrum.ieee.org/sensors/automotive-sensors/arm_flexible_access_for_automotive_applications


The power of computing has profoundly influenced our lives and we place a high degree of trust in our electronic systems every day. In application areas such as automotive, the consequences of a system failing or going wrong can be serious and even potentially life-threatening. Trust in automotive systems is crucial and only possible because of functional safety: a system’s ability to detect, diagnose and safely mitigate the occurrence of any fault.
  
The application of safety is critical in the automotive industry, but how do we then balance safety and innovation in this rapidly transforming industry? 
  
Arm Flexible Access lowers the barriers to rapid innovation and opens the doors to leading technology with access to a wide range of Arm IP, support, tools, and training. Arm Flexible Access adds safety technologies and features to make it easier for developers in the automotive industry and other safety-related industries to create SoCs and attain certification for industry safety standards. 
  
Arm Flexible Access also enables more experimentation for those designing innovative automotive systems, which may involve self-driving capabilities or electrification of the powertrain, allowing developers to find the most efficient way of balancing functionality with the level of safety required. Different architectures and mixtures of IP can be accessed, evaluated, optimized, and redesigned to achieve the best solution, which can then be taken to certification more easily.
  
To see the range of IP available in Arm Flexible Access suited for automotive applications, please visit our Automotive specific Flexible Access page.

Prophesee’s Event-Based Camera Reaches High Resolution

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/sensors/imagers/prophesees-eventbased-camera-reaches-high-resolution

There’s something inherently inefficient about the way video captures motion today. Cameras capture frame after frame at regular intervals, but most of the pixels in those frames don’t change from one to the other, and whatever is moving in those frames is only captured episodically.

Event-based cameras work differently; their pixels only react if they detect a change in the amount of light falling on them. They capture motion better than any other camera, while generating only a small amount of data and burning little power.

Paris-based startup Prophesee has been developing and selling event-based cameras since 2016, but the applications for their chips were limited. That’s because the circuitry surrounding the light-sensing element took up so much space that the imagers had a fairly low resolution. In a partnership announced this week at the IEEE International Solid-State Circuits Conference in San Francisco, Prophesee used Sony technology to put that circuitry on a separate chip that sits behind the pixels.

“Using the Sony process, which is probably the most advanced process, we managed to shrink the pixel pitch down to 4.86 micrometers” from their previous 15 micrometers, says Luca Verre, the company’s cofounder and CEO.

The resulting 1280 x 720 HD event-based imager is suitable for a much wider range of applications. “We want to enter the space of smart home cameras and smart infrastructure, [simultaneous localization and mapping] solutions for AR/VR, 3D sensing for drones or industrial robots,” he says.

The company is also looking to enter the automotive market, where imagers need a high dynamic range to deal with the big differences between day and night driving. “This is where our technology excels,” he says.

Besides the photodiode, each pixel requires circuits to change the diode’s current into a logarithmic voltage and determine if there’s been an increase or decrease in luminosity. It’s that circuitry that Sony’s technology puts on a separate chip that sits behind the pixels and is linked to them by a dense array of copper connections. Previously, the photodiode made up only 25 percent of the area of the pixel, now it’s 77 percent.

When a pixel detects a change (an event), all that is output is the location of the pixel, the polarity of the change, and a 1-microsecond-resolution time stamp. The imager consumes 32 milliwatts to register 100,000 events per second and ramps up to just 73 milliwatts at 300 million events per second. A system that dynamically compresses the event data allows the chip to sustain a rate of more than 1 billion events per second.
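Because each event is just a pixel location, a polarity, and a timestamp, downstream software can work with a sparse stream rather than full frames. A hypothetical consumer might accumulate recent events into a per-pixel “change map” like this (illustrative only, not Prophesee’s SDK):

```python
# Illustrative consumer of an event stream (not Prophesee's SDK): each
# event is (x, y, polarity, t_us), with polarity +1 (brighter) or -1
# (darker). Events inside a sliding time window are summed per pixel.
from collections import defaultdict, deque

def accumulate(events, window_us=10_000):
    """events: time-ordered iterable of (x, y, polarity, t_us)."""
    change_map = defaultdict(int)
    window = deque()
    for x, y, polarity, t_us in events:
        window.append((x, y, polarity, t_us))
        change_map[(x, y)] += polarity
        # Expire events older than the window and undo their contribution.
        while window and t_us - window[0][3] > window_us:
            ox, oy, op, _ = window.popleft()
            change_map[(ox, oy)] -= op
    return change_map
```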

Advanced Battery Manufacturing Techniques and Automotive Ethernet Webinars

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/advanced_battery_manufacturing_techniques_and_automotive_ethernet_webinars

Keysight Technologies is a world leader in the field of electronic test and measurement including a focus on automotive and energy test solutions. The Keysight Engineering Education (KEE) webinar series provides valuable information to electrical engineers on a wide variety of automotive and energy topics ranging from in-vehicle networking to battery cell manufacturing and test to autonomous driving technologies. Come learn some of the latest test challenges and solutions from technical experts at Keysight and our partners.

Register for one or both of these upcoming webinars:

  • How to Reduce Battery Cell Manufacturing Time – Thursday, March 12
  • Demystifying the Standards for Automotive Ethernet – Wednesday, April 22

Velodyne Will Sell a Lidar for $100

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/cars-that-think/sensors/automotive-sensors/velodyne-will-sell-a-lidar-for-100

Velodyne claims to have broken the US $100 barrier for automotive lidar with its tiny Velabit, which it unveiled at CES earlier this month.

“Claims” is the mot juste because this nice, round dollar amount is an estimate based on the mass-manufacturing maturity of a product that has yet to ship. Such a factoid would hardly be worth mentioning had it come from some of the several-score odd lidar startups that haven’t shipped anything at all. But Velodyne created this industry back during DARPA-funded competitions, and has been the market leader ever since.

Seeing Around the Corner With Lasers—and Speckle

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/cars-that-think/sensors/automotive-sensors/seeing-around-corner-lasers-speckle

Researchers from Rice, Stanford, Princeton, and Southern Methodist University have developed a new way to use lasers to see around corners that beats the previous technique on resolution and scanning speed. The findings appear today in the journal Optica.

The U.S. military—which funded the work through DARPA grants—is interested for obvious reasons, and NASA wants to use it to image caves, perhaps doing so from orbit. The technique might one day also let rescue workers peer into earthquake-damaged buildings and help self-driving cars navigate tricky intersections.

One day. Right now it’s a science project, and any application is years away. 

The original way of corner peeping, dating to 2012, studies the time it takes laser light to go to a reflective surface, onward to an object and back again. Such time-of-flight measurement requires hours of scanning time to produce a resolution measured in centimeters. Other methods have since been developed that look at reflected light in an image to infer missing parts.

The latest method looks instead at speckle, a shimmering interference pattern that in many laser applications is a bug; here it is a feature because it contains a trove of spatial information. To get the image hidden in the speckle—a process called non-line-of-sight correlography—involves a bear of a calculation. The researchers used deep-learning methods to accelerate the analysis. 

“Image acquisition takes a quarter of a second, and we’re getting sub-millimeter resolution,” says Chris Metzler, the leader of the project, who is a post-doc in electrical engineering at Stanford. He did most of the work while completing a doctorate at Rice.

The problem is that the system can achieve these results only by greatly narrowing the field of view.

“Speckle encodes interference information, and as the area gets larger, the resolution gets worse,” says Ashok Veeraraghavan, an associate professor of electrical engineering and computer science at Rice. “It’s not ideal to image a room; it’s ideal to image an ID badge.”

The two methods are complementary: Time-of-flight gets you the room, the guy standing in that room, and maybe a hint of a badge. Speckle analysis reads the badge. Doing all that would require separate systems operating two lasers at different wavelengths, to avoid interference.

Today corner peeping by any method is still “lab bound,” says Veeraraghavan, largely because of interference from ambient light and other problems. To get the results that are being released today, the researchers had to work from just one meter away, under ideal lighting conditions. A possible way forward may be to try lasers that emit in the infrared.

Another intriguing possibility is to wait until the wireless world’s progress to ever-shorter wavelengths finally hits the millimeter band, which is small enough to resolve most identifying details. Cars and people, for instance.