Tag Archives: sensors

Can Sensors That Detect Coronavirus in the Air Help Economies Reopen Safely?

Post Syndicated from Emily Waltz original https://spectrum.ieee.org/the-human-os/sensors/chemical-sensors/devices-monitor-coronavirus-in-the-air


As businesses scramble to find ways to make workers and customers feel safe about entering enclosed spaces during a pandemic, several companies have proposed a solution: COVID-19 air monitoring devices. 

These devices suck in large quantities of air and trap aerosolized virus particles and anything else that’s present. The contents are then tested for the presence of the novel coronavirus, also known as SARS-CoV-2, which causes COVID-19. 

Several companies in the air quality and diagnostics sectors have quickly developed this sort of tech, with various iterations available on the market. The devices can be used almost anywhere, including office buildings, airplanes, hospitals, schools, and nursing homes, these companies say.

But the devices don’t deliver results in real time—they don’t beep to alert people nearby that the virus has been detected. Instead, the collected samples must be sent to a lab to be analyzed, typically with a method called PCR, or polymerase chain reaction. 

This process takes hours. Add to that the logistics of physically transporting the samples to a lab and it could be a day or more before results are available. Still, there’s value in day-old air quality information, say developers of this type of technology. 

“It’s not solving everything about COVID-19,” says Milan Patel, CEO of PathogenDx, a DNA-based testing company. But it does enable businesses to spot the presence of the virus without relying on people to self-report, and brings peace of mind to everyone involved, he says. “If you’re going into a building, wouldn’t it be great to know that they’re doing environmental monitoring?” Patel says.

PathogenDx this year developed an airborne SARS-CoV-2 detection system by combining its DNA testing capability with an air sampler from Bertin Instruments. The gooseneck-shaped instrument uses a cyclonic vortex that draws in a high volume of air and traps any particles inside in a liquid. Once the sample is collected, it must be sent to a PathogenDx lab, where it goes through a two-step PCR process. This amplifies the virus’s genetic code so that it can be detected. Adding a second step to the process improves the test’s sensitivity, Patel says. (PCR is also the gold standard for general human testing for COVID-19.)

Patel says he envisions the device proving particularly useful on airplanes, in large office buildings and health care facilities. On an airplane, for example, if the device picks up the presence of the virus during a flight, the airline can let passengers on that plane know that they were potentially exposed, he says. Or if the test comes back negative for the flight, the airline can “know that they didn’t just infect 267 passengers,” says Patel. 

In large office buildings, daily air sampling can give building managers a tool for early detection of the virus. As soon as the tests start coming back positive, the office managers could ask employees to work from home for a couple of weeks. Hospitals could use the device to track trends, identify trouble spots, and alert patients and staff of exposures.

Considering that many carriers of the virus don’t know they have it, or may be reluctant to report positive test results to every business they’ve visited, air monitoring could alert people to potential exposures in a way that contact tracing can’t. 

Other companies globally are putting forth their iterations on SARS-CoV-2 air monitoring. Sartorius in Göttingen, Germany, says its device was used to analyze the air in two hospitals in Wuhan, China. (Results: The concentration of the virus in isolation wards and ventilated patient rooms was very low, but it was higher in the toilet areas used by patients.)

Assured Bio Labs in Oak Ridge, Tennessee markets its air monitoring device as a way to help the American workforce get back to business. InnovaPrep in Missouri offers an air sampling kit called the Bobcat, and Eurofins Scientific in Luxembourg touts labs worldwide that can analyze such samples.

But none of the commercially available tests can offer real-time results. That’s something that Jing Wang and Guangyu Qiu at the Swiss Federal Institute of Technology (ETH Zurich) and Swiss Federal Laboratories for Materials Science and Technology, or Empa, are working on.

They’ve come up with a plasmonic photothermal biosensor that can detect the presence of SARS-CoV-2 without the need for PCR. Qiu, a sensor engineer and postdoc at ETH Zurich and Empa, says that with some more work, the device could provide results within 15 minutes to an hour. “We’re trying to simplify it to a lab on a chip,” says Qiu. 

The device combines an optical sensor and a photothermal component that harness localized surface plasmon resonance sensing transduction and the plasmonic photothermal effect. But before the device can be tested in the real world, the researchers must find a way, on-board the device, to separate the virus’s genetic material from its membrane. Qiu says he hopes to resolve this and have a prototype ready to test by the end of the year.

New Electronic Nose Sniffs Out Perfectly Ripe Peaches for Harvest

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/sensors/chemical-sensors/new-electronic-nose-sniffs-out-perfectly-ripe-peaches-for-harvest

Have you ever tried to guess the ripeness of a peach by its smell? Farmers with a well-trained nose may be able to detect the unique combination of alcohols, esters, ketones, and aldehydes, but even an expert may struggle to know when the fruit is perfect for the picking. To help with harvesting, scientists have been developing electronic noses for sniffing out the ripest and most succulent peaches. In a recent study, one such e-nose exceeded 98 percent accuracy.

Sergio Luiz Stevan Jr. and colleagues at Federal University of Technology – Paraná and State University of Ponta Grossa, in Brazil, developed the new e-nose system. Stevan notes that even within a single, large orchard, fruit on one tree may ripen at different times than fruit on another tree, thanks to microclimates of varying ventilation, rain, soil, and other factors. Farmers can inspect the fruit and make their best guess at the prime time to harvest, but risk losing money if they choose incorrectly.

Fortunately, peaches emit vaporous molecules, called volatile organic compounds, or VOCs. “We know that volatile organic compounds vary in quantity and type, depending on the different phases of fruit growth,” explains Stevan. “Thus, the electronic noses are an [option], since they allow the online monitoring of the VOCs generated by the culture.”

The e-nose system created by his team has a set of gas sensors sensitive to particular VOCs. The measurements are digitized and pre-processed in a microcontroller. Next, a pattern recognition algorithm is used to classify each unique combination of VOC molecules associated with three stages of peach ripening (immature, ripe, over-ripe). The data is stored internally on an SD memory card and transmitted via Bluetooth or USB to a computer for analysis.
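
For a rough sense of what that classification step can look like, here is a minimal Python sketch built around a nearest-neighbour classifier; the sensor channels, training values, and choice of algorithm are illustrative assumptions rather than the implementation used in the study.

```python
# A minimal sketch of this kind of pattern-recognition step. The sensor
# channels, training values, and choice of classifier are illustrative
# assumptions, not the implementation from the IEEE Sensors Journal study.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

STAGES = ["immature", "ripe", "over-ripe"]

# Each row is one "sniff": digitised readings from a few VOC-sensitive
# gas sensors (hypothetical channels for alcohols, esters, and ketones).
X_train = np.array([
    [0.12, 0.05, 0.03],   # immature
    [0.45, 0.30, 0.18],   # ripe
    [0.70, 0.55, 0.40],   # over-ripe
])
y_train = [0, 1, 2]

clf = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)

new_reading = np.array([[0.48, 0.28, 0.20]])
print(STAGES[clf.predict(new_reading)[0]])   # -> ripe
```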

The system is also equipped with a ventilation mechanism that draws in air from the surrounding environment at a constant rate. The passing air is subjected to a set level of humidity and temperature to ensure consistent measurements. The idea, Stevan says, is to deploy several of these “noses” across an orchard to create a sensing network. 

He notes several advantages of this system over existing ripeness-sensing approaches, including that it is online, conducts real-time continuous analyses in an open environment, and does not require direct handling of the fruit. “It is different from the other [approaches] present in the literature, which are generally carried out in the laboratory or warehouses, post-harvest or during storage,” he says. The e-nose system is described in a study published June 4 in IEEE Sensors Journal.

While the study shows that the e-nose system already has a high rate of accuracy at more than 98 percent, the researchers are continuing to work on its components, focusing in particular on improving the tool’s flow analysis. They have filed for a patent and are exploring the prospect of commercialization.

For those who prefer their fruits and grains in drinkable form, there is additional good news. Stevan says in the past his team has developed a similar e-nose for beer, to analyze both alcohol content and aromas. Now they are working on an e-nose for wine, as well as a variety of other fruits.

Monitoring bees with a Raspberry Pi and BeeMonitor

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/monitoring-bees-with-a-raspberry-pi-and-beemonitor/

Keeping an eye on bee life cycles is a brilliant example of how Raspberry Pi sensors help us understand the world around us, says Rosie Hattersley

The setup featuring an Arduino, RF receiver, USB cable and Raspberry Pi

Getting to design and build things for a living sounds like a dream job, especially if it also involves Raspberry Pi and wildlife. Glyn Hudson has always enjoyed making things and set up a company manufacturing open-source energy monitoring tools shortly after graduating from university. With access to several hives at his keen apiarist parents’ garden in Snowdonia, Glyn set up BeeMonitor using some of the tools he used at work to track the beehives’ inhabitants.


Glyn checking the original BeeMonitor setup

“The aim of the project was to put together a system to monitor the health of a bee colony by monitoring the temperature and humidity inside and outside the hive over multiple years,” explains Glyn. “Bees need all the help and love they can get at the moment and without them pollinating our plants, we’d struggle to grow crops. They maintain a 34°C core brood temperature (±0.5°C) even when the ambient temperature drops below freezing. Maintaining this temperature when a brood is present is a key indicator of colony health.”

Wi-Fi not spot

BeeMonitor has been tracking the hives’ population since 2012 and is one of the earliest examples of a Raspberry Pi project. Glyn built most of the parts for BeeMonitor himself. Open-source software developed for the OpenEnergyMonitor project provides a data-logging and graphing platform that can be viewed online.

Spectators in protective suits watching staff monitor the beehive

BeeMonitor complete with solar panel to power it. The Snowdonia bees produce 12 to 15 kg of honey per year

The hives were too far from the house for WiFi to reach, so Glyn used a low-power RF sensor connected to an Arduino which was placed inside the hive to take readings. These were received by a Raspberry Pi connected to the internet.


Diagram showing what information BeeMonitor is trying to establish

At first, there was both a DS18B20 temperature sensor and a DHT22 humidity sensor inside the beehive, along with the Arduino (setup info can be found here). Data from these was saved to an SD card, the obvious drawback being that this didn’t display real-time data readings. In his initial setup, Glyn also had to extract and analyse the CSV data himself. “This was very time-consuming but did result in some interesting data,” he says.

Sensor-y overload

Almost as soon as BeeMonitor was running successfully, Glyn realised he wanted to make the data live on the internet. This would enable him to view live beehive data from anywhere and also allow other people to engage with the data.

“This is when Raspberry Pi came into its own,” he says. He also decided to drop the DHT22 humidity sensor. “It used a lot of power and the bees didn’t like it – they kept covering the sensor in wax! Oddly, the bees don’t seem to mind the DS18B20 temperature sensor, presumably since it’s a round metal object compared to the plastic grille of the DHT22,” notes Glyn.

Bees interacting with the temperature probe

Unlike the humidity sensor, the bees don’t seem to mind the temperature probe

The system has been running for eight years with minimal intervention and is powered by an old car battery and a small solar PV panel. Running costs are negligible: “Raspberry Pi is perfect for getting projects like this up and running quickly and reliably using very little power,” says Glyn. He chose it because of the community behind the hardware. “That was one of Raspberry Pi’s greatest assets and what attracted me to the platform, as well as the competitive price point!” The whole setup cost him about £50.

Glyn tells us we could set up a basic monitor using Raspberry Pi, a DS18B20 temperature sensor, a battery pack, and a solar panel.
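
As a minimal sketch of such a basic monitor, a DS18B20 can be read over the Pi's 1-Wire interface with a few lines of Python. This assumes the 1-Wire interface is enabled (for example via raspi-config) and the sensor's data line is wired to GPIO 4 with a pull-up resistor; it is not Glyn's own code.

```python
# Minimal sketch: read a DS18B20 over the Raspberry Pi's 1-Wire bus.
# Assumes the 1-Wire interface is enabled and the sensor appears under
# /sys/bus/w1/devices/ as a directory named 28-xxxxxxxxxxxx.
import glob
import time

def read_temperature_c():
    device_file = glob.glob("/sys/bus/w1/devices/28-*/w1_slave")[0]
    with open(device_file) as f:
        lines = f.readlines()
    # The second line ends with "t=<temperature in millidegrees Celsius>".
    return int(lines[1].rsplit("t=", 1)[1]) / 1000.0

while True:
    print(f"Hive temperature: {read_temperature_c():.1f} °C")
    time.sleep(60)
```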

The post Monitoring bees with a Raspberry Pi and BeeMonitor appeared first on Raspberry Pi.

Demystifying the Standards for Automotive Ethernet

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/demystifying_the_standards_for_automotive_ethernet

As Ethernet celebrates its 40th anniversary, this webinar will discuss its deployment for automotive applications and the standards related to its implementation. It will explain the role of the Institute of Electrical and Electronics Engineers (IEEE), OPEN Alliance, and the Ethernet Alliance in governing the implementation.  

Key learnings:

  • Understand the role of IEEE, OPEN Alliance, and the Ethernet Alliance in the development of automotive Ethernet 
  • Determine the relevant automotive Ethernet specifications for PHY developers, original equipment manufacturers, and Tier 1 companies  
  • Review the role of automotive Ethernet relative to other emerging high-speed digital automotive standards 

Please contact [email protected] to request a PDH certificate code.

Sony Builds AI Into a CMOS Image Sensor

Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/view-from-the-valley/sensors/imagers/sony-builds-ai-into-a-cmos-image-sensor

Sony today announced that it has developed and is distributing smart image sensors. These devices use machine learning to process captured images on the sensor itself. They can then select only relevant images, or parts of images, to send on to cloud-based systems or local hubs.

This technology, says Mark Hanson, vice president of technology and business innovation for Sony Corp. of America, means practically zero latency between the image capture and its processing; low power consumption enabling IoT devices to run for months on a single battery; enhanced privacy; and far lower costs than smart cameras that use traditional image sensors and separate processors.

Sony’s San Jose laboratory developed prototype products using these sensors to demonstrate to future customers. The chips themselves were designed at Sony’s Atsugi, Japan, technology center. Hanson says that while other organizations have similar technology in development, Sony is the first to ship devices to customers.

Sony builds these chips by thinning and then bonding two wafers—one containing chips with light-sensing pixels and one containing signal processing circuitry and memory. This type of design is only possible because Sony is using a back-illuminated image sensor. In standard CMOS image sensors, the electronic traces that gather signals from the photodetectors are laid on top of the detectors. This makes them easy to manufacture, but sacrifices efficiency, because the traces block some of the incoming light. Back-illuminated devices put the readout circuitry and the interconnects under the photodetectors, adding to the cost of manufacture.

“We originally went to backside illumination so we could get more pixels on our device,” says Hanson. “That was the catalyst to enable us to add circuitry; then the question was what were the applications you could get by doing that.”

Hanson indicated that the initial applications for the technology will be in security, particularly in large retail situations requiring many cameras to cover a store. In this case, the amount of data being collected quickly becomes overwhelming, so processing the images at the edge would simplify that and cost much less, he says. The sensors could, for example, be programmed to spot people, carve out the section of the image containing the person, and only send that on for further processing. Or, he indicated, they could simply send metadata instead of the image itself, say, the number of people entering a building. The smart sensors can also track objects from frame to frame as a video is captured, for example, packages in a grocery store moving from cart to self-checkout register to bag.
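
Conceptually, that edge-filtering workflow looks something like the Python sketch below; the stub detector and message format are stand-ins for illustration, not Sony's on-sensor firmware or API.

```python
# Conceptual sketch of on-sensor filtering: detect people, then send either
# a cropped region of interest or just metadata. The detector below is a
# stand-in for a tiny on-chip ML model.
import numpy as np

def detect_people(frame):
    """Stand-in detector; returns bounding boxes as (x, y, w, h)."""
    return [(10, 20, 32, 64)] if frame.mean() > 0.5 else []

def process_frame(frame):
    boxes = detect_people(frame)
    if not boxes:
        return []                                 # nothing relevant: send nothing
    messages = [{"people_count": len(boxes)}]     # metadata-only option
    for (x, y, w, h) in boxes:                    # or forward only the crop
        messages.append({"roi": frame[y:y + h, x:x + w]})
    return messages

frame = np.full((480, 640), 0.8)                  # dummy image data
print(len(process_frame(frame)), "messages to send upstream")
```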

Remote surveillance, with the devices running on battery power, is another application that’s getting a lot of interest, he says, along with manufacturing companies that mix robots and people looking to use image sensors to improve safety. Consumer gadgets that use the technology will come later, but he expected developers to begin experimenting with samples.

Learn the Latest Standards and Test Requirements for Automotive Radar

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/learn_the_latest_standards_and_test_requirements_for_automotive_radar

Automotive radar is a key enabler for autonomous driving technology, and it is one of the key areas for new modulation technologies. Unlike wireless communication technologies, automotive radar has no regulations or definitions for modulation type, signal pattern, and duty cycle.  

This webinar addresses which standards apply to radar measurements and how to measure different radar signals using Keysight’s automotive radar test solution.

Key learnings:

  • How to set the right parameters for different chirp rates and duty cycle radar signals for power measurements.  
  • Learn how to test the Rx interference of your automotive radar.  
  • Understand how to use Keysight’s automotive test solution to measure radar signals. 

Please contact [email protected] to request a PDH certificate code.

Volvo and Lidar-maker Luminar to Deliver Hands-free Driving by 2022

Post Syndicated from Lawrence Ulrich original https://spectrum.ieee.org/cars-that-think/sensors/automotive-sensors/volvo-and-lidarmaker-luminar-to-deliver-handsfree-driving-by-2022

The race to bring self-driving cars to showrooms may have a new leader: Volvo Cars said it will partner with Silicon Valley-based lidar manufacturer Luminar to deliver genuine hands-free, eyes-off-the-road highway driving beginning in 2022. That so-called “Level 3” capability would be an industry first, as companies from Tesla to Uber thus far have failed to meet lofty promises to make autonomous driving a reality. 

Sweden’s Volvo, owned by Geely Holdings of China, said that models based on its upcoming SPA2 platform (for “Scalable Product Architecture”) will be hardware-enabled for Luminar’s roof-mounted lidar system. That includes the upcoming Polestar 3 from Volvo’s new electric-car division, and a range of Volvo-branded cars and SUVs. Henrik Green, chief technology officer for Volvo, said the optional “Highway Pilot” system would allow full autonomous driving, but only “when the car determines it is safe to do so.” 

“At that point, your Volvo takes responsibility for the driving and you can relax, take your eyes off the road and your hands off the wheel,” Green said. “Over time, updates over-the-air will expand the areas in which the car can drive itself. For us, a safe introduction of autonomy is a gradual introduction.” 

Luminar’s lidar system scans a car’s surroundings in real time, firing millions of pulses of light to create a virtual 3D map without relying on GPS or a network connection. Most experts agree that lidar is a critical linchpin of any truly autonomous car. A high-profile skeptic is Elon Musk, who has no plans to employ lidar in his Teslas, and scoffs at the technology as redundant and unnecessary.

Austin Russell, founder and chief executive of Luminar, disagrees. 

“If cameras could do everything you can do with lidar, great. But if you really want to get in the autonomous game, this is a clear requirement.” 

The 25-year-old Russell said his company’s high-performance lidar sensors, operating at 1550 nanometers and 64 lines of resolution, can spot even small and low-reflective objects—black cars, animals, a child darting across a road—at a range beyond 200 meters, and up to 500 meters for larger, brighter objects. The high-signal, low-noise system can deliver camera-level vision at 200 meters even in rain, fog or snow, he said. That range, Russell said, is critical to give cars the data and time to solve edge cases and make confident decisions, even when hurtling at freeway speeds.  

“Right now, you don’t have reliable interpretation,” with camera, GPS or radar systems, Russell said. “You can guess what’s ahead of you 99 percent of the time, but the problem is that last one percent. And a 99-percent pedestrian detection rate doesn’t cut it, not remotely close.” 

In a Zoom interview from Palo Alto, Russell held up two prototype versions, a roof-mounted unit roughly the size of a TV cable box, and a smaller unit that would mount on bumpers. The elegant systems incorporate a single laser and receiver, rather than the bulky, expensive, stacked arrays of lower-performance systems. Luminar built all its components and software in-house, Russell said, and is already on its eighth generation of ASIC chip controllers. 

The roof-mounted unit can deliver 120 degrees of forward vision, Russell said, plenty for Volvo’s application that combines lidar with cameras, radar, backup GPS and electrically controlled steering and braking. For future robotaxi applications with no human pilot aboard, cars would combine three to five lidar sensors to deliver full 360-degree vision. The unit will also integrate with Volvo’s Advanced Driver Assistance Systems (ADAS), improving such features as automated emergency braking, which global automakers have agreed to make standard in virtually all vehicles by 2022. 

The range and performance, Russell said, is key to solving the conundrums that have frustrated autonomous developers. 

The industry has hit a wall, unable to make the technical leap to Level 3, with cars so secure in their abilities that owners could perform tasks such as texting, reading or even napping. Last week, Audi officially quit its longstanding claims that its new A8 sedan would do just that, with its Traffic Jam Pilot system. Musk is luring consumers to pay $7,000 up-front for “Full Self-Driving” features that have yet to materialize, and that he claims would allow them to use their Teslas as money-making robotaxis. 

Current Level 2 systems, including Tesla’s Autopilot and Cadillac’s SuperCruise, often disengage when they can’t safely process their surroundings. Those disengagements, or even their possibility, demand that drivers continue to pay attention to the road at all times. And when systems do work effectively, drivers can fall out of the loop and be unable to quickly retake control. Russell acknowledged that those limitations leave current systems feeling like a parlor trick: If a driver must still pay full attention to the road, why not just keep driving the old-fashioned way? 

“Assuming perfect use, it’s fine as a novelty or convenience,” Russell said. “But it’s easy to get lulled into a sense of safety. Even when you’re really trying to maintain attention, to jump back into the game is very difficult.” 

Ideally, Russell said, a true Level 3 system would operate for years and hundreds of thousands of miles and never disengage without generous advance warning. 

“From our perspective, spontaneous handoffs that require human intervention are not safe,” he said. “It can’t be, ‘Take over in 500 milliseconds or I’m going to run into a wall.’” 

One revolution of Level 3, he said, is that drivers would actually begin to recover valuable time for productivity, rest or relaxation. The other is safety, and the elusive dream of sharply reducing roadway crashes that kill 1.35 million people a year around the world. 

Volvo cautioned that its Highway Pilot, whose price is up in the air, would initially roll out in small volumes, and steadily expand across its lineup. Volvo’s announcement included an agreement to possibly increase its minority stake in Luminar. But Luminar said it is also working with 12 of the world’s top 15 automakers in various stages of lidar development. And Russell said that, whichever manufacturer makes them, lidar-based safety and self-driving systems will eventually become an industry standard. 

“Driving adoption throughout the larger industry is the right move,” he said. “That’s how you save the most lives and create the most value.” 

Drones Use Radio Waves to Recharge Sensors While in Flight

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/sensors/remote-sensing/uavs-prove-usefuldelivering-remote-power-charging-services


Remote sensors play a valuable role in collecting data—but recharging these devices while they are scattered over vast and isolated areas can be tedious. A new system is designed to make the charging process easier by using unmanned aerial vehicles (UAVs) to deliver power using radio waves during a flyby. A specialized antenna on the sensor harvests the signals and converts them into electricity. The design is described in a study published 23 March in IEEE Sensors Letters.

Mirror Arrays Make Augmented Reality More Realistic

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/sensors/imagers/mirrors-augmented-reality-more-realistic


In the world of augmented reality (AR), real and virtual images are combined to create immersive environments for users. While the technology is advancing, it has remained challenging to make virtual images appear more “solid” in front of real-world objects, an effect that’s essential for depth perception.

Now, a team of researchers has developed a compact AR system that, using an array of miniature mirrors that switch positions tens of thousands of times per second, can create this elusive effect. They describe their new system in a study published February 13 in IEEE Transactions on Visualization and Computer Graphics.

Occlusion occurs when light from objects in the foreground blocks the light from objects farther away. Commercialized AR systems have limited occlusion because they tend to rely on a single spatial light modulator (SLM) to create the virtual images, while allowing natural light from real-world objects to also be perceived by the user.

“As can be seen with many commercial devices such as the Microsoft HoloLens or Magic Leap, not blocking high levels of incoming light [from real-world objects] means that virtual content becomes so transparent it becomes difficult to even see,” explains Brooke Krajancich, a researcher at Stanford University. This lack of occlusion can interfere with the user’s depth perception, which would be problematic in tasks requiring precision, such as AR-assisted surgery.

To achieve better occlusion, some researchers have been exploring the possibility of a second SLM that controls the incoming light of real-world objects. However, incorporating two SLMs into one system involves a lot of hardware, making it bulky. Instead, Krajancich and her colleagues developed a new design that combines virtual projection and light-blocking abilities into one element.

Their design relies on a dense array of miniature mirrors that can be individually flipped between two states—one that allows light through and one that reflects light—at a rate of up to tens of thousands of times per second.

“Our system uses these mirrors to switch between a see-through state, which allows the user to observe a small part of the real world, and a reflective state, where the same mirror blocks light from the scene in favor of an [artificial] light source,” explains Krajancich. The system computes the optimal arrangement for the mirrors and adjusts accordingly.
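
As a toy illustration of that per-mirror decision, the sketch below derives a reflect-versus-see-through mask from the opacity of the virtual content; the threshold is an assumed parameter, and the real system recomputes this arrangement tens of thousands of times per second.

```python
# Toy illustration: wherever the virtual content should occlude the real
# scene, flip the mirror to its reflective state; elsewhere leave it
# see-through. The 0.5 threshold is an assumption for the example.
import numpy as np

def mirror_states(virtual_alpha, threshold=0.5):
    """virtual_alpha: HxW opacity map in [0, 1]; True means 'reflect'."""
    return virtual_alpha > threshold

alpha = np.zeros((4, 6))
alpha[1:3, 2:5] = 0.9          # a small, mostly opaque virtual object
print(mirror_states(alpha).astype(int))
```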

Krajancich notes some trade-offs with this approach, including some challenges in rendering colors properly. It also requires a lot of computing power, and therefore may require higher power consumption than other AR systems. While commercialization of this system is a possibility in the future, she says, the approach is still in the early research stages.

New Gyroscope Design Will Help Autonomous Cars and Robots Map the World

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/tech-talk/sensors/automotive-sensors/new-gyroscope-design-will-help-autonomous-cars-robots-map-world

As useful as GPS is, it’s not a system that we can rely on all the time. There are many situations in which GPS might not be available, such as when you’re inside a building, driving through a tunnel, on a battlefield, in a mine, underwater, or in space.

Systems that need to track location, like your cell phone, an autonomous car, or a submarine, often use what are called Inertial Measurement Units (a combination of accelerometers and gyroscopes) to estimate position through dead reckoning by tracking changes in acceleration and rotation. The accuracy of these location estimates depends on how accurate the sensors in those IMUs are. Unfortunately, gyroscopes that can do a good job of tracking rotation over long periods of time are too big and too expensive for most commercial or consumer-grade systems.
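
To see why gyroscope quality matters so much here, consider a simplified planar dead-reckoning sketch: heading comes from integrating the gyroscope's yaw rate, and position from integrating acceleration, so any sensor bias compounds over time. The update model and sample data below are illustrative only.

```python
# Simplified planar dead reckoning: integrate yaw rate for heading and
# forward acceleration for speed, then accumulate position. Any bias in
# the sensors makes the position estimate drift, which is why better
# gyroscopes directly translate into better navigation.
import math

def dead_reckon(samples, dt):
    """samples: (yaw_rate_rad_per_s, forward_accel_m_per_s2) pairs at interval dt."""
    heading = speed = x = y = 0.0
    for yaw_rate, accel in samples:
        heading += yaw_rate * dt
        speed += accel * dt
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
    return x, y, heading

# One second of gentle acceleration while turning slightly left, at 100 Hz.
print(dead_reckon([(0.05, 0.2)] * 100, dt=0.01))
```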

Researchers at the University of Michigan, led by Khalil Najafi and funded by DARPA, have presented a paper at the 7th IEEE International Symposium on Inertial Sensors & Systems on a new type of gyroscope called a precision shell integrating (PSI) gyroscope, which the authors say is “10,000 times more accurate but only 10 times more expensive than gyroscopes used in your typical cell phone.” It offers the performance of much larger gyroscopes at one thousandth the cost, meaning that high-precision location awareness, indoor navigation, and long-term autonomy may soon be much more achievable for mobile devices and underground robots.

Arm Flexible Access for Automotive Applications

Post Syndicated from ARM original https://spectrum.ieee.org/sensors/automotive-sensors/arm_flexible_access_for_automotive_applications


The power of computing has profoundly influenced our lives and we place a high degree of trust in our electronic systems every day. In application areas such as automotive, the consequences of a system failing or going wrong can be serious and even potentially life-threatening. Trust in automotive systems is crucial and only possible because of functional safety: a system’s ability to detect, diagnose and safely mitigate the occurrence of any fault. 
  
The application of safety is critical in the automotive industry, but how do we then balance safety and innovation in this rapidly transforming industry? 
  
Arm Flexible Access lowers the barriers to rapid innovation and opens the doors to leading technology with access to a wide range of Arm IP, support, tools, and training. Arm Flexible Access adds safety technologies and features to make it easier for developers in the automotive industry and other safety-related industries to create SoCs and attain certification for industry safety standards. 
  
Arm Flexible Access also enables more experimentation for those designing innovative automotive systems, which may involve self-driving capabilities or electrification of the powertrain, allowing developers to find the most efficient way of balancing functionality with the level of safety required. Different architectures and mixtures of IP can be accessed, evaluated, optimized, and redesigned to achieve the best solution, which can then be taken to certification more easily.
  
To see the range of IP available in Arm Flexible Access suited for automotive applications, please visit our Automotive specific Flexible Access page.

Prophesee’s Event-Based Camera Reaches High Resolution

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/sensors/imagers/prophesees-eventbased-camera-reaches-high-resolution

There’s something inherently inefficient about the way video captures motion today. Cameras capture frame after frame at regular intervals, but most of the pixels in those frames don’t change from one to the other, and whatever is moving in those frames is only captured episodically.

Event-based cameras work differently; their pixels only react if they detect a change in the amount of light falling on them. They capture motion better than any other camera, while generating only a small amount of data and burning little power.

Paris-based startup Prophesee has been developing and selling event-based cameras since 2016, but the applications for their chips were limited. That’s because the circuitry surrounding the light-sensing element took up so much space that the imagers had a fairly low resolution. In a partnership announced this week at the IEEE International Solid-State Circuits Conference in San Francisco, Prophesee used Sony technology to put that circuitry on a separate chip that sits behind the pixels.

“Using the Sony process, which is probably the most advanced process, we managed to shrink the pixel pitch down to 4.86 micrometers” from their previous 15 micrometers, says Luca Verre, the company’s cofounder and CEO.

The resulting 1280 x 720 HD event-based imager is suitable for a much wider range of applications. “We want to enter the space of smart home cameras and smart infrastructure, [simultaneous localization and mapping] solutions for AR/VR, 3D sensing for drones or industrial robots,” he says.

The company is also looking to enter the automotive market, where imagers need a high dynamic range to deal with the big differences between day and night driving. “This is where our technology excels,” he says.

Besides the photodiode, each pixel requires circuits to change the diode’s current into a logarithmic voltage and determine if there’s been an increase or decrease in luminosity. It’s that circuitry that Sony’s technology puts on a separate chip that sits behind the pixels and is linked to them by a dense array of copper connections. Previously, the photodiode made up only 25 percent of the area of the pixel; now it’s 77 percent.

When a pixel detects a change (an event), all that is output is the location of the pixel, the polarity of the change, and a 1-microsecond-resolution time stamp. The imager consumes 32 milliwatts to register 100,000 events per second and ramps up to just 73 milliwatts at 300 million events per second. A system that dynamically compresses the event data allows the chip to sustain a rate of more than 1 billion events per second.
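
To get a feel for how downstream software might consume such a stream, here is a minimal Python sketch that accumulates events from a short time window into a frame-like array; the tuple format and windowing are assumptions for illustration, not Prophesee's actual output format or SDK.

```python
# Sketch: each event is (x, y, polarity, timestamp_us). Summing events over
# a short window gives a frame-like view of what changed and in which
# direction, one common way to feed conventional vision code.
import numpy as np

WIDTH, HEIGHT = 1280, 720

def accumulate(events, window_us=10_000, t_start=0):
    frame = np.zeros((HEIGHT, WIDTH), dtype=np.int16)
    for x, y, polarity, t_us in events:
        if t_start <= t_us < t_start + window_us:
            frame[y, x] += 1 if polarity else -1
    return frame

events = [(640, 360, 1, 1_200), (641, 360, 1, 1_450), (100, 50, 0, 9_000)]
print(accumulate(events).sum())   # net brightness changes in the window
```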

Advanced Battery Manufacturing Techniques and Automotive Ethernet Webinars

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/advanced_battery_manufacturing_techniques_and_automotive_ethernet_webinars

Keysight Technologies is a world leader in the field of electronic test and measurement including a focus on automotive and energy test solutions. The Keysight Engineering Education (KEE) webinar series provides valuable information to electrical engineers on a wide variety of automotive and energy topics ranging from in-vehicle networking to battery cell manufacturing and test to autonomous driving technologies. Come learn some of the latest test challenges and solutions from technical experts at Keysight and our partners.

Register for one or both of these upcoming webinars:

  • How to Reduce Battery Cell Manufacturing Time – Thursday, March 12
  • Demystifying the Standards for Automotive Ethernet – Wednesday, April 22

Velodyne Will Sell a Lidar for $100

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/cars-that-think/sensors/automotive-sensors/velodyne-will-sell-a-lidar-for-100

Velodyne claims to have broken the US $100 barrier for automotive lidar with its tiny Velabit, which it unveiled at CES earlier this month.

“Claims” is the mot juste because this nice, round dollar amount is an estimate based on the mass-manufacturing maturity of a product that has yet to ship. Such a factoid would hardly be worth mentioning had it come from some of the several-score odd lidar startups that haven’t shipped anything at all. But Velodyne created this industry back during DARPA-funded competitions, and has been the market leader ever since.

Seeing Around the Corner With Lasers—and Speckle

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/cars-that-think/sensors/automotive-sensors/seeing-around-corner-lasers-speckle

Researchers from Rice, Stanford, Princeton, and Southern Methodist University have developed a new way to use lasers to see around corners that beats the previous technique on resolution and scanning speed. The findings appear today in the journal Optica.

The U.S. military—which funded the work through DARPA grants—is interested for obvious reasons, and NASA wants to use it to image caves, perhaps doing so from orbit. The technique might one day also let rescue workers peer into earthquake-damaged buildings and help self-driving cars navigate tricky intersections.

One day. Right now it’s a science project, and any application is years away. 

The original way of corner peeping, dating to 2012, studies the time it takes laser light to go to a reflective surface, onward to an object and back again. Such time-of-flight measurement requires hours of scanning time to produce a resolution measured in centimeters. Other methods have since been developed that look at reflected light in an image to infer missing parts.
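
The arithmetic behind time-of-flight sensing is simple even though the reconstruction is not: the total optical path length is just the speed of light multiplied by the measured round-trip time. A minimal sketch of that conversion, to show the scales involved:

```python
# Back-of-the-envelope time-of-flight: total optical path length equals the
# speed of light times the measured round-trip time. Recovering hidden
# geometry from many such measurements is the slow, hard part.
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def path_length_m(round_trip_seconds):
    return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds

# A 10-nanosecond round trip corresponds to roughly 3 metres of total path.
print(f"{path_length_m(10e-9):.2f} m")
```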

The latest method looks instead at speckle, a shimmering interference pattern that in many laser applications is a bug; here it is a feature because it contains a trove of spatial information. To get the image hidden in the speckle—a process called non-line-of-sight correlography—involves a bear of a calculation. The researchers used deep-learning methods to accelerate the analysis. 

“Image acquisition takes a quarter of a second, and we’re getting sub-millimeter resolution,” says Chris Metzler, the leader of the project, who is a post-doc in electrical engineering at Stanford. He did most of the work while completing a doctorate at Rice.

The problem is that the system can achieve these results only by greatly narrowing the field of view.

“Speckle encodes interference information, and as the area gets larger, the resolution gets worse,” says Ashok Veeraraghavan, an associate professor of electrical engineering and computer science at Rice. “It’s not ideal to image a room; it’s ideal to image an ID badge.”

The two methods are complementary: Time-of-flight gets you the room, the guy standing in that room, and maybe a hint of a badge. Speckle analysis reads the badge. Doing all that would require separate systems operating two lasers at different wavelengths, to avoid interference.

Today corner peeping by any method is still “lab bound,” says Veeraraghavan, largely because of interference from ambient light and other problems. To get the results that are being released today, the researchers had to work from just one meter away, under ideal lighting conditions. A possible way forward may be to try lasers that emit in the infrared.

Another intriguing possibility is to wait until the wireless world’s progress to ever-shorter wavelengths finally hits the millimeter band, which is small enough to resolve most identifying details. Cars and people, for instance.

Good Buoy: the Raspberry Pi Smart Buoy

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/good-buoy-the-raspberry-pi-smart-buoy/

As their new YouTube video shows, the team at T3ch Flicks have been hard at work, designing and prototyping a smart buoy for marine conservation research.

Smart-Buoy Series [Summary]

We all love the seaside, right? Whether that’s the English seaside with ice creams and muddy piers or the Caribbean, with white sand beaches fringed by palm trees, people flock to the coast for a bit of rest and relaxation, to enjoy water sports or to make their livelihood.

What does a smart buoy do?

“The sensors onboard the smart buoy enable it to measure wave height, wave period, wave power, water temperature, air temperature, air pressure, voltage, current usage and GPS location,” explain T3ch Flicks on their project tutorial page. “All the data the buoy collects is sent via radio to a base station, which is a Raspberry Pi. We made a dashboard to display them using Vue JS.”
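
For a rough idea of how raw motion samples can be turned into wave height and period figures like these, here is an illustrative Python sketch using a simple zero-upcrossing analysis; the method and the synthetic sample data are assumptions, not the team's on-board code.

```python
# Illustrative zero-upcrossing analysis: treat each upward zero crossing of
# the buoy's vertical displacement as the start of a new wave, then average
# the peak-to-trough heights and crossing intervals.
import numpy as np

def wave_stats(heave_m, sample_rate_hz):
    heave_m = np.asarray(heave_m) - np.mean(heave_m)
    ups = np.where((heave_m[:-1] < 0) & (heave_m[1:] >= 0))[0]
    if len(ups) < 2:
        return None
    heights = [heave_m[a:b].max() - heave_m[a:b].min()
               for a, b in zip(ups[:-1], ups[1:])]
    periods = np.diff(ups) / sample_rate_hz
    return {"wave_height_m": float(np.mean(heights)),
            "wave_period_s": float(np.mean(periods))}

t = np.arange(0, 60, 0.25)                                 # one minute at 4 Hz
print(wave_stats(0.5 * np.sin(2 * np.pi * t / 6), 4))      # ~1 m waves, ~6 s period
```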

But why build a smart buoy to begin with? “The coast is a dynamic area at the mercy of waves. Rising sea levels nibble at beaches and powerful extreme events like hurricanes completely decimate them,” they go on to explain. “To understand how to save them, we need to understand the forces driving their change.”

The 3D-printed casing of the smart buoy with tech inside

It’s a pretty big ask of a 3D-printed dome but, with the aid of an on-board Raspberry Pi, Arduino and multiple sensors, their project was a resounding success. So much so that the Grenadian government gave the team approval to set the buoy free along their coast, and even made suggestions of how the project could be improved to aid them in their own research – pretty cool, right?

The smart buoy out at sea along the Grenada coast

The project uses a lot of tech. A lot. So, instead of listing it here, why not head over to the hackster.io project page, where you’ll find all the ingredients you need to build your own smart buoy.

Good luck to the T3ch Flicks team. We look forward to seeing how the project develops.

The post Good Buoy: the Raspberry Pi Smart Buoy appeared first on Raspberry Pi.

Fingerprinting iPhones

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/05/fingerprinting_7.html

This clever attack allows someone to uniquely identify a phone when you visit a website, based on data from the accelerometer, gyroscope, and magnetometer sensors.

We have developed a new type of fingerprinting attack, the calibration fingerprinting attack. Our attack uses data gathered from the accelerometer, gyroscope and magnetometer sensors found in smartphones to construct a globally unique fingerprint. Overall, our attack has the following advantages:

  • The attack can be launched by any website you visit or any app you use on a vulnerable device without requiring any explicit confirmation or consent from you.
  • The attack takes less than one second to generate a fingerprint.
  • The attack can generate a globally unique fingerprint for iOS devices.
  • The calibration fingerprint never changes, even after a factory reset.
  • The attack provides an effective means to track you as you browse across the web and move between apps on your phone.

* Following our disclosure, Apple has patched this vulnerability in iOS 12.2.

Research paper.

Build your own weather station with our new guide!

Post Syndicated from Richard Hayler original https://www.raspberrypi.org/blog/build-your-own-weather-station/

One of the most common enquiries I receive at Pi Towers is “How can I get my hands on a Raspberry Pi Oracle Weather Station?” Now the answer is: “Why not build your own version using our guide?”

Build Your Own weather station kit assembled

Tadaaaa! The BYO weather station fully assembled.

Our Oracle Weather Station

In 2016 we sent out nearly 1000 Raspberry Pi Oracle Weather Station kits to schools from around the world who had applied to be part of our weather station programme. In the original kit was a special HAT that allows the Pi to collect weather data with a set of sensors.


The original Raspberry Pi Oracle Weather Station HAT

We designed the HAT to enable students to create their own weather stations and mount them at their schools. As part of the programme, we also provide an ever-growing range of supporting resources. We’ve seen Oracle Weather Stations in great locations with huge differences in climate, and they’ve even recorded the effects of a solar eclipse.

Our new BYO weather station guide

We only had a single batch of HATs made, and unfortunately we’ve given nearly* all the Weather Station kits away. Not only are the kits really popular, but we also receive lots of questions about how to add extra sensors or how to take more precise measurements of a particular weather phenomenon. So today, to satisfy your demand for a hackable weather station, we’re launching our Build your own weather station guide!

Build Your Own Raspberry Pi weather station

Fun with meteorological experiments!

Our guide suggests the use of many of the sensors from the Oracle Weather Station kit, so you can build a station that’s as close as possible to the original. As you know, the Raspberry Pi is incredibly versatile, and we’ve made it easy to hack the design in case you want to use different sensors.

Many other tutorials for Pi-powered weather stations don’t explain how the various sensors work or how to store your data. Ours goes into more detail. It shows you how to put together a breadboard prototype, it describes how to write Python code to take readings in different ways, and it guides you through recording these readings in a database.
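
As a taste of that workflow, here is a minimal Python sketch that stores periodic readings in a database. It uses SQLite from the standard library and a stubbed sensor read, so the details differ from the guide's own setup, but the read-then-insert loop is the same idea.

```python
# Minimal "take a reading, store it" loop. The stubbed read_temperature()
# stands in for whichever sensor you wire up; swap in a real driver there.
import random
import sqlite3
import time

def read_temperature():
    return 20.0 + random.uniform(-0.5, 0.5)   # stub: replace with a sensor read

conn = sqlite3.connect("weather.db")
conn.execute("""CREATE TABLE IF NOT EXISTS readings
                (taken_at TEXT, temperature_c REAL)""")

for _ in range(3):
    conn.execute("INSERT INTO readings VALUES (datetime('now'), ?)",
                 (read_temperature(),))
    conn.commit()
    time.sleep(1)

print(conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0], "rows stored")
```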

Build Your Own Raspberry Pi weather station on a breadboard

There’s also a section on how to make your station weatherproof. And in case you want to move past the breadboard stage, we also help you with that. The guide shows you how to solder together all the components, similar to the original Oracle Weather Station HAT.

Who should try this build

We think this is a great project to tackle at home, at a STEM club, Scout group, or CoderDojo, and we’re sure that many of you will be chomping at the bit to get started. Before you do, please note that we’ve designed the build to be as straightforward as possible, but it’s still fairly advanced in terms of both electronics and programming. You should read through the whole guide before purchasing any components.

Build Your Own Raspberry Pi weather station – components

The sensors and components we’re suggesting balance cost, accuracy, and ease of use. Depending on what you want to use your station for, you may wish to use different components. Similarly, the final soldered design in the guide may not be the most elegant, but we think it is achievable for someone with modest soldering experience and basic equipment.

You can follow our guide to build a functioning weather station without soldering, but the build will be more durable if you do solder it. If you’ve never tried soldering before, that’s OK: we have a Getting started with soldering resource plus video tutorial that will walk you through how it works step by step.

Prototyping HAT for Raspberry Pi weather station sensors

For those of you who are more experienced makers, there are plenty of different ways to put the final build together. We always like to hear about alternative builds, so please post your designs in the Weather Station forum.

Our plans for the guide

Our next step is publishing supplementary guides for adding extra functionality to your weather station. We’d love to hear which enhancements you would most like to see! Our current ideas under development include adding a webcam, making a tweeting weather station, adding a light/UV meter, and incorporating a lightning sensor. Let us know which of these is your favourite, or suggest your own amazing ideas in the comments!

*We do have a very small number of kits reserved for interesting projects or locations: a particularly cool experiment, a novel idea for how the Oracle Weather Station could be used, or places with specific weather phenomena. If you have such a project in mind, please send a brief outline to [email protected], and we’ll consider how we might be able to help you.

The post Build your own weather station with our new guide! appeared first on Raspberry Pi.