Tag Archives: sensors/automotive-sensors

Accelerating the journey to autonomous for carmakers

Post Syndicated from ARM Automotive original https://spectrum.ieee.org/sensors/automotive-sensors/accelerating-the-journey-to-autonomous-for-carmakers

A child runs into the street and stands frozen, terrified, as a car speeds toward her. Which is safer: relying on a car with autonomous capabilities, or on a human driver, to brake in time?

In that situation, it takes the average driver 1.5 seconds to react and hit the brakes; it takes a car equipped with vision systems, radar and lidar just 0.5 seconds. That 3x faster response time can mean the difference between life and death, and it’s one of many reasons the automotive industry is accelerating its journey to autonomy. Whether the aim is safety, efficiency, comfort, navigation or the more efficient manufacture of an autonomous car, automotive companies are increasingly embracing autonomous technologies.
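
To put those numbers in perspective, consider how far a car travels before braking even begins. The sketch below is a back-of-the-envelope illustration using the reaction times above; the 50 km/h urban speed is our assumption for the example, not a figure from the article:

```python
# Illustrative only: distance covered during the reaction time, before
# any braking occurs. Reaction times are the figures above; the 50 km/h
# urban speed is an assumption for the example.
def reaction_distance_m(speed_kmh: float, reaction_time_s: float) -> float:
    """Distance traveled at constant speed while the driver (or system) reacts."""
    return (speed_kmh / 3.6) * reaction_time_s  # km/h -> m/s, then d = v * t

for label, t in [("human driver", 1.5), ("autonomous system", 0.5)]:
    print(f"{label}: {reaction_distance_m(50.0, t):.1f} m before braking begins")

# human driver: 20.8 m before braking begins
# autonomous system: 6.9 m before braking begins
```

At 50 km/h, the one-second difference in reaction time is roughly 14 meters of travel, often the difference between stopping short and not.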

But this wave of interest in autonomous technologies does not come without challenges. Disparate systems and a dizzying range of software and hardware choices can make charting a path to autonomy difficult. For example, many current solutions for automated driving are based on bulky, high-power, costly standalone chips. Others require proprietary solutions, which severely constrain design flexibility. To make automated driving suitable for widespread commercial use, Tier 1 suppliers and OEMs are looking for more power-efficient and cost-effective solutions suited to mass production.

To embrace this vision, we need to understand that autonomous workloads are fundamentally different from traditional workloads, and confront the challenges that complexity presents. The incorporation of AI and machine learning means the workload magnitude is far greater than that of, for example, the narrow scope of operation of contemporary ADAS (advanced driver assistance systems).

Grappling with complexity

Autonomous workloads are also more complex than traditional ones because of the overlapping and interconnected nature of systems in industrial applications. A machine vision system, with its own set of data and algorithms, may need to integrate with robot vehicles, alerting them that a task has been completed and pickup is required.

Integration may also be required for other points along the supply chain. For example, vehicles rolling down the manufacturing line will benefit from just-in-time parts arrivals thanks to the more integrated, near real-time connection with the supply chain.

These technologies also need to support features that help systems meet both ASIL B and ASIL D safety requirements, as defined in ISO 26262.

New thinking, new technology

All of this requires a fundamental reconsideration of design, validation and configuration as well as embracing emerging technologies and methodologies.

As we evolve toward increasingly autonomous deployment, we need new hardware and software systems to replace some of the commercial offerings used today, and a fundamental shift from homogeneous to heterogeneous systems. These new systems must not only deliver flexibility but also help reduce power consumption, size, cost and heat while retaining the performance needed to run autonomous workloads. And such systems will benefit from a flourishing ecosystem coalescing around these new design approaches, offering customers the choice and flexibility to unleash their innovations.

Our mission at Arm is to deliver the underlying technologies and nurture an ecosystem to enable industries such as automotive to accelerate their journey to autonomy and reap the rewards of delivering new, transformative products to their customers.

In this market deep dive, we outline the technology challenges and the hardware and software solutions that can propel the automotive industry forward in its journey to a safer, more efficient and more prosperous future.

Get More From Your Automotive Electronics Test Investment

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/get_more_from_your_automotive_electronics_test_investment


Explore the powerful software behind Keysight’s high-precision hardware and discover how to meet emerging automotive electronics test requirements while driving higher ROI from your existing hardware. Let Keysight help you cross the finish line ahead of your competition. Jump-start your automotive innovation today with a complimentary software trial.

Toshiba’s Light Sensor Paves the Way for Cheap Lidar

Post Syndicated from John Boyd original https://spectrum.ieee.org/cars-that-think/sensors/automotive-sensors/toshibas-light-sensor-highresolution-lidar

The introduction of fully autonomous cars has slowed to a crawl. Nevertheless, technologies such as rearview cameras and automatic self-parking systems are helping the auto industry make incremental progress toward Level 4 autonomy while boosting driver-assist features along the way.

To that end, Toshiba has developed a compact, highly efficient silicon photomultiplier (SiPM) that enables non-coaxial Lidar to employ off-the-shelf camera lenses, lowering costs and helping to bring about solid-state, high-resolution Lidar.

Automotive Lidar (light detection and ranging) typically uses spinning lasers to scan a vehicle’s environment through 360 degrees, bouncing laser pulses off surrounding objects and measuring the return time of the reflected light to calculate their distances and shapes. The resulting point-cloud map can be used in combination with still images, radar data and GPS to create a virtual 3D map of the area the vehicle is traveling through.
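
The distance calculation behind that point cloud is simple time-of-flight arithmetic. Here is a minimal sketch (ours, not Toshiba’s implementation): the range to an object is the round-trip time multiplied by the speed of light, divided by two.

```python
# Minimal time-of-flight sketch: a return arriving t seconds after the
# pulse was emitted places the object at distance c * t / 2, because the
# light travels out and back.
C = 299_792_458.0  # speed of light, m/s

def tof_distance_m(round_trip_time_s: float) -> float:
    return C * round_trip_time_s / 2.0

# A return after ~1.33 microseconds corresponds to an object ~200 m away,
# the range Toshiba reports in its field trials.
print(f"{tof_distance_m(1.33e-6):.1f} m")  # ~199.4 m
```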

However, high-end Lidar systems can be expensive, costing $80,000 or more, though cheaper versions are also available. The current leader in the field is Velodyne, whose lasers mechanically rotate in a tower mounted atop a vehicle’s roof.

Solid-state Lidar systems have been announced in the past several years but have yet to challenge the mechanical variety. Now, Toshiba hopes to advance their cause with its SiPM: a solid-state light sensor employing single-photon avalanche diode (SPAD) technology. The Toshiba SiPM contains multiple SPADs, each controlled by an active quenching circuit (AQC). When a SPAD detects a photon, the SPAD’s cathode voltage drops; the AQC then resets the SPAD voltage to its initial value.

“Typical SiPM recovery time is 10 to 20 nanoseconds,” says Tuan Thanh Ta, Toshiba’s project leader for the technology. “We’ve made it 2 to 4 times faster by using this forced or active quenching method.”
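
To see why recovery time matters, consider the textbook non-paralyzable dead-time model (our illustrative assumption, not Toshiba’s published analysis): each detection blinds the sensor for the recovery period, so a slower SPAD registers a smaller fraction of incoming photons as the photon rate rises.

```python
# Why faster quenching helps, sketched with the standard non-paralyzable
# dead-time model. The dead times below are illustrative: ~15 ns sits in
# the typical 10-20 ns range Ta cites; ~5 ns reflects a 2-4x speedup.
def measured_rate(true_rate_hz: float, dead_time_s: float) -> float:
    """Counts per second actually registered when each count blinds the
    detector for dead_time_s."""
    return true_rate_hz / (1.0 + true_rate_hz * dead_time_s)

true_rate = 50e6  # 50 million photons per second arriving
for dead_time in (15e-9, 5e-9):
    r = measured_rate(true_rate, dead_time)
    print(f"dead time {dead_time * 1e9:.0f} ns -> {r / 1e6:.1f} Mcounts/s")

# dead time 15 ns -> 28.6 Mcounts/s
# dead time 5 ns -> 40.0 Mcounts/s
```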

The increased efficiency means Toshiba has been able to use far fewer light-sensing cells—down from 48 to just 2—to produce a device measuring 25 μm × 90 μm, much smaller, the company says, than standard devices measuring 100 μm × 100 μm. The small size of these sensors has allowed Toshiba to create a dense two-dimensional array for high sensitivity, a requisite for long-range scanning.

But such high-resolution data would ordinarily require impractically large multichannel readout circuitry, comprising separate analog-to-digital converters (ADCs) for long-distance scanning and time-to-digital converters (TDCs) for short distances. Toshiba has overcome this problem by realizing both ADC and TDC functions in a single circuit. The result is an 80 percent reduction in size (down to 50 μm by 60 μm) over conventional dual data converter chips.

“Our field trials using the SiPM with a prototype Lidar demonstrated the system’s effectiveness up to a distance of 200 meters while maintaining high resolution—which is necessary for Level 4 autonomous driving,” says Ta, who hails from Vietnam. “This is roughly quadruple the capability of solid-state Lidar currently on the market.”

“Factors like detection range are important, especially in high-speed environments like highways,” says Michael Milford, an interdisciplinary researcher and Deputy Director, Queensland University of Technology (QUT) Center for Robotics in Australia. “So I can see that these [Toshiba trial results] are important properties when it comes to commercial relevance.” 

And because Toshiba’s SiPM employs a two-dimensional array—unlike the one-dimensional photon receivers used in coaxial Lidar systems—Ta points out that its aspect ratio corresponds to that of the light sensors used in commercial cameras. Consequently, off-the-shelf standard, telephoto and wide-angle lenses can be used for specific applications, helping to further reduce costs.

By comparison, coaxial Lidar systems use the same optical path for the transmitted and received light, and so require a costly customized lens to transmit the pulses, collect the reflected light and send it to the 1D photon receivers, Ta explains.

But as Milford points out, “If Level 4+ on-road autonomous driving becomes a reality, it’s unlikely we’ll have a wide range of solutions and sensor configurations. So this flexibility [of Toshiba’s sensor] is perhaps more relevant for other domains like drones, and off-road autonomous vehicles, where there is more variability, and use of existing hardware is more important.”

Meanwhile, Toshiba is working to improve the performance quality of its SiPM. “We aim to introduce practical applications in fiscal 2022,” says Akihide Sai, a Senior Research Scientist at Toshiba overseeing the SiPM project. Though he declines to answer whether Toshiba is working with automakers or will produce its own solid-state Lidar system, he says Toshiba will make the SiPM device available to other companies. 

He adds, “We also see it being used in applications such as automated navigation of drones and in robots used for infrastructure monitoring, as well as in applications used in factory automation.”

But QUT’s Milford makes this telling point. “Many of the advances in autonomous vehicle technology and sensing, especially with regards to cost, bulk, and power usage, will become most relevant only when the overall challenge of Level 4+ driving is solved. They aren’t themselves the critical missing pieces for enabling this to happen.”

Demystifying the Standards for Automotive Ethernet

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/demystifying_the_standards_for_automotive_ethernet

As Ethernet celebrates its 40th anniversary, this webinar will discuss its deployment for automotive applications and the standards related to its implementation. It will explain the role of the Institute of Electrical and Electronics Engineers (IEEE), OPEN Alliance, and the Ethernet Alliance in governing the implementation.  

Key learnings:

  • Understand the role of IEEE, OPEN Alliance, and the Ethernet Alliance in the development of automotive Ethernet 
  • Determine the relevant automotive Ethernet specifications for PHY developers, original equipment manufacturers, and Tier 1 companies  
  • Review the role of automotive Ethernet relative to other emerging high-speed digital automotive standards 

Please contact [email protected] to request PDH certificate code.

Learn the Latest Standards and Test Requirements for Automotive Radar

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/learn_the_latest_standards_and_test_requirements_for_automotive_radar

Automotive radar is a key enabler of autonomous driving technology and one of the most active areas for new modulation technologies. Unlike wireless communication technologies, automotive radar has no regulations or definitions governing modulation type, signal pattern, and duty cycle.

This webinar addresses which standards apply to radar measurements and how to measure different radar signals using Keysight’s automotive radar test solution.

Key learnings:

  • How to set the right parameters for power measurements on radar signals with different chirp rates and duty cycles.
  • Learn how to test the Rx interference of your automotive radar.  
  • Understand how to use Keysight’s automotive test solution to measure radar signals. 

Please contact [email protected] to request PDH certificate code.

Volvo and Lidar-maker Luminar to Deliver Hands-free Driving by 2022

Post Syndicated from Lawrence Ulrich original https://spectrum.ieee.org/cars-that-think/sensors/automotive-sensors/volvo-and-lidarmaker-luminar-to-deliver-handsfree-driving-by-2022

The race to bring self-driving cars to showrooms may have a new leader: Volvo Cars said it will partner with Silicon Valley-based lidar manufacturer Luminar to deliver genuine hands-free, eyes-off-the-road highway driving beginning in 2022. That so-called “Level 3” capability would be an industry first, as companies from Tesla to Uber thus far have failed to meet lofty promises to make autonomous driving a reality. 

Sweden’s Volvo, owned by Geely Holdings of China, said that models based on its upcoming SPA2 platform (for “Scalable Product Architecture”) will be hardware-enabled for Luminar’s roof-mounted lidar system. That includes the upcoming Polestar 3 from Volvo’s new electric-car division, and a range of Volvo-branded cars and SUVs. Henrik Green, chief technology officer for Volvo, said the optional “Highway Pilot” system would allow full autonomous driving, but only “when the car determines it is safe to do so.” 

“At that point, your Volvo takes responsibility for the driving and you can relax, take your eyes off the road and your hands off the wheel,” Green said. “Over time, updates over-the-air will expand the areas in which the car can drive itself. For us, a safe introduction of autonomy is a gradual introduction.” 

Luminar’s lidar system scans a car’s surroundings in real time, firing millions of pulses of light to create a virtual 3D map without relying on GPS or a network connection. Most experts agree that lidar is a critical linchpin of any truly autonomous car. A high-profile skeptic is Elon Musk, who has no plans to employ lidar in his Teslas, and scoffs at the technology as redundant and unnecessary.

Austin Russell, founder and chief executive of Luminar, disagrees. 

“If cameras could do everything you can do with lidar, great. But if you really want to get in the autonomous game, this is a clear requirement.” 

The 25-year-old Russell said his company’s high-performance lidar sensors, operating at 1550 nanometers and 64 lines of resolution, can spot even small and low-reflective objects—black cars, animals, a child darting across a road—at a range beyond 200 meters, and up to 500 meters for larger, brighter objects. The high-signal, low-noise system can deliver camera-level vision at 200 meters even in rain, fog or snow, he said. That range, Russell said, is critical to give cars the data and time to solve edge cases and make confident decisions, even when hurtling at freeway speeds.  

“Right now, you don’t have reliable interpretation,” with camera, GPS or radar systems, Russell said. “You can guess what’s ahead of you 99 percent of the time, but the problem is that last one percent. And a 99-percent pedestrian detection rate doesn’t cut it, not remotely close.” 

In a Zoom interview from Palo Alto, Russell held up two prototype versions, a roof-mounted unit roughly the size of a TV cable box, and a smaller unit that would mount on bumpers. The elegant systems incorporate a single laser and receiver, rather than the bulky, expensive, stacked arrays of lower-performance systems. Luminar built all its components and software in-house, Russell said, and is already on its eighth generation of ASIC chip controllers. 

The roof-mounted unit can deliver 120 degrees of forward vision, Russell said, plenty for Volvo’s application that combines lidar with cameras, radar, backup GPS and electrically controlled steering and braking. For future robotaxi applications with no human pilot aboard, cars would combine three to five lidar sensors to deliver full 360-degree vision. The unit will also integrate with Volvo’s Advanced Driver Assistance Systems (ADAS), improving such features as automated emergency braking, which global automakers have agreed to make standard in virtually all vehicles by 2022. 

The range and performance, Russell said, are key to solving the conundrums that have frustrated autonomous developers.

The industry has hit a wall, unable to make the technical leap to Level 3, with cars so secure in their abilities that owners could perform tasks such as texting, reading or even napping. Last week, Audi officially abandoned its longstanding claim that its new A8 sedan, with its Traffic Jam Pilot system, would do just that. Musk is luring consumers to pay $7,000 up-front for “Full Self-Driving” features that have yet to materialize, and that he claims would allow them to use their Teslas as money-making robotaxis.

Current Level 2 systems, including Tesla’s Autopilot and Cadillac’s SuperCruise, often disengage when they can’t safely process their surroundings. Those disengagements, or even their possibility, demand that drivers continue to pay attention to the road at all times. And when systems do work effectively, drivers can fall out of the loop and be unable to quickly retake control. Russell acknowledged that those limitations leave current systems feeling like a parlor trick: If a driver must still pay full attention to the road, why not just keep driving the old-fashioned way? 

“Assuming perfect use, it’s fine as a novelty or convenience,” Russell said. “But it’s easy to get lulled into a sense of safety. Even when you’re really trying to maintain attention, to jump back into the game is very difficult.” 

Ideally, Russell said, a true Level 3 system would operate for years and hundreds of thousands of miles and never disengage without generous advance warning.

“From our perspective, spontaneous handoffs that require human intervention are not safe,” he said. “It can’t be, ‘Take over in 500 milliseconds or I’m going to run into a wall.’” 

One revolution of Level 3, he said, is that drivers would actually begin to recover valuable time for productivity, rest or relaxation. The other is safety, and the elusive dream of sharply reducing roadway crashes that kill 1.35 million people a year around the world. 

Volvo cautioned that its Highway Pilot, whose price is up in the air, would initially roll out in small volumes, and steadily expand across its lineup. Volvo’s announcement included an agreement to possibly increase its minority stake in Luminar. But Luminar said it is also working with 12 of the world’s top 15 automakers in various stages of lidar development. And Russell said that, whichever manufacturer makes them, lidar-based safety and self-driving systems will eventually become an industry standard. 

“Driving adoption throughout the larger industry is the right move,” he said. “That’s how you save the most lives and create the most value.” 

New Gyroscope Design Will Help Autonomous Cars and Robots Map the World

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/tech-talk/sensors/automotive-sensors/new-gyroscope-design-will-help-autonomous-cars-robots-map-world

As useful as GPS is, it’s not a system that we can rely on all the time. There are many situations in which GPS might not be available, such as when you’re inside a building, driving through a tunnel, on a battlefield, in a mine, underwater, or in space.

Systems that need to track location, like your cell phone, an autonomous car, or a submarine, often use what are called inertial measurement units, or IMUs (a combination of accelerometers and gyroscopes), to estimate position through dead reckoning, tracking changes in acceleration and rotation. The accuracy of these location estimates depends on how accurate the sensors in those IMUs are. Unfortunately, gyroscopes that can track rotation accurately over long periods of time are too big and too expensive for most commercial or consumer-grade systems.
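
A simplified sketch shows why sensor accuracy matters so much for dead reckoning (a 1-D toy model of our own; real IMU fusion is 3-D and must handle gravity, rotation and gyroscope drift): integrating a sensor twice turns even a tiny constant bias into a position error that grows with the square of time.

```python
# Toy 1-D dead-reckoning example: integrate acceleration twice to get
# position, and watch a small constant accelerometer bias accumulate.
# Real IMU pipelines fuse 3-D accelerometer and gyroscope data, but the
# quadratic error growth is the same.
DT = 0.01      # 100 Hz sample rate
BIAS = 0.01    # constant accelerometer bias of 0.01 m/s^2

velocity = 0.0
position = 0.0
for _ in range(60 * 100):            # one minute of samples
    accel_measured = 0.0 + BIAS      # true acceleration is zero
    velocity += accel_measured * DT  # first integration: velocity
    position += velocity * DT        # second integration: position

print(f"position error after 60 s: {position:.1f} m")  # ~18 m
```

After one minute, a bias of just 0.01 m/s^2 has produced roughly 18 meters of position error, which is why long-duration dead reckoning demands far better sensors than a phone-grade IMU provides.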

Researchers at the University of Michigan, led by Khalil Najafi and funded by DARPA, have presented a paper at the 7th IEEE International Symposium on Inertial Sensors & Systems on a new type of gyroscope called the precision shell integrating (PSI) gyroscope, which the authors say is “10,000 times more accurate but only 10 times more expensive than gyroscopes used in your typical cell phone.” It offers the performance of much larger gyroscopes at one thousandth the cost, meaning that high-precision location awareness, indoor navigation, and long-term autonomy may soon be much more achievable for mobile devices and underground robots.

Arm Flexible Access for Automotive Applications

Post Syndicated from ARM original https://spectrum.ieee.org/sensors/automotive-sensors/arm_flexible_access_for_automotive_applications


The power of computing has profoundly influenced our lives, and we place a high degree of trust in our electronic systems every day. In application areas such as automotive, the consequences of a system failing or going wrong can be serious and even potentially life-threatening. Trust in automotive systems is crucial and only possible because of functional safety: a system’s ability to detect, diagnose and safely mitigate the occurrence of any fault.
  
The application of safety is critical in the automotive industry, but how do we then balance safety and innovation in this rapidly transforming industry? 
  
Arm Flexible Access lowers the barriers to rapid innovation and opens the doors to leading technology with access to a wide range of Arm IP, support, tools, and training. Arm Flexible Access adds safety technologies and features to make it easier for developers in the automotive industry and other safety-related industries to create SoCs and attain certification for industry safety standards. 
  
Arm Flexible Access also enables more experimentation for those designing innovative automotive systems, which may involve self-driving capabilities or powertrain electrification, allowing developers to find the most efficient way of balancing functionality with the level of safety required. Different architectures and mixtures of IP can be accessed, evaluated, optimized and redesigned to achieve the best solution, which can then be taken to certification more easily.
  
To see the range of IP available in Arm Flexible Access suited for automotive applications, please visit our Automotive specific Flexible Access page.

Advanced Battery Manufacturing Techniques and Automotive Ethernet Webinars

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/advanced_battery_manufacturing_techniques_and_automotive_ethernet_webinars

Keysight Technologies is a world leader in the field of electronic test and measurement including a focus on automotive and energy test solutions. The Keysight Engineering Education (KEE) webinar series provides valuable information to electrical engineers on a wide variety of automotive and energy topics ranging from in-vehicle networking to battery cell manufacturing and test to autonomous driving technologies. Come learn some of the latest test challenges and solutions from technical experts at Keysight and our partners.

Register for one or both of these upcoming webinars:

  • How to Reduce Battery Cell Manufacturing Time – Thursday, March 12
  • Demystifying the Standards for Automotive Ethernet – Wednesday, April 22

Velodyne Will Sell a Lidar for $100

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/cars-that-think/sensors/automotive-sensors/velodyne-will-sell-a-lidar-for-100

Velodyne claims to have broken the US $100 barrier for automotive lidar with its tiny Velabit, which it unveiled at CES earlier this month.

“Claims” is the mot juste because this nice, round dollar amount is an estimate based on the mass-manufacturing maturity of a product that has yet to ship. Such a factoid would hardly be worth mentioning had it come from one of the several score lidar startups that haven’t shipped anything at all. But Velodyne created this industry back during the DARPA-funded competitions, and it has been the market leader ever since.

Seeing Around the Corner With Lasers—and Speckle

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/cars-that-think/sensors/automotive-sensors/seeing-around-corner-lasers-speckle

Researchers from Rice, Stanford, Princeton, and Southern Methodist University have developed a new way to use lasers to see around corners that beats the previous technique on resolution and scanning speed. The findings appear today in the journal Optica.

The U.S. military—which funded the work through DARPA grants—is interested for obvious reasons, and NASA wants to use it to image caves, perhaps doing so from orbit. The technique might one day also let rescue workers peer into earthquake-damaged buildings and help self-driving cars navigate tricky intersections.

One day. Right now it’s a science project, and any application is years away. 

The original way of corner peeping, dating to 2012, measures the time it takes laser light to travel to a reflective surface, onward to an object, and back again. Such time-of-flight measurement requires hours of scanning time to produce a resolution measured in centimeters. Other methods have since been developed that look at reflected light in an image to infer missing parts.

The latest method looks instead at speckle, a shimmering interference pattern that in many laser applications is a bug; here it is a feature, because it contains a trove of spatial information. Getting the image hidden in the speckle—a process called non-line-of-sight correlography—involves a bear of a calculation. The researchers used deep-learning methods to accelerate the analysis.

“Image acquisition takes a quarter of a second, and we’re getting sub-millimeter resolution,” says Chris Metzler, the leader of the project, who is a post-doc in electrical engineering at Stanford. He did most of the work while completing a doctorate at Rice.

The problem is that the system can achieve these results only by greatly narrowing the field of view.

“Speckle encodes interference information, and as the area gets larger, the resolution gets worse,” says Ashok Veeraraghavan, an associate professor of electrical engineering and computer science at Rice. “It’s not ideal to image a room; it’s ideal to image an ID badge.”

The two methods are complementary: Time-of-flight gets you the room, the guy standing in that room, and maybe a hint of a badge. Speckle analysis reads the badge. Doing all that would require separate systems operating two lasers at different wavelengths, to avoid interference.

Today corner peeping by any method is still “lab bound,” says Veeraraghavan, largely because of interference from ambient light and other problems. To get the results that are being released today, the researchers had to work from just one meter away, under ideal lighting conditions. A possible way forward may be to try lasers that emit in the infrared.

Another intriguing possibility is to wait until the wireless world’s progress toward ever-shorter wavelengths finally hits the millimeter band, whose wavelengths are small enough to resolve most identifying details, such as cars and people.