Tag Archives: Transportation/Self-Driving

Surprise! 2020 Is Not the Year for Self-Driving Cars

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/transportation/self-driving/surprise-2020-is-not-the-year-for-selfdriving-cars

In March, because of the coronavirus, self-driving car companies, including Argo, Aurora, Cruise, Pony, and Waymo, suspended vehicle testing and operations that involved a human driver. Around the same time, Waymo and Ford released open data sets of information collected during autonomous-vehicle tests and challenged developers to use them to come up with faster and smarter self-driving algorithms.

These developments suggest the self-driving car industry still hopes to make meaningful progress on autonomous vehicles (AVs) this year. But the industry is undoubtedly slowed by the pandemic and facing a set of very hard problems that have gotten no easier to solve in the interim.

Five years ago, several companies including Nissan and Toyota promised self-driving cars in 2020. Lauren Isaac, the Denver-based director of business initiatives at the French self-driving vehicle company EasyMile, says AV hype was “at its peak” back then—and those predictions turned out to be far too rosy.

Now, Isaac says, many companies have turned their immediate attention away from developing fully autonomous Level 5 vehicles, which can operate in any conditions. Instead, the companies are focused on Level 4 automation, which refers to fully automated vehicles that operate within very specific geographical areas or weather conditions. “Today, pretty much all the technology developers are realizing that this is going to be a much more incremental process,” she says.

For example, EasyMile’s self-driving shuttles operate in airports, college campuses, and business parks. Isaac says the company’s shuttles are all Level 4. Unlike a Level 3 vehicle, which relies on a human driver behind the wheel as its backup, a Level 4 vehicle is its own backup driver.

“We have levels of redundancy for this technology,” she says. “So with our driverless shuttles, we have multiple levels of braking systems, multiple levels of lidars. We have coverage for all systems looking at it from a lot of different angles.”

Another challenge: There’s no consensus on the fundamental question of how an AV looks at the world. Elon Musk has famously said that any AV manufacturer that uses lidar is “doomed.” A 2019 Cornell research paper seemed to bolster the Tesla CEO’s controversial claim by developing algorithms that can derive from stereo cameras 3D depth-perception capabilities that rival those of lidar.
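The Cornell result rests on the standard stereo-geometry relation between pixel disparity and depth. Below is a minimal sketch of that relation (an illustration, not the Cornell authors' code); the focal length and baseline are assumed values chosen only for the example.

```python
# Illustrative sketch: recovering per-pixel depth from a stereo camera pair via
# the standard disparity relation Z = f * B / d, the first step toward building
# a lidar-like 3D point cloud from cameras alone.
import numpy as np

def disparity_to_depth(disparity_px: np.ndarray,
                       focal_length_px: float,
                       baseline_m: float) -> np.ndarray:
    """Depth in meters for each pixel with a valid (non-zero) disparity."""
    depth = np.full_like(disparity_px, np.inf, dtype=np.float64)
    valid = disparity_px > 0
    depth[valid] = focal_length_px * baseline_m / disparity_px[valid]
    return depth

# Example: a 0.54 m baseline and a 720-pixel focal length (assumed, KITTI-like
# values). A 10-pixel disparity then maps to roughly 39 m of depth.
d = np.array([[10.0, 20.0], [0.0, 5.0]])
print(disparity_to_depth(d, focal_length_px=720.0, baseline_m=0.54))
```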

However, open data sets have cast doubt on the lidar doomsayers, says Sam Abuelsamid, a Detroit-based principal analyst in mobility research at the industry consulting firm Navigant Research.

Abuelsamid highlighted a 2019 open data set from the AV company Aptiv, which the AI company Scale then analyzed using two independent sources: The first considered camera data only, while the second incorporated camera plus lidar data. The Scale team found camera-only (2D) data sometimes drew inaccurate “bounding boxes” around vehicles and made poorer predictions about where those vehicles would be going in the immediate future—one of the most important functions of any self-driving system.

“While 2D annotations may look superficially accurate, they often have deeper inaccuracies hiding beneath the surface,” software engineer Nathan Hayflick of Scale wrote in a company blog about the team’s Aptiv data set research. “Inaccurate data will harm the confidence of [machine learning] models whose outputs cascade down into the vehicle’s prediction and planning software.”

Abuelsamid says Scale’s analysis of Aptiv’s data brought home the importance of building AVs with redundant and complementary sensors—and shows why Musk’s dismissal of lidar may be too glib. “The [lidar] point cloud gives you precise distance to each point on that vehicle,” he says. “So you can now much more accurately calculate the trajectory of that vehicle. You have to have that to do proper prediction.”
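To see why that precision matters for prediction, consider a bare-bones sketch: given the lidar points assigned to another vehicle in two successive frames, differencing their centroids yields a velocity estimate to extrapolate from. This is an illustration only, not any company's pipeline, and the numbers are invented for the example.

```python
# Minimal sketch: estimating another vehicle's velocity from two lidar frames by
# differencing the centroids of the points assigned to that vehicle.
import numpy as np

def centroid(points: np.ndarray) -> np.ndarray:
    """points: (N, 3) array of lidar returns, in meters, for one tracked object."""
    return points.mean(axis=0)

def estimate_velocity(points_t0: np.ndarray, points_t1: np.ndarray, dt: float) -> np.ndarray:
    """Velocity vector (m/s) from centroid displacement between frames dt seconds apart."""
    return (centroid(points_t1) - centroid(points_t0)) / dt

# Two synthetic frames 0.1 s apart: the object moved ~1 m forward, so ~10 m/s.
rng = np.random.default_rng(0)
frame0 = rng.normal(loc=[20.0, 3.0, 0.5], scale=0.2, size=(50, 3))
frame1 = rng.normal(loc=[21.0, 3.0, 0.5], scale=0.2, size=(50, 3))
print(estimate_velocity(frame0, frame1, dt=0.1))  # roughly [10, 0, 0]
```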

So how soon might the industry deliver self-driving cars to the masses? Emmanouil Chaniotakis is a lecturer in transport modeling and machine learning at University College London. Earlier this year, he and two researchers at the Technical University of Munich published a comprehensive review of all the studies they could find on the future of shared autonomous vehicles (SAVs).

They found the predictions—for robo-taxis, AV ride-hailing services, and other autonomous car-sharing possibilities—to be all over the map. One forecast had shared autonomous vehicles driving just 20 percent of all miles driven in 2040, while another model forecast them handling 70 percent of all miles driven by 2035.

So autonomous vehicles (shared or not), by some measures at least, could still be many years out. And it’s worth remembering that previous predictions proved far too optimistic.

This article appears in the May 2020 print issue as “The Road Ahead for Self-Driving Cars.”

Robot Vehicles Make Contactless Deliveries Amid Coronavirus Quarantine

Post Syndicated from Erico Guizzo original https://spectrum.ieee.org/automaton/transportation/self-driving/robot-vehicles-make-contactless-deliveries-amid-coronavirus-quarantine

For the past two months, the vegetables have arrived on the back of a robot. That’s how 16 communities in Zibo, in eastern China, have received fresh produce during the coronavirus pandemic. The robot is an autonomous van that uses lidars, cameras, and deep-learning algorithms to drive itself, carrying up to 1,000 kilograms in its cargo compartment.

The unmanned vehicle provides a “contactless” alternative to regular deliveries, helping reduce the risk of person-to-person infection, says Professor Ming Liu, a computer scientist at the Hong Kong University of Science and Technology (HKUST) and cofounder of Unity Drive Innovation, or UDI, the Shenzhen-based startup that developed the self-driving van.

Is Coronavirus Speeding the Adoption of Driverless Technology?

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/cars-that-think/transportation/self-driving/will-coronavirus-speed-adoption-of-driverless-technology

Neolix, a maker of urban robo-delivery trucks, made an interesting claim recently. The Beijing-based company said orders for its self-driving delivery vehicles were soaring because the coronavirus epidemic had both cleared the roads of cars and opened the eyes of customers to the advantages of driverlessness. The idea is that when the epidemic is over, the new habits may well persist.

Neolix last week told Automotive News it had booked 200 orders in the past two months after having sold just 159 in the eight months before. And on 11 March, the company confirmed that it had raised US $29 million in February to fund mass production.

Of course, this flurry of activity could merely be coincidental to the epidemic, but Tallis Liu, the company’s manager of business development, maintains that it reflects changing attitudes in a time of plague.

“We’ve seen a rise in both acceptance and demand both from the general public and from the governmental institutions,” he tells IEEE Spectrum. The sight of delivery bots on the streets of Beijing is “educating the market” about “mobility as a service” and on “how it will impact people’s day-to-day lives during and after the outbreak.”

During the epidemic, Neolix has deployed 50 vehicles in 10 major cities in China to make mobile deliveries and provide disinfection services. Liu says that many of the routes were chosen because they include public roads that the lockdown on movement has left relatively empty.

The company’s factory has a production capacity of 10,000 units a year, and most of the factory staff have returned to their positions, Liu adds. “Having said that, we are indeed facing some delays from our suppliers given the ongoing situation.”

Neolix’s deliverybots are adorable—a term this site once used to describe a strangely similar-looking rival bot from the U.S. firm Nuro. The bots are the size of a small car, and they’re each equipped with cameras, three 16-channel lidar laser sensors, and one single-channel lidar. The low-speed version also has 14 ultrasonic short-range sensors; on the high-speed version, the ultrasonic sensors are supplanted by radars. 

If self-driving technology benefits from the continued restrictions on movement in China and around the world, it wouldn’t be the first time that necessity had been the mother of invention. An intriguing example is furnished by a mere two-day workers’ strike on the London Underground in 2014. Many commuters, forced to find alternatives, ended up sticking with those workarounds even after Underground service resumed, according to a 2015 analysis by three British economists.

One of the researchers, Tim Willems of Oxford University, tells Spectrum that disruptions can induce permanent changes when three conditions are met. First, “decision makers are lulled into habits and have not been able to achieve their optimum (close to our Tube strike example).” Second, “there are coordination failures that make it irrational for any one decision maker to deviate from the status quo individually” and a disruption “forces everybody away from the status quo at the same time.” And third, the reluctance to pay the fixed costs required to set up a new way of doing things can be overcome under crisis conditions.

By that logic, many workers sent home for months on end to telecommute will stay on their porches or in their pajamas long after the all-clear signal has sounded. And they will vastly accelerate the move to online shopping, with package delivery of both the human and the nonhuman kind.

On Monday, New York City’s mayor, Bill de Blasio, said he was suspending his long-running campaign against e-bikes. “We are suspending that enforcement for the duration of this crisis,” he said. And perhaps forever.

Echodyne Shows Off Its Cognitive Radar for Self-Driving Cars

Post Syndicated from Mark Harris original https://spectrum.ieee.org/cars-that-think/transportation/self-driving/echodyne-cognitive-radar-self-driving-cars

As a transportation technology journalist, I’ve ridden in a lot of self-driving cars, both with and without safety drivers. A key part of the experience has always been a laptop or screen showing a visualization of other road users and pedestrians, using data from one or more laser-ranging lidar sensors.

Ghostly three-dimensional shapes made of shimmering point clouds appear at the edge of the screen, and are often immediately recognizable as cars, trucks, and people.

At first glance, the screen in Echodyne’s Ford Flex SUV looks like a lidar visualization gone wrong. As we explore the suburban streets of Kirkland, Washington, blurry points and smeary lines move across the display, changing color as they go. They bear little resemblance to the vehicles and cyclists I can see out of the window.

Gill Pratt on “Irrational Exuberance” in the Robocar World

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/cars-that-think/transportation/self-driving/gill-pratt-of-toyota-on-the-irrational-exuberance-in-the-robocar-world

A lot of people in the auto industry talked for way too long about the imminent advent of fully self-driving cars. 

In 2013, Carlos Ghosn, now very much the ex-chairman of Nissan, said it would happen in seven years. In 2016, Elon Musk, then chairman of Tesla, implied his cars could basically do it already. In 2017, and right through early 2019, GM’s Cruise was talking up a 2019 launch. And Waymo, the company with the most to show for its efforts so far, is speaking in more measured terms than it used just a year or two ago.

It’s all making Gill Pratt, CEO of the Toyota Research Institute in California, look rather prescient. A veteran roboticist who joined Toyota in 2015 with the task of developing robocars, Pratt from the beginning emphasized just how hard the task would be and how important it was to aim for intermediate goals—notably by making a car that could help drivers now, not merely replace them at some distant date.

That helpmate, called Guardian, is set to use a range of active safety features to coach a driver and, in the worst cases, to save him from his own mistakes. The more ambitious Chauffeur will one day really drive itself, though in a constrained operating environment. The constraints on the current iteration will be revealed at the first demonstration at this year’s Olympic games in Tokyo; they will certainly involve limits to how far afield and how fast the car may go.

Earlier this week, at TRI’s office in Palo Alto, Calif., Pratt and his colleagues gave Spectrum a walkaround look at the latest version of the Chauffeur, the P4; it’s a Lexus with a package of sensors neatly merged into the roof. Inside are two lidars from Luminar, a stereo camera, a mono camera (just to zero in on traffic signs), and radar. At the car’s front and corners are small Velodyne lidars, hidden behind a grill or folded smoothly into small protuberances. Nothing more could be glimpsed, not even the electronics that no doubt filled the trunk.

Pratt and his colleagues had a lot to say on the promises and pitfalls of self-driving technology. The easiest to excerpt is their view on the difficulty of the problem.

“There isn’t anything that’s telling us it can’t be done; I should be very clear on that,” Pratt says. “Just because we don’t know how to do it doesn’t mean it can’t be done.”

That said, though, he notes that early successes (using deep neural networks to process vast amounts of data) led researchers to optimism. In describing that optimism, he does not object to the phrase “irrational exuberance,” made famous during the 1990s dot-com bubble.

It turned out that the early successes came in those fields where deep learning, as it’s known, was most effective, like artificial vision and other aspects of perception. Computers, long held to be particularly bad at pattern recognition, were suddenly shown to be particularly good at it—even better, in some cases, than human beings. 

“The irrational exuberance came from looking at the slope of the [graph] and seeing the seemingly miraculous improvement deep learning had given us,” Pratt says. “Everyone was surprised, including the people who developed it, that suddenly, if you threw enough data and enough computing at it, the performance would get so good. It was then easy to say that because we were surprised just now, it must mean we’re going to continue to be surprised in the next couple of years.”

The mindset was one of permanent revolution: The difficult, we do immediately; the impossible just takes a little longer. 

Then came the slow realization that AI not only had to perceive the world—a nontrivial problem, even now—but also to make predictions, typically about human behavior. That problem is more than nontrivial. It is nearly intractable. 

Of course, you can always use deep learning to do whatever it does best, and then use expert systems to handle the rest. Such systems use logical rules, input by actual experts, to handle whatever problems come up. That method also enables engineers to tweak the system—an option that the black box of deep learning doesn’t allow.
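A minimal sketch of that hybrid arrangement might look like the following, with a stand-in for the learned perception module feeding a small set of hand-written rules. The classes, thresholds, and rules here are assumptions made up for illustration, not anyone's production software.

```python
# Conceptual sketch of a hybrid architecture: a learned perception module feeds
# detections to a small rule-based planner whose rules engineers can read and
# tweak -- the property a deep network alone does not offer.
from dataclasses import dataclass

@dataclass
class Detection:
    kind: str                 # e.g. "pedestrian", "vehicle"
    distance_m: float
    closing_speed_mps: float  # positive when the gap is shrinking

def learned_perception(sensor_frame) -> list[Detection]:
    """Stand-in for a deep network; returns detections with ranges and closing speeds."""
    return sensor_frame  # in this sketch the "frame" is already a list of Detections

def rule_based_planner(detections: list[Detection]) -> str:
    """Hand-written, inspectable rules layered on top of the learned perception."""
    for det in detections:
        time_to_collision = det.distance_m / max(det.closing_speed_mps, 0.01)
        if det.kind == "pedestrian" and time_to_collision < 3.0:
            return "emergency_brake"
        if time_to_collision < 2.0:
            return "brake"
    return "maintain_speed"

frame = [Detection("pedestrian", distance_m=12.0, closing_speed_mps=5.0)]
print(rule_based_planner(learned_perception(frame)))  # "emergency_brake" (TTC = 2.4 s)
```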

Putting deep learning and expert systems together does help, says Pratt. “But not nearly enough.”

Day-to-day improvements will continue no matter what new tools become available to AI researchers, says Wolfram Burgard, Toyota’s vice president for automated driving technology.

“We are now in the age of deep learning,” he says. “We don’t know what will come after—it could be a rebirth of an old technology that suddenly outperforms what we saw before. We are still in a phase where we are making progress with existing techniques, but the gradient isn’t as steep as it was a few years ago. It is getting more difficult.”

Apex.OS: An Open Source Operating System for Autonomous Cars

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/cars-that-think/transportation/self-driving/apexos-operating-system-open-source-autonomous-cars

The facets of autonomous car development that automakers tend to get excited about are things like interpreting sensor data, decision making, and motion planning.

Unfortunately, if you want to make self-driving cars, there’s all kinds of other stuff that you need to get figured out first, and much of it is really difficult but also absolutely critical. Things like, how do you set up a reliable network inside of your vehicle? How do you manage memory and data recording and logging? How do you get your sensors and computers to all talk to each other at the same time? And how do you make sure it’s all stable and safe?

In robotics, the Robot Operating System (ROS) has offered an open-source solution for many of these challenges. ROS provides the groundwork for researchers and companies to build off of, so that they can focus on the specific problems that they’re interested in without having to spend time and money on setting up all that underlying software infrastructure first.

Apex.ai’s Apex.OS, which is having its version 1.0 release today, extends this idea from robotics to autonomous cars. It promises to help autonomous carmakers shorten their development timelines, and if it has the same effect on autonomous cars as ROS has had on robotics, it could help accelerate the entire autonomous car industry.

For more about what this 1.0 software release offers, we spoke with Apex.ai CEO Jan Becker.

IEEE Spectrum: What exactly can Apex.OS do, and what doesn’t it do? 

Jan Becker: Apex.OS is a fork of ROS 2 that has been made robust and reliable so that it can be used for the development and deployment of highly safety-critical systems such as autonomous vehicles, robots, and aerospace applications. Apex.OS is API-compatible with ROS 2. In a nutshell, Apex.OS is an SDK for autonomous driving software and other safety-critical mobility applications. The components enable customers to focus on building their specific applications without having to worry about message passing, reliable real-time execution, hardware integration, and more.

Apex.OS is not a full [self-driving software] stack. Apex.OS enables customers to build their full stack based on their needs. We have built an automotive-grade 3D point cloud/lidar object detection and tracking component, and we are in the process of building a lidar-based localizer, which is available as Apex.Autonomy. In addition, we are starting to work with other algorithmic component suppliers to integrate Apex.OS APIs into their software. These components make use of Apex.OS APIs, but are available separately, which allows customers to assemble a customized full software stack from building blocks so that it exactly fits their needs. The algorithmic components reuse the open architecture currently being built in the open-source Autoware.Auto project.
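Because Apex.OS is described as API-compatible with ROS 2, a plain ROS 2 node gives a feel for the style of message-passing code such an SDK takes care of. The sketch below is ordinary open-source ROS 2 (rclpy) code, not Apex.OS itself, and the node and topic names are invented for the example.

```python
# A minimal ROS 2 publisher node in rclpy, shown only to illustrate the
# message-passing style that Apex.OS is API-compatible with.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class StatusPublisher(Node):
    def __init__(self):
        super().__init__('status_publisher')
        self.pub = self.create_publisher(String, 'vehicle_status', 10)
        self.timer = self.create_timer(0.1, self.tick)  # publish at 10 Hz

    def tick(self):
        msg = String()
        msg.data = 'ok'
        self.pub.publish(msg)

def main():
    rclpy.init()
    node = StatusPublisher()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()

if __name__ == '__main__':
    main()
```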

So if every autonomous vehicle company started using Apex.OS, those companies would still be able to develop different capabilities?

Apex.OS is an SDK for autonomous driving software and other safety-critical mobility applications. Just as the iOS SDK enables iPhone app developers to focus on their application, Apex.OS provides an SDK for developers of safety-critical mobility applications.

Every autonomous mobility system deployed into a public environment must be safe. We enable customers to focus on their application without having to worry about the safety of the underlying components. Organizations will differentiate themselves through performance, discrete features, and other product capabilities. By adopting Apex.OS, we enable them to focus on developing these differentiators. 

What’s the minimum viable vehicle that I could install Apex.OS on and have it drive autonomously? 

In terms of compute hardware, we showed Apex.OS running on a Renesas R-Car H3 and on a Quanta V3NP at CES 2020. The R-Car H3 contains just four ARM Cortex-A57 cores and four ARM Cortex-A53 cores and is the smallest ECU for which our customers have requested support. You can install Apex.OS on much smaller systems, but this is the smallest one we have tested extensively so far, and which is also powering our vehicle.

We are currently adding support for the Renesas R-Car V3H, which contains four ARM Cortex-A53 cores (and no ARM Cortex-A57 cores) and an additional image processing processor. 

You suggest that Apex.OS is also useful for other robots and drones, in addition to autonomous vehicles. Can you describe how Apex.OS would benefit applications in these spaces?

Apex.OS provides a software framework that enables reading, processing, and outputting data on embedded real-time systems used in safety-critical environments. That pertains to robotics and aerospace applications just as much as to automotive applications. We simply started with automotive applications because of the stronger market pull. 

Industrial robots today often run ROS for the perception system and a non-ROS embedded controller for highly accurate position control, because ROS cannot run the real-time controller with the necessary precision. Drones often run PX4 for the autopilot and ROS for the perception stack. Apex.OS combines the capabilities of ROS with the requirements of mobility systems, specifically regarding real-time performance, reliability, and the ability to run on embedded compute systems.

How will Apex contribute back to the open-source ROS 2 ecosystem that it’s leveraging within Apex.OS?

We have contributed back to the ROS 2 ecosystem from day one. Any and all bugs that we find in ROS 2 get fixed in ROS 2 and thereby contributed back to the open-source codebase. We also provide a significant amount of funding to Open Robotics to do this. In addition, we are on the ROS 2 Technical Steering Committee to provide input and guidance to make ROS 2 more useful for automotive applications. Overall we have a great deal of interest in improving ROS 2 not only because it increases our customer base, but also because we strive to be a good open-source citizen.

The features we keep in house pertain to making ROS 2 real-time, deterministic, tested, and certified on embedded hardware. Our goals are therefore somewhat orthogonal to the goals of an open-source project aiming to address as many applications as possible. As a result, we live in a healthy symbiosis with ROS 2.

[ Apex.ai ]

Uh-oh Uber, Look Out Lyft: Here Comes Driverless Ride Sharing

Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/view-from-the-valley/transportation/self-driving/uhoh-uber-look-out-lyft-here-comes-driverless-ride-sharing

Yesterday I drove from Silicon Valley to San Francisco. It started raining on the way and I hadn’t thought to take an umbrella. No matter—I had the locations of two parking garages, just a block or so from my destination, preloaded into my navigation app. But both were full, and I found myself driving in stop-and-go traffic around crowded, wet, hilly, construction-heavy San Francisco, hunting for street parking or an open garage for nearly an hour. It was driving hell.

So when I finally arrived at a launch event hosted by Cruise, I couldn’t have been more receptive to the company’s pitch for Cruise Origin, a new vehicle that, Cruise executives say, intends to make it so I won’t need to drive or park in a city ever again.

Damon’s Hypersport AI Boosts Motorcycle Safety

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/cars-that-think/transportation/self-driving/damons-hypersport-motorcycle-safety-ai

For all its pure-electric acceleration and range and its ability to shapeshift, the Hypersport motorcycle shown off last week at CES by Vancouver, Canada-based Damon Motorcycles matters for just one thing: It’s the first chopper swathed in active safety systems.

These systems don’t take control, not even in anticipation of a crash, as they do in many advanced driver assistance systems in cars. They leave a motorcyclist fully in command while offering the benefit of an extra pair of eyes.

Why drape high tech “rubber padding” over the motorcycle world? Because that’s where the danger is: Motorcyclists are 27 times more likely to die in a crash than are passengers in cars.

“It’s not a matter of if you’ll have an accident on a motorbike, but when,” says Damon chief executive Jay Giraud. “Nobody steps into motorbiking knowing that, but they learn.”

The Hypersport’s sensor suite includes cameras, radar, GPS, solid-state gyroscopes and accelerometers. It does not include lidar (“it’s not there yet,” Giraud says), but it does open the door a crack to another way of seeing the world: wireless connectivity.

The bike’s brains note everything that happens when danger looms, including warnings issued and evasive maneuvers taken, then shunt the data to the cloud via 4G wireless. For now that data is processed in batches, to help Damon refine its algorithms, a practice common among self-driving car researchers. Some day, the bike will share such data with other vehicles in real time, a strategy known as vehicle-to-everything, or V2X.

But not today. “That whole world is 5-10 years away—at least,” Giraud grouses. “I’ve worked on this for over a decade—we’re no closer today than we were in 2008.”

The bike has an onboard neural net whose settings are fixed at any given time. When the net up in the cloud comes up with improvements, these are sent as over-the-air updates to each motorcycle. The updates have to be approved by each owner before going live onboard. 

When the AI senses danger, it gives a warning. If the car up ahead suddenly brakes, the handlebars shake, warning of a frontal collision. If a vehicle coming from behind enters the biker’s blind spot, LEDs flash. That saves the rider the trouble of constantly having to look back to check the blind spot.
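A warn-only policy of that kind can be captured in a few lines. The sketch below is purely illustrative (it is not Damon's software, and the thresholds are assumptions): the system computes a time to collision for the vehicle ahead and checks the blind spot, then alerts the rider rather than taking control.

```python
# Illustrative sketch of warn-only rider assistance: alert, never intervene.
def forward_collision_warning(gap_m: float, closing_speed_mps: float) -> bool:
    """Shake the handlebars if time to collision with the vehicle ahead drops below 2 s."""
    if closing_speed_mps <= 0:
        return False
    return gap_m / closing_speed_mps < 2.0

def blind_spot_warning(rear_vehicle_in_blind_spot: bool) -> bool:
    """Flash the LEDs when a sensor places a vehicle in the rider's blind spot."""
    return rear_vehicle_in_blind_spot

print(forward_collision_warning(gap_m=15.0, closing_speed_mps=10.0))  # True: TTC = 1.5 s
print(blind_spot_warning(True))                                       # True: flash LEDs
```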

Above all, it gives the rider time. A 2018 report by the National Highway Traffic Safety Administration found that from 75 to 90 percent of riders in accidents had less than three seconds to notice a threat and try to avert it; 10 percent had less than one second. Just an extra second or two could save a lot of lives.

The patterns the bike’s AI teases out from the data are not always comparable to those a self-driving car would care about. A motorcycle shifts from one half of a lane to the other; it leans down, sometimes getting fearsomely close to the pavement; and it is often hard for drivers in other vehicles to see.

One motorbike-centric problem is the high risk a biker takes just by entering an intersection. Some three-quarters of motorcycle accidents happen there, and of that number about two-thirds are caused by a car’s colliding from behind or from the side. The side collision, called a T-bone, is particularly bad because there’s nothing at all to shield the rider.

Certain traffic patterns increase the risk of such collisions. “Patterns that repeat allow our system to predict risk,” Giraud says. “As the cloud sees the tagged information again and again, we can use it to make predictions.”

Damon is taking pre-orders, but it expects to start shipping in mid-2021. Like Tesla, it will deliver straight to the customer, with no dealers to get in the way.

Hand-Tracking Tech Watches Riders in Self-Driving Cars to See If They’re Ready to Take the Wheel

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/cars-that-think/transportation/self-driving/handtracking-tech-for-the-next-generation-of-autonomous-cars


Researchers have developed a new technique for tracking the hand movements of a non-attentive driver, to calculate how long it would take the driver to assume control of a self-driving car in an emergency.

If manufacturers can overcome the final legal hurdles, cars with Level 3 autonomous vehicle technology will one day be chauffeuring people from A to B. These cars allow a driver to take his or her eyes off the road and do minor tasks (such as texting or watching a movie). However, these cars need a way of knowing how quickly—or slowly—a driver can respond when taking control during an emergency.

To address this need, Kevan Yuen and Mohan Trivedi at the University of California, San Diego developed their new hand-tracking system, which is described in a study published 22 November in IEEE Transactions on Intelligent Vehicles.

Self-Driving Cars Learn About Road Hazards Through Augmented Reality

Post Syndicated from Yiheng Feng original https://spectrum.ieee.org/transportation/self-driving/selfdriving-cars-learn-about-road-hazards-through-augmented-reality

For decades, anyone who wanted to know whether a new car was safe to drive could simply put it through its paces, using tests established through trial and error. Such tests might investigate whether the car can take a sharp turn while keeping all four wheels on the road, brake to a stop over a short distance, or survive a collision with a wall while protecting its occupants.

But as cars take an ever greater part in driving themselves, such straightforward testing will no longer suffice. We will need to know whether the vehicle has enough intelligence to handle the same kind of driving conditions that humans have always had to manage. To do that, automotive safety-assurance testing has to become less like an obstacle course and more like an IQ test.

One obvious way to test the brains as well as the brawn of autonomous vehicles would be to put them on the road along with other traffic. This is necessary if only because the self-driving cars will have to share the road with the human-driven ones for many years to come. But road testing brings two concerns. First, the safety of all concerned can’t be guaranteed during the early stages of deployment; self-driving test cars have already been involved in fatal accidents. Second is the sheer scale that such direct testing would require.

That’s because most of the time, test vehicles will be driven under typical conditions, and everything will go as it normally does. Only in a tiny fraction of cases will things take a different turn. We call these edge cases, because they concern events that are at the edge of normal experience. Example: A truck loses a tire, which hops the median and careens into your lane, right in front of your car. Such edge cases typically involve a concurrence of failures that are hard to conceive of and are still harder to test for. This raises the question, How long must we road test a self-driving, connected vehicle before we can fairly claim that it is safe?

The answer to that question may never be truly known. What’s clear, though, is that we need other strategies to gauge the safety of self-driving cars. And the one we describe here—a mixture of physical vehicles and computer simulation—might prove to be the most effective way there is to evaluate self-driving cars.

A fatal crash occurs only once in about 160 million kilometers of driving, according to statistics compiled by the U.S. National Highway Traffic Safety Administration [PDF]. That’s a bit more than the distance from Earth to the sun (and 10 times as much as has been logged by the fleet of Google sibling Waymo, the company that has the most experience with self-driving cars). To travel that far, an autonomous car driving at highway speeds for 24 hours a day would need almost 200 years. It would take even longer to cover that distance on side streets, passing through intersections and maneuvering around parking lots. It might take a fleet of 500 cars 10 years to finish the job, and then you’d have to do it all over again for each new design.
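The arithmetic behind that 200-year figure is straightforward. Here is a quick check, assuming an average highway speed of about 90 kilometers per hour (the speed is an assumption; the statistic specifies only the distance).

```python
# Quick check of the "almost 200 years" figure.
KM_BETWEEN_FATAL_CRASHES = 160_000_000  # roughly one fatal crash per 160 million km
AVG_SPEED_KMH = 90                      # assumed highway average
HOURS_PER_YEAR = 24 * 365

years_nonstop = KM_BETWEEN_FATAL_CRASHES / AVG_SPEED_KMH / HOURS_PER_YEAR
print(round(years_nonstop))  # ~203 years of nonstop highway driving for a single car
```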

Clearly, the industry must augment road testing with other strategies to bring out as many edge cases as possible. One method now in use is to test self-driving vehicles in closed test facilities where known edge cases can be staged again and again. Take, as an example, the difficulties posed by cars that run a red light at high speed. An intersection can be built as if it were a movie set, and self-driving cars can be given the task of crossing when the light turns green while at the same time avoiding vehicles that illegally cross in front of them.

While this approach is helpful, it also has limitations. Multiple vehicles are typically needed to simulate edge cases, and professional drivers may well have to pilot them. All this can be costly and difficult to coordinate. More important, no one can guarantee that the autonomous vehicle will work as desired, particularly during the early stages of such testing. If something goes wrong, a real crash could happen and damage the self-driving vehicle or even hurt people in other vehicles. Finally, no matter how ingenious the set designers may be, they cannot be expected to create a completely realistic model of the traffic environment. In real life, a tree’s shadow can confuse an autonomous car’s sensors and a radar reflection off a manhole cover can make the radar see a truck where none is present.

Computer simulation provides a way around the limitations of physical testing. Algorithms generate virtual vehicles and then move them around on a digital map that corresponds to a real-world road. If the data thus generated is then broadcast to an actual vehicle driving itself on the same road, the vehicle will interpret the data exactly as if it had come from its own sensors. Think of it as augmented reality tuned for use by a robot.

Although the physical test car is driving on empty roads, it “thinks” that it is surrounded by other vehicles. Meanwhile, it sends information that it is gathering—both from augmented reality and from its sensing of the real-world surroundings—back to the simulation platform. Real vehicles, simulated vehicles, and perhaps other simulated objects, such as pedestrians, can thus interact. In this way, a wide variety of scenarios can be tested in a safe and cost-effective way.

The idea for automotive augmented reality came to us by the back door: Engineers had already improved certain kinds of computer simulations by including real machines in them. As far back as 1999, Ford Motor Co. used measurements of an actual revving engine to supply data for a computer simulation of a power train. This hybrid simulation method was called hardware-in-the-loop, and engineers resorted to it because mimicking an engine in software can be very difficult. Knowing this history, it occurred to us that it would be possible to do the opposite—generate simulated vehicles as part of a virtual environment for testing actual cars.

In June 2017, we implemented an augmented-reality environment in Mcity, the world’s first full-scale test bed for autonomous vehicles. It occupies 32 acres on the North Campus of the University of Michigan, in Ann Arbor. Its 8 lane-kilometers (5 lane-miles) of roadway are arranged in sections having the attributes of a highway, a multilane arterial road, or an intersection.

Here’s how it works. The autonomous test car is equipped with an onboard device that can broadcast vehicle status, such as location, speed, acceleration, and heading, doing so every tenth of a second. It does this wirelessly, using dedicated short-range communications (DSRC), a standard similar to Wi-Fi that has been earmarked for mobile users. Roadside devices distributed around the testing facility receive this information and forward it to a traffic-simulation model, one that can simulate the testing facility by boiling it down to an equivalent network geometry that incorporates the actions of traffic signals. Once the computer model receives the test car’s information, it creates a virtual twin of that car. Then it updates the virtual car’s movements based on the movements of the real test car.

Feeding data from the real test vehicle into the computer simulation constitutes only half of the loop. We complete the other half by sending information about the various vehicles the computer has simulated to the test car. This is the essence of the augmented-reality environment. Every simulated vehicle also generates vehicle-status messages at a frequency of 10 hertz, which we forward to the roadside devices, which in turn broadcast them in real time. When the real test car receives that data, its vehicle-control system uses it to “see” all the virtual vehicles. To the car, these simulated entities are indistinguishable from the real thing.
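Both halves of the loop ride on the same kind of 10-hertz status message. In rough outline, the broadcast step amounts to packaging a vehicle's state and sending it to a roadside unit every tenth of a second. The sketch below only illustrates the idea; the field names, transport, and addresses are assumptions, not Mcity's actual message format.

```python
# Illustrative sketch of a 10 Hz vehicle-status broadcast loop.
import json
import socket
import time
from dataclasses import dataclass, asdict

@dataclass
class VehicleStatus:
    vehicle_id: str
    latitude: float
    longitude: float
    elevation_m: float
    speed_mps: float
    acceleration_mps2: float
    heading_deg: float
    timestamp: float

def broadcast_status(read_state, roadside_addr=("192.0.2.10", 4200), period_s=0.1):
    """Send the vehicle's state to a roadside unit every tenth of a second."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        status = read_state()  # query GPS/IMU/CAN for the current VehicleStatus
        sock.sendto(json.dumps(asdict(status)).encode(), roadside_addr)
        time.sleep(period_s)   # 10 Hz, matching the broadcast rate described above
```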

By having vehicles pass messages through the roadside devices—that is, by substituting “vehicle-to-infrastructure” connections for direct “vehicle-to-vehicle” links—real vehicles and virtual vehicles can sense one another and interact accordingly. In the same fashion, traffic-signal status is also synchronized between the real and the simulated worlds. That way, real and virtual vehicles can each “look” at a given light and see whether it is green or red.

The status messages passed between real and simulated worlds include, of course, vehicle positions. This allows actual vehicles to be mapped onto the simulated road network, and simulated vehicles to be mapped into the actual road network. The positions of actual vehicles are represented with GPS coordinates—latitude, longitude, and elevation—and those of simulated vehicles with local coordinates—x, y, and z. An algorithm transforms one system of coordinates into the other.
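A simple way to picture that transformation is a flat-earth approximation around a reference point, which maps small latitude and longitude offsets into local east and north distances in meters. The production system's algorithm may differ; this sketch, with made-up coordinates near Ann Arbor, only illustrates the idea.

```python
# Minimal sketch of the GPS-to-local-coordinate step using a flat-earth
# (equirectangular) approximation around a reference point.
import math

EARTH_RADIUS_M = 6_378_137.0  # WGS-84 equatorial radius

def geodetic_to_local(lat_deg, lon_deg, elev_m, ref_lat_deg, ref_lon_deg, ref_elev_m):
    """Return (x, y, z) in meters east/north/up of the reference point."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    ref_lat, ref_lon = math.radians(ref_lat_deg), math.radians(ref_lon_deg)
    x = (lon - ref_lon) * math.cos(ref_lat) * EARTH_RADIUS_M  # east
    y = (lat - ref_lat) * EARTH_RADIUS_M                      # north
    z = elev_m - ref_elev_m                                   # up
    return x, y, z

# A point roughly 111 m north and 82 m east of a reference near Ann Arbor's latitude.
print(geodetic_to_local(42.3010, -83.6985, 270.0, 42.3000, -83.6995, 268.0))
```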

But that mathematical transformation isn’t all that’s needed. There are small GPS and map errors, and they sometimes prevent a GPS position, forwarded from the actual test car and translated to the local system of coordinates, from appearing on a simulated road. We correct these errors with a separate mapping algorithm. Also, when the test car stops, we must lock it in place in the simulation, so that fluctuations in its GPS coordinates do not cause it to drift [PDF] out of position in the simulation.

Everything here depends on wireless communication. To ensure that it was reliable, we installed four roadside radios in Mcity, enough to cover the entire testing facility. The DSRC wireless standard, which operates in the 5.9-gigahertz band, gives us high data-transmission rates and very low latency. These are critical to safety at high speeds and during stop-on-a-dime maneuvers. DSRC is in wide use in Japan and Europe; it hasn’t yet gained much traction in the United States, although Cadillac is now equipping some of its cars with DSRC devices.

Whether DSRC will be the way cars communicate with one another is uncertain, though. Some people have argued that cellular communications, particularly in the coming 5G implementation, might offer equally low latency with a greater range. Whichever standard wins out, the communications protocols used in our system can easily be adapted to it.

We expect that the software framework we used to build our system will also endure, at least for a few years. We constructed our simulation with PTV Vissim, a commercial package developed in Germany to model traffic flow “microscopically,” that is, by simulating the behavior of each individual vehicle.

One thing that can be expected to change is the test vehicle, as other companies begin to use our system to put their own autonomous vehicles through their paces. For now, our one test vehicle is a Lincoln MKZ Hybrid, which is equipped with DSRC and thus fully connected. Drive-by-wire controls that we added to the car allow software to command the steering wheel, throttle, brake, and transmission. The car also carries multiple radars, lidars, cameras, and a GPS receiver with real-time kinematic positioning, which improves resolution by referring to a signal from a ground-based radio station.

We have implemented two testing scenarios. In the first one, the system generates a virtual train and projects it into the augmented reality perceived by the test car as the train approaches a mock-up of a rail crossing in Mcity. The point is to see whether the test car can stop in time and then wait for the train to pass. We also throw in other virtual vehicles, such as cars that follow the test car. These strings of cars—actual and virtual—can be formally arranged convoys (known as platoons) or ad hoc arrangements: perhaps cars queuing to get onto an entry ramp.

The second, more complicated testing scenario involves the case we mentioned earlier—running a red light. In the United States, cars running red lights cause more than a quarter of all the fatalities that occur at an intersection, according to the American Automobile Association. This scenario serves two purposes: to see how the test car reacts to traffic signals and also how it reacts to red-light-running scofflaws.

Our test car should be able to tell whether the signal is red or green and decide accordingly whether to stop or to go. It should also be able to notice that the simulated red-light runner is coming, predict its trajectory, and calculate when and where the test car might be when it crosses that trajectory. The test car ought to be able to do all these things well enough to avoid a collision.

Because the computer running the simulation can fully control the actions of the red-light runner, it can generate a wide variety of testing parameters in successive iterations of the experiment. This is precisely the sort of thing a computer can do much more accurately than any human driver. And of course, the entire experiment can be done in complete safety because the lawbreaker is merely a virtual car.

There is a lot more of this kind of edge-case simulation that can be done. For example, we can use the augmented-reality environment to evaluate the ability of test cars to handle complex driving situations, like turning left from a stop sign onto a major highway. The vehicle needs to seek gaps in traffic going in both directions, meanwhile watching for pedestrians who may cross at the sign. The car can decide to make a stop in the median first, or instead simply drive straight into the desired lane. This involves a decision-making process of several stages, all while taking into account the actions of a number of other vehicles (including predicting how they will react to the test car’s actions).

Another example involves maneuvers at roundabouts—entering, exiting, and negotiation for position with other cars—without help from a traffic signal. Here the test car needs to predict what other vehicles will do, decide on an acceptable gap to use to merge, and watch for aggressive vehicles. We can also construct augmented-reality scenarios with bicyclists, pedestrians, and other road users, such as farm machinery. The less predictable such alternative actors are, the more intelligence the self-driving car will need.

Ultimately, we would like to put together a large library of test scenarios including edge cases, then use the augmented-reality testing environment to run the tests repeatedly. We are now building up such a library with data scoured from reports of actual crashes, together with observations by sensor-laden vehicles of how people drive when they don’t know they’re part of an experiment. By putting together disparate edge conditions, we expect to create artificial edge cases that are particularly challenging for the software running in self-driving cars.

Thus armed, we ought to be able to see just how safe a given autonomous car is without having to drive it to the sun and back.

This article appears in the December 2019 print issue as “Augmented Reality for Robocars.”

About the Authors

Henry X. Liu is a professor of civil and environmental engineering at the University of Michigan, Ann Arbor, and a research professor at the University of Michigan Transportation Research Institute. Yiheng Feng is an assistant research scientist in the university’s Engineering Systems Group.

Driving Tests Coming for Autonomous Cars

Post Syndicated from Jeff Hecht original https://spectrum.ieee.org/cars-that-think/transportation/self-driving/driving-tests-coming-for-autonomous-cars

The three laws of robotic safety in Isaac Asimov’s science fiction stories seem simple and straightforward, but the ways the fictional tales play out reveal unexpected complexities. Writers of safety standards for self-driving cars express their goals in similarly simple terms. But several groups now developing standards for how autonomous vehicles will interact with humans and with each other face real-world issues much more complex than science fiction.

Advocates of autonomous cars claim that turning the wheel over to robots could slash the horrific toll of 1.3 million people killed around the world each year by motor vehicles. Yet the public has become wary because robotic cars also can kill. Documents released last week by the U.S. National Transportation Safety Board blame the March 2018 death of an Arizona pedestrian struck by a self-driving Uber on safety failures by the car’s safety driver, the company, and the state of Arizona. Even less-deadly safety failures are damning, like the incident where a Tesla in Autopilot mode wasn’t smart enough to avoid crashing into a stopped fire engine whose warning lights were flashing.

Safety standards for autonomous vehicles “are absolutely critical” for public acceptance of the new technology, says Greg McGuire, associate director of the Mcity autonomous vehicle testing lab at the University of Michigan. “Without them, how do we know that [self-driving cars] are safe, and how do we gain public trust?” Earning that trust requires developing standards through an open process that the public can scrutinize, and may even require government regulation, he adds.

Companies developing autonomous technology have taken notice. Earlier this year, representatives from 11 companies including Aptiv, Audi, Baidu, BMW, Daimler, Infineon, Intel, and Volkswagen collaborated to write a wide-ranging whitepaper titled “Safety First for Automated Driving.” They urged designing safety features into the automated driving function, and using heightened cybersecurity to assure the integrity of vital data including the locations, movement, and identification of other objects in the vehicle environment. They also urged validating and verifying the performance of robotic functions in a wide range of operating conditions.

On 7 November, the International Telecommunication Union announced the formation of a focus group called AI for Autonomous and Assisted Driving. Its aim: to develop performance standards for artificial intelligence (AI) systems that control self-driving cars. (The ITU has come a long way since its 1865 founding as the International Telegraph Union, with a mandate to standardize the operations of telegraph services.)

ITU intends the standards to be “an equivalent of a Turing Test for AI on our roads,” says focus group chairman Bryn Balcombe of the Autonomous Drivers Alliance. A computer passes a Turing Test if it can fool a person into thinking it’s a human. The AI test is vital, he says, to assure that human drivers and the AI behind self-driving cars understand each other and predict each other’s behaviors and risks.

A planning document says AI development should match public expectations so:

• AI never engages in careless, dangerous, or reckless driving behavior

• AI remains aware, willing, and able to avoid collisions at all times

• AI meets or exceeds the performance of a competent, careful human driver


These broad goals for automotive AI algorithms resemble Asimov’s laws, which bar robots from hurting humans and demand that they obey human commands and protect their own existence. But the ITU document includes a list of 15 “deliverables,” including developing specifications for evaluating AIs and drafting technical reports needed for validating AI performance on the road.

A central issue is convincing the public to entrust the privilege of driving—a potentially life-and-death activity—to a technology which has suffered embarrassing failures like the misidentification of minorities that led San Francisco to ban the use of facial recognition by police and city agencies.

Testing how well an AI can drive is vastly complex, says McGuire. Human adaptability makes us fairly good drivers. “We’re not perfect, but we are very good at it, with typically a hundred million miles between fatal traffic crashes,” he says. Racking up that much distance in real-world testing is impractical—and it is but a fraction of the billions of vehicle miles needed for statistical significance. That’s a big reason developers have turned to simulations. Computers can help them run up the virtual mileage needed to find potential safety flaws that might arise only in rare situations, like in a snowstorm or heavy rain, or on a road under construction.

It’s not enough for an automotive AI to assure the vehicle’s safety, says McGuire. “The vehicle has to work in a way that humans would understand.” Self-driving cars have been rear-ended when they stopped in situations where most humans would not have expected a driver to stop. And a truck can be perfectly safe even when close enough to unnerve a bicyclist.

Other groups are also developing standards for robotic vehicles. ITU is covering both automated driver assistance and fully autonomous vehicles. Underwriters Laboratories is working on a standard for fully autonomous vehicles. The Automated Vehicle Safety Consortium, a group including auto companies, plus Lyft, Uber, and SAE International (formerly the Society of Automotive Engineers), is developing safety principles for SAE Level 4 and 5 autonomous vehicles. The BSI Group (formerly the British Standards Institution) developed a strategy for British standards for connected and autonomous vehicles and is now working on the standards themselves.

How long will it take to develop standards? “This is a research process,” says McGuire. “It takes as long as it takes” to establish public trust and social benefit. In the near term, Mcity has teamed with the city of Detroit, the U.S. Department of Transportation, and Verizon to test autonomous vehicles for transporting the elderly on city streets. But he says the field “needs to be a living thing that continues to evolve” over a longer period.

NTSB Investigation Into Deadly Uber Self-Driving Car Crash Reveals Lax Attitude Toward Safety

Post Syndicated from Mark Harris original https://spectrum.ieee.org/cars-that-think/transportation/self-driving/ntsb-investigation-into-deadly-uber-selfdriving-car-crash-reveals-lax-attitude-toward-safety

The Uber car that hit and killed Elaine Herzberg in Tempe, Ariz., in March 2018 could not recognize all pedestrians, and was being driven by an operator likely distracted by streaming video, according to documents released by the U.S. National Transportation Safety Board (NTSB) this week.

But while the technical failures and omissions in Uber’s self-driving car program are shocking, the NTSB investigation also highlights safety failures that include the vehicle operator’s lapses, lax corporate governance of the project, and limited public oversight.

This week, the NTSB released over 400 pages ahead of a 19 November meeting aimed at determining the official cause of the accident and reporting on its conclusions. The Board’s technical review of Uber’s autonomous vehicle technology reveals a cascade of poor design decisions that led to the car being unable to properly process and respond to Herzberg’s presence as she crossed the roadway with her bicycle.

A radar on the modified Volvo XC90 SUV first detected Herzberg roughly six seconds before the impact, followed quickly by the car’s laser-ranging lidar. However, the car’s self-driving system did not have the capability to classify an object as a pedestrian unless it was near a crosswalk.

For the next five seconds, the system alternated between classifying Herzberg as a vehicle, a bike and an unknown object. Each inaccurate classification had dangerous consequences. When the car thought Herzberg a vehicle or bicycle, it assumed she would be travelling in the same direction as the Uber vehicle but in the neighboring lane. When it classified her as an unknown object, it assumed she was static.

Worse still, each time the classification flipped, the car treated her as a brand new object. That meant it could not track her previous trajectory and calculate that a collision was likely, and thus did not even slow down. Tragically, Volvo’s own City Safety automatic braking system had been disabled because its radars could have interfered with Uber’s self-driving sensors.
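The consequence of that design choice is easy to see in miniature. The sketch below is an illustration of the failure mode, not Uber's software: if every change in classification starts a fresh track, the position history needed to estimate a velocity, and hence to predict a collision, is discarded each time.

```python
# Illustrative sketch of the tracking failure the NTSB describes: restarting the
# track on every classification flip throws away the history a velocity estimate needs.
class Track:
    def __init__(self, label):
        self.label = label
        self.positions = []   # (t, x, y) history used to estimate a trajectory

    def update(self, t, x, y):
        self.positions.append((t, x, y))

    def velocity(self):
        if len(self.positions) < 2:
            return None       # a brand-new track cannot predict where the object is going
        (t0, x0, y0), (t1, x1, y1) = self.positions[-2], self.positions[-1]
        return ((x1 - x0) / (t1 - t0), (y1 - y0) / (t1 - t0))

def update_with_reset(track, t, x, y, label):
    """Flawed policy: any change in classification discards the existing track."""
    if track is None or label != track.label:
        track = Track(label)  # history lost; velocity() returns None again
    track.update(t, x, y)
    return track

track = None
for t, x, y, label in [(0.0, 50, 0, "vehicle"), (0.5, 45, 0, "bicycle"), (1.0, 40, 0, "unknown")]:
    track = update_with_reset(track, t, x, y, label)
    print(label, track.velocity())   # always None: the crossing trajectory is never computed
```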

By the time the XC90 was just a second away from Herzberg, the car finally realized that whatever was in front of it could not be avoided. At this point, it could have still slammed on the brakes to mitigate the impact. Instead, a system called “action suppression” kicked in.

This was a feature Uber engineers had implemented to avoid unnecessary extreme maneuvers in response to false alarms. It suppressed any planned braking for a full second, while simultaneously alerting and handing control back to its human safety driver. But it was too late. The driver began braking after the car had already hit Herzberg. She was thrown 23 meters (75 feet) by the impact and died of her injuries at the scene.

Four days after the crash, at the same time of night, Tempe police carried out a rather macabre re-enactment. While an officer dressed as Herzberg stood with a bicycle at the spot she was killed, another drove the actual crash vehicle slowly towards her. The driver was able to see the officer from at least 194 meters (638 feet) away.

Key duties for Uber’s 254 human safety drivers in Tempe were actively monitoring the self-driving technology and the road ahead. In fact, recordings from cameras in the crash vehicle show that the driver spent much of the ill-fated trip looking at something placed near the vehicle’s center console, and occasionally yawning or singing. The cameras show that she was looking away from the road for at least five seconds directly before the collision.

Police investigators later established that the driver had likely been streaming a television show on her personal smartphone. Prosecutors are reportedly still considering criminal charges against her.

Uber’s Tempe facility, nicknamed “Ghost Town,” did have strict prohibitions against using drugs, alcohol or mobile devices while driving. The company also had a policy of spot-checking logs and in-dash camera footage on a random basis. However, Uber was unable to supply NTSB investigators with documents or logs that revealed if and when phone checks were performed. The company also admitted that it had never carried out any drug checks.

Originally, the company had required two safety drivers in its cars at all times, with operators encouraged to report colleagues who violated its safety rules. In October 2017, it switched to having just one.

The investigation also revealed that Uber didn’t have a comprehensive policy on vigilance and fatigue. In fact, the NTSB found that Uber’s self-driving car division “did not have a standalone operational safety division or safety manager. Additionally, [it] did not have a formal safety plan, a standardized operations procedure (SOP) or guiding document for safety.”

Instead, engineers and drivers were encouraged to follow Uber’s core values or norms, which include phrases such as: “We have a bias for action and accountability”; “We look for the toughest challenges, and we push”; and, “Sometimes we fail, but failure makes us smarter.”

NTSB investigators found that the state of Arizona had a similarly relaxed attitude to safety. A 2015 executive order from governor Doug Ducey established a Self-Driving Vehicle Oversight Committee. That committee met only twice, with one of its representatives telling NTSB investigators that “the committee decided that many of the [laws enacted in other states] stifled innovation and did not substantially increase safety. Further, it felt that as long as the companies were abiding by the executive order and existing statutes, further actions were unnecessary.”

When investigators inquired whether the committee, the Arizona Department of Transportation, or the Arizona Department of Public Safety had sought any information from autonomous driving companies to monitor the safety of their operations, they were told that none had been collected.

As it turns out, the fatal collision was far from the first crash that Uber’s 40 self-driving cars in Tempe had been involved in. Between September 2016 and March 2018, the NTSB learned there had been 37 other crashes and incidents involving Uber’s test vehicles in autonomous mode. Most were minor rear-end fender-benders, but on one occasion, a test vehicle drove into a bicycle lane bollard. Another time, a safety driver had been forced to take control of the car to avoid a head-on collision. The result: the car struck a parked vehicle.

Virtual Car Sharing Combines Telepresence Robots and Autonomous Vehicles

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/cars-that-think/transportation/self-driving/virtual-car-sharing-combines-telepresence-robots-and-autonomous-vehicles

5G connectivity and a remotely projected human help autonomous vehicles be safer and more flexible

One of the remaining challenges for autonomous cars is figuring out a way to handle that long tail of weird edge cases that can randomly happen in the real world. Another challenge that remains is figuring out how to handle that huge population of weird beings called humans, who behave pseudorandomly in the real world. 

We’ve seen potential solutions to both of these problems. The first, meant to cover the long tail of situations that are outside the experience and confidence of an autonomous system, can involve a remote human temporarily taking over control of the vehicle. And with the right hardware, that human can solve the challenge of interacting with other humans at the same time.

One Driver Steers Two Trucks With Peloton’s Autonomous Follow System

Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/view-from-the-valley/transportation/self-driving/will-autonomous-following-be-a-game-changer-for-trucking

The technology is currently being tested on closed tracks, the company says

A host of companies are working to develop autonomous driving technology, but Silicon Valley startup Peloton has put its focus on autonomous following. The company today announced technology that uses computers, sensors, and vehicle-to-vehicle (V2V) communications to allow one driver to drive two separate trucks. 

Last year, Peloton began selling technology that enabled closer and safer truck platooning, using sensors, V2V communications, and automatic powertrain control and braking. That version of its product, Platoon Pro, requires a driver in the second truck to steer. The new version will take the second driver out of the equation.

Here’s how it works: In the front truck, the driver drives normally. Whenever he adjusts his foot on the throttle, touches the brakes, or maneuvers the steering wheel, digital details describing that action are wirelessly transmitted to the computer in the following truck. Using that information, along with data gathered from its own collection of radars, cameras, and other sensors, the second truck can safely trail close behind the first, forming a single-driver platoon.
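In skeleton form, the control idea is that the follower mirrors the lead driver's inputs and then trims them against its own gap measurement. The sketch below is a conceptual illustration, not Peloton's implementation; the message fields, target gap, and gain are assumptions chosen for the example.

```python
# Conceptual sketch of autonomous following: the lead truck broadcasts its
# driver's inputs over V2V, and the follower blends them with its own
# radar-measured gap to hold formation.
from dataclasses import dataclass

@dataclass
class LeadTruckMessage:          # sent over V2V many times per second
    throttle: float              # 0..1
    brake: float                 # 0..1
    steering_angle_deg: float

@dataclass
class FollowerCommand:
    throttle: float
    brake: float
    steering_angle_deg: float

def follower_control(msg: LeadTruckMessage, measured_gap_m: float,
                     target_gap_m: float = 15.0, gap_gain: float = 0.02) -> FollowerCommand:
    """Mirror the lead driver's inputs, then trim throttle/brake to hold the target gap."""
    gap_error = measured_gap_m - target_gap_m          # positive: falling behind
    throttle = min(max(msg.throttle + gap_gain * gap_error, 0.0), 1.0)
    brake = min(max(msg.brake - gap_gain * gap_error, 0.0), 1.0)
    return FollowerCommand(throttle, brake, msg.steering_angle_deg)

# Lead driver at 30% throttle; the follower has drifted 5 m too far back, so it adds throttle.
print(follower_control(LeadTruckMessage(0.3, 0.0, 1.5), measured_gap_m=20.0))
```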