All posts by Philip E. Ross

Q&A: The Masterminds Behind Toyota’s Self-Driving Cars Say AI Still Has a Way to Go

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/transportation/self-driving/qa-the-masterminds-behind-toyotas-selfdriving-cars-say-ai-still-has-a-way-to-go

Getting a car to drive itself is undoubtedly the most ambitious commercial application of artificial intelligence (AI). The research project was kicked into life by the 2004 DARPA Grand Challenge and then taken up as a business proposition, first by Alphabet, and later by the big automakers.

The industry-wide effort vacuumed up many of the world’s best roboticists and set rival companies on a multibillion-dollar acquisitions spree. It also launched a cycle of hype that paraded ever more ambitious deadlines—the most famous of which, made by Google cofounder Sergey Brin in 2012, was that full self-driving technology would be ready by 2017. Those deadlines have all been missed.

Much of the exhilaration was inspired by the seeming miracles that a new kind of AI—deep learning—was achieving in playing games, recognizing faces, and transcribing speech. Deep learning excels at tasks involving pattern recognition—a particular challenge for older, rule-based AI techniques. However, it now seems that deep learning will not soon master the other intellectual challenges of driving, such as anticipating what human beings might do.

Among the roboticists who have been involved from the start are Gill Pratt, the chief executive officer of Toyota Research Institute (TRI), formerly a program manager at the Defense Advanced Research Projects Agency (DARPA); and Wolfram Burgard, vice president of automated driving technology for TRI and president of the IEEE Robotics and Automation Society. The duo spoke with IEEE Spectrum’s Philip Ross at TRI’s offices in Palo Alto, Calif.

This interview has been condensed and edited for clarity.

IEEE Spectrum: How does AI handle the various parts of the self-driving problem?

Gill Pratt: There are three different systems that you need in a self-driving car: It starts with perception, then goes to prediction, and then goes to planning.

The one that is by far the most problematic is prediction. It’s not prediction of other automated cars, because if all cars were automated, this problem would be much simpler. How do you predict what a human being is going to do? That’s difficult for deep learning to learn right now.

Spectrum: Can you offset the weakness in prediction with stupendous perception?

Wolfram Burgard: Yes, that is what car companies basically do. A camera provides semantics, lidar provides distance, radar provides velocities. But all this comes with problems, because the sensors view the world from slightly different positions—that’s called parallax. Sometimes you don’t know which range estimate a given pixel belongs to. That can complicate the decision as to whether that is a person painted onto the side of a truck or an actual person.

With deep learning there is this promise that if you throw enough data at these networks, it’s going to work—finally. But it turns out that the amount of data that you need for self-driving cars is far larger than we expected.

Spectrum: When do deep learning’s limitations become apparent?

Pratt: The way to think about deep learning is that it’s really high-performance pattern matching. You have input and output as training pairs; you say this image should lead to that result; and you just do that again and again, hundreds of thousands or millions of times.

Here’s the logical fallacy that I think most people have fallen prey to with deep learning. A lot of what we do with our brains can be thought of as pattern matching: “Oh, I see this stop sign, so I should stop.” But it doesn’t mean all of intelligence can be done through pattern matching.

For instance, when I’m driving and I see a mother holding the hand of a child on a corner and trying to cross the street, I am pretty sure she’s not going to cross at a red light and jaywalk. I know from my experience being a human being that mothers and children don’t act that way. On the other hand, say there are two teenagers—with blue hair, skateboards, and a disaffected look. Are they going to jaywalk? I look at that, you look at that, and instantly the probability in your mind that they’ll jaywalk is much higher than for the mother holding the hand of the child. It’s not that you’ve seen 100,000 cases of young kids—it’s that you understand what it is to be either a teenager or a mother holding a child’s hand.

You can try to fake that kind of intelligence. If you specifically train a neural network on data like that, you could pattern-match that. But you’d have to know to do it.
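
What Pratt calls training pairs is the familiar supervised-learning loop. A minimal sketch in PyTorch makes the shape of it concrete; the random tensors below stand in for labeled images, and nothing here is drawn from TRI’s own code.

    import torch
    from torch import nn

    # Toy "pattern matcher": map 32-by-32 RGB images to one of two labels
    # (0 = "no stop sign", 1 = "stop sign"). The data here is random noise,
    # standing in for the hundreds of thousands of labeled examples Pratt
    # describes; only the shape of the training loop matters.
    model = nn.Sequential(
        nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        nn.Linear(8 * 4 * 4, 2),
    )
    loss_fn = nn.CrossEntropyLoss()
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    images = torch.randn(256, 3, 32, 32)   # stand-in training inputs
    labels = torch.randint(0, 2, (256,))   # stand-in training outputs

    for epoch in range(10):                # "again and again"
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()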

Spectrum: So you’re saying that when you substitute pattern recognition for reasoning, the marginal return on the investment falls off pretty fast?

Pratt: That’s absolutely right. Unfortunately, we don’t have the ability to make an AI that thinks yet, so we don’t know what to do. We keep trying to use the deep-learning hammer to hammer more nails—we say, well, let’s just pour more data in, and more data.

Spectrum: Couldn’t you train the deep-learning system to recognize teenagers and to assign the category a high propensity for jaywalking?

Burgard: People have been doing that. But it turns out that these heuristics you come up with are extremely hard to tweak. Also, sometimes the heuristics are contradictory, which makes it extremely hard to design these expert systems based on rules. This is where the strength of the deep-learning methods lies, because somehow they encode a way to see a pattern where, for example, here’s a feature and over there is another feature; it’s about the sheer number of parameters you have available.

Our separation of the components of a self-driving AI eases the development and even the learning of the AI systems. Some companies even think about using deep learning to do the job fully, from end to end, not having any structure at all—basically, directly mapping perceptions to actions. 

Pratt: There are companies that have tried it; Nvidia certainly tried it. In general, it’s been found not to work very well. So people divide the problem into blocks, where we understand what each block does, and we try to make each block work well. Some of the blocks end up more like the expert system we talked about, where we actually code things, and other blocks end up more like machine learning.

Spectrum: So, what’s next—what new technique is in the offing?

Pratt: If I knew the answer, we’d do it. [Laughter]

Spectrum: You said that if all cars on the road were automated, the problem would be easy. Why not “geofence” the heck out of the self-driving problem, and have areas where only self-driving cars are allowed?

Pratt: That means putting constraints on the operational design domain. This includes the geography—where the car should be automated; it includes the weather, it includes the level of traffic, it includes speed. If the car is going slowly enough that it can stop in time without risking a rear-end collision, that makes the problem much easier. Street trolleys still operate in mixed traffic in some parts of the world, and that seems to work out just fine. People learn that this vehicle may stop at unexpected times. My suspicion is, that is where we’ll see Level 4 autonomy in cities. It’s going to be at the lower speeds.

That’s a sweet spot in the operational design domain, without a doubt. There’s another one at high speed on a highway, because access to highways is so limited. But unfortunately there is still the occasional debris that suddenly crosses the road, and the weather gets bad. The classic example is when somebody irresponsibly ties a mattress to the top of a car and it falls off; what are you going to do? And the answer is that terrible things happen—even for humans.

Spectrum: Learning by doing worked for the first cars, the first planes, the first steam boilers, and even the first nuclear reactors. We ran risks then; why not now?

Pratt: It has to do with the times. During the era where cars took off, all kinds of accidents happened, women died in childbirth, all sorts of diseases ran rampant; the expected characteristic of life was that bad things happened. Expectations have changed. Now the chance of dying in some freak accident is quite low because of all the learning that’s gone on, the OSHA [Occupational Safety and Health Administration] rules, UL code for electrical appliances, all the building standards, medicine.

Furthermore—and we think this is very important—we believe that empathy for a human being at the wheel is a significant factor in public acceptance when there is a crash. We don’t know this for sure—it’s a speculation on our part. I’ve driven, I’ve had close calls; that could have been me that made that mistake and had that wreck. I think people are more tolerant when somebody else makes mistakes, and there’s an awful crash. In the case of an automated car, we worry that that empathy won’t be there.

Spectrum: Toyota is building a system called Guardian to back up the driver, and a more futuristic system called Chauffeur, to replace the driver. How can Chauffeur ever succeed? It has to be better than a human plus Guardian!

Pratt: In the discussions we’ve had with others in this field, we’ve talked about that a lot. What is the standard? Is it a person in a basic car? Or is it a person with a car that has active safety systems in it? And what will people think is good enough?

These systems will never be perfect—there will always be some accidents, and no matter how hard we try there will still be occasions where there will be some fatalities. At what threshold are people willing to say that’s okay?

Spectrum: You were among the first top researchers to warn against hyping self-driving technology. What did you see that so many other players did not?

Pratt: First, in my own case, during my time at DARPA I worked on robotics, not cars. So I was somewhat of an outsider. I was looking at it from a fresh perspective, and that helps a lot.

Second, [when I joined Toyota in 2015] I was joining a company that is very careful—even though we have made some giant leaps, the Prius hybrid drive system being an example. Even so, in general, the philosophy at Toyota is kaizen—making the cars incrementally better every single day. That care meant that I was tasked with thinking very deeply about this thing before making prognostications.

And the final part: It was a new job for me. The first night after I signed the contract I felt this incredible responsibility. I couldn’t sleep that whole night, so I started to multiply out the numbers, all in round powers of 10. How many cars do we have on the road? Cars on average last 10 years, though ours last 20, but let’s call it 10. They travel on the order of 10,000 miles per year. Multiply all that out and you get 10 to the 10th miles per year for our fleet on Planet Earth, a really big number. I asked myself, if all of those cars had automated drive, how good would they have to be to tolerate the number of crashes that would still occur? And the answer was so incredibly good that I knew it would take a long time. That was five years ago.

Burgard: We are now in the age of deep learning, and we don’t know what will come after. We are still making progress with existing techniques, and they look very promising. But the gradient is not as steep as it was a few years ago.

Pratt: There isn’t anything that’s telling us that it can’t be done; I should be very clear on that. Just because we don’t know how to do it doesn’t mean it can’t be done.

Mercedes and Nvidia Announce the Advent of the Software-Defined Car

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/cars-that-think/transportation/self-driving/mercedes-and-nvidia-announce-the-advent-of-the-softwaredefined-car

The idea of tomorrow’s car as a computer on wheels is ripening apace. Today, Daimler and Nvidia announced that Daimler’s carmaking arm, Mercedes-Benz, will drop Nvidia’s newest computerized driving system into every car it sells, beginning in 2024.

The system, called the Drive AGX Orin, is a system-on-a-chip that was announced in December and is planned to ship in 2022. It’s an open system, but as adapted for Mercedes, it will be laden with specially designed software. The result, say the two companies, will be a software-defined car: Customers will buy a car, then periodically download new features, among them some that were not known at the time of purchase. This capability will be enhanced by shifting functions from dedicated hardware, a constellation of electronic control units, or ECUs, to software.

“In modern cars, there can be 100, up to 125, ECUs,” said Danny Shapiro, Nvidia’s senior director of automotive. “Many of those will be replaced by software apps. That will change how different things function in the car—from windshield wipers to door locks to performance mode.”

The plan is to give cars a degree of self-driving competence comparable to Level 2 (where the car assists the driver) and Level 3 (where the driver can do other things as the car drives itself, while remaining ready to take back the wheel). The ability to park itself will be Level 4 (where there’s no need to mind the car at all, so long as it’s operating in a predefined comfort zone).

In all these matters, the Nvidia-Daimler partnership is following a trail that Tesla blazed and others have followed. Volkswagen plainly emulated Tesla in a project that is just now yielding its first fruit: an all-electric car that will employ a single master electronic architecture that will eventually power all of VW’s electric and self-driving cars. The rollout was slightly delayed because of glitches in software, as we reported last week.

Asked whether Daimler would hire a horde of software experts, as Volkswagen is doing, and thus become something of a software company, Bernhard Wardin, spokesman for Daimler’s autonomous driving and artificial intelligence division, said he had no comment. (Sounds like a “yes.”) He added that though a car model had been selected for the debut of the new system, its name was still under wraps.

One thing that strikes the eye is the considerable muscle of the system. The AGX Orin has 17 billion transistors, incorporating what Nvidia says are new deep learning and computer vision accelerators that “deliver 200 trillion operations per second—nearly 7 times the performance of Nvidia’s previous generation Xavier SoC.” 

It’s interesting to note that the Xavier itself began to ship only last year, after being announced toward the end of 2016. On its first big publicity tour, at the 2017 Consumer Electronics Show, the Xavier was touted by Jen-Hsun Huang, the CEO of Nvidia. He spoke together with the head of Audi, which was going to use the Xavier in a “Level 4” car that was supposed to hit the road in three years. Yes, that would be now.

So, a chip 7 times more powerful, with far more advanced AI software, is now positioned to power a car with mere Level 2/3 capabilities—in 2024.

None of this is meant to single out Nvidia or Daimler for ridicule. It’s meant to ridicule the entire industry, both the tech people and the car people, who have been busy walking back expectations for self-driving for the past two years. And to ridicule us tech journalists, who have been backtracking right alongside them.

Cars that help the driver do the driving are here already. More help is on the way. Progress is real. But the future is still in the future: Robocar tech is harder than we thought.

Can Electric Cars on the Highway Emulate Air-to-Air Refueling?

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/cars-that-think/energy/batteries-storage/will-electric-cars-on-the-highway-emulate-airtoair-refueling

Jet fighters can’t carry a huge tank of fuel because it would slow them down. Instead they have recourse to air-to-air refueling, using massive tanker planes as their gas stations. 

What if electric cars could do the same thing, while zooming down the highway? One car with charge to spare could get in front of another that was short of juice, and the two could extend telescopic charging booms until they linked together magnetically. The charge-rich car would then share some of its largesse. And to complete the aerial analogy, just add a few big “tanker trucks” bearing enormous batteries to beef up the power reserves of an entire flotilla of EVs.

The advantages of the concept are clear: It uses a lot of battery capacity that would otherwise go untapped, and it allows cars to save time in transit by recharging on the go, without taking detours or sitting still while topping off.

Yeah, and the tooth fairy leaves presents under your pillow. We’re too far into the month for this kind of story. Right?

Maybe it’s no April Fools joke. Maybe sharing charge is the way forward, not just for electric cars and trucks on the highways but for other mobile vehicles. That’s the brief of professor Swarup Bhunia and his colleagues in the department of electrical and computer engineering at the University of Florida, in Gainesville.

Bhunia is no mere enthusiast: He has written three features so far for IEEE Spectrum (this, this, and this). And his group has published its new proposal on arXiv, an online forum for preprints that have been vetted, if not yet fully peer-reviewed. The researchers call the concept peer-to-peer car charging.

The point is to make a given amount of battery go further and thus to solve the two main problems of electric vehicles—high cost and range anxiety. In 2019 batteries made up about a third of the cost of a mid-size electric car, down from half just a few years ago, but still a huge expense. And though most drivers typically cover only short distances, they usually want to be able to go very far if need be.

Mobile charging works by dividing a car’s battery pack into independent banks of cells. One bank runs the motor, the other one accepts charge. If the power source is a battery-bearing truck, you can move a great deal of power—enough for “an extra 20 miles of range,” Bhunia suggests. True, even a monster truck can charge only one car at a time, but each newly topped-off car will then be in a position to spare a few watt-hours for other cars it meets down the road.

We already have the semblance of such a battery truck. We recently wrote about a concept from Volkswagen to use mobile robots to haul batteries to stranded EVs. And even now you can buy a car-to-car charge-sharing system from the Italian company Andromeda, although so far no one seems to have tried using it while in motion.

If all the cars participated (yes, a big “if”), then you’d get huge gains. In computer modeling, done with the traffic simulator SUMO, the researchers found that EVs had to stop to recharge only about a third as often. What’s more, they could manage things with nearly a quarter less battery capacity.
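
To make the mechanism concrete, here is a toy sketch of the two-bank scheme in Python. It is not the researchers’ SUMO model; the capacities, transfer rate, and round-trip efficiency are invented for illustration.

    from dataclasses import dataclass

    # Toy model of the two-bank scheme: one bank drives the motor while the
    # other accepts charge from a peer. All numbers (capacities, transfer
    # rate, efficiency) are illustrative assumptions, not figures from the
    # University of Florida study.

    @dataclass
    class EV:
        drive_bank_kwh: float        # bank currently powering the motor
        receive_bank_kwh: float      # bank currently free to accept charge
        capacity_kwh: float = 30.0   # per-bank capacity

    def share_charge(donor: EV, receiver: EV, minutes: float,
                     rate_kw: float = 50.0, efficiency: float = 0.9) -> float:
        """Move energy from the donor's drive bank into the receiver's receive bank."""
        offered = rate_kw * minutes / 60.0
        offered = min(offered, donor.drive_bank_kwh,
                      receiver.capacity_kwh - receiver.receive_bank_kwh)
        donor.drive_bank_kwh -= offered
        receiver.receive_bank_kwh += offered * efficiency
        return offered * efficiency

    donor = EV(drive_bank_kwh=25.0, receive_bank_kwh=20.0)
    needy = EV(drive_bank_kwh=4.0, receive_bank_kwh=1.0)
    delivered = share_charge(donor, needy, minutes=10)
    print(f"Delivered {delivered:.1f} kWh while both cars kept moving")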

A few disadvantages suggest themselves. First off, how do two EVs dock while barreling down the freeway? Bhunia says it would be no strain at all for a self-driving car, when that much-awaited creature finally comes. Nothing can hold cars in tandem more precisely than a robot. But even a human being, assisted by an automatic system, might be able to pull off the feat, just as pilots do during in-flight refueling.

Then there is the question of reciprocity. How many people are likely to lend their battery to a perfect stranger? 

“They wouldn’t donate power,” explains Bhunia. “They’d be getting the credit back when they are in need. If you’re receiving charge within a network”—like ride-sharing drivers for companies like Uber and Lyft—“one central management system can do it. But if you want to share between networks, that transaction can be stored in a bank as a credit and paid back later, in kind or in cash.”

In their proposal, the researchers envisage a central management system that operates in the cloud.

Any mobile device that moves about on its own can benefit from a share-and-share-alike charging scheme. Delivery bots would make a lot more sense if they didn’t have to spend half their time searching for a wall socket. And being able to recharge a UAV from a moving truck would greatly benefit any operator of a fleet of cargo drones, as Amazon appears to want to do.

Sure, road-safety regulators would pop their corks at the very mention of high-speed energy swapping. And, yes, one big advance in battery technology would send the idea out the window, as it were. Still, if aviators could share fuel as early as 1923, drivers might well try their hand at it a century later.

Lucid Motors’ Peter Rawlinson Talks E-Car Efficiency

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/cars-that-think/transportation/advanced-cars/lucid-motors-says-ecar-design-all-about-efficiency

Two years ago, our own Lawrence Ulrich wrote that Lucid Motors “might just have a shot at being a viable, smaller-scale competitor to Tesla,” with series production of its fiendishly fast electric car, the Air, then slated for 2019.

Here we are in 2020, and you still can’t buy the Air. But now all systems are go, insists Peter Rawlinson, the chief executive and chief technology officer of Lucid and, in a former incarnation, the chief engineer for the Tesla Model S. He credits last year’s infusion of US $1 billion from the sovereign wealth fund of Saudi Arabia.

Is Coronavirus Speeding the Adoption of Driverless Technology?

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/cars-that-think/transportation/self-driving/will-coronavirus-speed-adoption-of-driverless-technology

Neolix, a maker of urban robo-delivery trucks, made an interesting claim recently. The Beijing-based company said orders for its self-driving delivery vehicles were soaring because the coronavirus epidemic had both cleared the roads of cars and opened the eyes of customers to the advantages of driverlessness. The idea is that when the epidemic is over, the new habits may well persist.

Neolix last week told Automotive News it had booked 200 orders in the past two months after having sold just 159 in the eight months before. And on 11 March, the company confirmed that it had raised US $29 million in February to fund mass production.

Of course, this flurry of activity could merely be coincidental to the epidemic, but Tallis Liu, the company’s manager of business development, maintains that it reflects changing attitudes in a time of plague.

“We’ve seen a rise in both acceptance and demand both from the general public and from the governmental institutions,” he tells IEEE Spectrum. The sight of delivery bots on the streets of Beijing is “educating the market” about “mobility as a service” and on “how it will impact people’s day-to-day lives during and after the outbreak.”

During the epidemic, Neolix has deployed 50 vehicles in 10 major cities in China to do mobile delivery and also disinfection service. Liu says that many of the routes were chosen because they include public roads that the lockdown on movement has left relatively empty. 

The company’s factory has a production capacity of 10,000 units a year, and most of the factory staff have returned to their positions, Liu adds. “Having said that, we are indeed facing some delays from our suppliers given the ongoing situation.”

Neolix’s delivery bots are adorable—a term this site once used to describe a strangely similar-looking rival bot from the U.S. firm Nuro. The bots are the size of a small car, and they’re each equipped with cameras, three 16-channel lidars, and one single-channel lidar. The low-speed version also has 14 ultrasonic short-range sensors; on the high-speed version, the ultrasonic sensors are supplanted by radars.

If self-driving technology benefits from the continued restrictions on movement in China and around the world, it wouldn’t be the first time that necessity had been the mother of invention. An intriguing example is furnished by a mere two-day worker’s strike on the London Underground in 2014. Many commuters, forced to find alternatives, ended up sticking with those workarounds even after Underground service resumed, according to a 2015 analysis by three British economists.

One of the researchers, Tim Willems of Oxford University, tells Spectrum that disruptions can induce permanent changes when three conditions are met. First, “decision makers are lulled into habits and have not been able to achieve their optimum (close to our Tube strike example).” Second, “there are coordination failures that make it irrational for any one decision maker to deviate from the status quo individually” and a disruption “forces everybody away from the status quo at the same time.” And third, the reluctance to pay the fixed costs required to set up a new way of doing things can be overcome under crisis conditions.

By that logic, many workers sent home for months on end to telecommute will stay on their porches or in their pajamas long after the all-clear signal has sounded. And they will vastly accelerate the move to online shopping, with package delivery of both the human and the nonhuman kind.

On Monday, New York City’s mayor, Bill de Blasio, said he was suspending his long-running campaign against e-bikes. “We are suspending that enforcement for the duration of this crisis,” he said. And perhaps forever.

Gill Pratt on “Irrational Exuberance” in the Robocar World

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/cars-that-think/transportation/self-driving/gill-pratt-of-toyota-on-the-irrational-exuberance-in-the-robocar-world

A lot of people in the auto industry talked for way too long about the imminent advent of fully self-driving cars. 

In 2013, Carlos Ghosn, now very much the ex-chairman of Nissan, said it would happen in seven years. In 2016, Elon Musk, then chairman of Tesla, implied his cars could basically do it already. In 2017 and right through early 2019, GM’s Cruise was talking about 2019. And Waymo, the company with the most to show for its efforts so far, is speaking in more measured terms than it did just a year or two ago.

It’s all making Gill Pratt, CEO of the Toyota Research Institute in California, look rather prescient. A veteran roboticist who joined Toyota in 2015 with the task of developing robocars, Pratt from the beginning emphasized just how hard the task would be and how important it was to aim for intermediate goals—notably by making a car that could help drivers now, not merely replace them at some distant date.

That helpmate, called Guardian, is set to use a range of active safety features to coach a driver and, in the worst cases, to save him from his own mistakes. The more ambitious Chauffeur will one day really drive itself, though in a constrained operating environment. The constraints on the current iteration will be revealed at the first demonstration at this year’s Olympic Games in Tokyo; they will certainly involve limits to how far afield and how fast the car may go.

Earlier this week, at TRI’s office in Palo Alto, Calif., Pratt and his colleagues gave Spectrum a walkaround look at the latest version of Chauffeur, the P4; it’s a Lexus with a package of sensors neatly merged into the roof. Inside are two lidars from Luminar, a stereo camera, a mono camera (just to zero in on traffic signs), and radar. At the car’s front and corners are small Velodyne lidars, hidden behind a grille or folded smoothly into small protuberances. Nothing more could be glimpsed, not even the electronics that no doubt filled the trunk.

Pratt and his colleagues had a lot to say on the promises and pitfalls of self-driving technology. The easiest to excerpt is their view on the difficulty of the problem.

“There isn’t anything that’s telling us it can’t be done; I should be very clear on that,” Pratt says. “Just because we don’t know how to do it doesn’t mean it can’t be done.”

That said, though, he notes that early successes (using deep neural networks to process vast amounts of data) led researchers to optimism. In describing that optimism, he does not object to the phrase “irrational exuberance,” made famous during the 1990s dot-com bubble.

It turned out that the early successes came in those fields where deep learning, as it’s known, was most effective, like artificial vision and other aspects of perception. Computers, long held to be particularly bad at pattern recognition, were suddenly shown to be particularly good at it—even better, in some cases, than human beings. 

“The irrational exuberance came from looking at the slope of the [graph] and seeing the seemingly miraculous improvement deep learning had given us,” Pratt says. “Everyone was surprised, including the people who developed it, that suddenly, if you threw enough data and enough computing at it, the performance would get so good. It was then easy to say that because we were surprised just now, it must mean we’re going to continue to be surprised in the next couple of years.”

The mindset was one of permanent revolution: The difficult, we do immediately; the impossible just takes a little longer. 

Then came the slow realization that AI not only had to perceive the world—a nontrivial problem, even now—but also to make predictions, typically about human behavior. That problem is more than nontrivial. It is nearly intractable. 

Of course, you can always use deep learning to do whatever it does best, and then use expert systems to handle the rest. Such systems use logical rules, input by actual experts, to handle whatever problems come up. That method also enables engineers to tweak the system—an option that the black box of deep learning doesn’t allow.

Putting deep learning and expert systems together does help, says Pratt. “But not nearly enough.”

Day-to-day improvements will continue no matter what new tools become available to AI researchers, says Wolfram Burgard, Toyota’s vice president for automated driving technology.

“We are now in the age of deep learning,” he says. “We don’t know what will come after—it could be a rebirth of an old technology that suddenly outperforms what we saw before. We are still in a phase where we are making progress with existing techniques, but the gradient isn’t as steep as it was a few years ago. It is getting more difficult.”

Velodyne Will Sell a Lidar for $100

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/cars-that-think/sensors/automotive-sensors/velodyne-will-sell-a-lidar-for-100

Velodyne claims to have broken the US $100 barrier for automotive lidar with its tiny Velabit, which it unveiled at CES earlier this month.

“Claims” is the mot juste because this nice, round dollar amount is an estimate based on the mass-manufacturing maturity of a product that has yet to ship. Such a factoid would hardly be worth mentioning had it come from any of the several score odd lidar startups that haven’t shipped anything at all. But Velodyne created this industry back during the DARPA-funded competitions, and has been the market leader ever since.

Damon’s Hypersport AI Boosts Motorcycle Safety

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/cars-that-think/transportation/self-driving/damons-hypersport-motorcycle-safety-ai

For all its pure-electric acceleration and range and its ability to shapeshift, the Hypersport motorcycle shown off last week at CES by Vancouver, Canada-based Damon Motorcycles matters for just one thing: It’s the first chopper swathed in active safety systems.

These systems don’t take control, not even in anticipation of a crash, as they do in many advanced driver assistance systems in cars. They leave a motorcyclist fully in command while offering the benefit of an extra pair of eyes.

Why drape high tech “rubber padding” over the motorcycle world? Because that’s where the danger is: Motorcyclists are 27 times more likely to die in a crash than are passengers in cars.

“It’s not a matter of if you’ll have an accident on a motorbike, but when,” says Damon chief executive Jay Giraud. “Nobody steps into motorbiking knowing that, but they learn.”

The Hypersport’s sensor suite includes cameras, radar, GPS, solid-state gyroscopes, and accelerometers. It does not include lidar—“it’s not there yet,” Giraud says—but it does open the door a crack to another way of seeing the world: wireless connectivity.

The bike’s brain notes everything that happens when danger looms, including warnings issued and evasive maneuvers taken, then shunts the data to the cloud via 4G wireless. For now that data is processed in batches, to help Damon refine its algorithms, a practice common among self-driving car researchers. Some day, it will share such data with other vehicles in real time, a strategy known as vehicle-to-everything, or V2X.

But not today. “That whole world is 5 to 10 years away—at least,” Giraud grouses. “I’ve worked on this for over a decade—we’re no closer today than we were in 2008.”

The bike has an onboard neural net whose settings are fixed at any given time. When the net up in the cloud comes up with improvements, these are sent as over-the-air updates to each motorcycle. The updates have to be approved by each owner before going live onboard. 
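
In outline, that flow looks something like the sketch below. The function names, the checksum scheme, and the approval prompt are assumptions made for illustration; Damon has not published its implementation.

    import hashlib

    # Sketch of the over-the-air flow described above: the cloud publishes new
    # network weights, but nothing goes live until the owner says yes. The
    # function names and checksum scheme are assumptions for illustration.

    def owner_approves(release_notes: str) -> bool:
        answer = input(f"{release_notes}\nInstall this update? [y/N] ")
        return answer.strip().lower() == "y"

    def apply_update(current_weights: bytes, update: dict) -> bytes:
        payload = update["weights"]
        if hashlib.sha256(payload).hexdigest() != update["sha256"]:
            raise ValueError("corrupted download, keeping current weights")
        if not owner_approves(update["notes"]):
            return current_weights   # the fixed onboard net stays as it is
        return payload               # the new net goes live on the next ride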

When the AI senses danger it gives warning. If the car up ahead suddenly brakes, the handlebars shake, warning of a frontal collision. If a vehicle coming from behind enters the biker’s blind spot, LEDs flash. That saves the rider the trouble of constantly having to look back to check the blind spot.

Above all, it gives the rider time. A 2018 report by the National Highway Traffic Safety Administration found that from 75 to 90 percent of riders in accidents had less than three seconds to notice a threat and try to avert it; 10 percent had less than one second. Just an extra second or two could save a lot of lives.

The patterns the bike’s AI teases out from the data are not always comparable to those a self-driving car would care about. A motorcycle shifts from one half of a lane to the other; it leans down, sometimes getting fearsomely close to the pavement; and it is often hard for drivers in other vehicles to see.

One motorbike-centric problem is the high risk a biker takes just by entering an intersection. Some three-quarters of motorcycle accidents happen there, and of that number about two-thirds are caused by a car’s colliding from behind or from the side. The side collision, called a T-bone, is particularly bad because there’s nothing at all to shield the rider.

Certain traffic patterns increase the risk of such collisions. “Patterns that repeat allow our system to predict risk,” Giraud says. “As the cloud sees the tagged information again and again, we can use it to make predictions.”

Damon is taking pre-orders, but it expects to start shipping in mid-2021. Like Tesla, it will deliver straight to the customer, with no dealers to get in the way.

Seeing Around the Corner With Lasers—and Speckle

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/cars-that-think/sensors/automotive-sensors/seeing-around-corner-lasers-speckle

Researchers from Rice, Stanford, Princeton, and Southern Methodist University have developed a new way to use lasers to see around corners that beats the previous technique on resolution and scanning speed. The findings appear today in the journal Optica.

The U.S. military—which funded the work through DARPA grants—is interested for obvious reasons, and NASA wants to use it to image caves, perhaps doing so from orbit. The technique might one day also let rescue workers peer into earthquake-damaged buildings and help self-driving cars navigate tricky intersections.

One day. Right now it’s a science project, and any application is years away. 

The original way of corner peeping, dating to 2012, studies the time it takes laser light to go to a reflective surface, onward to an object and back again. Such time-of-flight measurement requires hours of scanning time to produce a resolution measured in centimeters. Other methods have since been developed that look at reflected light in an image to infer missing parts.
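
The arithmetic behind that approach is simple, even if the engineering is not; the numbers below are a back-of-envelope illustration rather than anything from the new paper.

    C = 299_792_458.0  # speed of light, meters per second

    def path_length_m(round_trip_seconds: float) -> float:
        """Total distance the light travelled: laser, wall, hidden object, wall, detector."""
        return C * round_trip_seconds

    # A photon that returns 20 nanoseconds after the pulse left has covered
    # about 6 meters of total path; resolving centimeters therefore means
    # timing each echo to within tens of picoseconds.
    print(f"{path_length_m(20e-9):.2f} m")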

The latest method looks instead at speckle, a shimmering interference pattern that in many laser applications is a bug; here it is a feature because it contains a trove of spatial information. To get the image hidden in the speckle—a process called non-line-of-sight correlography—involves a bear of a calculation. The researchers used deep-learning methods to accelerate the analysis. 

“Image acquisition takes a quarter of a second, and we’re getting sub-millimeter resolution,” says Chris Metzler, the leader of the project, who is a post-doc in electrical engineering at Stanford. He did most of the work while completing a doctorate at Rice.

The problem is that the system can achieve these results only by greatly narrowing the field of view.

“Speckle encodes interference information, and as the area gets larger, the resolution gets worse,” says Ashok Veeraraghavan, an associate professor of electrical engineering and computer science at Rice. “It’s not ideal to image a room; it’s ideal to image an ID badge.”

The two methods are complementary: Time-of-flight gets you the room, the guy standing in that room, and maybe a hint of a badge. Speckle analysis reads the badge. Doing all that would require separate systems operating two lasers at different wavelengths, to avoid interference.

Today corner peeping by any method is still “lab bound,” says Veeraraghavan, largely because of interference from ambient light and other problems. To get the results that are being released today, the researchers had to work from just one meter away, under ideal lighting conditions. A possible way forward may be to try lasers that emit in the infrared.

Another intriguing possibility is to wait until the wireless world’s progress to ever-shorter wavelengths finally hits the millimeter band, which is small enough to resolve most identifying details. Cars and people, for instance.

Startup Offers a Silicon Solution for Lithium-Ion Batteries

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/energywise/green-tech/fuel-cells/startup-advano-silicon-lithium-ion-batteries

You could cram a lot more energy into a lithium-ion battery if you replaced the graphite in the anode with silicon. Silicon has about 10 times the storage capacity.

But silicon bloats during charging, and that can break a battery’s innards. Graphite hardly swells at all because during charging it slips incoming lithium ions between its one-atom-thick layers. That process, called intercalation, involves no chemical change whatever. But it’s different with silicon: Silicon and lithium combine to form lithium silicide, a bulky alloy. 
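
That factor of 10 is easy to check on the back of an envelope with Faraday’s law, using the textbook lithiation limits (LiC6 for graphite, roughly Li15Si4 for silicon). The figures below are theoretical values, not anything measured by Advano.

    F = 96485.0  # Faraday constant, coulombs per mole of electrons

    def capacity_mah_per_g(electrons: float, molar_mass_g: float) -> float:
        """Theoretical specific capacity from Faraday's law, in mAh per gram of host."""
        return electrons * F / molar_mass_g / 3.6   # 1 mAh = 3.6 coulombs

    # Graphite holds one lithium per six carbons (LiC6): 1 electron per 72.07 g.
    graphite = capacity_mah_per_g(1.0, 6 * 12.011)
    # Fully lithiated silicon is roughly Li15Si4: 3.75 electrons per 28.09 g.
    silicon = capacity_mah_per_g(3.75, 28.086)

    print(f"graphite {graphite:.0f} mAh/g, silicon {silicon:.0f} mAh/g, "
          f"ratio {silicon / graphite:.1f}x")   # about 372 vs 3,580 mAh/g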

A lot of companies are trying to sidestep the problem by fine-tuning the microstructure of silicon sheets or particles, typically by building things from the bottom up. We’ve written about a few of them, including Sila Nanotechnologies, Enovix, and XNRGI.

Now comes Advano, a startup in New Orleans that champions a top-down approach. The point is to produce silicon particles that are perhaps not so fine-tuned, but still good enough, and to produce those little dots in huge quantities.

Aeva Unveils Lidar on a Chip

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/cars-that-think/transportation/sensors/lidar-on-a-chip-at-last

There are scores of lidar startups. Why read about another one? 

It wouldn’t be enough to have the backing of a big car company, though Aeva has that. In April the company announced a partnership with Audi, and today it’s doing so with Porsche—the two upscale realms of the Volkswagen auto empire. 

Nor is it enough to claim that truly driverless cars are just around the bend. We’ve seen that promise made and broken many times. 

What makes this two-year-old company worth a look-see is its technology, which is unusual both for its miniaturization and for the way in which it modulates its laser beam. 

Lithium-Sulfur Battery Project Aims to Double the Range of Electric Airplanes

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/energywise/aerospace/aviation/lithiumsulfur-battery-project-aims-to-double-the-range-of-electric-airplanes

When General Motors briefly wowed the world with its EV1 electric car, back in the 1990s, it relied on lead-acid batteries that packed a piddling 30 to 40 watt-hours per kilogram. The project eventually died, in part because that metric was so low (and the cost was so high).

It was the advent of new battery designs, above all the lithium-ion variant, that launched today’s electric-car wave. Today’s Tesla Model 3’s lithium-ion battery pack has an estimated 168 Wh/kg. And important as this energy-per-weight ratio is for electric cars, it’s more important still for electric aircraft.

Now comes Oxis Energy, of Abingdon, UK, with a battery based on lithium-sulfur chemistry that it says can greatly increase the ratio, and do so in a product that’s safe enough for use even in an electric airplane. Specifically, a plane built by Bye Aerospace, in Englewood, Colo., whose founder, George Bye, described the project in this 2017 article for IEEE Spectrum.

A Radar to Watch You in Your Car

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/cars-that-think/transportation/sensors/vayyar-promises-a-radar-for-inside-the-car-as-well-as-outside-it

Vayyar says there are at least four good reasons to monitor passengers with radar instead of cameras

Used to be, when we said the walls had ears, it meant there were microphones hidden in them. Now, when we say the walls have eyes, will it mean they have radar?

Maybe so, at least in your car. Vayyar Imaging, a firm based in Tel Aviv, says it has a radar chip that can form a three-dimensional view of what’s going on inside a car as well as outside of it. Right now, though, it’s concentrating on the inside, because there’s a regulatory push to have in-cabin monitoring in place in the early 2020s, and Vayyar thinks it has a head start.

Everyone else seems to be banking on cameras, including infrared cameras, to do this job. You can buy such a system right now: the Cadillac CT6 Super Cruise. That car can drive itself for extended periods, but it uses a camera to scrutinize the driver for signs of distraction or fatigue to make sure that, if a problem comes up that the system can’t solve, it can safely hand control back to the human.

DeepMind Deploys Self-taught Agents To Beat Humans at Quake III

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/deep-mind-gets-software-agents-to-work-together-to-beat-a-multiplayer-video-game

Without instructions, software agents learn how to crush human players at “Capture the Flag” in Quake III Arena

Chess and Go were originally developed to mimic warfare, but they do a bad job of it. War and most other competitions generally involve more than one opponent and more than one ally, and the play typically unfolds not on an orderly, flat matrix but in a variety of landscapes built up in three dimensions.

That’s why Alphabet’s DeepMind, having crushed chess and Go, has now tackled the far harder challenge posed by the three-dimensional, multiplayer, first-person video game. Writing today in Science, lead author Max Jaderberg and 17 DeepMind colleagues describe how a totally unsupervised program of self-learning allowed software to exceed human performance in playing “Quake III Arena.” The experiment involved a version of the game that requires each of two teams to capture as many of the other team’s flags as possible.

The teams begin at base camps set at opposite ends of a map, which is generated at random before each round. Players roam about, interacting with buildings, trees, hallways, and other features on the map, as well as with allies and opponents. They try to use their laser-like weapons to “tag” members of the opposing team; a tagged player must drop on the spot any flag he might have been carrying and return to his team’s base.

DeepMind represents each player with a software agent that sees the same screen a human player would see. The agents have no way of knowing what other agents are seeing; again, this is a much closer approximation of real strategic contests than most board games provide. Each agent begins by making choices at random, but as evidence trickles in over successive iterations of the game, it is used in a process called reinforcement learning. The result is to cause the agent’s behavior to converge on a purposeful behavior pattern, called a “policy.” 

Each agent develops its policy on its own, which means it can specialize a bit. However, there’s a limit: After every 1000 iterations of play the system compares policies and estimates how well the entire team would do if it were to mimic this or that agent. If one agent’s winning chances turn out to be less than 70 percent as high as another’s, the weaker agent copies the stronger one. Meanwhile, the reinforcement learning is itself tweaked by comparing it to other metrics. Such tweaking of the tweaker is known as meta-optimization.
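
In code, the copy step looks roughly like the sketch below, with plain numbers standing in for network weights and win-probability estimates; it is a simplification for illustration, not DeepMind’s published method.

    import random

    # Highly simplified sketch of the copy step: plain numbers stand in for
    # network weights, and the win probabilities are invented estimates rather
    # than results of actual matches.

    agents = [{"policy": random.random(), "win_prob": random.uniform(0.2, 0.8)}
              for _ in range(8)]

    def exploit_step(population):
        for agent in population:
            rival = random.choice(population)
            if agent["win_prob"] < 0.7 * rival["win_prob"]:
                agent["policy"] = rival["policy"]      # weaker copies stronger
                agent["win_prob"] = rival["win_prob"]  # and inherits its estimate

    exploit_step(agents)   # in the real system, run after every 1,000 games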

Agents start out as blank slates, but they do have one feature built into their way of evaluating things. It’s called a multi-time-scale recurrent neural network with external memory, and it keeps an eye not only on the score at the end of the game but also on the score at earlier points. The researchers note that “Reward purely based on game outcome, such as win/draw/loss signal…is very sparse and delayed, resulting in no learning. Hence, we obtain more frequent rewards by considering the game points stream.”
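
The contrast between the two reward signals can be put in a few lines; the event names and point values below are invented for illustration, not taken from the paper.

    # The win/draw/loss signal arrives only once, at the end of a long match,
    # so the agents also learn from the running stream of in-game points.
    # Event names and point values here are invented for illustration.

    def sparse_reward(outcome: str) -> float:
        return {"win": 1.0, "draw": 0.0, "loss": -1.0}[outcome]

    def dense_rewards(point_events):
        """One reward per scoring event, available long before the match ends."""
        return [points for _, points in point_events]

    events = [("tag_opponent", 1.0), ("pick_up_flag", 1.0), ("capture_flag", 5.0)]
    print(dense_rewards(events), sparse_reward("win"))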

The program generally beats human players when starting from a randomly generated position. Even after the humans had practiced for a total of 12 hours, they were still able to win just 25 percent of the games, drawing 6 percent of the time and losing the rest.

However, when two professional game testers were given a particularly complex map that had not been used in training and were allowed to play games on that map against two software agents, the pros needed just 6 hours of training to come out on top. This result was not described in the Science paper but in a supplementary document made available to the press. The pros used their in-depth study of the map to identify the routes that the agents preferred and to work out how to avoid those routes.

So for the time being people can still beat software in a well-studied set-piece battle. Of course, real life rarely provides such opportunities. Robert E. Lee got to fight the Battle of Gettysburg just one time.

Bosch to Sell Low-Cost Sensors for Flying Cars

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/cars-that-think/transportation/sensors/bosch-adapts-automotive-sensors-for-use-air-taxis

Bosch expects the first flying taxi service to take off in a major city by 2023

Bosch today said it plans to sell a universal control unit for flying cars that combines dozens of sensors that have been proven in cars on the ground. 

“The first flying taxis are set to take off in major cities starting in 2023, at the latest,” Harald Kröger, president of the Bosch Automotive Electronics division, said in a statement. “Bosch plans to play a leading role in shaping this future market.” 

Among the many sensors in the universal, plug-and-play unit are MEMS-based acceleration and yaw-rate sensors, which can be used to measure the angle of attack—that is, the plane’s angle with respect to the oncoming air. This was the quantity that was mismeasured by the sensors and misinterpreted by the control unit of the Boeing 737 Max, contributing to the two crashes of that airliner.