Tag Archives: transportation

Small Japanese Town to Test First Autonomous Amphibious Bus

Post Syndicated from John Boyd original https://spectrum.ieee.org/cars-that-think/transportation/self-driving/small-japanese-town-to-test-first-autonomous-amphibious-bus

No longer a rare sight, amphibious buses can now be found making a splash around the globe by providing tourists with a different view of local attractions. Even Naganohara, a small town in Gunma Prefecture, Japan, population 5,600, operates an amphibious tourist bus daily in and alongside the Yanba Dam nine months of the year.

And that’s the problem—the experience is becoming less of a thrill with each passing year. So the town, an hour’s train journey northwest of Tokyo, hit on the idea of making the amphibious bus self-driving.

A consortium that includes the Japan Amphibious Vehicle Association, Saitama Institute of Technology (SIT), ITbook Holdings, ABIT Corporation, and the town of Naganohara will begin in December what are billed as the world’s first trials of an autonomous amphibious bus. Supported primarily by the Nippon Foundation, the consortium has secured some 250 million yen ($2.4 million) to fund the project through March next year, after which progress will be evaluated and further funding is expected to be made available.

The amphibious bus, the property of the town, combines a converted truck design with a ship’s bottom and carries 40 passengers. It uses the truck’s diesel engine on land and a separate ship engine to travel on the water at 3.6 knots.

SIT is developing the self-driving technologies for both land and water that are based on the open-source Autoware platform for autonomous cars, and on controllers for modified Joy Cars. 

“Joy Cars are joystick-controlled cars for disabled people that have been retrofitted with actuators and a joystick controller system,” says Daishi Watabe, director of the Center for Self-Driving Technologies at SIT, who is heading the Yanba Smart Mobility Project. “They were developed through an industry-SIT collaboration.”

Tatsuma Okubo, a general manager at ITbook Holdings, is the project’s manager and describes the autonomous technology set-up as follows. A PC with the Autoware software installed takes in data from the various sensors including Lidar, cameras, and the Global Navigation Satellite System. The software uses a controller area network (CAN bus) to communicate the data to a vehicle motion controller that in turn controls two Joystick-controlled Joy System sub-control units: one for steering and one for accelerating and braking.
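
In essence, the link between the Autoware PC and the vehicle motion controller comes down to packing planned steering and drive commands into CAN frames. The sketch below illustrates the idea with the python-can library; the arbitration IDs, scaling, and payload layout are hypothetical, not the project’s actual message set.

```python
# Minimal sketch: forwarding planned steering and drive commands to the
# vehicle motion controller over a CAN bus. The arbitration IDs, scaling,
# and payload layout are hypothetical, not the project's actual message set.
import can

STEERING_ID = 0x101   # hypothetical ID for the steering sub-control unit
DRIVE_ID = 0x102      # hypothetical ID for the acceleration/braking unit

def to_frame(arbitration_id: int, value: float) -> can.Message:
    """Pack a normalized command in [-1.0, 1.0] into a 16-bit signed payload."""
    raw = int(max(-1.0, min(1.0, value)) * 32767)
    return can.Message(arbitration_id=arbitration_id,
                       data=raw.to_bytes(2, "big", signed=True),
                       is_extended_id=False)

def send_commands(bus: can.BusABC, steering: float, drive: float) -> None:
    """Send one steering frame and one accelerate/brake frame."""
    bus.send(to_frame(STEERING_ID, steering))
    bus.send(to_frame(DRIVE_ID, drive))

if __name__ == "__main__":
    # 'socketcan' is the usual Linux interface; the channel name depends on the hardware.
    with can.interface.Bus(channel="can0", bustype="socketcan") as bus:
        send_commands(bus, steering=0.1, drive=0.3)
```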

“Basically, our autonomous bus system substitutes voltage data from the joystick interface with voltage data from the Autoware electronic unit,” says Watabe. “We are developing two sets of remodeled Joy Car actuators for retrofitting in the Naganohara amphibious bus—one set for use on land, the other for water, which are remodeled land actuators.”

He says the autonomous control system will manage four major areas of control: recognizing where the vehicle enters and exits the water; sensor stabilization to counter the ship’s rolling; self-localization techniques to manage changes in the surrounding 3D views, given that the water height in the dam can change dramatically; and a sonar-based obstacle-avoidance scheme. AI will also be used to assist in obstacle detection, self-localization, and path planning.

When the dam was created, buildings and trees were left as they were.

“Given the height of the lake can change as much as 30 meters, we have to recognize underwater obstacles and driftwood to avoid any collisions,” says Watabe. “But because visibility in the water is poor, cameras are not suitable. And Lidar doesn’t function well underwater. So we need to use sonar sensors for obstacle detection and path planning.”

What’s more, 3D views change according to the water level, while Lidar has no surrounding objects to reflect from when the bus is in the middle of the lake. “This means a simple scan-matching algorithm is not sufficient for self-localization,” explains Watabe. “So we’ll also use global navigation satellite data enhanced through real-time kinematic positioning and a gyro-based localization scheme.”
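
In practice, that means falling back on satellite positioning and inertial dead reckoning when scan matching degrades. The sketch below shows one simple way to blend a gyro-integrated heading with RTK-corrected GNSS fixes using a complementary-filter-style correction; the gains, update rate, and motion model are illustrative assumptions, not the project’s actual design.

```python
# Minimal sketch: blending RTK-GNSS position fixes with gyro dead reckoning.
# Gains, rates, and the motion model are illustrative assumptions.
import math
from dataclasses import dataclass

@dataclass
class State:
    x: float        # east position, meters
    y: float        # north position, meters
    heading: float  # radians, measured from east

def predict(s: State, speed: float, yaw_rate: float, dt: float) -> State:
    """Dead-reckon forward using the gyro yaw rate and measured speed."""
    heading = s.heading + yaw_rate * dt
    return State(s.x + speed * math.cos(heading) * dt,
                 s.y + speed * math.sin(heading) * dt,
                 heading)

def correct(s: State, gnss_x: float, gnss_y: float, gain: float = 0.2) -> State:
    """Nudge the dead-reckoned position toward the latest RTK-GNSS fix."""
    return State(s.x + gain * (gnss_x - s.x),
                 s.y + gain * (gnss_y - s.y),
                 s.heading)

if __name__ == "__main__":
    state = State(0.0, 0.0, 0.0)
    state = predict(state, speed=1.85, yaw_rate=0.01, dt=0.1)  # 1.85 m/s is about 3.6 knots
    state = correct(state, gnss_x=0.20, gnss_y=0.01)
    print(state)
```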

The biggest difficulty the project faces, according to Okubo, is the short construction period available, as they only have the off-season—December to March—this year and next to install and field test autonomous functionality.

Another challenge: Because winds and water flows can affect vehicle guidance, subtle handling of the vehicle is required when entering and exiting the water to ensure the underwater guardrails do not cause damage. Consequently, the group is developing a precise control system to govern the rudder and propulsion system.

“We’ll install and fine-tune the autonomous functionality during two off-season periods in 2020-21 and 2021-22,” says Watabe.  “Then the plan is to conduct field tests with the public in February and March 2022.”

Besides tourism, Okubo says the technology has a huge potential to “revolutionize logistics” to Japan’s remote islands, which are facing a survival crisis due to declining populations. As an example, he says only a single driver (or no driver once full automation is introduced) would be necessary during goods transshipments to such islands. This should reduce costs and enable more frequent operations.  

Wireless Charging Tech to Keep EVs on the Go

Post Syndicated from Lawrence Ulrich original https://spectrum.ieee.org/cars-that-think/transportation/advanced-cars/wireless-charging-tech-to-keep-evs-on-the-go

Norway, already a world leader in EV adoption, will soon mark a world’s first: An Oslo-based fleet of Jaguar I-Pace taxis that can charge wirelessly even as they queue up for passengers.

That inductive charging technology, developed by a former NASA engineer at Pennsylvania-based Momentum Dynamics, aims to solve perhaps the biggest disconnect in EVs: How to bring convenient charging to the urban masses—including apartment dwellers and drivers of taxis, buses, and delivery trucks—without clogging every inch of prime real estate with bulky, unsightly chargers. The conundrum becomes more pressing with the introduction of new electric models, and each additional government mandate for fewer fossil-fueled cars and lower carbon emissions.

A great example of that action to combat climate change is Oslo, whose ambitious ElectriCity plan will require that all taxis produce zero tailpipe emissions by 2024—effectively banning even gasoline-electric hybrid models. The result of punitive taxes on fossil-fueled cars and enticing incentives for electric models: 50 percent of Norway’s new cars are now EVs, a far higher percentage than in any other nation. Norway’s government has decreed that all new cars must be zero-emissions by 2025.

That carrot-and-stick urgency led to a partnership between Jaguar, Momentum Dynamics, Nordic taxi operator Cabonline, and charging company Fortum Recharge. The group aims to create the world’s first wireless-charging taxi fleet. To that end, Jaguar is equipping 25 I-Pace SUVs with Momentum Dynamics’ inductive charging pads. The pads, which are about 60 cm square, are rated at 50 to 75 kilowatts. As the cars work their way through taxi queues, the Jaguars will stop over a series of inductive coils embedded in the pavement. Using resonant magnetic coupling operating at 85 kilohertz, a charging pad will route enough energy to a taxi’s batteries to add about 80 kilometers of range for every 15 minutes spent hovering over the inductive coils—with no physical plugs or human hookup required.
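
Those figures are easy to sanity-check. A back-of-the-envelope calculation, assuming an I-Pace consumption of roughly 0.22 kilowatt-hours per kilometer (an assumed figure, not one supplied by Momentum Dynamics or Jaguar), brackets the quoted 80 kilometers:

```python
# Back-of-the-envelope check of the "80 km per 15 minutes" claim.
# The consumption figure is an assumption, not a number from the article.
pad_power_kw = (50, 75)          # rated pad power, kilowatts
dwell_hours = 15 / 60            # 15 minutes in the taxi queue
consumption_kwh_per_km = 0.22    # assumed I-Pace consumption

for p in pad_power_kw:
    energy_kwh = p * dwell_hours
    km_added = energy_kwh / consumption_kwh_per_km
    print(f"{p} kW for 15 min -> {energy_kwh:.1f} kWh -> ~{km_added:.0f} km of range")

# Roughly 57 km at 50 kW and 85 km at 75 kW, which brackets the
# "about 80 kilometers" figure quoted above.
```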

Rather than fill batteries to the brim, the idea is to replenish them in shorter bursts whenever the opportunity arises.

“The concept is grazing, rather than guzzling,” says Andrew Daga, chief executive of Momentum Dynamics. “You just keep adding energy back in as you need it.”

Daga argues that current charging models—including the “one car, one plug” idea and the reliance on ever-more-potent DC chargers that can degrade battery life—are ultimately unworkable for congested cities, mobility fleets, and impatient drivers or riders.

“More frequent interactions with the grid are necessary,” Daga said. “It’s all about thinking differently about how fueling is going to be done in a world of electric vehicles.”

The company’s breakthrough was born from outer space and inner snow. Daga’s co-founder, Bruce Long, who died in 2018, was an electrical engineering professor at Bucknell University. During Antarctic expeditions to measure glacial activity for Penn State’s geophysics program, brutal elements inspired a wireless solution for recharging electronic equipment. Fine snow kept blowing into Long’s sensitive instruments whenever their cases were cracked open to replace batteries. Daga, who had worked on the 35-meter-long solar power arrays of NASA’s International Space Station, had already been envisioning ways to reduce their weighty aluminum cabling. That sparked the wireless energy transfer idea that’s the basis for Momentum Dynamics’ current project.

The company is also collaborating with China’s EV giant Geely, which also owns Volvo and Lotus. Momentum executives say they’ve also struck a deal with an unnamed European manufacturer to produce a wireless charging urban delivery truck.

This is just the latest turn in the road for Momentum Dynamics. In 2015, the company began proving its concept with electric bus trials in four U.S. cities. The transit trials featured a Chinese-built BYD bus in Wenatchee, Wash. that slurped energy from a charging pad installed along its route at a rate of 200 kilowatts. That’s on par with some of the fastest DC chargers, enough to “keep the bus in 24/7 operation, without ever going back to the garage” to recharge, Daga says.

Daga points out that taxi or ride-hailing drivers are genetically disposed to avoid downtime—whether that means waiting in line for gasoline pumps or detouring for lengthy charges at depots. With an inductive system, “they won’t lose a single minute of revenue time charging their vehicle.”

The company claims its technology delivers 94-percent charging efficiency, a figure it says holds steady as power scales to 200 or even 350 kilowatts. That’s a winning contrast with DC fast chargers, whose efficiency drops sharply at higher power because of resistive losses and the resulting heat, which demands liquid-cooled cables that themselves create more energy losses.

“It’s the perfect charging technology,” said Morgan Lind, chief operating officer of Recharge Infra, a division of Fortum. Recharge Infra tabbed Momentum Dynamics after learning it could deliver 50 kilowatts or more through a roughly 18-centimeter air gap between vehicle and pavement—a huge improvement over companies that promised no more than 11 kilowatts.

Backers cite several spin-off benefits. With the systems buried entirely underground, there are no chargers competing for prime parking spaces or sidewalks; no moving parts, and no worries about vandalism or damage from the elements; and no wired infrastructure, including the unsightly towers and arms used for electric buses, to pollute the view.

“It makes the experience of refueling invisible,” Daga said. “We could get clean cities and clean streets at the same time.”

Furthermore, says Daga, inductive systems will deliver a daisy chain of gains. With enough charging pads, they could keep vehicle batteries in a permanent “sweet spot” between 75 and 85 percent capacity, avoiding deep cycling that kills batteries before their time. Largely freed from range concerns, EVs could carry smaller battery packs, trimming their daunting weight and cost, while further boosting energy efficiency.

For taxi fleets or passenger cars, Momentum Dynamics is developing software to track even the briefest charging events and bill customers automatically, similar to an automated tolling system. The company has also developed a near-field communication system, which would allow autonomous cars to align and connect with charging pads. Bi-directional charging could let cars contribute supplementary power to the grid.

Lind says the Jaguar taxis should start running their meters, and their no-fuss chargers, by year’s end. Lind called Norway—with five million residents, but a determination to wean itself off of fossil-fueled vehicles—an ideal, if tiny, test bench.

“We are an extremely small country, but we see that we can be a guiding star to many other countries,” says Lind. “The avalanche of EVs is coming, and there’s no stopping it.”

Self-Driving Cars Learn to Read the Body Language of People on the Street

Post Syndicated from Casey Weaver original https://spectrum.ieee.org/transportation/self-driving/selfdriving-cars-learn-to-read-the-body-language-of-people-on-the-street

A four-lane street narrows to two to accommodate workers repairing a large pothole. One worker holds a stop sign loosely in his left hand as he waves cars through with his right. Human drivers don’t think twice about whether to follow the gesture or the sign; they move smoothly forward without stopping.

This situation, however, would likely stop an autonomous vehicle in its tracks. It would understand the stop sign and how to react, but that hand gesture? That’s a lot more complicated.

And drivers, human and computer, daily face this and far more complex situations in which reading body language is the key. Consider a city street corner: A pedestrian, poised to cross with the light, stops to check her phone and waves a right-turning car forward. Another pedestrian lifts a hand up to wave to a friend across the way, but keeps moving. A human driver can decode these gestures with a glance.

Navigating such challenges safely and seamlessly, without interrupting the flow of traffic, requires that autonomous vehicles understand the common hand motions used to guide human drivers through unexpected situations, along with the gestures and body language of pedestrians going about their business. These are signals that humans react to without much thought, but they present a challenge for a computer system that’s still learning about the world around it.

Autonomous-vehicle developers around the world have been working for several years to teach self-driving cars to understand at least some basic hand gestures, initially focusing on signals from cyclists. Generally, developers rely on machine learning to improve vehicles’ abilities to identify real-world situations and understand how to deal with them. At Cruise we gather that data from our fleet of more than 200 self-driving cars. These vehicles have logged hundreds of thousands of miles every year for the past seven years; before the pandemic hit, they were on the road around the clock, taking breaks only to recharge (our cars are all-electric) and for regular maintenance. Our cars are learning fast because they are navigating the hilly streets of San Francisco, one of the most complex driving environments in the United States.

But we realized that our machine-learning models don’t always have enough training data because our cars don’t experience important gestures in the real world often enough. Our vehicles need to recognize each of these situations from different angles and distances and under different lighting conditions—a combination of constraints that produce a huge number of possibilities. It would take us years to gain enough information on these events if we relied only on the real-world experiences of our vehicles.

We at Cruise found a creative solution to the data gap: motion capture (or mo-cap) of human gestures, a technique that game developers use to create characters. Cruise has been hiring game developers—including me—for expertise in simulating detailed worlds, and some of us took on the challenge of capturing data to use in teaching our vehicles to understand gestures.

First, our data-collection team set out to build a comprehensive list of the ways people use their bodies to interact with the world and with other people—when hailing a taxi, say, talking on a phone while walking, or stepping into the street to dodge sidewalk construction. We started with movements that an autonomous vehicle might misconstrue as an order meant for itself—for example, that pedestrian waving to a friend. We then moved on to other gestures made in close proximity to the vehicle but not directed at it, such as parking attendants waving cars in the lane next to the vehicle into a garage and construction workers holding up a sign asking cars to stop temporarily.

Ultimately, we came up with an initial list of five key messages that are communicated using gestures: stop, go, turn left, turn right, and what we call “no”—that is, common motions that aren’t relevant to a passing car, like shooting a selfie or removing a backpack. We used the generally accepted American forms of these gestures, assuming that cars will be driving on the right, because we’re testing in San Francisco.

Of course, the gestures people use to send these messages aren’t uniform, so we knew from the beginning that our data set would have to contain a lot more than just five examples. Just how many, we weren’t sure.

Creating that data set required the use of motion-capture technology. There are two types of mo-cap systems—optical and nonoptical. The optical version of mo-cap uses cameras distributed over a large gridlike structure that surrounds a stage; the video streams from these cameras can be used to triangulate the 3D positions of visual markers on a full-body suit worn by an actor. There are several variations of this system that can produce extremely detailed captures, including those of facial expressions. That’s the kind that allows movie actors to portray nonhuman characters, as in the 2009 movie Avatar, and lets the gaming industry record the movements of athletes for the development of sports-themed video games.

Optical motion capture, however, must be performed in a studio setting with a complex multicamera setup. So Cruise selected a nonoptical, sensor-based version of motion capture instead. This technology, which relies on microelectromechanical systems (MEMS), is portable, wireless, and doesn’t require dedicated studio space. That gives us a lot of flexibility and allows us to take it out of the studio and into real-world locations.

Our mo-cap suits each incorporate 19 sensor packages attached at key points of the body, including the head and chest and each hip, shoulder, upper arm, forearm, and leg. Each package is about the size of a silver dollar and contains an accelerometer, a gyroscope, and a magnetometer. These are all wired to a belt containing a battery pack, a control bus, and a Wi-Fi radio. The sensor data flows wirelessly to a laptop running dedicated software, which lets our engineers view and evaluate the data in real time.

We recruited five volunteers of varying body characteristics—including differences in height, weight, and gender—from the Cruise engineering team, had them put the suits on, and took them to places that were relatively free from electronic interference. Each engineer-actor began by assuming a T-pose (standing straight, with legs together and arms out to the side) to calibrate the mo-cap system. From there, the actor made one gesture after another, moving through the list of gestures our team had created from our real-world data. Over the course of seven days, we had these five actors run through this gesture set again and again, using each hand separately and in some cases together. We also asked our actors to express different intensities. For example, the intensity would be high for a gesture signaling an urgent stop to a car that’s driving too fast in a construction zone. The intensity would be lower for a movement indicating that a car should slow down and come to a gradual stop. We ended up with 239 thirty-second clips.

Then our engineers prepared the data to be fed into machine-learning models. First, they verified that all gestures had been correctly recorded without additional noise and that no incorrectly rotated sensors had provided bad data. Then the engineers ran each gesture sequence through software that identified the joint positions and orientations in each frame of the sequence. Because these positions were originally captured in three dimensions, the software could calculate multiple 2D perspectives of each sequence; that capability allowed us to expand our gesture set by incrementally rotating the points to simulate 10 different viewpoints. We created even more variations by randomly dropping various points of the body—to simulate the real-world scenarios in which something is hiding those points from view—and again incrementally rotating the remaining points to create different viewing angles.
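
Below is a minimal sketch of that style of augmentation: rotate the 3D joint positions through several yaw angles, project them to 2D, and randomly drop joints to mimic occlusion. The array shapes, angles, and drop rate are illustrative assumptions, not Cruise’s actual pipeline.

```python
# Minimal sketch of viewpoint and occlusion augmentation for mo-cap joints.
# Shapes, angles, and drop rates are illustrative, not Cruise's pipeline.
import numpy as np

def rotate_about_z(joints_3d: np.ndarray, yaw_rad: float) -> np.ndarray:
    """Rotate (num_frames, num_joints, 3) joint positions about the vertical axis."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return joints_3d @ rot.T

def project_to_2d(joints_3d: np.ndarray) -> np.ndarray:
    """Orthographic projection: keep the lateral (x) and vertical (z) coordinates."""
    return joints_3d[..., [0, 2]]

def drop_joints(joints_2d: np.ndarray, drop_prob: float, rng) -> np.ndarray:
    """Randomly occlude joints by replacing them with NaN."""
    mask = rng.random(joints_2d.shape[:-1]) < drop_prob
    out = joints_2d.copy()
    out[mask] = np.nan
    return out

def augment(sequence_3d: np.ndarray, num_views: int = 10, drop_prob: float = 0.1):
    """Yield 2D variants of one mo-cap sequence from several simulated viewpoints."""
    rng = np.random.default_rng(0)
    for k in range(num_views):
        yaw = 2 * np.pi * k / num_views
        view = project_to_2d(rotate_about_z(sequence_3d, yaw))
        yield drop_joints(view, drop_prob, rng)

if __name__ == "__main__":
    clip = np.random.default_rng(1).normal(size=(900, 19, 3))  # 30 s at 30 Hz, 19 joints
    variants = list(augment(clip))
    print(len(variants), variants[0].shape)  # 10 variants, each of shape (900, 19, 2)
```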

Besides giving us a broad range of gestures performed by different people and seen from different perspectives, motion capture also gave us remarkably clean data: The skeletal structure of human poses is consistent no matter what the style or color of clothing or the lighting conditions may be. This clean data let us train our machine-learning system more efficiently.

Once our cars are trained on our motion-captured data, they will be better equipped to navigate the various scenarios that city driving presents. One such case is road construction. San Francisco always has a plethora of construction projects under way, which means our cars face workers directing traffic very often. Using our gesture-recognition system, our cars will be able to maneuver safely around multiple workers while comprehending their respective hand gestures.

Take, for example, a situation in which three road workers are blocking the lane that a self-driving car was planning to take. One of the workers is directing traffic and the other two are assessing road damage. The worker directing traffic has a sign in one hand; it has eight sides like a stop sign but reads “SLOW.” With the other hand he motions to traffic to move forward. To cross the intersection safely, our self-driving vehicle will recognize the person as someone controlling traffic. The vehicle will correctly interpret his gestures to mean that it should shift into the other lane, move forward, and ignore the car that’s coming to a stop at the opposite side of the intersection but appears to have the right-of-way.

In another situation, our vehicles will realize that someone entering an intersection and ignoring the flashing “Don’t Walk” sign is in fact directing traffic, not a pedestrian crossing against the light. The car will note that the person is facing it, rather than presenting his side, as someone preparing to cross the street would do. It will note that one of the person’s arms is up and the other is moving so as to signal a vehicle to cross. It will even register assertive behavior. All these things together enable our car to understand that it can continue to move forward, even though it sees someone in the intersection.

Training our self-driving cars to understand gestures is only the beginning. These systems must be able to detect more than just the basic movements of a person. We are continuing to test our gesture-recognition system using video collected by our test vehicles as they navigate the real world. Meanwhile, we have started training our systems to understand the concept of humans carrying or pushing other objects, such as a bicycle. This is important because a human pushing a bicycle usually behaves differently from a human riding a bicycle.

We’re also planning to expand our data set to help our cars better understand cyclists’ gestures—for example, a left hand pointing up, with a 90-degree angle at the elbow, means the cyclist is going to turn right; a right arm pointing straight out means the same thing. Our cars already recognize cyclists and automatically slow down to make room for them. Knowing what their gestures mean, however, will allow our cars to make sure they give cyclists enough room to perform a signaled maneuver without stopping completely and creating an unnecessary traffic jam. (Our cars still look out for unexpected turns from cyclists who don’t signal their intent, of course.)

Self-driving cars will change the way we live our lives in the years to come. And machine learning has taken us a long way in this development. But creative use of technologies like motion capture will allow us to more quickly teach our self-driving fleet to better coexist in cities—and make our roads safer for all.

This article appears in the September 2020 print issue as “The New Driver’s Ed.”

About the Author

Casey Weaver is a senior engineering manager at Cruise, in San Francisco.

Lucid Air EV Crushes 500 Miles-On-a-Single-Charge Barrier

Post Syndicated from John Voelcker original https://spectrum.ieee.org/cars-that-think/transportation/advanced-cars/lucid-air-ev-500-miles-on-a-single-charge

If you’re an electric-car startup, you need an angle to make the world pay attention—especially if the media has portrayed you as a Tesla copycat for years. For Lucid Motors, that angle is simple: 500 miles of range—800 kilometers—on a single battery charge.

That would make its upcoming Lucid Air luxury electric sedan the longest-range electric vehicle in the world, way ahead of the 402-mile rating for the 2020 Tesla Model S Long Range Plus, today’s range champion.

The car itself will be revealed September 9, with production to begin within six months thereafter. The U.S. EPA won’t release official range estimates for the Lucid Air, however, until next spring, only a few weeks before the company says it will deliver its first production cars. So Lucid took the gamble of having its range tested by an independent third party with long experience in certifying cars to EPA test cycles.

On Tuesday, the EV maker announced independent verification of an estimated EPA range of 517 miles, based on tests conducted by FEV North America Inc. in Auburn Hills, Michigan. Those tests used the EPA’s multicycle test procedure, codified in the October 2012 SAE J1634 standard, the same one every maker uses when its vehicles are tested for emissions or battery range.

Your move, Tesla.

Lucid unveiled its Air electric luxury sedan in April 2017 but then had to go on hiatus for two years as it sought funding to take the design into production in a purpose-built factory in Arizona. As CEO Peter Rawlinson explained to Motor Trend earlier this year, funders at the time were excited by autonomous-vehicle technology as well as numerous other EV startups.

It took two more years for Lucid to raise enough funding to move into production. It ultimately received $1.3 billion from Saudi Arabia’s sovereign-wealth fund, in exchange for more than half the company, according to documents revealed in June.

The company laid off no engineers. And, in the intervening two years, the team changed the design of the Air’s battery cells, the pack voltage, the power electronics, and the electric motors that power the wheels—two motors on introduction, potentially three later on.

The 2017 Air prototype used a whopping 130-kilowatt-hour battery (against the 100 kWh of the highest-spec Model S) operating at 400 volts. It had AC induction motors and 150-kilowatt fast charging. Its range was promised to be “more than 400 miles.”

The car that is to go into production next spring will have a somewhat smaller, more energy-dense pack—Lucid isn’t giving numbers at the moment—operating at around 900 volts, a pair of permanent-magnet motors, and 350-kW fast charging. The range will be at least 517 miles, per the tests, and could go higher if it can get the same 0.75 EPA “adjustment factor” for efficient air-conditioning that let the Model S Long Range Plus squeak past the 400-mile mark. (Standard factor is 0.70.)
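
To see how much that adjustment factor matters, note that the label figure is essentially the unadjusted multicycle result scaled by the factor. The rough calculation below assumes the 517-mile estimate used the standard 0.70 factor; the unadjusted number is inferred for illustration, not disclosed by Lucid.

```python
# Rough illustration of how the EPA adjustment factor scales a range rating.
# Assumes the 517-mile estimate was computed with the standard 0.70 factor;
# the unadjusted figure is inferred here, not disclosed by Lucid.
estimated_label_miles = 517
standard_factor = 0.70
improved_factor = 0.75   # the factor the Model S Long Range Plus earned

unadjusted_miles = estimated_label_miles / standard_factor
print(f"Implied unadjusted multicycle result: {unadjusted_miles:.0f} miles")
print(f"Rating with a {improved_factor} factor: {unadjusted_miles * improved_factor:.0f} miles")
# About 739 miles unadjusted, and roughly 554 miles with the higher factor.
```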

The changes, as CEO Peter Rawlinson told Spectrum several months ago, were all in the service of efficiency—getting as much range and performance out of every kilowatt-hour by reducing air resistance, wasted energy due to resistance, and mechanical friction in every aspect of the car. To do that, he told us this week, an EV company has to engineer not only the car but every facet of its electric powertrain.

The most visible aspect of an EV’s efficiency is its design. The Lucid Air offers a longer cabin and more legroom than the long-wheelbase version of the Mercedes-Benz S-Class, Rawlinson says, despite having the footprint of a mid-size car like a Honda Accord. While it “reads” big visually, he notes that it’s smaller not only than the Tesla Model S but even the Porsche Taycan, which has far less interior volume.

The smaller footprint and low height give the Air a smaller frontal area than its competitors, which, multiplied by the lowest drag coefficient in its class (0.21), means it needs less energy to push through the air than other EVs. (Note that each maker does its own Cd testing, so drag coefficients from different brands may not be directly comparable.)
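
It is the product of drag coefficient and frontal area that sets the aerodynamic load, as the quick calculation below illustrates. The frontal area and speed used here are assumptions chosen only to show the relationship; Lucid has not published a frontal-area figure in this context.

```python
# Aerodynamic drag power from the standard drag equation:
#   F = 0.5 * rho * Cd * A * v^2,  P = F * v
# The frontal area and speed are illustrative assumptions.
rho = 1.2           # air density, kg/m^3
cd = 0.21           # drag coefficient cited for the Lucid Air
frontal_area = 2.3  # assumed frontal area, m^2
speed = 105 / 3.6   # 105 km/h expressed in m/s

drag_force = 0.5 * rho * cd * frontal_area * speed ** 2
drag_power_kw = drag_force * speed / 1000
print(f"Drag force: {drag_force:.0f} N, drag power: {drag_power_kw:.1f} kW")
# Roughly 250 N and about 7 kW at highway speed; a 10 percent larger
# Cd*A product would cost roughly 10 percent more power at the same speed.
```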

One of Rawlinson’s passions is the necessity for any electric-car maker to engineer all the critical components of its powertrain in-house: battery pack, motors, inverters, transmission, and software. “Every detail adds up,” he stressed, returning to the theme multiple times over the course of an hour-long chat with Spectrum.

He laid out the advantages he feels Lucid has over every other EV maker—in each area.

Battery Pack

Despite having no cars in production just yet, Lucid already has a trove of data on the performance of its batteries under extreme duty cycles. That comes from the experience its parent company Atieva garnered when it provided battery packs to all 24 cars fielded by the competing teams in the Formula E electric-car racing series.

The revised pack that will launch in the Air operates at “significantly over 900 volts,” permitting fast charging at up to 350 kilowatts and reducing resistance heating. Like other packs, its cells are packaged into modules (or “bricks,” as Rawlinson calls them) that make up the building blocks of the pack.

Although Lucid isn’t giving details about the cell chemistry it jointly developed with battery partner LG Chem, it uses thousands of 21700-format cylindrical cells (as Tesla does). Lucid and LG Chem together engineered a specific cell architecture with lower-resistance connectors, again in the service of reducing energy wasted as heat due to resistance.

Rawlinson lauded LG Chem as a long-term and dedicated partner. And, he added, that company’s global cell-production capabilities relieved him of that responsibility. “Thank goodness I don’t have to build a gigafactory!”

Traction Motors

One of the secrets to the Lucid’s packaging is an “ultracompact” motor, one Rawlinson says produces 55 kW per liter of volume, an output density he says is two to three times that of any other maker’s motor. One of the secrets to its efficiency is what he describes only as a novel (and heavily patented) method of cooling that puts liquid closer to the core of the motor, unlike traditional designs with coolant circulating around the periphery.

The Lucid motor spins at up to 20,000 rpm, with unspecified breakthroughs in the magnetic design to reduce cogging losses—the torque produced by a permanent-magnet motor due to the interaction of the magnets and the stator slots, especially noticeable at low speeds. Again, the CEO was cagey about exactly what the design advances were. More details may emerge next month when the car is revealed.

Inverters

While Lucid’s use of silicon-carbide MOSFETs for switching doesn’t do much for the EPA test cycles that determine its range rating, Rawlinson suggests they reinforce range in the much broader array of uses in real-world driving. And he spoke sadly of a (non-Tesla) competitor’s use of a third-party inverter, noting that mixing high-voltage insulated-gate bipolar transistor (IGBT) components with MOSFETs wasted energy.

Transmission 

Rawlinson set his design team to develop the e-motors and the transmission as a “single rotational system” to ensure the most integrated and compact device possible. The gearbox hasn’t been shown yet, and the company has applied for multiple patents.

One example Rawlinson was willing to discuss, however, was the 3-dimensional shape of the gear teeth themselves—an area few makers spend much time on. Lucid has patented the involute profile of the gear teeth within its transmission, optimized for performance and efficiency to transmit maximum torque under power while spinning with the least resistance otherwise.

Software

“We are as much a software company as a hardware company,” Rawlinson cheerfully admits, echoing some investors’ mantra (both pro and con) for Tesla. From the control strategies for each component of the powertrain to a user interface he promises will be “revolutionary,” the creation and optimization of the Lucid Air’s software could be the topic of a whole series of articles by itself. We left the specifics of that topic for another day.

Before writing off Lucid as just another Tesla-wannabe, it’s worth noting CEO Rawlinson has been here before. Not only was he part of the group that created the 2002 Jaguar XJ, one of the earliest all-aluminum luxury sedans, but he led the team that engineered the actual 2012 Tesla Model S and put it into production.

I first interviewed Rawlinson in January 2011, when a Model S body-in-white on display at the Detroit Auto Show was drawing a steady throng of visitors. That was the first time that other car makers began to think Tesla might actually know what it was doing.

Back then Rawlinson was so exhausted he was sleeping under a conference table between interviews. But he was articulate and passionate about the challenges of building the world’s first serious long-range electric car—at a time, remember, when the then-new Nissan Leaf offered an EPA rating of 73 miles.

This week, Rawlinson was equally articulate, if anything more passionate, and rather better-rested. His entire mantra for the Lucid Air was to rethink every aspect of efficiency, to dispense with anxieties over battery range once and for all.

“I want to move from range anxiety to range confidence,” he said. Indeed, an EPA-approved range of more than 500 miles should really end discussion over how far electric cars can travel, since it’s the rare driver whose bladder will permit that range to be used.

The Lucid Air, he said, targets not the Tesla Model S—the de facto assumption by much of the press—but the Mercedes-Benz S-Class, the pinnacle of luxury sedans built and sold in high volume. In 2012, he said, few S-Class owners would have considered an electric car. Now, Tesla has proven the concept, and they’re willing to make the leap. But they won’t leave for a vehicle as stark as a Tesla, he said, so that’s where Lucid is starting.

The company’s goal, he said, is to provide EVs that deliver enough range to end that concern for good. Then, it will launch versions with smaller, less-expensive batteries that might provide “only” 350 miles of range. And, he notes, the company’s battery packs are carefully designed for high-volume assembly, its motors engineered for mass CNC production.

The Lucid CEO sounds most excited about the possibility of using the company’s ultra-efficient powertrain in smaller vehicles that would, he says, “cascade into the mass market, so the man in the street can buy an electric car” with no compromises.

“That’s what drives me,” he said. The Tesla Model S? “Not a bad first effort, not bad.”

Will Camera Startup Light Give Autonomous Vehicles Better Vision than Lidar?

Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/view-from-the-valley/transportation/sensors/will-camera-startup-light-give-autonomous-vehicles-better-vision-than-lidar

In 2013, Rajiv Laroia and Dave Grannan started Light, a company that aimed to disrupt the tiny camera market. The ultimate goal was to provide designs, circuitry, and software to mobile device manufacturers so that smartphones could capture images at qualities that rivaled those taken with bulky, expensive, professional camera lenses. But it turns out the best use of Light’s technology might lie not in taking better snapshots but in helping cars see better.

The technology is built around using an array of inexpensive lenses of varying focal lengths and advanced digital signal processing. Light showcased its approach by releasing a standalone camera—the 16-lens L16—in 2017, and sold out of its initial production run; the number of units was never made public. 

Part of the magic of Light’s images is the ability to select or change the point of focus after the fact. The multiple camera modules, set slightly apart, also mean that Light cameras can determine a depth value for each pixel in the scene, allowing the software to create a three-dimensional map of the objects in the image.
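
That per-pixel depth comes from simple triangulation: two modules a known distance apart see the same point at slightly different image positions, and the size of that shift, the disparity, maps directly to distance. Below is a minimal sketch of the relationship, using made-up focal-length and baseline values rather than Light’s specifications.

```python
# Depth from stereo disparity: depth = focal_length * baseline / disparity.
# The focal length and baseline below are illustrative, not Light's figures.
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Return the distance in meters of a point matched between two camera modules."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

if __name__ == "__main__":
    focal_px = 1400.0   # assumed focal length in pixels
    baseline_m = 0.05   # assumed 5 cm spacing between modules
    for disparity in (70.0, 7.0, 0.7):
        d = depth_from_disparity(focal_px, baseline_m, disparity)
        print(f"disparity {disparity:5.1f} px -> depth {d:6.1f} m")
    # A tenfold smaller disparity means a tenfold larger depth, which is why
    # long-range depth sensing favors widely separated cameras.
```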

This ability to create a depth map meant that the consumer-targeted L16 got attention from businesses interested in things other than pretty pictures. A rental car company in Germany set up an array of cameras to inspect cars being dropped off for damage; Wayfair experimented with Light’s technology to place furniture within images of rooms. And Light CEO Grannan says the company always hoped to use computational imaging for machine vision as well as consumer cameras.

Still, Light continued to focus on going after consumers looking to take better pictures with small devices. And in 2019, the first cellphone camera using Light’s technology hit the market, the five-module Nokia 9 PureView. It didn’t exactly take the world by storm.

“I think our timing was bad,” Grannan says. “In 2019 smartphone sales started to shrink, the product became commoditized, and there was a shift from differentiating on quality and features to competing on price. We had a premium solution, involving extra cameras and an ASIC, and that was not going to work in that environment.”

Fortunately, thanks to Softbank CEO Masayoshi Son, another door had recently opened.

In 2018, looking for additional capital, Light had approached the Softbank Vision Fund about joining its Series D round of investment, a US $121 million infusion that brought total investment in the company to more than $185 million. Son suggested that, in his view, because Light’s technology could effectively allow cameras to see in three dimensions, it could challenge Lidar—which uses pulses of laser light to gauge distances—in the autonomous car market.

“We were intrigued by that,” Grannan says. “We had envisioned multiple vertical markets, but auto wasn’t one of them.”

But, he says, though co-founder and CTO Laroia realized it was theoretically possible to scale the abilities of the company’s depth mapping technology so it could work over the hundreds of meters of range needed by autonomous vehicles, he wasn’t sure it was practical, given the vibration and other challenges a vehicle in motion encounters. Laroia spent about three months convincing himself that the system could be calibrated and the algorithms would work when cameras were separated far enough to make the longer range possible.

With that question settled, Light began an R&D program in early 2019 to further refine its algorithms for use in autonomous vehicles. In mid-2019, with the consumer phone and camera market looking sour, the company announced that it was getting out of that business, and pivoted the entire company to focus on sensing systems for autonomous vehicles.

“We can cover a longer range—up to 1000 meters, compared with 200 or so for lidar,” says Grannan. “The systems we are building can cost a few thousand dollars instead of tens of thousands of dollars. And our systems use less power, a key feature for electric vehicles.”

At the moment, Light’s engineers are testing the first complete prototypes of the system, using an unmarked white van that they are driving around the San Francisco Bay Area. The tests involve different numbers of cameras in the array, with a variety of focal lengths, as the engineers optimize the design. Computation happens on a field-programmable gate array; Light’s engineers expect to have moved the circuitry to an ASIC by early 2021.

The prototype is still in stealth. Grannan says the company expects to unveil it and announce partnerships with autonomous vehicle developers later this year.

Biking’s Bedazzling Boom

Post Syndicated from David Schneider original https://spectrum.ieee.org/tech-talk/transportation/systems/bikings-bedazzing-boom

It might seem odd that, earlier this month, Stuttgart-based Bosch, a leading global supplier of automotive parts and equipment, seemed to be asking political leaders to reduce the amount of space on roadways they are allowing for cars and trucks.

This makes more sense when you realize that this call for action came from the folks at Bosch eBike Systems, a division of the company that makes electric bicycles. Their argument is simple enough: The COVID-19 pandemic has prompted many people to shift from traveling via mass transit to bicycling, and municipal authorities should respond to this change by beefing up the bike infrastructure in their cities and towns.

There’s no doubt that a tectonic shift in people’s interest in cycling is taking place. Indeed, the current situation appears to rival in ferocity the bike boom of the early 1970s, which was sparked by a variety of factors, including: the maturing into adulthood of many baby boomers who were increasingly concerned about the environment; the 1973 Arab oil embargo; and the mass production of lightweight road bikes.

While the ’70s bike boom was largely a North American affair, the current one, like the pandemic itself, is global. Detailed statistics are hard to come by, but retailers in many countries are reporting a surge of sales, for both conventional bikes and e-bikes—the latter of which may be this bike boom’s technological enabler the way lightweight road bikes were to the boom that took place 50 years ago. Dutch e-bike maker VanMoof, for example, reported a 50 percent year-over-year increase in its March sales. And that’s when many countries were still in lockdown.

Eco Compteur, a French company that sells equipment for tracking pedestrian and bicycle traffic, is documenting the current trend with direct observations. It reports bicycle use in Europe growing strongly since lockdown measures eased. And according to its measurements, in most parts of the United States, bicycle usage is up by double or even triple digits over the same time last year.

Well before Bosch’s electric-bike division went public with its urgings, local officials had been responding with ways to help riders of both regular bikes and e-bikes. In March, for example, the mayor of New York City halted the police crackdown on food-delivery workers using throttle-assisted e-bikes. (Previously, they had been treated as scofflaws and ticketed.) And in April, New York introduced a budget bill that will legalize such e-bikes statewide.

Biking in all forms is indeed getting a boost around the world, as localities create or enlarge bike lanes, accomplishing at breakneck speed what typically would have taken years. Countless cities and towns—including Boston, Berlin, and Bogota, where free e-bikes have even been provided to healthcare workers—are fast creating bike lanes to help their many new bicycle riders get around.

Maybe it’s not accurate to characterize these local improvements to biking infrastructure as countless; some people are indeed trying to keep a tally of these developments. The “Local Actions to Support Walking and Cycling During Social Distancing Dataset” has roughly 700 entries as of this writing. That dataset is the brainchild of Tabitha Combs at the University of North Carolina in Chapel Hill, who does research on transportation planning.

“That’s probably 10 percent of what’s happening in the world right now,” says Combs, who points out that one of the pandemic’s few positive side effects has been its influence on cycling. “You’ve got to get out of the house and do something,” she says. “People are rediscovering bicycling.”

The key question is whether the changes in people’s inclination to cycle to work or school or just for exercise—and the many improvements to biking infrastructure that the pandemic has sparked as a result—will endure after this public-health crisis ends. Combs says that cities in Europe appear more committed than those in the United States in this regard, with some allocating substantial funds to planning their new bike infrastructure.

Cycling is perhaps one realm where responding to the pandemic doesn’t force communities to sacrifice economically: Indeed, increasing the opportunities for people to walk and bike often “facilitates spontaneous commerce,” says Combs. And researchers at Portland State have shown that cycling infrastructure can even boost nearby home values. So lots of people should be able to agree that having the world bicycling more is an excellent way to battle the pandemic.

Q&A: The Masterminds Behind Toyota’s Self-Driving Cars Say AI Still Has a Way to Go

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/transportation/self-driving/qa-the-masterminds-behind-toyotas-selfdriving-cars-say-ai-still-has-a-way-to-go

Getting a car to drive itself is undoubtedly the most ambitious commercial application of artificial intelligence (AI). The research project was kicked into life by the 2004 DARPA Grand Challenge and then taken up as a business proposition, first by Alphabet, and later by the big automakers.

The industry-wide effort vacuumed up many of the world’s best roboticists and set rival companies on a multibillion-dollar acquisitions spree. It also launched a cycle of hype that paraded ever more ambitious deadlines—the most famous of which, made by Alphabet’s Sergey Brin in 2012, was that full self-driving technology would be ready by 2017. Those deadlines have all been missed.

Much of the exhilaration was inspired by the seeming miracles that a new kind of AI—deep learning—was achieving in playing games, recognizing faces, and transcribing speech. Deep learning excels at tasks involving pattern recognition—a particular challenge for older, rule-based AI techniques. However, it now seems that deep learning will not soon master the other intellectual challenges of driving, such as anticipating what human beings might do.

Among the roboticists who have been involved from the start are Gill Pratt, the chief executive officer of Toyota Research Institute (TRI), formerly a program manager at the Defense Advanced Research Projects Agency (DARPA); and Wolfram Burgard, vice president of automated driving technology for TRI and president of the IEEE Robotics and Automation Society. The duo spoke with IEEE Spectrum’s Philip Ross at TRI’s offices in Palo Alto, Calif.

This interview has been condensed and edited for clarity.

IEEE Spectrum: How does AI handle the various parts of the self-driving problem?

Gill Pratt: There are three different systems that you need in a self-driving car: It starts with perception, then goes to prediction, and then goes to planning.

The one that by far is the most problematic is prediction. It’s not prediction of other automated cars, because if all cars were automated, this problem would be much more simple. How do you predict what a human being is going to do? That’s difficult for deep learning to learn right now.

Spectrum: Can you offset the weakness in prediction with stupendous perception?

Wolfram Burgard: Yes, that is what car companies basically do. A camera provides semantics, lidar provides distance, radar provides velocities. But all this comes with problems, because sometimes you look at the world from different positions—that’s called parallax. Sometimes you don’t know which range estimate that pixel belongs to. That might make the decision complicated as to whether that is a person painted onto the side of a truck or whether this is an actual person.

With deep learning there is this promise that if you throw enough data at these networks, it’s going to work—finally. But it turns out that the amount of data that you need for self-driving cars is far larger than we expected.

Spectrum: When do deep learning’s limitations become apparent?

Pratt: The way to think about deep learning is that it’s really high-performance pattern matching. You have input and output as training pairs; you say this image should lead to that result; and you just do that again and again, for hundreds of thousands, millions of times.

Here’s the logical fallacy that I think most people have fallen prey to with deep learning. A lot of what we do with our brains can be thought of as pattern matching: “Oh, I see this stop sign, so I should stop.” But it doesn’t mean all of intelligence can be done through pattern matching.

For instance, when I’m driving and I see a mother holding the hand of a child on a corner and trying to cross the street, I am pretty sure she’s not going to cross at a red light and jaywalk. I know from my experience being a human being that mothers and children don’t act that way. On the other hand, say there are two teenagers—with blue hair, skateboards, and a disaffected look. Are they going to jaywalk? I look at that, you look at that, and instantly the probability in your mind that they’ll jaywalk is much higher than for the mother holding the hand of the child. It’s not that you’ve seen 100,000 cases of young kids—it’s that you understand what it is to be either a teenager or a mother holding a child’s hand.

You can try to fake that kind of intelligence. If you specifically train a neural network on data like that, you could pattern-match that. But you’d have to know to do it.

Spectrum: So you’re saying that when you substitute pattern recognition for reasoning, the marginal return on the investment falls off pretty fast?

Pratt: That’s absolutely right. Unfortunately, we don’t have the ability to make an AI that thinks yet, so we don’t know what to do. We keep trying to use the deep-learning hammer to hammer more nails—we say, well, let’s just pour more data in, and more data.

Spectrum: Couldn’t you train the deep-learning system to recognize teenagers and to assign the category a high propensity for jaywalking?

Burgard: People have been doing that. But it turns out that these heuristics you come up with are extremely hard to tweak. Also, sometimes the heuristics are contradictory, which makes it extremely hard to design these expert systems based on rules. This is where the strength of the deep-learning methods lies, because somehow they encode a way to see a pattern where, for example, here’s a feature and over there is another feature; it’s about the sheer number of parameters you have available.

Our separation of the components of a self-driving AI eases the development and even the learning of the AI systems. Some companies even think about using deep learning to do the job fully, from end to end, not having any structure at all—basically, directly mapping perceptions to actions. 

Pratt: There are companies that have tried it; Nvidia certainly tried it. In general, it’s been found not to work very well. So people divide the problem into blocks, where we understand what each block does, and we try to make each block work well. Some of the blocks end up more like the expert system we talked about, where we actually code things, and other blocks end up more like machine learning.

Spectrum: So, what’s next—what new technique is in the offing?

Pratt: If I knew the answer, we’d do it. [Laughter]

Spectrum: You said that if all cars on the road were automated, the problem would be easy. Why not “geofence” the heck out of the self-driving problem, and have areas where only self-driving cars are allowed?

Pratt: That means putting in constraints on the operational design domain. This includes the geography—where the car should be automated; it includes the weather, it includes the level of traffic, it includes speed. If the car is going slow enough to avoid colliding without risking a rear-end collision, that makes the problem much easier. Street trolleys operate with traffic still in some parts of the world, and that seems to work out just fine. People learn that this vehicle may stop at unexpected times. My suspicion is, that is where we’ll see Level 4 autonomy in cities. It’s going to be in the lower speeds.

That’s a sweet spot in the operational design domain, without a doubt. There’s another one at high speed on a highway, because access to highways is so limited. But unfortunately there is still the occasional debris that suddenly crosses the road, and the weather gets bad. The classic example is when somebody irresponsibly ties a mattress to the top of a car and it falls off; what are you going to do? And the answer is that terrible things happen—even for humans.

Spectrum: Learning by doing worked for the first cars, the first planes, the first steam boilers, and even the first nuclear reactors. We ran risks then; why not now?

Pratt: It has to do with the times. During the era where cars took off, all kinds of accidents happened, women died in childbirth, all sorts of diseases ran rampant; the expected characteristic of life was that bad things happened. Expectations have changed. Now the chance of dying in some freak accident is quite low because of all the learning that’s gone on, the OSHA [Occupational Safety and Health Administration] rules, UL code for electrical appliances, all the building standards, medicine.

Furthermore—and we think this is very important—we believe that empathy for a human being at the wheel is a significant factor in public acceptance when there is a crash. We don’t know this for sure—it’s a speculation on our part. I’ve driven, I’ve had close calls; that could have been me that made that mistake and had that wreck. I think people are more tolerant when somebody else makes mistakes, and there’s an awful crash. In the case of an automated car, we worry that that empathy won’t be there.

Spectrum: Toyota is building a system called Guardian to back up the driver, and a more futuristic system called Chauffeur, to replace the driver. How can Chauffeur ever succeed? It has to be better than a human plus Guardian!

Pratt: In the discussions we’ve had with others in this field, we’ve talked about that a lot. What is the standard? Is it a person in a basic car? Or is it a person with a car that has active safety systems in it? And what will people think is good enough?

These systems will never be perfect—there will always be some accidents, and no matter how hard we try there will still be occasions where there will be some fatalities. At what threshold are people willing to say that’s okay?

Spectrum: You were among the first top researchers to warn against hyping self-driving technology. What did you see that so many other players did not?

Pratt: First, in my own case, during my time at DARPA I worked on robotics, not cars. So I was somewhat of an outsider. I was looking at it from a fresh perspective, and that helps a lot.

Second, [when I joined Toyota in 2015] I was joining a company that is very careful—even though we have made some giant leaps—with the Prius hybrid drive system as an example. Even so, in general, the philosophy at Toyota is kaizen—making the cars incrementally better every single day. That care meant that I was tasked with thinking very deeply about this thing before making prognostications.

And the final part: It was a new job for me. The first night after I signed the contract I felt this incredible responsibility. I couldn’t sleep that whole night, so I started to multiply out the numbers, all using a factor of 10. How many cars do we have on the road? Cars on average last 10 years, though ours last 20, but let’s call it 10. They travel on an order of 10,000 miles per year. Multiply all that out and you get 10 to the 10th miles per year for our fleet on Planet Earth, a really big number. I asked myself, if all of those cars had automated drive, how good would they have to be to tolerate the number of crashes that would still occur? And the answer was so incredibly good that I knew it would take a long time. That was five years ago.

Burgard: We are now in the age of deep learning, and we don’t know what will come after. We are still making progress with existing techniques, and they look very promising. But the gradient is not as steep as it was a few years ago.

Pratt: There isn’t anything that’s telling us that it can’t be done; I should be very clear on that. Just because we don’t know how to do it doesn’t mean it can’t be done.

Future Baggage Scanners Will Tell Us What Things Are Made Of

Post Syndicated from Joel A. Greenberg original https://spectrum.ieee.org/transportation/air/future-baggage-scanners-will-tell-us-what-things-are-made-of

Some years ago, the three of us found ourselves together in a very long security line at Reagan National Airport in Washington, D.C. Maybe, we joked, Superman could use his X-ray vision to help out the beleaguered Transportation Security Administration (TSA) employees and the traveling masses.

After an oddly technical discussion about our favorite superhero, we agreed that the Man of Steel’s X-ray vision wouldn’t really be able to tackle this particular challenge. That’s because some concealed threats don’t reveal themselves directly in an X-ray image, and finding them with X-rays would require computational processing far beyond what even Superman’s brain could manage. But then came an exciting realization: We did have the power to solve the technical problem here. By employing our combined expertise, we began developing an X-ray system well suited for detecting such dangerous objects hidden in a carry-on bag. Perhaps one day it will shorten the kind of line we were standing in.

The goal of aviation security is, of course, to make sure that dangerous items do not end up on planes—while also providing an acceptable passenger experience. What exactly this experience should be depends on whom you ask, but it likely includes short, quick-moving lines and the capacity to bring everything you want without having to rummage through your bag to exonerate your electric toothbrush.

To make this vision a reality, the TSA would need a device that is capable of rapidly scanning the many bags passing through a security checkpoint and deciding on its own whether any of them contains a threat. Today’s standard approach uses X-ray projection imaging, the same technology doctors use to diagnose broken bones.

X-ray projection imaging works by directing these penetrating rays at something and measuring how much energy comes out the other side. In other words, the TSA’s scanners sense the X-ray shadows cast by the objects in your suitcase. The shape and degree of darkness of the shadows are used to distinguish between different items or, in the case of medical imaging, to reveal a fracture in a bone.
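To picture what the scanner actually measures, here is a minimal sketch of the Beer-Lambert attenuation law in Python; the attenuation coefficients below are made-up placeholder values, not calibrated X-ray data.

```python
import math

# Beer-Lambert law: I = I0 * exp(-mu * t), where I0 is the incident intensity,
# mu the material's attenuation coefficient (1/cm), and t the thickness (cm).
def transmitted_fraction(mu, thickness_cm):
    return math.exp(-mu * thickness_cm)

# Placeholder attenuation coefficients, for illustration only.
materials = {"clothing": 0.05, "water bottle": 0.2, "steel knife": 5.0}

for name, mu in materials.items():
    frac = transmitted_fraction(mu, thickness_cm=2.0)
    print(f"{name:>12}: {frac:.3f} of the beam gets through")
```

The lower that fraction, the darker the object’s shadow in the scanner’s image.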

While this strategy works reasonably well, anyone who has played shadow puppets knows that shadows are not always what they appear to be. That’s why more advanced scanners use computed tomography, which combines X-ray transmission images obtained from many angles to yield the full 3D shape of objects within your bag. This tactic provides significant advantages, such as allowing you to leave most items (laptops, iPads, keys, and so forth) in your bag for a single scan.

Such advanced scanners are now being deployed at U.S. airports, although plenty of older scanners are still in use. The older machines’ 2D images are interpreted by a combination of computer algorithms and trained operators, who then determine whether you are free to continue to your plane or need to have your carry-on more carefully inspected by an agent.

In general, this system works great for shape-based threats such as guns or knives. While we can debate whether a small penknife or knitting needle could really be used as a weapon, these transmission X-ray scanners detect such objects routinely. And more recent enhancements, such as deep-learning-based image-recognition algorithms, allow automatic identification of a variety of such dangerous items.

This is all well and good, but what if the shape of some potential threat doesn’t reveal its dangerous nature? After all, an explosive could be fashioned into the shape of a common, benign object. And this isn’t just a hypothetical—a number of plots from around the world have centered around shaping explosives like everyday objects or hiding them in innocuous places such as within bottles of liquor or contact-lens cleaner or tucked away in laptops or shoes. How can the TSA deal with those threats? The answer is to look beyond the item’s shadow and to examine instead what we might call its material fingerprint.

It has been known for more than a century that X-rays can reveal the atomic structure of a material through a process known as X-ray diffraction. For this, you have to measure the rays that bounce off the atoms or molecules in the target and ricochet in different directions. How the X-rays scatter depends on the details of the material’s interatomic spacing and degree of crystallinity.

X-ray diffraction can, for example, easily distinguish among coal, graphite, and diamond, despite the fact that these substances are chemically identical—they are all made up of carbon. It can even identify different liquids, which lack crystallinity but have different interatomic spacings as a function of molecular size and repulsion. So a scanner based on this technique could, for example, determine whether the bottle someone stuffed in his carry-on was filled with water or with nitroglycerine.
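The governing relationship is Bragg’s law, nλ = 2d sin θ: for a given X-ray wavelength, each interatomic spacing d produces scattering peaks at characteristic angles. A quick sketch, using approximate textbook spacings for two forms of carbon:

```python
import math

# Bragg's law: n * lam = 2 * d * sin(theta). For first-order diffraction (n = 1),
# theta = arcsin(lam / (2 * d)).
def bragg_two_theta_deg(wavelength_angstrom, d_spacing_angstrom):
    return 2 * math.degrees(math.asin(wavelength_angstrom / (2 * d_spacing_angstrom)))

lam = 1.54  # angstroms, the common Cu K-alpha laboratory X-ray line

# Approximate d-spacings in angstroms (illustrative textbook values).
spacings = {"graphite (002)": 3.35, "diamond (111)": 2.06}

for name, d in spacings.items():
    print(f"{name}: diffraction peak near 2-theta = {bragg_two_theta_deg(lam, d):.1f} degrees")
```

The same chemistry, carbon in both cases, yields peaks at very different angles, which is exactly the kind of material fingerprint the scanner looks for.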

Making use of X-ray diffraction measurements, however, is a thorny technical problem. To begin with, the diffracted signal is several orders of magnitude weaker than the transmitted signal. So it’s harder to measure at all. It’s also much harder to interpret. As a result, even though such fingerprinting (more properly called X-ray diffraction tomography) would pretty much guarantee success in identifying threats, the technology required has in the past come at a significant cost, not just financially but also in terms of complexity, scan time, and other factors.

The focus of our efforts over the last few years has been figuring out how to combine X-ray diffraction measurements with traditional transmission imaging in a practical way. We’re not done yet, but we’ve gotten pretty far in the development of what we expect to be the next generation of airport baggage scanners.

For more than 30 years, researchers have attempted to use X-ray diffraction to map out differences in atomic structure from place to place within an extended object. There’s been some progress, particularly in the medical and materials-science communities, but the systems built for this have been complicated, expensive, and slow. That’s why systems based on these principles are not yet scanning bags at airports.

The dimness of the diffracted signals is an issue, to be sure, but one that’s relatively straightforward to address. First, you need a strong source of X-rays, which might be dangerous in a medical setting but really isn’t a problem for a baggage scanner. Second, the image sensor you use also has to be more sensitive than is typical, but again that’s not hard to arrange. The much more vexing problem is that the diffracted signals originate from all points in the bag. And each pixel of your detector records all these different signals at once.

The trick to disentangling these overlapping signals—so you can tell which of the scattered X-rays came from your laptop and which from your water bottle, say—involves placing an additional element in the system, one that affects the X-ray signals in a controlled way. This element, called a coded aperture, is basically a slab of highly absorptive material with a set of holes drilled in it. Those holes are arranged in a specific pattern. X-rays can pass through the holes but are blocked by the absorptive material.

The reason for using such a coded aperture is easier to understand with the help of a thought experiment. Imagine that the piece of luggage being scanned is composed of hundreds of tiny X-ray sources, each of which can be switched on and off on command. If you turn on just one source, the X-rays it emits will pass through the holes in the coded aperture and continue on to the image sensor positioned some distance behind it. If you had Superman’s X-ray vision, you’d see a pattern of spots projected onto the plane of the image sensor.

Now switch on a different one of these tiny X-ray sources instead, one located at a different position in the bag. The pattern of spots projected on the image plane will now be different. The spots will be bigger or smaller and located in different places. The same principle applies to shadow puppets, whose shadows change in size and position depending on where the puppet is located with respect to the light source and the wall.

So the coded aperture affects the X-rays originating from each different location within the bag in a unique way, which is akin to tagging them with a bar code. This works even though the X-rays originating from different regions in the bag impinge on the detector at the same time. It takes some clever calculations, but the signals generated by X-rays passing through the coded aperture and arriving at the detector can be untangled, allowing the scanner to distinguish the X-rays coming from different parts of the bag. This process is much easier if the scanner also captures the traditional X-ray transmission images, which give you a pretty good idea of where the relevant objects are positioned in the bag.
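To see why the mask acts like a bar code for position, consider a toy one-dimensional version of that thought experiment: a point source of scattered X-rays projects the mask’s holes onto the detector at positions that depend on where the source sits in the bag. (The hole pattern and distances below are arbitrary illustrative numbers.)

```python
# Toy 1D coded-aperture geometry: where do rays from a scatterer land on the detector?
# All positions and distances are arbitrary illustrative numbers, in centimeters.

HOLES = [-3.0, -1.0, 0.5, 2.0, 4.0]   # hole positions across the mask
Z_MASK = 20.0                          # bag plane to mask
Z_DETECTOR = 60.0                      # bag plane to detector

def spot_pattern(source_x):
    """Project each hole onto the detector for a point source at (source_x, z = 0)."""
    scale = Z_DETECTOR / Z_MASK
    return [source_x + scale * (hole_x - source_x) for hole_x in HOLES]

# Scatterers at different places in the bag cast recognizably different spot
# patterns, which is what lets the reconstruction untangle their signals.
for x in (-5.0, 0.0, 5.0):
    spots = ", ".join(f"{s:6.1f}" for s in spot_pattern(x))
    print(f"source at x = {x:+5.1f} cm -> spots at [{spots}] cm")
```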

Using this coded-aperture approach, we previously built what we’ll call a preprototype scanner, which had a simple configuration and used inexpensive off-the-shelf components. With it, we were able to identify different plastics, liquids, and solids at a resolution of better than 1 centimeter.

Taking this to the next level involves designing a scanner that fully combines transmission and X-ray diffraction measurements and employs state-of-the-art detectors. It should be considerably faster, more accurate, and less expensive than what we’ve built before. This scanner should be suitable for commercialization and use in airports around the world.

To understand the possibilities of this and related systems, we required detailed numerical simulations, which helped us to compare possible configurations and to identify the best ones. As a first step in making such evaluations, we developed software that let us create virtual bags in a fully automated manner. Running it with enough computing horsepower—either locally on high-performance clusters or using suitable cloud resources—allows us to create hundreds, thousands, or even millions of virtual bags, ones that are representative of the kinds of things travelers carry to the airport every day. We can now run those virtual bags through our simulation software, which models the relevant X-ray physics (for both transmission and scattering) and spits out high-fidelity estimates of the measurements you would get with a scanner of a particular design.

The ability to generate many controllable, ultrarealistic virtual scans for arbitrary system configurations has allowed us to quantify how much better one scanner design is compared with another and what the fundamental detection performance of a given type of measurement would be. As a side benefit, this same software lets us generate data sets for training and validating the machine-learning algorithms the scanners will run to recognize images and detect threats.
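As a cartoon of that evaluation loop, here is a toy Python sketch: generate random “virtual bags,” simulate a single transmission measurement with a one-number attenuation model, and score a crude detection rule. It is meant only to illustrate the workflow; the authors’ simulations model the full transmission and scattering physics.

```python
import math
import random

random.seed(0)

# Stand-in material library: attenuation coefficients (1/cm), illustrative only.
MATERIALS = {"clothes": 0.05, "laptop": 0.8, "water": 0.2, "threat": 1.5}

def random_bag():
    """A 'virtual bag' is a random set of items, each with a thickness in cm."""
    items = random.sample(list(MATERIALS), k=random.randint(1, 3))
    return {item: random.uniform(0.5, 3.0) for item in items}

def simulate_measurement(bag, noise=0.02):
    """Transmitted fraction along one beam path, plus measurement noise."""
    total_attenuation = sum(MATERIALS[m] * t for m, t in bag.items())
    return math.exp(-total_attenuation) + random.gauss(0.0, noise)

bags = [random_bag() for _ in range(10_000)]
has_threat = ["threat" in bag for bag in bags]

# Crude detection rule: a very dark shadow triggers an inspection.
for threshold in (0.1, 0.3):
    flagged = [simulate_measurement(bag) < threshold for bag in bags]
    detections = sum(f and t for f, t in zip(flagged, has_threat)) / sum(has_threat)
    false_alarms = sum(f and not t for f, t in zip(flagged, has_threat)) / (len(bags) - sum(has_threat))
    print(f"threshold {threshold}: detection rate {detections:.0%}, false-alarm rate {false_alarms:.0%}")
```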

So even as new computed-tomography baggage scanners continue to roll out at airports across the United States, we will be working to pave the way for an entirely new kind of scanner, one that can identify a dangerous item based on its material-specific fingerprint, not just the type of X-ray shadows it casts.

We are thrilled at the prospect of being able to contribute to improved security in an industry that is so critical to the operation of the modern world. We are so committed to making a difference that we joined with colleague Anuj Kapadia of the Duke University radiology faculty to establish Quadridox, a company focused on turning our security-line musings into real-world solutions. And it may even turn out that X-ray diffraction has a future beyond baggage scanning. The ability to identify specific materials will most likely prove useful for other applications, too, such as the detection of illicit drugs in the mail, perhaps even in the diagnosis of cancer. At the end of the day, though, we would settle for a future where the only reason you have to open your bag at an airport checkpoint is to take a sip from your water bottle.

This article appears in the July 2020 print issue as “The All-Seeing Baggage Scanner.”

About the Authors

Joel Greenberg and Michael Gehm are with the department of electrical and computer engineering at Duke University. Greenberg is an associate research professor and Gehm is an associate professor. Amit Ashok is an associate professor of optical sciences at the University of Arizona.

Mercedes and Nvidia Announce the Advent of the Software-Defined Car

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/cars-that-think/transportation/self-driving/mercedes-and-nvidia-announce-the-advent-of-the-softwaredefined-car

The idea of tomorrow’s car as a computer on wheels is ripening apace. Today, Daimler and Nvidia announced that Daimler’s carmaking arm, Mercedes-Benz, will drop Nvidia’s newest computerized driving system into every car it sells, beginning in 2024.

The system, called the Drive AGX Orin, is a system-on-a-chip that was announced in December and is planned to ship in 2022. It’s an open system, but as adapted for Mercedes, it will be laden with specially designed software. The result, say the two companies, will be a software-defined car: Customers will buy a car, then periodically download new features, among them some that were not known at the time of purchase. This capability comes from replacing much of the car’s dedicated hardware, today a constellation of electronic control units (ECUs), with software.

“In modern cars, there can be 100, up to 125, ECUs,” said Danny Shapiro, Nvidia’s senior director of automotive. “Many of those will be replaced by software apps. That will change how different things function in the car—from windshield wipers to door locks to performance mode.”

The plan is to give cars a degree of self-driving competence comparable to Level 2 (where the car assists the driver) and Level 3 (where the driver can do other things as the car drives itself, while remaining ready to take back the wheel). The ability to park itself will be Level 4 (where there’s no need to mind the car at all, so long as it’s operating in a predefined comfort zone).

In all these matters, the Nvidia-Daimler partnership is following a trail that Tesla blazed and others have since taken. Volkswagen plainly emulated Tesla in a project that is just now yielding its first fruit: an all-electric car that will employ a single master electronic architecture that will eventually power all of VW’s electric and self-driving cars. The rollout was slightly delayed because of glitches in software, as we reported last week.

Asked whether Daimler would hire a horde of software experts, as Volkswagen is doing, and thus become something of a software company, Bernhard Wardin, spokesman for Daimler’s autonomous driving and artificial intelligence division, said he had no comment. (Sounds like a “yes.”) He added that though a car model had been selected for the debut of the new system, its name was still under wraps.

One thing that strikes the eye is the considerable muscle of the system. The AGX Orin has 17 billion transistors, incorporating what Nvidia says are new deep learning and computer vision accelerators that “deliver 200 trillion operations per second—nearly 7 times the performance of Nvidia’s previous generation Xavier SoC.” 

It’s interesting to note that the Xavier itself began to ship only last year, after being announced toward the end of 2016. On its first big publicity tour, at the 2017 Consumer Electronics Show, the Xavier was touted by Jen-Hsun Huang, the CEO of Nvidia. He spoke together with the head of Audi, which was going to use the Xavier in a “Level 4” car that was supposed to hit the road in three years. Yes, that would be now.

So, a chip 7 times more powerful, with far more advanced AI software, is now positioned to power a car with mere Level 2/3 capabilities—in 2024.

None of this is meant to single out Nvidia or Daimler for ridicule. It’s meant to ridicule the entire industry, both the tech people and the car people, who have been busy walking back expectations for self-driving for the past two years. And to ridicule us tech journalists, who have been backtracking right alongside them.

Cars that help the driver do the driving are here already. More help is on the way. Progress is real. But the future is still in the future: Robocar tech is harder than we thought.

Pilot Test Begins for Tech to Connect Everyday Vehicles

Post Syndicated from Sandy Ong original https://spectrum.ieee.org/transportation/advanced-cars/pilot-test-begins-for-tech-to-connect-everyday-vehicles

Drivers can use a number of signals to communicate with other drivers: taillights, high beams, the horn. But car manufacturers envision a future where cars themselves will exchange messages about where they’re going and where they’ve been.

Connected vehicles—those fitted with technology that allows them to communicate with other drivers, pedestrians, and nearby infrastructure—are increasingly being tested around the world. In July, Columbus, Ohio, will be the latest city to launch a connected-vehicle pilot.

Connected vehicles have long been overshadowed by their more famous cousins: autonomous vehicles. Although all autonomous vehicles have aspects of connectivity, adding certain technologies to ordinary cars could prevent accidents and save lives in the near term.

The Connected Vehicle Environment project in Columbus will see up to 1,800 public and private vehicles fitted with special onboard units and dashboard-mounted head-up displays. Those cars will be able to receive messages from traffic lights at 113 intersections, including some of the city’s most dangerous crossings. The aim is to study the impacts of connectivity on safety and traffic flow. Organizers will begin recruiting drivers and installing onboard units in July, with testing to begin in November.

The technology will enable “basic safety messaging,” including warnings to reduce speed or look out for pedestrians ahead, explains Luke Stedke, who manages communications at Drive Ohio, the state’s smart-mobility center. The pilot is part of the Smart Columbus initiative, which was launched after the city won US $40 million in the U.S. Department of Transportation (DOT)’s 2015 Smart City Challenge.

Columbus joins two other cities in Ohio—Marysville and Dublin—that have recently begun testing connected vehicles. Collectively, the DOT has invested more than $45 million in such trials in Wyoming, Tampa, and New York City.

The keen interest in connecting nonautonomous vehicles comes primarily from the technology’s potential to reduce crashes and save lives, says Debra Bezzina of the University of Michigan’s Transportation Research Institute. Sensors to alert drivers to hazards in their blind spots, streetlight-mounted cameras that show a pedestrian at an upcoming intersection, and an approaching car that warns of black ice ahead are just a few examples of what may be possible.

“Connected-vehicle technology can prevent 80 percent of all unimpaired car crashes,” says Bezzina. “So it could save billions of dollars a year.”

The new technology also promises more efficient traffic management and greener commuting. Emergency vehicles could move through intersections more quickly with all the lights in their favor. Motorists, using information broadcast by traffic signals about their phase and timing, could adjust their speed accordingly and save fuel.
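Here’s a minimal sketch of the kind of speed advisory such signal phase-and-timing broadcasts make possible, assuming the signal reports how many seconds remain before it turns green; the numbers and the clamping rule are illustrative, not drawn from the Columbus deployment.

```python
def advisory_speed_kph(distance_m, seconds_to_green, limit_kph=50.0, min_kph=15.0):
    """Speed that lets a driver arrive just as the light turns green.

    distance_m: distance to the stop line, in meters.
    seconds_to_green: seconds the signal says remain before its green phase.
    The result is clamped between a comfortable minimum and the speed limit.
    """
    if seconds_to_green <= 0:
        return limit_kph  # already green: just proceed at the limit
    ideal_kph = (distance_m / seconds_to_green) * 3.6  # m/s converted to km/h
    return max(min_kph, min(ideal_kph, limit_kph))

# 300 m from a light that turns green in 30 s: coast at 36 km/h instead of stopping.
print(advisory_speed_kph(300, 30))
# Only 10 s to go: the ideal speed exceeds the limit, so the advisory caps at 50 km/h.
print(advisory_speed_kph(300, 10))
```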

Connected-vehicle communications can run either on a wireless technology called Dedicated Short-Range Communications (DSRC) or on a cellular-based alternative called Cellular-V2X. DSRC is scalable, offers low latency, and works well in an urban environment, says mechanical engineer Joshua Siegel from Michigan State University (MSU). But it has a limited range (roughly 1 kilometer).

C-V2X, on the other hand, has higher latency but superior range and greater bandwidth. With experts predicting that the rollout of 5G will improve latency, some countries, including China, have favored cellular networks. But the United States hasn’t given a clear edict on the issue, causing much uncertainty for automobile manufacturers.

Similar to other pilots around the country, the new Columbus trial will employ DSRC technology, which is farther up the road to being commercially available than C-V2X modules.

Many car manufacturers are keen to incorporate communications technology because it offers benefits that self-driving vehicles can’t. For one, the tech works in all weather conditions, unlike notoriously fickle radar and lidar detection systems.

Another advantage: “You can get information from infrastructure that you just cannot get with an autonomous vehicle,” says Bezzina. While autonomous-vehicle sensors might be able to inform you of a red light ahead, she says, connected-vehicle technology is capable of telling you that “there’s a light ahead, it’s located here, here’s what the geography of the intersection looks like, and that light will only remain red for the next two seconds. It’s much deeper information.”

The next step for connected-vehicle trials would be to expand testing from 1,000 vehicles to possibly 1 million, says MSU’s Siegel. “We’re not seeing the full benefits of scale in the pilots we’ve had today,” he says. With more cars connected, vehicles could platoon or travel close together on highways while remaining safe, and the timing of traffic lights could be automatically adjusted to account for vehicle load.

Large-scale trials will bring another lesson too, says Siegel: “You’ll learn a lot about the social acceptance of this.”

This article appears in the July 2020 print issue as “U.S. Cities Pilot Connected-Vehicle Tech.”

Volkswagen’s E-Car Rollout Slips on Software

Post Syndicated from Robert N. Charette original https://spectrum.ieee.org/cars-that-think/transportation/advanced-cars/volkswagens-ecar-rollout-slips-on-software

Today Volkswagen’s European customers will be able to make binding preorders for one of 30,000 limited-run 1st Edition ID.3 fully electric compact cars. It’s the first fruit of the company’s EUR 60 billion strategy to become the world’s dominant manufacturer of all-electric and hybrid vehicles.

Customers who opt for early delivery can expect the cars as early as September. They had expected to take delivery this summer, but software problems delayed the rollout. In replacing the multiple electronic architectures it currently uses with a single new one, VW joins the list of old-line manufacturers that have discovered that simplified software can be hard to design.

The problems came to light late last year when the German business publication Manager Magazin reported that VW was struggling to get its new vehicle software and systems architecture to operate properly. Up to 20,000 ID.3 vehicles were predicted to need software updates post-production, it reported. However, VW insisted that software problems were not going to delay the summer 2020 launch.

Problems continued into the first quarter. VW insiders told the German magazine the architecture had been developed “too hastily.” Test drivers were reporting up to 300 errors per day, and more than 10,000 technicians had been mobilized to fix them. Systems and software integration, along with proper event timing, seemed to be some of the problems’ causes. Even VW had to admit at the time that “things are not going great.”

Even now, customers who decide to take delivery of 1st Edition ID.3 vehicles in September will be missing two digital functions, the App Connect function and the distance feature in the head-up display. They’ll be updated in early 2021. 

The delayed launch of the ID.3 was one reason why VW’s board replaced Herbert Diess as chief executive of the VW brand last week. He will still be chief executive of the VW group, however.

The software problems should come as no surprise, because VW has taken a very ambitious approach. As reported in AutoNews (subscription required), VW chose to supplant its eight legacy architectures with a single architecture and an associated operating system based on three powerful computers, executing software developed in-house. The system orchestrates all things digital in the vehicle, which used to require 70 or more electronic control modules. 

Traditionally, the software for the electronic control modules had been developed mainly by VW’s suppliers. However, last year VW told them that it would be the prime software developer for all of its vehicles; it spun off an organization called Car.Software to do so. That organization is expected to hire another 5,000 or more digital experts on top of the initial 3,000 to develop and maintain the software used across all VW brands.

In addition, the software operating system needed to be network capable so that software could be updated via VW’s automotive cloud, which the company is developing together with Microsoft. Such over-the-air updating was pioneered by Tesla, whose entire software/system architectural approach VW has made clear it is emulating.

Furthermore, VW has developed a dedicated chassis platform that it calls the modular electric drive matrix (MEB). The main point is to let VW develop a wide range of electric vehicles quickly and at an affordable cost. It should also help VW manage the increasing complexity of the software it uses to control the engine and drivetrain in internal combustion vehicles, which it must do to meet ever more stringent EU environmental regulations.

Whether VW will navigate all the technical, managerial, and organizational changes this software initiative imposes remains to be seen. It’s an old-line car company trying to build the new thing—computers on wheels. One thing is certain: It is spending a lot of money to make it happen.

Bollinger’s New Electric SUV and Pickup Show What’s Possible With EV Design

Post Syndicated from Lawrence Ulrich original https://spectrum.ieee.org/energywise/transportation/efficiency/bollingers-new-electric-suv-pickup-ev-design

A gasoline engine up front, a transmission in the middle, and a fuel tank in back: For decades, that’s the way most automobiles were designed.

Electricity has changed that equation. And the Bollinger B1 sport-utility and B2 pickup are among the new electric vehicles (EVs) that show car designers are taking full advantage of the freedoms afforded by electric propulsion.

Bollinger Motors, based in suburban Detroit, this week announced it has received U.S. patents for its clever Frunkgate and Passthrough features on its brawny US $125,000 electric SUV and pickup.

Inflatable E-Bike Fits in a Backpack

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/energywise/transportation/alternative-transportation/inflatable-ebike-fits-in-a-backpack

The trend towards communal sidewalk-based personal mobility systems (like shared bikes and scooters) has resulted in some amount of actual personal mobility, but also a lot of cluttered sidewalks, injured riders and pedestrians, and questionable business models. Fortunately, there are other solutions to the last-kilometer problem that are less dangerous and annoying, like this prototype for an inflatable e-bike under development at the University of Tokyo. From a package of folded-up fabric that fits in a backpack, Poimo (POrtable and Inflatable MObility) can be quickly inflated with a small pump into a comfortable and intrinsically safe mobility system that can be deflated again and packed away once you get where you’re going.

‘Hydrogen-On-Tap’ Device Turns Trucks Into Fuel-Efficient Vehicles

Post Syndicated from Maria Gallucci original https://spectrum.ieee.org/energywise/transportation/alternative-transportation/hydrogen-on-tap-device-trucks-fuel-efficient-vehicles

The city of Carmel, Ind., has trucks for plowing snow, salting streets, and carrying landscaping equipment. But one cherry-red pickup can do something no other vehicle can: produce its own hydrogen.

A 45-kilogram metal box sits in the bed of the work truck. When a driver starts the engine, the device automatically begins concocting the colorless, odorless gas, which feeds into the engine’s intake manifold. That cuts the amount of gasoline the truck guzzles until the hydrogen supply runs out. The pickup has no fuel cell module, a standard component in most hydrogen vehicles. No high-pressure storage tanks or refueling pumps are needed, either.

Surprise! 2020 Is Not the Year for Self-Driving Cars

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/transportation/self-driving/surprise-2020-is-not-the-year-for-selfdriving-cars

In March, because of the coronavirus, self-driving car companies, including Argo, Aurora, Cruise, Pony, and Waymo, suspended vehicle testing and operations that involved a human driver. Around the same time, Waymo and Ford released open data sets of information collected during autonomous-vehicle tests and challenged developers to use them to come up with faster and smarter self-driving algorithms.

These developments suggest the self-driving car industry still hopes to make meaningful progress on autonomous vehicles (AVs) this year. But the industry is undoubtedly slowed by the pandemic and facing a set of very hard problems that have gotten no easier to solve in the interim.

Five years ago, several companies including Nissan and Toyota promised self-driving cars in 2020. Lauren Isaac, the Denver-based director of business initiatives at the French self-driving vehicle company EasyMile, says AV hype was “at its peak” back then—and those predictions turned out to be far too rosy.

Now, Isaac says, many companies have turned their immediate attention away from developing fully autonomous Level 5 vehicles, which can operate in any conditions. Instead, the companies are focused on Level 4 automation, which refers to fully automated vehicles that operate within very specific geographical areas or weather conditions. “Today, pretty much all the technology developers are realizing that this is going to be a much more incremental process,” she says.

For example, EasyMile’s self-driving shuttles operate in airports, college campuses, and business parks. Isaac says the company’s shuttles are all Level 4. Unlike Level 3 autonomy, which relies on a human driver behind the wheel as its backup, Level 4 autonomy makes the vehicle itself the backup.

“We have levels of redundancy for this technology,” she says. “So with our driverless shuttles, we have multiple levels of braking systems, multiple levels of lidars. We have coverage for all systems looking at it from a lot of different angles.”

Another challenge: There’s no consensus on the fundamental question of how an AV looks at the world. Elon Musk has famously said that any AV manufacturer that uses lidar is “doomed.” A 2019 Cornell research paper seemed to bolster the Tesla CEO’s controversial claim by developing algorithms that can derive from stereo cameras 3D depth-perception capabilities that rival those of lidar.

However, open data sets have cast doubt on the lidar doomsayers, says Sam Abuelsamid, a Detroit-based principal analyst in mobility research at the industry consulting firm Navigant Research.

Abuelsamid highlighted a 2019 open data set from the AV company Aptiv, which the AI company Scale then analyzed using two independent sources: The first considered camera data only, while the second incorporated camera plus lidar data. The Scale team found camera-only (2D) data sometimes drew inaccurate “bounding boxes” around vehicles and made poorer predictions about where those vehicles would be going in the immediate future—one of the most important functions of any self-driving system.

“While 2D annotations may look superficially accurate, they often have deeper inaccuracies hiding beneath the surface,” software engineer Nathan Hayflick of Scale wrote in a company blog about the team’s Aptiv data set research. “Inaccurate data will harm the confidence of [machine learning] models whose outputs cascade down into the vehicle’s prediction and planning software.”

Abuelsamid says Scale’s analysis of Aptiv’s data brought home the importance of building AVs with redundant and complementary sensors—and shows why Musk’s dismissal of lidar may be too glib. “The [lidar] point cloud gives you precise distance to each point on that vehicle,” he says. “So you can now much more accurately calculate the trajectory of that vehicle. You have to have that to do proper prediction.”
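To make the point about trajectories concrete, here is a small sketch that estimates a nearby car’s velocity from two position fixes and projects it forward; the only difference between the two cases is a 2-meter depth error on the second fix, roughly the kind of error camera-only estimates can make at range. All numbers are illustrative.

```python
def predict_position(p0, p1, dt_s, horizon_s):
    """Constant-velocity prediction from two (x, y) fixes taken dt_s seconds apart."""
    vx = (p1[0] - p0[0]) / dt_s
    vy = (p1[1] - p0[1]) / dt_s
    return (p1[0] + vx * horizon_s, p1[1] + vy * horizon_s)

dt, horizon = 0.5, 3.0  # seconds between fixes, prediction horizon in seconds

# Fixes on a car about 40 m ahead (meters). The lidar pair reflects its true
# motion; the camera pair has a 2 m depth error on the second fix.
lidar_fixes  = ((0.0, 40.0), (0.2, 39.0))
camera_fixes = ((0.0, 40.0), (0.2, 41.0))

for name, (p0, p1) in (("lidar", lidar_fixes), ("camera-only", camera_fixes)):
    x, y = predict_position(p0, p1, dt, horizon)
    print(f"{name:>11}: predicted position in {horizon:.0f} s = ({x:.1f}, {y:.1f}) m")
```

A 2-meter error in the measurement becomes a double-digit error in the predicted position a few seconds out, which is exactly the cascade Hayflick describes.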

So how soon might the industry deliver self-driving cars to the masses? Emmanouil Chaniotakis is a lecturer in transport modeling and machine learning at University College London. Earlier this year, he and two researchers at the Technical University of Munich published a comprehensive review of all the studies they could find on the future of shared autonomous vehicles (SAVs).

They found the predictions—for robo-taxis, AV ride-hailing services, and other autonomous car-sharing possibilities—to be all over the map. One forecast had shared autonomous vehicles driving just 20 percent of all miles driven in 2040, while another model forecast them handling 70 percent of all miles driven by 2035.

So autonomous vehicles (shared or not), by some measures at least, could still be many years out. And it’s worth remembering that previous predictions proved far too optimistic.

This article appears in the May 2020 print issue as “The Road Ahead for Self-Driving Cars.”

Scientists Explore Underwater Quantum Links for Submarines

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/transportation/marine/quantum-water

Underwater quantum links are possible across 30 meters (100 feet) of turbulent water, scientists have shown. Such findings could help to one day secure quantum communications for submarines.

Quantum cryptography exploits the quantum properties of particles such as photons to help encrypt and decrypt messages in a theoretically unhackable way. Scientists worldwide are now endeavoring to develop satellite-based quantum communications networks for a global real-time quantum Internet.

In addition to beaming quantum communications signals across the air, through a vacuum, and within fiber optic cables, researchers have investigated establishing quantum communications links through water. Such work could lead to secure quantum communications between submarines and surface vessels, and with other subs, aircraft, or even satellites.

Lucid Motors’ Peter Rawlinson Talks E-Car Efficiency

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/cars-that-think/transportation/advanced-cars/lucid-motors-says-ecar-design-all-about-efficiency

Two years ago, our own Lawrence Ulrich wrote that Lucid Motors “might just have a shot at being a viable, smaller-scale competitor to Tesla,” with series production of its fiendishly fast electric car, the Air, then slated for 2019.

Here we are in 2020, and you still can’t buy the Air. But now all systems are go, insists Peter Rawlinson, the chief executive and chief technology officer of Lucid and, in a former incarnation, the chief engineer for the Tesla Model S. He credits last year’s infusion of US $1 billion from the sovereign wealth fund of Saudi Arabia.

Robot Vehicles Make Contactless Deliveries Amid Coronavirus Quarantine

Post Syndicated from Erico Guizzo original https://spectrum.ieee.org/automaton/transportation/self-driving/robot-vehicles-make-contactless-deliveries-amid-coronavirus-quarantine

For the past two months, the vegetables have arrived on the back of a robot. That’s how 16 communities in Zibo, in eastern China, have received fresh produce during the coronavirus pandemic. The robot is an autonomous van that uses lidars, cameras, and deep-learning algorithms to drive itself, carrying up to 1,000 kilograms in its cargo compartment.

The unmanned vehicle provides a “contactless” alternative to regular deliveries, helping reduce the risk of person-to-person infection, says Professor Ming Liu, a computer scientist at the Hong Kong University of Science and Technology (HKUST) and cofounder of Unity Drive Innovation, or UDI, the Shenzhen-based startup that developed the self-driving van.

2020 Top 10 High Tech Cars

Post Syndicated from Lawrence Ulrich original https://spectrum.ieee.org/transportation/advanced-cars/2020-top-10-high-tech-cars

In 2019, the auto industry finally started acting like its future was electric. How do we know? Just follow the money.

General Motors just announced it was spending US $20 billion over five years to bring out a new generation of electric vehicles. Volkswagen Group has pledged $66 billion spread over five years, most of it for electric propulsion. Ford hopes to transform its lineup and image with an $11.5 billion program to develop EVs. And of course, Tesla has upstaged them all with the radical, scrapyard-from-Mars Cybertruck, a reminder that Elon Musk will remain a threat to the automotive order for the foreseeable future.

This past year, I saw the first fruit of Volkswagen Group’s massive investment: the Porsche Taycan, a German sport sedan that sets new benchmarks in performance and fast charging. It lived up to all the hype, I’m happy to say. As for Tesla and Ford, stay tuned. The controversial Tesla Cybertruck, the hotly anticipated Ford Mustang Mach-E, and the intriguing Rivian pickup and SUV (which has been boosted by $500 million in backing from Ford) are still awaiting introduction. EV fans, as ever, must be patient: The Mach-E won’t reach showrooms until late this year, and as for the Rivian and Cybertruck, who knows?

As is our habit, we focus here on cars that are already in showrooms or will be within the next few months. And we do include some good old gasoline-powered cars. Our favorite is the Corvette: It adopts a mid-engine design for the first time in its 67-year history. Yes, an electrified version is in the works.

Chevrolet Corvette Stingray C8

The middle: where no Corvette engine has gone before

Base price: US $59,995

By now, even casual car fans have heard that the Corvette has gone mid-engine. It’s a radical realignment for a car famous for big V8s nestling below long, flowing hoods since the ’Vette’s birth in 1953. Best of all, it works, and it means the Stingray will breathe down the necks of Ferraris, McLarens, and other mid-engine exotics—but at a ridiculous base price of just US $59,995.

Tadge Juechter, the Corvette’s chief engineer, says that the previous, seventh-generation model had reached the limits of front-engine physics. By rebalancing weight rearward, the new design allows the Stingray to put almost preposterous power to the pavement without sacrificing the comfort and everyday drivability that buyers demand.

I got my first taste of these new physics near the old stagecoach town of Tortilla Flat, Ariz. Despite having barely more grunt than last year’s base model—369 kilowatts (495 horsepower) from the 6.2-liter V8 rumbling just behind my right shoulder—the Corvette scorches to 60 miles per hour (97 kilometers per hour) nearly a full second quicker, at a supercar-baiting 2.9 seconds.

This Stingray should top out at around 190 mph. And there are rumors of mightier versions in the works, perhaps even an electric or hybrid ’Vette with at least 522 kW (700 hp).

With the engine out back, driver and passenger sit virtually atop the front axle, 42 centimeters (16.5 inches) closer to the action, wrapped in a fighter-jet-inspired cockpit with a clearer view over a dramatically lowered hood. Thanks to a new eight-speed, dual-clutch automated gearbox, magnetorheological shocks, and a limited-slip rear differential—all endlessly adjustable—my Corvette tamed every outlaw curve, bump, and dip in its Old West path. It’s so stable and composed that you’ll need a racetrack to approach its performance limits. It’s still fun on public roads, but you can tell that it’s barely breaking a sweat.

Yet it’s nearly luxury-car smooth and quiet when you’re not romping on throttle. And it’s thrifty. Figure on 9 to 8.4 liters per 100 kilometers (26 to 28 miles per gallon) at a steady highway cruise, including sidelining half its cylinders to save fuel. A sleek convertible model does away with the coupe’s peekaboo view of the splendid V8 through a glass cover. The upside is an ingenious roof design that folds away without hogging a cubic inch of cargo space. Unlike any other mid-engine car in the world, the Corvette will also fit two sets of golf clubs (or equivalent luggage) in a rear trunk, in addition to the generously sized “frunk” up front.

The downside to that convenience is a yacht-size rear deck that makes—how shall we put this?—the Chevy’s butt look fat.

An onboard Performance Data Recorder works like a real-life video game, capturing point-of-view video and granular data on any drive, overlaying the video with telemetry readouts, and allowing drivers to analyze lap times and performance with Cosworth racing software. The camera-and-GPS system allows any road or trip to be stored and analyzed as though it was a timed circuit—perfect for those record-setting grocery runs.

Polestar 1

This hybrid is tuned for performance

Base price: US $156,500

Consider the Polestar 1 a tech tease from Volvo. This fiendishly complex plug-in hybrid will be seen in just 1,500 copies, built over three years in a showpiece, enviro-friendly factory in Chengdu, China. Just as important, it’s the first of several planned Polestars, a Volvo sub-brand that aims to expand the company’s electric reach around the globe.

I drove mine in New Jersey, scooting from Hoboken to upstate New York, as fellow drivers craned their necks to glimpse this tuxedo-sharp, hand-built luxury GT. The body panels are formed from carbon fiber, trimming 227 kilograms (500 pounds) from what’s still a 2,345-kg (5,170-pound) ride. Front wheels are driven by a four-cylinder gas engine, whose combo of a supercharger and turbocharger generates 243 kilowatts (326 horses) from just 2.0 liters of displacement, with another 53 kW (71 hp) from an integrated starter/generator. Two 85-kW electric motors power the rear wheels, allowing some 88 kilometers (55 miles) of emissions-free range—likely a new high for a plug-in hybrid—before the gas engine kicks in. Mashing the throttle summons some 462 kW (619 hp) and 1,000 newton meters (737 pound-feet) of torque, allowing a 4.2-second dash to 60 miles per hour (97 kilometers per hour). It’s fast, but not lung-crushing fast, like Porsche’s Taycan.

Yet the Polestar’s handling is slick, thanks to those rear motors, which work independently, allowing torque vectoring—the speeding or slowing of individual wheels—to boost agility. And Öhlins shock absorbers, from the renowned racing and performance brand, combine precise body control with a creamy-smooth ride. It’s a fun drive, but Polestar’s first real test comes this summer with the Polestar 2 EV. That fastback sedan’s $63,750 base price and roughly 440-km (275-mile) range will see it square off against Tesla’s sedans. Look for it in next year’s Top 10.

Hyundai Sonata

It has the automation of a much pricier car

Base price: US $24,330

The U.S. market for family sedans has been gutted by SUVs. But rather than give up on sedans, as Ford and Fiat Chrysler have done, Hyundai has doubled down with a 2020 Sonata that’s packed with luxury-level tech and alluring design at a mainstream price.

The Sonata is packed with features that were recently found only on much costlier cars. The list includes Hyundai’s SmartSense package of forward-collision avoidance, automated emergency braking, lane-keeping assist, automatic high-beam assist, adaptive cruise control, and a drowsy-driver attention warning, and they’re all standard, even in the base model. The SEL model adds a blind-spot monitor, but with a cool tech twist: Flick a turn signal and a circle-shaped camera view of the Sonata’s blind spot appears in the digital gauge cluster in front of the driver. It helped me spot bicyclists in city traffic.

Hyundai’s latest infotainment system, with a 10-inch (26-centimeter) monitor, remains one of the industry’s most intuitive touch screens. Taking a page from much more expensive BMWs, the Hyundai’s new “smart park” feature, standard on the top-shelf Limited model, lets it pull into or out of a tight parking spot or garage with no driver aboard, controlled by the driver through the key fob.

That fob can be replaced by a digital key, which uses an Android smartphone app, Bluetooth Low Energy, and Near Field Communication to unlock and start the car. Owners can share digital-key access with up to three users, including sending codes via the Web.

Even the Sonata’s hood is festooned with fancy electronics. What first looks like typical chrome trim turns out to illuminate with increasing intensity as the strips span the fenders and merge into the headlamps. The chrome was laser-etched to allow a grid of 0.05-millimeter LED squares to shine through. Add it to the list of bright ideas from Hyundai.

Porsche Taycan

It outperforms Tesla—for a price

Base price: US $114,340

Yes, the all-electric Porsche Taycan is better than a Tesla Model S. And it had damn well better be: The Porsche is a far newer design, and it sells at up to double the Tesla’s price. What you get for all that is a four-door supercar GT, a technological marvel that starts the clock ticking on the obsolescence of fossil-fueled automobiles.

This past September I spent two days driving the Taycan Turbo S through Denmark and Germany. One high point was repeated runs to 268 kilometers per hour (167 miles per hour) on the Autobahn, faster than I’ve ever driven an EV. From a standing start, an automated launch mode summoned 560 kilowatts (750 horsepower) for a time-warping 2.6-second dash to 60 mph.

As alert readers have by now surmised, the Taycan is fast. But one of its best time trials takes place with the car parked. Thanks to the car’s groundbreaking 800-volt electrical architecture—with twice the voltage of the Tesla’s—charging is dramatically quicker. Doubling the voltage means the current needed to deliver a given level of power is of course halved. Pulling off the Autobahn during my driving test and connecting the liquid-cooled cables of a 350-kW Ionity charger, I watched the Porsche suck in enough DC to replenish its 93.4-kilowatt-hour battery from 8 to 80 percent in 20 minutes flat.

Based on my math, the Porsche added nearly 50 miles of range for every 5 minutes of max charging. In the time it takes to hit the bathroom and pour a coffee, owners can add about 160 kilometers (100 miles) of range toward the Taycan’s total, estimated at 411 to 450 km (256 to 280 miles) under the new Worldwide Harmonized Light Vehicle Test Procedure. But the U.S. Environmental Protection Agency (EPA) seems to have sandbagged the Porsche, pegging its range at 201 miles, even as test drivers report getting 270 miles or more. Porsche hopes to have 600 of the ultrafast DC chargers up and running in the United States by the end of this year.
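For those who want to check the math, a quick sketch (the pack size and state-of-charge window come from the paragraph above; the range figure uses the low end of the WLTP estimate, so the result is an approximation, not a Porsche spec):

```python
pack_kwh = 93.4
soc_start, soc_end = 0.08, 0.80
charge_minutes = 20.0
wltp_range_miles = 256  # low end of the estimate cited above

energy_added_kwh = pack_kwh * (soc_end - soc_start)                 # about 67 kWh
avg_charge_power_kw = energy_added_kwh / (charge_minutes / 60.0)    # about 200 kW
miles_per_kwh = wltp_range_miles / pack_kwh
miles_added_per_5_min = miles_per_kwh * energy_added_kwh * (5.0 / charge_minutes)

print(f"energy added: {energy_added_kwh:.0f} kWh")
print(f"average charging power: {avg_charge_power_kw:.0f} kW")
print(f"range added per 5 minutes of charging: about {miles_added_per_5_min:.0f} miles")

# And the 800-volt point: for the same power, doubling the voltage halves the current.
for volts in (400, 800):
    print(f"at {volts} V, delivering 350 kW takes {350_000 / volts:.0f} A")
```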

That 800-volt operation brings other advantages, too. With less current to carry, the wiring is slimmer and lighter, saving 30 kilograms in the electrical harness alone. Also, less current is drawn during hard driving, which reduces heat and wear on the electric motors. Porsche says that’s key to the Taycan’s repeatable, consistent performance.

In its normal driving mode, the Turbo S version kicks out 460 kW (617 horsepower) and 1,049 newton meters (774 pound-feet) of torque. The front and back axles each have an electric motor with a robust 600-amp inverter; in other models the front gets 300 amps and the rear gets 600 amps.

The Porsche’s other big edge is its race-bred handling. Though this sedan tops 2,310 kg (5,100 pounds), its serenity at boggling speeds is unmatched. Credit the full arsenal of Porsche’s chassis technology: four-wheel-steering, active roll stabilization, and an advanced air suspension offering three levels of stiffness, based on three separate pressurized chambers. Porsche claims class-leading levels of brake-energy recuperation. It’s also Porsche’s most aerodynamic production model, with a drag coefficient of just 0.22, about as good as any mass-production car ever.

Porsche invested US $1 billion to develop the Taycan, with $800 million of that going to a new factory in Zuffenhausen, Germany. For a fairer fight with Tesla, a more-affordable 4S model arrives in U.S. showrooms this summer, with up to 420 kW (563 hp) and a base price of $103,800.

Audi RS Q8

Mild hybrid, wild ride

Base price (est.): US $120,000

I’m rocketing up a dormant volcano to the highest peak in Spain, Mt. Teide in the Canary Islands. There may be more efficient ways to test a luxury crossover SUV, but none more fun.

I’m in the Audi RS Q8, a mild-hybrid version of the Q8, introduced just last year. I’m getting a lesson in how tech magic can make a roughly 2,310-kilogram (5,100-pound) vehicle accelerate, turn, and brake like a far smaller machine.

The RS Q8’s pulsing heart is a 4-liter, 441-kilowatt (591-horsepower) twin-turbo V8. It’s augmented by a mild-hybrid system based on a 48-volt electrical architecture that sends up to 12 kW to charge a lithium-ion battery. That system also powers trick electromechanical antiroll bars to keep the body flatter than a Marine’s haircut during hard cornering. An adaptive air suspension hunkers down at speed to reduce drag and center of gravity, while Quattro all-wheel drive and four-wheel steering provide stability.

A mammoth braking system, largely shared with the Lamborghini Urus, the Audi’s corporate cousin, includes insane 10-piston calipers up front. That means 10 pressure points for the brake pads against the spinning brake discs, for brawny stopping power and improved heat management and pedal feel. Optional carbon-ceramic brakes trim 19 pounds from each corner.

Audi’s engineers fine-tuned it all in scores of trials on Germany’s fabled Nürburgring circuit, which the RS Q8 stormed in 7 minutes, 42 seconds. That’s faster than any other SUV in history.

Audi’s digital Virtual Cockpit and MMI Touch center screens are smoothly integrated in a flat panel. A navigation system analyzes past drives to nearby destinations, looking at logged data on traffic density and the time of day. And the Audi Connect, an optional Android app that can be used by up to five people, can unlock and start the Audi.

Audi quotes a conservative 3.8-second catapult from 0 to 100 kilometers per hour (62 miles per hour). We’re betting on 0 to 60 mph in 3.5 seconds, maybe less.

Mini Cooper SE

It offers all-electric sprightliness

US $30,750

I’m on a street circuit at the FIA’s Formula E race in Brooklyn, N.Y., about to take my first all-electric laps in the new Mini Cooper SE during a break in race action.

The Manhattan skyline paints a stunning backdrop across the harbor. My Red Hook apartment happens to be a short walk from this temporary circuit; so is the neighborhood Tesla showroom, and an Ikea and a Whole Foods, both equipped with EV chargers. In other words, this densely populated city is perfect for the compact, maneuverable, electric Mini, that most stylish of urban conveyances.

It’s efficient, too, as Britain’s Mini first proved 61 years ago, with the front-drive car that Sir Alec Issigonis created in response to the gasoline rationing in Britain following the 1956 Suez crisis. This Mini squeezes 32.6 kilowatt-hours worth of batteries into a T-shaped pack below its floor without impinging on cargo space. At a hair over 1,360 kilograms (3,000 pounds), this Mini adds only about 110 kg to a base gasoline Cooper.

With a 135-kilowatt (181-horsepower) electric motor under its handsome hood, the Mini sails past the Formula E grandstand, quickening my pulse with its go-kart agility and its ethereal, near-silent whir. The body sits nearly 2 centimeters higher than the gasoline version, to accommodate 12 lithium-ion battery modules, but the center of gravity drops by 3 cm (1.2 inches), a net boost to stability and handling. Because the Mini has neither an air-inhaling radiator grille nor an exhaust-exhaling pipe, it’s tuned for better aerodynamics as well.

A single-speed transmission means I never have to shift, though I do fiddle with the toggle switch that dials up two levels of regenerative braking. That BMW electric power train, with 270 newton meters (199 pound-feet) of instant-on torque, punts me from 0 to 60 miles per hour (0 to 97 kilometers per hour) in just over 7 seconds, plenty frisky for such a small car. The company claims a new wheelspin actuator reacts to traction losses notably faster, a sprightliness that’s particularly gratifying when gunning the SE around a corner.

It all reminds me of that time when the Tesla Roadster was turning heads and EVs were supposed to be as compact and light as possible to save energy. The downside is that a speck-size car can fit only so much battery. The Mini’s has less than one-third the capacity of the top Tesla Model S. That’s only enough for a mini-size range of 177 km (110 miles). That relatively tiny battery helps deliver an appealing base price of $23,250, including a $7,500 federal tax credit. And this is still a hyperefficient car: On a subsequent drive in crawling Miami traffic, the Mini is on pace for 201 km (125 miles) of range, though its battery contains the equivalent of less than 0.9 gallon of gasoline.
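As a rough gasoline-equivalence check, using the EPA’s 33.7 kilowatt-hours-per-gallon conversion; the usable-capacity figure below is an assumption (a bit under the 32.6-kWh gross rating), and is presumably what the sub-gallon comparison above refers to:

```python
KWH_PER_GALLON = 33.7   # EPA gasoline energy-equivalence factor

# Gross capacity comes from the text; the usable figure is an assumed, slightly
# smaller number, since EVs never expose their full gross capacity to the driver.
pack_kwh = {"gross": 32.6, "usable (assumed)": 28.9}

for label, kwh in pack_kwh.items():
    print(f"{label:>17}: about {kwh / KWH_PER_GALLON:.2f} gallon of gasoline, energy-wise")
```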

Following a full 4-hour charge on a basic Level 2 charger, you’ll be zipping around town again, your conscience as clear as the air around the Mini.

Vintage Fiat 124 Spider, Retooled by Electric GT

A drop-in electric-drive system gives new life to an old car—like this 1982 Spider

System base price: US $32,500

Vintage-car aficionados love to grouse about the time and money it takes to keep their babies running. Electric GT has a better idea: Skip ahead a century. The California company has developed an ingenious plug-and-play “crate motor” that transplants an electric heart into most any vintage gasoline car.

I drove an orange 1982 Fiat 124 Spider that Electric GT converted to battery drive. With a relatively potent 89 kilowatts (120 horsepower) and 235 newton meters (173 pound-feet) of torque below its hood, and 25 kilowatt-hours’ worth of repurposed Tesla batteries stuffed into its trunk area, the Fiat can cover up to 135 kilometers (85 miles) of driving range, enough for a couple hours of top-down cruising.

Best of all, the system is designed to integrate exclusively with manual-transmission cars, including the Fiat’s charming wood-topped shifter and five forward gears. This romantic, Pininfarina-designed Fiat also squirts to 60 miles per hour in about 7 seconds, about 3 seconds quicker than the original old-school dawdler.

Electric GT first got attention when it converted a 1978 Ferrari 308, best known as Tom Selleck’s chariot on the U.S. TV show “Magnum, P.I.,” to electric drive. The company’s shop, north of Los Angeles, is filled with old Porsches, Toyota FJ40s, and other cars awaiting electrification.

The crate motors even look like a gasoline engine, with what appears at first glance to be V-shaped cylinder banks and orange sparkplug wires. Systems are engineered for specific cars, and the burliest of the bunch store 100 kWh, enough to give plenty of range.

With system prices starting at US $32,500 and topping $80,000 for longer-range units, this isn’t a project for the backyard mechanic on a Pep Boys budget. Eric Hutchison, Electric GT’s cofounder, says it’s for the owner who loves a special car and wants to keep it alive but doesn’t want to provide the regular babying care that aging, finicky machines typically demand.

“It’s the guy who says, ‘I already own three Teslas. Now, how do I get my classic Jaguar electrified?’ ” says Hutchison.

Components designed for easy assembly should enable a good car hobbyist to perform the conversion in just 40 to 50 hours, the company says.

“We’re taking out all the brain work of having to be an expert in battery safety or electrical management,” Hutchison says. “You can treat it like a normal engine swap.”

Toyota RAV4 Hybrid

A redesigned hybrid system optimizes fuel economy

Base price: $29,470

The RAV4 is the best-selling vehicle in the United States that isn’t a pickup truck. What’s more, its hybrid offshoot is the most popular gas-electric SUV. No wonder: Forty-four percent of all hybrids sold in America in 2018 were Toyotas. And where many hybrids disappoint in real-world fuel economy, the RAV4 delivers. That’s why this Toyota, whose 2019 redesign came too late to make last year’s Top 10 list, is getting its due for 2020.

My own tests show 41 miles per gallon (5.7 liters per 100 kilometers) in combined city and highway driving, 1 mpg better than the EPA rating. Up front, a four-cylinder, 131-kilowatt (176-horsepower) engine mates with an 88-kW (118-hp) electric motor. A 40-kW electric motor under the cargo hold drives the rear wheels. Altogether, you get a maximum 163 kW (219 hp) in all-wheel-drive operation, with no driveshaft linking the front and rear wheels. The slimmer, redesigned hybrid system adds only about 90 kilograms (about 200 pounds) and delivers a huge 8-mile-per-gallon gain over the previous model.

Toyota's new Predictive Efficient Drive collects data on its driver's habits and combines that with GPS route and traffic info to optimize both battery use and charging. For example, it will use more electricity while climbing hills in expectation of recapturing that juice on the downhill side. And when the RAV4 is riding on that battery, it's as blissfully quiet as a pure EV.

Toyota's Safety Sense gear is standard, including adaptive cruise control, lane-keeping assist, and automatic emergency braking. Next year will bring the first-ever plug-in hybrid version, which Toyota says will be the most powerful RAV4 yet.
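Toyota doesn't publish the logic behind Predictive Efficient Drive, but the hill-climbing behavior described above can be sketched as a simple rule: allow deeper battery discharge ahead of a descent that regenerative braking is expected to refill. The snippet below is a minimal illustration under that assumption; the function, thresholds, and numbers are hypothetical, not Toyota's.

```python
# Hypothetical sketch of predictive battery management: deplete the pack
# more aggressively before a known downhill stretch, assuming regenerative
# braking will recover the energy. Names and values are illustrative only.

def target_soc(default_soc: float,
               upcoming_descent_m: float,
               vehicle_mass_kg: float = 1800.0,
               regen_efficiency: float = 0.6,
               usable_pack_kwh: float = 1.6) -> float:
    """Return a state-of-charge target (0..1) ahead of a known descent."""
    G = 9.81  # m/s^2
    # Potential energy recoverable on the descent, converted to kWh
    recoverable_kwh = (vehicle_mass_kg * G * upcoming_descent_m
                       * regen_efficiency) / 3.6e6
    # Leave that much headroom in the pack, but never target below 30% SOC
    headroom = recoverable_kwh / usable_pack_kwh
    return max(0.30, default_soc - headroom)

# Example: GPS route data shows 150 m of descent coming up
print(f"SOC target before the hill: {target_soc(0.60, 150.0):.2f}")
```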

Ford Escape Hybrid

This SUV has carlike efficiency

Base price: US $29,450

Years ago, Americans began abandoning their cars for SUVs. So by now you might think those SUVs would be achieving carlike efficiencies. You'd be correct. Exhibit A: the new Ford Escape Hybrid, with its class-topping EPA rating of 5.7 liters per 100 kilometers (41 miles per gallon) in combined city and highway driving. That's 1 mpg better than its formidable Top 10 competitor, the Toyota RAV4 Hybrid. Where the Toyota aims for a rugged-SUV look, the Ford wraps a softer, streamlined body around its own hybrid system.

That includes a 2.5-L, four-cylinder Atkinson-cycle engine and a pair of electric motor/generators for a 150-kilowatt (200 horsepower) total. A briefcase-size battery pack, about a third the size of the old Escape Hybrid's, tucks below the front passenger seat. The Toyota's rear electric motor drives the rear wheels independently, so the RAV4 Hybrid comes only in an all-wheel-drive version. The Escape forges a mechanical connection to the rear wheels, allowing both all-wheel-drive and front-wheel-drive versions. The latter is lighter and more efficient when you're not dealing with snow, ice, off-roading, or some combination of the three. The 0-to-60-mph run is dispatched in a whisper-quiet 8.7 seconds, versus 7.5 seconds for the Toyota. The Ford fires back with powerful, smartly tuned hybrid brakes that have more stopping power than either the Toyota or the gasoline-only Escapes can manage.

Tech features include a nifty automated self-parking function, evasive-steering assist, and wireless smartphone charging. A head-up display available on the Titanium trim—Ford's first ever in North America—projects speed, navigation info, driver-assist status, and other data onto the windshield. FordPass Connect, a smartphone app, lets owners lock, unlock, start, or locate their vehicle remotely, and a standard 4G LTE Wi-Fi hotspot links up to 10 mobile devices.

A plug-in hybrid version will follow later this year with what Ford says will be a minimum 30 miles of usable all-electric range. All told, it’s a winning one-two punch of efficiency and technology in an SUV that starts below $30,000.

Aston Martin Vantage AMR

High tech empowers retro tech

Base price: US $183,081

Take an Aston Martin Vantage, among the world’s most purely beautiful sports cars. Add a 375-kilowatt (503-horsepower) hand-assembled V8 from AMG, the performance arm of Mercedes-Benz. Assemble a team of engineers led by Matt Becker, Aston’s handling chief and the former maestro of Lotus’s chassis development. Does this sound like the recipe for the sports car of your dreams? Well, that dream goes over the top, with the manual transmission in the new Vantage AMR.

Burbling away from Aston’s AMR Performance Centre, tucked along the Nürburgring Nordschleife circuit in Germany, I am soon happily pressing a clutch pedal and finessing the stick shift on the Autobahn. The next thing I know, the Aston is breezing past 300 kilometers per hour (or 186 miles per hour), which is not far off its official 195-mph top speed. That’s a 7-mph improvement over the automatic version. This stick shouts defiance in a world in which the Corvette C8, the Ferrari, the Lamborghini, and the Porsche 911 have sent their manual transmissions to the great scrapyard in the sky.

But what’s impressive is how seamlessly the company has integrated this classic technology with the newest tech, including an adaptive power train and suspension. The AMR’s 1,500-kilogram (3,298-pound) curb weight is about 100 kg less than that of an automatic model.

The seven-speed manual, a once-maddening unit from Italy’s Graziano, has been transformed. An all-new gearbox was out of the question: No supplier wanted to develop one for a sports car that will have just 200 copies produced this year. So Aston had to get creative with the existing setup. Technicians reworked shift cables and precisely chamfered the gears’ “fingers”—think of the rounded teeth inside a Swiss watch—for smoother, more-precise shifts. A dual-mass flywheel was fitted to the mighty Mercedes V8 to dampen resonance in the driveline so the gearbox doesn’t rattle. The standard Vantage’s peak torque has been lowered from 681 to 625 newton meters (from 502 to 461 pound-feet) to reduce stress on transmission gears.

Aston also sweated the ideal placement of shifter and clutch pedal for the pilot. A dual-chamber clutch master cylinder, developed from a Formula One design, moves a high volume of transmission fluid quickly, but without an unreasonably heavy, thigh-killing clutch pedal. A selectable AM Shift Mode feature delivers modern, rev-matching downshifts, eliminating the need for human heel-and-toe maneuvers, with thrilling matched upshifts under full throttle.
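Rev matching itself is straightforward arithmetic: the control unit calculates the engine speed that corresponds to the current road speed in the gear being selected, then blips the throttle to hit it before the clutch re-engages. A generic sketch follows; the gear ratio, final drive, and tire size are placeholders, not Aston Martin's actual figures.

```python
# Generic rev-matching arithmetic: find the engine speed that matches
# the current road speed in the target gear, so the clutch can re-engage
# without a jolt. Ratios and tire circumference are placeholders, not
# Aston Martin's actual figures.

def rev_match_rpm(speed_kph: float, gear_ratio: float,
                  final_drive: float = 2.93,
                  tire_circumference_m: float = 2.0) -> float:
    wheel_rpm = (speed_kph * 1000 / 60) / tire_circumference_m
    return wheel_rpm * gear_ratio * final_drive

# Downshifting into third at 120 km/h (placeholder third-gear ratio of 1.6)
print(f"Target engine speed: {rev_match_rpm(120, 1.6):.0f} rpm")   # ~4700 rpm
```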

The Graziano still takes a bit of practice: Its funky “dogleg” first gear sits off to the left, away from the familiar H pattern of shift gates. Second gear is where you’d normally find first, third replaces second, and so on. The layout originated in old-school racing, the idea being that first gear was unneeded, unless you were rolling through the pit lane. The dogleg pattern allows easier shifting from second to third and back without having to slide the shifter sideways. Once acclimated, I can’t get enough: The shifter grants me precise control over the brawny V8, and the Aston’s every balletic move. More improbably, this sweet shifter on the AMR won’t become a footnote in Aston history: It will be an option on every Vantage in 2021.

This article appears in the April 2020 print issue as “2020 Top 10 Tech Cars.”

Companies Report a Rush of Electric Vehicle Battery Advances

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/transportation/efficiency/companies-report-rush-electric-vehicle-battery-advances

Electric vehicles have recently boasted impressive growth rates, more than doubling in market penetration every two years between 2014 and 2018. And batteries play a key role in EV performance and price. That’s why some companies are looking to new chemistries and battery technologies to sustain EV growth rates throughout the early 2020s.

Three recent developments suggest that executives are more than just hopeful. They are, in fact, already striking deals to acquire and commercialize new EV battery advances. And progress has been broad—the new developments concern the three main electrical components of a battery: its cathode, electrolyte, and anode.

TESLA’S BIG BETS Analysts think Tesla’s upcoming annual Battery Day (the company hadn’t yet set a date at press time) will hold special significance. Maria Chavez of Navigant Research in Boulder, Colo., expects to hear about at least three big advancements.

The first one (which Reuters reported in February) is that Tesla will develop batteries with cathodes made from lithium iron phosphate for its Model 3s. These LFP batteries—with “F” standing for “Fe,” the chemical symbol for iron—are reportedly free of cobalt, which is expensive and often mined using unethical practices. LFP batteries also have higher charge and discharge rates and longer lifetimes than conventional lithium-ion cells. “The downside is that they’re not very energy dense,” says Chavez.

To combat that, Tesla will reportedly switch from standard cylindrical cells to prism-shaped cells—the second bit of news Chavez expects to hear about. Stacking prisms versus cylinders would allow Tesla to fit more batteries into a given space.
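The space argument is simple geometry: cylinders packed side by side leave voids that rectangular (prismatic) cells do not. A quick calculation of the cross-sectional packing fractions, using standard geometry rather than any Tesla data:

```python
import math

# Cross-sectional packing fraction of cylindrical cells vs. prismatic cells.
# Circles on a square lattice fill pi/4 of the area; hexagonal packing
# fills pi/(2*sqrt(3)). Rectangular cells can in principle fill nearly
# all of it (ignoring cell walls, cooling channels, and spacing).
square_lattice = math.pi / 4                      # ~0.785
hexagonal_lattice = math.pi / (2 * math.sqrt(3))  # ~0.907

print(f"Cylinders, square packing:    {square_lattice:.1%}")
print(f"Cylinders, hexagonal packing: {hexagonal_lattice:.1%}")
# Even against hexagonal packing, prismatic cells recover roughly 9% of the
# cross-section; against square packing, roughly 21%.
```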

A third development, Chavez says, may concern Tesla’s recent acquisition, Maxwell Technologies. Before being bought by Tesla in May of 2019, Maxwell specialized in making supercapacitors. Supercapacitors, which are essentially charged metal plates with proprietary materials in between, boost a device’s charge capacity and performance.

Supercapacitors are famous for pumping electrons into and out of a circuit at blindingly fast speeds. So an EV power train with a supercapacitor could quickly access stores of energy for instant acceleration and other power-hungry functions. On the flip side, the supercapacitor could also rapidly store incoming charge to be metered out to the lithium battery over longer stretches of time—which could both speed up quick charging and possibly extend battery life.
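That trade-off is easy to quantify with the capacitor energy formula E = ½CV²: even a large supercapacitor stores only a few watt-hours, but it can release or absorb them in seconds. The cell values below are generic catalog-style figures, not Maxwell's or Tesla's.

```python
# Energy vs. power trade-off for a supercapacitor, using E = 1/2 * C * V^2.
# The 3000 F / 2.7 V cell is a generic catalog-style example, not a
# specific Maxwell or Tesla part.
capacitance_f = 3000.0
voltage_v = 2.7

energy_j = 0.5 * capacitance_f * voltage_v ** 2
energy_wh = energy_j / 3600
print(f"Stored energy per cell: ~{energy_wh:.1f} Wh")   # ~3 Wh
# A 75 kWh EV pack stores on the order of 25,000 times more energy,
# but a supercap cell can take or deliver its charge in seconds
# rather than tens of minutes.
```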

So could blending supercapacitors, prismatic cells, and lithium iron phosphate chemistry provide an outsize boost for Tesla’s EV performance specs? “The combination of all three things basically creates a battery that’s energy dense, low cost, faster-to-charge, and cobalt-free—which is the promise that Tesla has been making for a while now,” Chavez said.

SOLID-STATE DEALS Meanwhile, other companies are focused on improving both safety and performance of the flammable liquid electrolyte in conventional lithium batteries. In February, Mercedes-Benz announced a partnership with the Canadian utility Hydro-Québec to develop next-generation lithium batteries with a solid and nonflammable electrolyte. And a month prior, the Canadian utility announced a separate partnership with the University of Texas at Austin and lithium-ion battery pioneer John Goodenough, to commercialize a solid-state battery with a glass electrolyte.

“Hydro-Québec is the pioneer of solid-state batteries,” said Karim Zaghib, general director of the utility’s Center of Excellence in Transportation Electrification and Energy Storage. “We started doing research and development in [lithium] solid-state batteries…in 1995.”

Although Zaghib cannot disclose the specific electrolytes his lab will be working with Mercedes to develop, he says the utility is building on a track record of successful battery technology rollouts with companies including A123 Systems in the United States, Murata Manufacturing in Japan, and Blue Solutions in Canada.

STARTUP SURPRISE Lastly, Echion Technologies, a startup based in Cambridge, England, said in February that it had developed a new anode for high-capacity lithium batteries that could charge in just 6 minutes. (Not to be outdone, a team of researchers in Korea announced that same month that its own silicon anode would charge to 80 percent in 5 minutes.)
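Charge-time claims translate directly into C-rates, the ratio of charging current to the cell's one-hour capacity: a full charge in 6 minutes is a 10C rate, and 80 percent in 5 minutes is about 9.6C. The small helper below is my own framing of that arithmetic, using only the figures quoted above.

```python
# Translate "charge in X minutes" claims into average C-rates, i.e. the
# charging rate relative to the cell's one-hour capacity.
def c_rate(fraction_charged: float, minutes: float) -> float:
    return fraction_charged / (minutes / 60.0)

print(f"Full charge in 6 min: {c_rate(1.0, 6):.1f}C")   # 10.0C
print(f"80% charge in 5 min:  {c_rate(0.8, 5):.1f}C")   # 9.6C
```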

Echion CEO Jean de la Verpilliere—a former engineering Ph.D. student at the nearby University of Cambridge—says Echion’s proprietary “mixed niobium oxide” anode is compatible with conventional cathode and electrolyte technologies.

“That’s key to our business model, to be ‘drop-in,’ ” says de la Verpilliere, who employs several former Cambridge students and staff. “We want to bring innovation to anodes. But then we will be compatible with everything else in the battery.”

In the end, the winning combination for next-generation batteries may well include one or more breakthroughs from each category—cathode, anode, and electrolyte.

This article appears in the April 2020 print issue as “EV Batteries Shift Into High Gear.”