
FAA Files Reveal a Surprising Threat to Airline Safety: the U.S. Military’s GPS Tests

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/aerospace/aviation/faa-files-reveal-a-surprising-threat-to-airline-safety-the-us-militarys-gps-tests

Early one morning last May, a commercial airliner was approaching El Paso International Airport, in West Texas, when a warning popped up in the cockpit: “GPS Position Lost.” The pilot contacted the airline’s operations center and received a report that the U.S. Army’s White Sands Missile Range, in South Central New Mexico, was disrupting the GPS signal. “We knew then that it was not an aircraft GPS fault,” the pilot wrote later.

The pilot missed an approach on one runway due to high winds, then came around to try again. “We were forced to Runway 04 with a predawn landing with no access to [an instrument landing] with vertical guidance,” the pilot wrote. “Runway 04…has a high CFIT threat due to the climbing terrain in the local area.”

CFIT stands for “controlled flight into terrain,” and it is exactly as serious as it sounds. The pilot considered diverting to Albuquerque, 370 kilometers away, but eventually bit the bullet and tackled Runway 04 using only visual aids. The plane made it safely to the ground, but the pilot later logged the experience on NASA’s Aviation Safety Reporting System, a forum where pilots can anonymously share near misses and safety tips.

This is far from the most worrying ASRS report involving GPS jamming. In August 2018, a passenger aircraft in Idaho, flying in smoky conditions, reportedly suffered GPS interference from military tests and was saved from crashing into a mountain only by the last-minute intervention of an air traffic controller. “Loss of life can happen because air traffic control and a flight crew believe their equipment are working as intended, but are in fact leading them into the side of the mountain,” wrote the controller. “Had [we] not noticed, that flight crew and the passengers would be dead. I have no doubt.”

There are some 90 ASRS reports detailing GPS interference in the United States over the past eight years, the majority of which were filed in 2019 and 2020. Now IEEE Spectrum has new evidence that GPS disruption to commercial aviation is much more common than even the ASRS database suggests. Previously undisclosed Federal Aviation Administration (FAA) data for a few months in 2017 and 2018 detail hundreds of aircraft losing GPS reception in the vicinity of military tests. On a single day in March 2018, 21 aircraft reported GPS problems to air traffic controllers near Los Angeles. These included a medevac helicopter, several private planes, and a dozen commercial passenger jets. Some managed to keep flying normally; others required help from air traffic controllers. Five aircraft reported making unexpected turns or navigating off course. In all likelihood, there are many hundreds, possibly thousands, of such incidents each year nationwide, each one a potential accident. The vast majority of this disruption can be traced back to the U.S. military, which now routinely jams GPS signals over wide areas on an almost daily basis somewhere in the country.

The military is jamming GPS signals to develop its own defenses against GPS jamming. Ironically, though, the Pentagon’s efforts to safeguard its own troops and systems are putting the lives of civilian pilots, passengers, and crew at risk. In 2013, the military essentially admitted as much in a report, saying that “planned EA [electronic attack] testing occasionally causes interference to GPS based flight operations, and impacts the efficiency and economy of some aviation operations.”

In the early days of aviation, pilots would navigate using road maps in daylight and follow bonfires or searchlights after dark. By World War II, radio beacons had become common. From the late 1940s, ground stations began broadcasting omnidirectional very high frequency (VHF) signals that planes could lock on to, while shorter-range systems indicated safe glide slopes to help pilots land. At their peak, in 2000, there were more than a thousand VHF navigation stations in the United States. However, in areas with widely spaced stations, pilots were forced to take zigzag routes from one station to the next, and reception of the VHF signals could be hampered by nearby buildings and hills.

Everything changed with the advent of global navigation satellite systems (GNSS), first devised by the U.S. military in the 1960s. The arrival in the mid-1990s of the civilian version of the technology, called the Global Positioning System, meant that aircraft could navigate by satellite and take direct routes from point to point; GPS location and altitude data was also accurate enough to help them land.

The FAA is about halfway through its NextGen effort, which is intended to make flying safer and more efficient through a wholesale switch from ground-based navigation aids like radio beacons to a primarily satellite-enabled navigation system. Along with that switch, the agency began decommissioning VHF navigation stations a decade ago. The United States is now well on its way to having a minimal backup network of fewer than 600 ground stations.

Meanwhile, the reliance on GPS is changing the practice of flying and the habits of pilots. As GPS receivers have become cheaper, smaller, and more capable, they have become more common and more widely integrated. Most airplanes must now carry Automatic Dependent Surveillance-Broadcast (ADS-B) transponders, which use GPS to calculate and broadcast their altitude, heading, and speed. Private pilots use digital charts on tablet computers, while GPS data underpins autopilot and flight-management computers. Pilots should theoretically still be able to navigate, fly, and land without any GPS assistance at all, using legacy radio systems and visual aids. Commercial airlines, in particular, have a range of backup technologies at their disposal. But because GPS is so widespread and reliable, pilots are in danger of forgetting these manual techniques.

When an Airbus passenger jet suddenly lost GPS near Salt Lake City in June 2019, its pilot suffered “a fair amount of confusion,” according to the pilot’s ASRS report. “To say that my raw data navigation skills were lacking is an understatement! I’ve never done it on the Airbus and can’t remember having done it in 25 years or more.”

“I don’t blame pilots for getting a little addicted to GPS,” says Todd E. Humphreys, director of the Radionavigation Laboratory at the University of Texas at Austin. “When something works well 99.99 percent of the time, humans don’t do well in being vigilant for that 0.01 percent of the time that it doesn’t.”

Losing GPS completely is not the worst that can happen. It is far more dangerous when accurate GPS data is quietly replaced by misleading information. The ASRS database contains many accounts of pilots belatedly realizing that GPS-enabled autopilots had taken them many kilometers in the wrong direction, into forbidden military areas, or dangerously close to other aircraft.

In December 2012, an air traffic controller noticed that a westbound passenger jet near Reno, Nev., had veered 16 kilometers (10 miles) off course. The controller confirmed that military GPS jamming was to blame and gave new directions, but later noted: “If the pilot would have noticed they were off course before I did and corrected the course, it would have caused [the] aircraft to turn right into [an] opposite direction, eastbound [jet].”

So why is the military interfering so regularly with such a safety-critical system? Although most GPS receivers today are found in consumer smartphones, GPS was designed by the U.S. military, for the U.S. military. The Pentagon depends heavily on GPS to locate and navigate its aircraft, ships, tanks, and troops.

For such a vital resource, GPS is exceedingly vulnerable to attack. By the time GPS signals reach the ground, they are so faint they can be easily drowned out by interference, whether accidental or malicious. Building a basic electronic warfare setup to disrupt these weak signals is trivially easy, says Humphreys: “Detune the oscillator in a microwave oven and you’ve got a superpowerful jammer that works over many kilometers.” Illegal GPS jamming devices are widely available on the black market, some of them marketed to professional drivers who may want to avoid being tracked while working.

Other GNSS constellations, such as Russia’s GLONASS, China’s BeiDou, and Europe’s Galileo, use slightly different frequencies but have similar vulnerabilities; which system is affected depends on exactly who is conducting the test or attack. In China, mysterious attacks have successfully “spoofed” GPS-equipped ships toward fake locations, while vessels relying on BeiDou reportedly remain unaffected. Similarly, GPS signals are regularly jammed in the eastern Mediterranean, Norway, and Finland, while the Galileo system goes untargeted in the same attacks.

The Pentagon uses its more remote military bases, many in the American West, to test how its forces operate under GPS denial, and presumably to develop its own electronic warfare systems and countermeasures. The United States has carried out experiments in spoofing GPS signals on at least one occasion, during which it was reported to have taken great care not to affect civilian aircraft.

Many ASRS reports record GPS units delivering incorrect positions rather than failing altogether, which can suggest spoofing, although the same symptom can appear when satellite signals are merely degraded. Whatever the nature of its tests, the military’s GPS jamming can end up disrupting service for civilian users, particularly high-altitude commercial aircraft, even at a considerable distance.

The military issues Notices to Airmen (NOTAM) to warn pilots of upcoming tests. Many of these notices cover hundreds of thousands of square kilometers. There have been notices that warn of GPS disruption over all of Texas or even the entire American Southwest. Such a notice doesn’t mean that GPS service will be disrupted throughout the area, only that it might be disrupted. And that uncertainty creates its own problems.

In 2017, the FAA commissioned the nonprofit Radio Technical Commission for Aeronautics to look into the effects of intentional GPS interference on civilian aircraft. Its report, issued the following year by the RTCA’s GPS Interference Task Group, found that the number of military GPS tests had almost tripled from 2012 to 2017. Unsurprisingly, ASRS safety reports referencing GPS jamming are also on the rise. There were 38 such ASRS narratives in 2019—nearly a tenfold increase over 2018.

New internal FAA materials obtained by Spectrum from a member of the task group and not previously made public indicate that the ASRS accounts represent only the tip of the iceberg. The FAA data consists of pilots’ reports of GPS interference to the Los Angeles Air Route Traffic Control Center, one of 22 air traffic control centers in the United States. Controllers there oversee air traffic across central and Southern California, southern Nevada, southwestern Utah, western Arizona, and portions of the Pacific Ocean—areas heavily affected by military GPS testing.

This data includes 173 instances of lost or intermittent GPS during a six-month period of 2017 and another 60 over two months in early 2018. These reports are less detailed than those in the ASRS database, but they show aircraft flying off course, accidentally entering military airspace, being unable to maneuver, and losing their ability to navigate when close to other aircraft. Many pilots required the assistance of air traffic control to continue their flights. The affected aircraft included a pet rescue shuttle, a hot-air balloon, multiple medical flights, and many private planes and passenger jets.

In at least a handful of episodes, the loss of GPS was deemed an emergency. Pilots of five aircraft, including a Southwest Airlines flight from Las Vegas to Chicago, invoked the “stop buzzer,” a request routed through air traffic control for the military to immediately cease jamming. According to the Aircraft Owners and Pilots Association, pilots must use this phrase only when a safety-of-flight issue is encountered.

To be sure, many other instances in the FAA data were benign. In early March 2017, for example, Jim Yoder was flying a Cessna jet owned by entrepreneur and space tourist Dennis Tito between Las Vegas and Palm Springs, Calif., when both onboard GPS devices were jammed. “This is the only time I’ve ever had GPS go out, and it was interesting because I hadn’t thought about it really much,” Yoder told Spectrum. “I asked air traffic control what was going on and they were like, ‘I don’t really know.’ But we didn’t lose our ability to navigate, and I don’t think we ever got off course.”

Indeed, one of the RTCA task group’s conclusions was that the Notice to Airmen system was part of the problem: Most pilots who fly through affected areas experience no ill effects, causing some to simply ignore such warnings in the future.

“We call the NOTAMs ‘Chicken Little,’ ” says Rune Duke, an FAA official who was cochair of the RTCA’s task group. “They say the sky is falling over large areas…and it’s not realistic. There are mountains and all kinds of things that would prevent GPS interference from making it 500 nautical miles [926 km] from where it is initiated.”

GPS interference can be affected by the terrain, aircraft altitude and attitude, direction of flight, angle to and distance from the center of the interference, equipment aboard the plane, and many other factors, concluded the task group, which included representatives of the FAA, airlines, pilots, aircraft manufacturers, and the U.S. military. One aircraft could lose all GPS reception, even as another one nearby is completely unaffected. One military test might pass unnoticed while another causes chaos in the skies.

This unreliability has consequences. In 2014, a passenger plane approaching El Paso had to abort its landing after losing GPS reception. “This is the first time in my flying career that I have experienced or even heard of GPS signal jamming,” wrote the pilot in an ASRS report. “Although it was in the NOTAMs, it still caught us by surprise as we really did not expect to lose all GPS signals at any point. It was a good thing the weather was good or this could have become a real issue.”

Sometimes air traffic controllers are as much in the dark as pilots. “They are the last line of defense,” the FAA’s Duke told Spectrum. “And in many cases, air traffic control was not even aware of the GPS interference taking place.”

The RTCA report made many recommendations. The Department of Defense could improve coordination with the FAA, and it could refrain from testing GPS during periods of high air traffic. The FAA could overhaul its data collection and analysis, match anecdotal reports with digital data, and improve documentation of adverse events. The NOTAM system could be made easier to interpret, with warnings that more accurately match the experiences of pilots and controllers.

Remarkably, until the report came out, the FAA had been instructing pilots to report GPS anomalies only when they needed assistance from air traffic control. “The data has been somewhat of a challenge because we’ve somewhat discouraged reporting,” says Duke. “This has led the FAA to believe it’s not been such a problem.”

NOTAMs now encourage pilots to report all GPS interference, but many of the RTCA’s other recommendations are languishing within the Office of Accident Investigation and Prevention at the FAA.

New developments are making the problem worse. The NextGen project is accelerating the move of commercial aviation to satellite-enabled navigation. Emerging autonomous air systems, such as drones and air taxis, will put even more weight on GPS’s shaky shoulders.

When any new aircraft is adopted, it risks posing new challenges to the system. The Embraer EMB-505 Phenom 300, for instance, entered service in 2009 and has since become the world’s best-selling light jet. In 2016, the FAA warned that if the Phenom 300 encountered an unreliable or unavailable GPS signal, it could enter a Dutch roll (named for a Dutch skating technique), a dangerous combination of wagging and rocking that could cause pilots to lose control. The FAA instructed Phenom 300 owners to avoid all areas of GPS interference.

As GPS assumes an ever more prominent role, the military is naturally taking a stronger interest in it. “Year over year, the military’s need for GPS interference-event testing has increased,” says Duke. “There was an increase again in 2019, partly because of counter-UAS [drone] activity. And they’re now doing GPS interference where they previously had not, like Michigan, Wisconsin, and the Dakotas, because it adds to the realism of any type of military training.”

So there are ever more GPS-jamming tests, more aircraft navigating by satellite, and more pilots utterly reliant on GPS. It is a feedback loop, and it constantly raises the chances that one of these near misses and stop buzzers will end in catastrophe.

When asked to comment, the FAA said it has established a resilient navigation and surveillance infrastructure to enable aircraft to continue safe operations during a GPS outage, including radio beacons and radars. It also noted that it and other agencies are working to create a long-term GPS backup solution that will provide position, navigation, and timing—again, to minimize the effects of a loss of GPS.

However, in a report to Congress in April 2020, the agency coordinating this effort, the U.S. Department of Homeland Security, wrote: “DHS recommends that responsibility for mitigating temporary GPS outages be the responsibility of the individual user and not the responsibility of the Federal Government.” In short, the problem of GPS interference is not going away.

In September 2019, the pilot of a small business jet reported experiencing jamming on a flight into New Mexico. He could hear that aircraft all around him were also affected, with some being forced to descend for safety. “Since the FAA is deprecating [ground-based radio aids], we are becoming dependent upon an unreliable navigation system,” wrote the pilot upon landing. “This extremely frequent [interference with] critical GPS navigation is a significant threat to aviation safety. This jamming has to end.”

The same pilot was jammed again on his way home.

This article appears in the February 2021 print issue as “Lost in Airspace.”

Boom Supersonic’s XB-1 Test Aircraft Promises a New Era of Faster-Than-Sound Travel

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/aerospace/aviation/boom-supersonics-xb-1-test-aircraft-promises-a-new-era-of-faster-than-sound-travel

The Concorde, the world’s first supersonic airliner, landed for the last time 17 years ago. It was a beautiful feat of engineering, but it never turned a profit. So why should we believe that Boom Supersonic can do better now?

Blake Scholl, the founder and chief executive, turns the question on its head: How could it not do better? The Concorde was designed in the 1960s using slide rules and wind tunnels, whereas Boom Supersonic’s planned airliner, the Overture, is being modeled in software and built out of ultralight materials, using superefficient jet engines. That should reduce the consumption of fuel per passenger-mile well below the Concorde’s gluttonous rate, low enough to make possible fares no pricier than today’s business class.

“The greatest challenge—it’s not technology,” says Scholl. “It’s not that people don’t want them. And it’s not that the money isn’t out there. It’s the complex execution, and it’s getting the right people together to do it, building a startup in an industry that doesn’t usually have startups.”

In October, Boom rolled out a one-third-scale model of the Overture called the XB-1, the point of which is to prove the design principles, cockpit ergonomics, flight envelope, and even the experience of flight itself. The first flight will take place in the “near future,” over the Mojave Desert, says Scholl. There’s just one seat, which means that the experience will be the pilot’s alone. The Overture, with 65 seats, is slated to fly in 2025, and allowing for the time it takes to certify it with regulators, it is planned for commercial flight by 2029.

Test planes are hardly standard in the commercial aviation industry. This one is necessary, though, because the company started with a clean sheet of paper.

“There’s nothing more educational than flying hardware,” Brian Durrence, senior vice president in charge of developing the Overture, tells IEEE Spectrum. “You’re usually using a previous aircraft—if you’re a company that has a stable of aircraft. But it’s probably not a great business plan to jump straight into building your first commercial aircraft brand new.”

The XB-1 is slender and sleek, with swept-back delta wings and three jet engines, two slung under the wings and a third in the tail. These are conventional jet engines, coupled to afterburners—thrust-enhancing, fuel-sucking devices that were used by the Concorde, contributing to its high cost of operation. But the future airliner will use advanced turbofan engines that won’t need afterburners.

Another difference from yesteryear is in the liberal use of virtual views of the aircraft’s surroundings. The Concorde, optimized as it was for speed, was awkward during takeoff and landing, when the wings had to be angled upward to provide sufficient lift at low speeds. To make sure the pilot had a good view of the runway, the plane thus needed a “drooping” nose, which was lowered at these times. But the XB-1 can keep a straight face because it relies instead on cameras and cockpit monitors to make the ground visible.

To speed the design process, the engineers had recourse not only to computer modeling but also to 3D printing, which lets them prototype a part in hours, revise it, and reprint it. Boom uses parts made by Stratasys, an additive-manufacturing company, which uses a new laser system from Velo 3D to fuse titanium powder into objects with intricate internal channels, both to save weight and allow the passage of cooling air.

Perhaps Scholl’s most debatable claim is that there’s a big potential market for Mach-plus travel out there. If speed were really such a selling point, why have commercial airliners actually slowed down a bit since the 1960s? It’s not that today’s planes are pokier—even now, a pilot will routinely make up lost time simply by opening the throttle. No, today’s measured pace is a business decision: Carriers traded speed against fuel consumption because the market demanded low fares.

Scholl insists there are people who have a genuine need for speed. He cites internal market surveys suggesting that most business travelers would buy a supersonic flight at roughly the same price as business class, and he argues that still more customers might pop up once supersonic travel allowed you to, say, fly from New York to London in 3.5 hours, meet with associates, then get back home in time to sleep in your own bed.

Why not go after the market for business jets, then? Because Scholl has Elon Musk–like ambitions: He says that once business travel gets going, the unit cost of manufacturing will fall, further lowering fares and inducing even mere tourists to break the sound barrier. That, he argues, is why he’s leaving the market for supersonic business planes—those with 19 seats or fewer—to the likes of Virgin Galactic and Aerion Supersonic.

Boom’s flights will be between coastal cities, because overwater routes inflict no supersonic booms on anyone except the fish. So we’re talking about a niche within a niche. Still, overwater flight is a big and profitable slice of the pie, and it’s precisely on such long routes that speed matters the most.

This article appears in the January 2021 print issue as “Supersonic Travel Returns.”

DeepMind’s New AI Masters Games Without Even Being Taught the Rules

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/deepminds-new-ai-masters-games-without-even-been-taught-the-rules

The folks at DeepMind are pushing their methods one step further toward the dream of a machine that learns on its own, the way a child does.

The London-based company, a subsidiary of Alphabet, is officially publishing the research today, in Nature, although it tipped its hand back in November with a preprint on arXiv. Only now, though, are the implications becoming clear: DeepMind is already looking into real-world applications.

DeepMind won fame in 2016 for AlphaGo, a reinforcement-learning system that beat the game of Go after training on millions of master-level games. In 2018 the company followed up with AlphaZero, which trained itself to beat Go, chess, and shogi, all without recourse to master games or advice. Now comes MuZero, which doesn’t even need to be shown the rules of the game.

The new system tries first one action, then another, learning what the rules allow, at the same time noticing the rewards that are proffered—in chess, by delivering checkmate; in Pac-Man, by swallowing a yellow dot. It then alters its methods until it hits on a way to win such rewards more readily—that is, it improves its play. Such learning by observation is ideal for any AI that faces problems that can’t be specified easily. In the messy real world—apart from the abstract purity of games—such problems abound.
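MuZero itself goes much further, learning a neural-network model of its environment and planning with a tree search. But the trial-and-error principle described above can be illustrated with a far simpler, classic technique. The sketch below is not DeepMind’s algorithm; it is plain tabular Q-learning on a hypothetical five-state “corridor” game, showing an agent that is never told the rules or the rewards and discovers both purely by acting and observing:

```python
import random

# Toy "corridor" game: states 0..4; reaching state 4 ends the episode
# and pays a reward of 1. The agent is never shown these rules; it
# learns them only by trying actions and observing what comes back.
def step(state, action):  # action: 0 = left, 1 = right
    nxt = max(0, min(4, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == 4 else 0.0), nxt == 4

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(5) for a in (0, 1)}  # learned action values
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Mostly exploit what has been learned; explore 10% of the time.
            if rng.random() < eps:
                a = rng.choice((0, 1))
            else:
                a = max((0, 1), key=lambda act: (q[(s, act)], act))
            nxt, r, done = step(s, a)
            best_next = 0.0 if done else max(q[(nxt, 0)], q[(nxt, 1)])
            # Temporal-difference update: nudge this action's value toward
            # the observed reward plus the discounted best next value.
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = nxt
    return q

q = train()
# After training, "right" outvalues "left" in every nonterminal state.
assert all(q[(s, 1)] > q[(s, 0)] for s in range(4))
```

MuZero replaces this lookup table with learned networks and adds a learned model for planning, but the feedback loop is the same: act, observe the reward, and adjust toward whatever wins rewards more readily.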

“We’re exploring the application of MuZero to video compression, something that could not have been done with AlphaZero,” says Thomas Hubert, one of the dozen co-authors of the Nature article. 

“It’s because it would be very expensive to do it with AlphaZero,” adds Julian Schrittwieser, another co-author. 

Other applications under discussion are in self-driving cars (which in Alphabet is handled by its subsidiary, Waymo) and in protein design, the next step beyond protein folding (which sister program AlphaFold recently mastered). Here the goal might be to design a protein-based pharmaceutical that must act on something that is itself an actor, say a virus or a receptor on a cell’s surface.

By simultaneously learning the rules and improving its play, MuZero outdoes its DeepMind predecessors in the economical use of data. In the Atari game of Ms. Pac-Man, when MuZero was limited to considering six or seven simulations per move—“a number too small to cover all the available actions,” as DeepMind notes, in a statement—it still did quite well. 

The system takes a fair amount of computing muscle to train, but once trained, it needs so little processing to make its decisions that the entire operation might be managed on a smartphone. “And even the training isn’t so much,” says Schrittwieser. “An Atari game would take 2-3 weeks to train on a single GPU.”

One reason for the lean operation is that MuZero models only those aspects of its environment—in a game or in the world—that matter in the decision-making process. “After all, knowing an umbrella will keep you dry is more useful to know than modeling the pattern of raindrops in the air,” DeepMind notes, in a statement.

Knowing what’s important is important. Chess lore relates a story in which a famous grandmaster is asked how many moves ahead he looks. “Only one,” intones the champion, “but it is always the best.” That is, of course, an exaggeration, yet it holds a kernel of truth: Strong chess players generally examine lines of analysis that span only a few dozen positions, but they know at a glance which ones are worth looking at.

Children can learn a general pattern after exposure to a very few instances—inferring Niagara from a drop of water, as it were. This astounding power of generalization has intrigued psychologists for generations; the linguist Noam Chomsky once argued that children had to be hard-wired with the basics of grammar because otherwise the “poverty of the stimulus” would have made it impossible for them to learn how to talk. Now, though, this idea is coming into question; maybe children really do glean much from very little.

Perhaps machines, too, are in the early stages of learning how to learn in that fashion. Cue the shark music!

Uber Sells its Robocar Unit

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/cars-that-think/transportation/self-driving/uber-sells-its-robocar-unit

Uber is selling its robocar development operation to Aurora and calling the move a step forward in its quest for autonomous driving technology.

True, it will now have a big stake in the two companies’ combined robocar projects. But this is the latest wrinkle in the consolidation that is under way throughout the self-driving business. Uber itself has in the past been among the acquisitive companies.

But this news is not something Uber’s founder would have welcomed. And it puts the lie to two verities common just a few years back: that every big player in road transport needed its own robocar research unit and that the payoff would come soon—like, now. 

The sale is valued at US $4 billion—a far cry, as Reuters reports, from the $7.5-billion valuation implicit in Uber’s deal last year to raise $1 billion from investors, including Toyota. But it’s still a hefty chunk of change, and that means Uber still has access to a trove of IP and, perhaps as important, robotics talent.

Indeed, when Uber’s disgraced founder and former CEO, Travis Kalanick, first bet big on self-driving tech in early 2015, he went on a hiring spree that gutted the robotics department of Carnegie Mellon University (CMU), in Pittsburgh, where Uber’s robocar unit set up shop. Aurora is based there, too, so there won’t be an out-migration of roboticists from the area.

“Uber was losing $1 million to $2 million a day with no returns in sight in the near future,” says Raj Rajkumar, a professor of electrical and computer engineering at CMU. “Aurora has deep pockets; if it wins, Uber still wins. If not, at least this will cut the bleeding.”

Kalanick spent money like water, then was forced out in 2017 amid allegations of turning a blind eye to sexual harassment. The company’s stock remained a favorite with investors, hitting an all-time high just last week. At today’s market close the market capitalization stood at just under $94 billion, about half again as much as GM’s value. Yet Uber has never turned a profit.

Not turning a profit was once a feature rather than a bug. In the late 1990s, investors were able to put whatever valuation they pleased on a startup so long as that startup did not have any earnings. Without earnings, there can be no price-to-earnings ratio, and without that, my opinion of a company’s value is as good as yours.

In that tech bubble, and perhaps again in today’s mini-bubble, the value comes from somewhere else. It stems from matters unquantifiable—a certain feeling, a je ne sais quoi. In a word, hype.

Those 1990s dot-com startups were on to something, as we in these pandemic times, living off deliveries from FreshDirect, can testify. But they were somewhat ahead of their time. So is robocar tech.

Self-driving technology is clearly transforming the auto industry. Right now, you can buy cars that hew to their lanes, mitigate head-on crashes, warn of things lurking in your blind spots, and even check your eyes to make sure you are looking at the road. It was all too easy to project the trendline all the way to truly autonomous vehicles.

That’s what Uber (and a lot of other companies) did in 2016 when it said it would field such a true robocar in 2021. That’s three and a half weeks from now.

It was Uber, more than any other player, that showed the hollowness of such hype in 2018, when one of its robocars got into an industry-shaking crash in Arizona. The car, given control by a driver who wasn’t paying attention, killed a pedestrian. Across the country, experimental autonomous car fleets were temporarily grounded, and states that had offered lenient regulation began to tighten up. The coming of the driverless era suddenly got set back by years. It was a hard blow for those vendors of high-tech sensors and other equipment that can only make economic sense if manufactured at scale.

The industry is now in long-haul mode, and that requires it to concentrate its resources in a handful of places. Waymo, Aurora, GM, and a handful of big auto companies are obvious choices, and among the suppliers there is the odd clear success, like Luminar. The lidar maker had a stellar coming-out party on Wall Street last week, one that put Austin Russell, its 25-year-old founder, in the billionaires’ club.

But a shakeout is a shakeout. Despite its stellar market cap, Uber is still a ride-hailing app, not General Motors. The company was wise to get out now.

Russia, China, the U.S.: Who Will Win the Hypersonic Arms Race?

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/aerospace/aviation/russia-china-the-us-who-will-win-the-hypersonic-arms-race

It’s obvious why the militaries of the world want missiles that can follow erratic paths at low altitude while flying at five times the speed of sound, eluding any chance at detection or interception.

“Think of it as delivering a pizza, except it’s not a pizza,” says Bradley Wheaton, a specialist in hypersonics at the Johns Hopkins University Applied Physics Laboratory (APL), in Maryland. “In the United States, just 15 minutes can cover the East Coast; a really fast missile takes 20 minutes to get to the West Coast. At these speeds, you have a factor of 50 increase in the area covered per unit of time.”

So the question isn’t why the great powers are pursuing hypersonic arms, but why they are doing so now. Quick answer: They are once again locked in an arms race.

The wider world first heard of this type of weaponry in March 2018, when Russian president Vladimir Putin gave a speech describing his country’s plans for a nuclear-powered cruise missile that could fly around the world at blinding speed, then snake around hills and dales to a target. His bold assertions have been questioned, particularly the part about nuclear power. Even so, a year later a nuclear accident killed seven people near a testing range off the northern coast of Russia, and U.S. intelligence officials speculated that it involved hypersonic experiments.

The nature of that accident is still shrouded in mystery, but it’s clear there’s been a huge increase in the research effort in hypersonics. Here’s a roundup of what the superpowers of the 21st century are doing to pursue what is, in fact, an old concept.

The hypersonic missiles in use or in testing in China and Russia can apparently carry either conventional warheads, aimed at ships and other small military targets, or nuclear ones, aimed at cities and government centers. These ship killers could deprive the United States of its preeminence at sea, which is more than enough reason for China, for instance, to develop hypersonics. But a nuclear-armed version that leaves the defender too little time to launch a retaliatory strike would do even more to shift the balance of power, because it would dismantle the painstakingly constructed system of deterrence known as mutually assured destruction, or by the jocular acronym MAD.

“The nuclear side is very destabilizing, which is why the Russians are going after it,” says Christopher Combs, a professor of mechanical engineering at the University of Texas at San Antonio. “But on the U.S. side we see no need for that, so we’re going conventional.”

That is indeed the official U.S. policy. But in August, some months after Combs spoke with IEEE Spectrum, an Aviation Week article pointed out that an Air Force agency charged with nuclear weapons requested that companies submit ideas for a “thermal protection system that can support [a] hypersonic glide to ICBM ranges.” Soon after that, the request was hastily taken down, and the U.S. Air Force felt compelled to restate its policy not to pursue nuclear-capable hypersonic weapons.

Today’s forays into hypersonic research have deep roots, reaching back to the late 1950s, in both the United States and the Soviet Union. Although this work continued for decades, in 1994, a few years after the Cold War ended with the dissolution of the Soviet Union, the United States pulled the plug on research into hypersonic flight, including its last and biggest program, the Rockwell X-30. Nicknamed the “Orient Express,” the X-30 was to have been a crewed transport that would top out at 25 times the speed of sound, Mach 25—enough to take off from Washington, D.C., and land in Tokyo 2 hours later. Russia also discontinued research in this area during the 1990s, when its economy was in tatters.

Today’s test vehicles just pick up where the old ones left off, explains Alexander Fedorov, a professor at Moscow Institute of Physics and Technology and an expert on hypersonic flow at the boundary layer, which is right next to the vehicle’s skin. “What’s flying now is just a demonstration of technology—the science is 30 years old,” he says.

Fedorov has lectured in the United States; he even helps U.S. graduate students with their research. He laments how the arms race has stifled international cooperation, adding that he himself has “zero knowledge” about the military project Putin touted two years ago. “But I know that people are working on it,” he adds.

In the new race, Fedorov says, Russia has experience without much money, China has money without much experience, and the United States has both, although it revived its efforts later than did Russia or China and is now playing catch-up. For fiscal 2021, U.S. research agencies have budgeted US $3.2 billion [PDF] for all hypersonic weapons research, up from $2.6 billion in the previous year.

Other programs are under way in India and Australia; even Israel and Iran are in the game, if on the sidelines. But Fedorov suggests that the Chinese are the ones to watch: They used to talk at international meetings, he says, but now they mostly just listen, which is what you’d expect if they had started working on truly new ideas—of which, he reiterates, there are very few on display. All the competing powers have shown vehicles that are “very conservative,” he says.

One good reason for the rarity of radical designs is the enormous expense of the research. Engineers can learn only so much by running tests on the ground, using computational fluid-flow models and hypersonic wind tunnels, which themselves cost a pretty penny (and simulate only some limited aspects of hypersonic flight). Engineers really need to fly their creations, and usually when they do, they use up the test vehicle. That makes design iteration very costly.

It’s no wonder hypersonic prototypes fail so often. In mere supersonic flight, passing Mach 1 is a clear-cut thing: The plane outdistances the sound waves that it imparts to the air to produce a shock wave, which forms the familiar two-beat sonic boom. But as the vehicle exceeds Mach 5, the density of the air just behind the shock wave diminishes, allowing the wave to nestle along the surface of the vehicle. That in-your-face layer poses no aerodynamic problems, and it could even be an advantage, when it’s smooth. But it can become turbulent in a heartbeat.

“Predicting when it’s going turbulent is hard,” says Wheaton, of Johns Hopkins APL. “And it’s important because when it does, heating goes up, and it affects how control surfaces can steer. Also, there’s more drag.”

The pioneers of hypersonic flight learned about turbulence the hard way. On one of its many flights, in 1967, the U.S. Air Force’s X-15 experimental hypersonic plane went into a spin, killing the pilot, Michael J. Adams. The right stuff, indeed.

Hypersonic missiles come in two varieties. The first kind, launched into space on the tip of a ballistic missile, punches down into the atmosphere, then uses momentum to maneuver. Such “boost-glide” missiles have no jet engines and thus need no air inlets, so it’s easy to make them symmetrical, typically a tube with a cone-shaped tip. Every part of the skin gets equal exposure to the air, which at these speeds breaks down into a plume of plasma, like the one that puts astronauts in radio silence during reentry.

Boost-glide missiles are now operational. China appears to have deployed the first one, called the Dongfeng-17, a ballistic missile that carries glide vehicles. Some of those gliders are billed as capable of knocking out U.S. Navy supercarriers. For such a mission the glider need not pack a nuclear or even a conventional warhead, instead relying on its enormous kinetic energy to destroy its target. And there’s nothing that any country can now do to defend against it.

“Those things are going so fast, you’re not going to get it,” General Mark Milley, chairman of the Joint Chiefs of Staff, said in March, in testimony [PDF] before Congress.

You might think that you give up the element of surprise by starting with a ballistic trajectory. But not completely. Once the hypersonic missile comes out of its dive to fly horizontally, it becomes invisible to sparsely spaced radars, particularly the handful based in the Pacific Ocean. And that flat flight path can swerve a lot. That’s not because of any AI-managed magic—the vehicle just follows a randomized, preprogrammed set of turns. But the effect on those playing defense is the same: The pizza arrives before they can find their wallets.

The second kind of hypersonic missile gets the bulk of its impulse from a jet engine that inhales air really fast, whirls it together with fuel, and burns the mixture in the instant that it tarries in the combustion chamber before blowing out the back as exhaust. Because these engines don’t need compressors but simply use the force of forward movement to ram air inside, and because that combustion proceeds supersonically, they are called supersonic-combustion ramjets—scramjets, for short.

One advantage the scramjet has over the boost-glide missile is its ability to stay below radar and continue to maneuver over great distances, all the way to its target. And because it never enters outer space, it doesn’t need to ride a rocket booster, although it does need some powerful helper to get it up to the speed at which first a ramjet, and then a scramjet, can work.

Another advantage of the scramjet is that it can, in principle, be applied for civilian purposes, moving people or packages that absolutely, positively have to be there quickly. The Europeans have such a project. So do the Chinese, and Boeing has shown a concept. Everyone talks up this possibility because, frankly, it’s the only peaceable talking point there is for hypersonics. Don’t forget, though, that supersonic commercial flight happened long ago, made no money, and ended—and supersonic flight is way easier.

The scramjet has one big disadvantage: It’s a lot harder technically. Any hypersonic vehicle must fend off the rapidly moving air outside, which can heat the leading edges to as high as 3,000 °C. But that heat and stress is nothing like the hellfire inside a scramjet engine. There, the heat cannot radiate away, it’s hard to keep the flame lit, and the insides can come apart second by second, affecting airflow and stability. Five minutes is a long time in this business.

That’s why scramjets, though conceived in the 1950s, remain a work in progress. In the early 2000s, NASA’s X-43 used scramjets for about 10 seconds in flight. In 2013, Boeing’s X-51 Waverider flew at hypersonic speed for 210 seconds while under scramjet power.

Tests on the ground have fared better. In May, workers at the Beijing Academy of Sciences ran a scramjet for 10 minutes, according to a report in the South China Morning Post. Two years earlier, the leader of the project, Fan Xuejun, told the same newspaper that a factory was being built to construct a variety of scramjets, some supposedly for civilian application. One engine would use a combined cycle, with a turbojet to get off the ground, a ramjet to accelerate to near-hypersonic speed, a scramjet to blow past Mach 5, and maybe even a rocket to top off the thrust. That’s a lot of moving parts—and an ambition worthy of Elon Musk. But even Musk might hesitate to follow Putin’s proposal to use a nuclear reactor for energy.

The cost of developing a scramjet capability is only one part of the economic challenge. The other is making the engine cheap enough to deploy and use in a routine way. To do that, you need fuel you can rely on. Early researchers worked with a class of highly energetic fuels that would react on contact with air, like triethylaluminum.

“It’s a fantastic scramjet engine fuel, but very toxic, a bit like the hydrazine fuels used in rockets nowadays, and this became an inhibitor,” says David Van Wie, of Johns Hopkins APL, explaining why triethylaluminum was dropped from serious consideration.

Next up was liquid hydrogen, which is also very reactive. But it needs elaborate cooling. Worse, it packs a rather low amount of energy into a given volume, and as a cryogenic fuel it is inconvenient to store and transport. It has been and still is used in experimental missiles, such as the X-43.

Today’s choice for practical missiles is hydrocarbons, of the same ilk as jet fuel, but fancier. The Chinese scramjet that burned for 10 minutes—like others on the drawing board around the world—burns hydrocarbons. Here the problem lies in breaking down the hydrocarbon’s long molecular chains fast so the shards can bind with oxygen in the split second when the substances meet and mate. And a split second isn’t enough—you have to do it continuously, one split second after another, “like keeping a match lit in a hurricane,” in the oft-quoted words of NASA spokesman Gary Creech, back in 2004.

Scramjet designs try to protect the flame by shaping the inflow geometry to create an eddy, forming a calm zone not unlike the eye of a hurricane. Flameouts are particularly worrisome when the missile starts jinking about, thus disrupting the airflow. “It’s the ‘unstart’ phenomenon, where the shock wave at the air inlets stops the engine, and the vehicle will be lost,” says John D. Schmisseur, a researcher at the University of Tennessee Space Institute, in Tullahoma. And you really only get to meet such gremlins in actual flight, he adds.

There are other problems besides flameout that arise when you’re inhaling a tornado. One expert, who requested anonymity, puts it this way: “If you’re ingesting air, it’s no longer air; it’s a complex mix of ionized atmosphere,” he says. “There’s no water anymore; it’s all hydrogen and oxygen, and the nitrogen is to some fraction elemental, not molecular. So combustion isn’t air and fuel—it’s whatever you’re taking in, whatever junk—which means chemistry at the inlet matters.”

Simulating the chemistry is what makes hypersonic wind-tunnel tests problematic. It’s fairly simple to see how an airfoil responds aerodynamically to Mach 5—just cool the air so that the speed of sound drops, giving a higher Mach number for a given airspeed. But blowing cold air tells you only a small part of the story because it heads off all the chemistry you want to study. True, you can instead run your wind tunnel fast, hot, and dense—at “high enthalpy,” to use the term of art—but it’s hard to keep that maelstrom going for more than a few milliseconds.
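The cold-air trick rests on nothing more than the ideal-gas formula for the speed of sound, a = √(γRT): chill the air and the speed of sound drops, so the same airspeed registers as a higher Mach number. A minimal sketch (the airspeed and temperatures below are illustrative round numbers, not figures from any particular facility):

```python
import math

def mach_number(airspeed_m_s, temp_kelvin, gamma=1.4, gas_const=287.05):
    """Mach number for a given airspeed and air temperature.

    Uses the ideal-gas speed of sound a = sqrt(gamma * R * T),
    with gamma and R for dry air.
    """
    speed_of_sound = math.sqrt(gamma * gas_const * temp_kelvin)
    return airspeed_m_s / speed_of_sound

v = 700.0  # a fixed airspeed, in meters per second
# At a sea-level temperature, this airspeed is only about Mach 2...
print(f"Mach at 288 K: {mach_number(v, 288.0):.2f}")
# ...but chill the flow and the very same airspeed exceeds Mach 4.
print(f"Mach at  60 K: {mach_number(v, 60.0):.2f}")
```

The catch, as the paragraph above notes, is that the chilled flow never gets hot enough to trigger the dissociation chemistry that matters at real hypersonic speeds.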

“Get the airspeed high enough to start up the chemistry and the reactions sap the energy,” says Mark Gragston, an aerospace expert who’s also at the UT Space Institute. Getting access to such monster machines isn’t easy, either. “At Arnold Air Force Base, across the street from me, the Air Force does high-enthalpy wind-tunnel experiments,” he says. “They’re booked up three years in advance.”

Other countries have more of the necessary wind tunnels; even India has about a dozen [PDF]. Right now, the United States is spending loads of money building these machines in an effort to catch up with Russia and China. You could say there is a wind-tunnel gap—one more reason U.S. researchers are keen for test flights.

Another thing about cooling the air: It does wonders for any combustion engine, even the kind that pushes pistons. Reaction Engines, in Abingdon, England, appears to be the first to try to apply this phenomenon in flight, with a special precooling unit. In its less-ambitious scheme, the precooler sits in front of the air inlet of a standard turbojet, adding power and efficiency. In its more-ambitious concept, called SABRE (Synergetic Air Breathing Rocket Engine), the engine operates in combined mode: It takes off as a turbojet assisted by the precooler and accelerates until a ramjet can switch on, adding enough thrust to reach (but not exceed) Mach 5. Then, as the vehicle climbs and the atmosphere thins out, the engine switches to pure rocket mode, finally launching a payload into orbit.

In principle, a precooler could work in a scramjet. But if anyone’s trying that, they’re not talking about it.

Fast forward five years and boost-glide missiles will no doubt be deployed in the service of multiple countries. Jump ahead another 15 or 20 years, and the world’s superpowers will have scramjet missiles.

So what? Won’t these things always play second fiddle to ballistic missiles? And won’t the defense also have its say, by unveiling superfast antimissiles and Buck Rogers–style directed-energy weapons?

Perhaps not. The defense always has the harder job. As President John F. Kennedy noted in an interview way back in 1962, when talking about antiballistic missile defense, what you are trying to do is shoot a bullet with a bullet. And, he added, you have to shoot down not just one but many, including a bodyguard of decoys.

Today there are indeed antimissile defenses that can protect particular targets against ballistic missiles, at least when they’re not being fired in massive salvos. But you can’t defend everything, which is why the great powers still count on deterrence through mutually assured destruction. By that logic, if you can detect cruise missiles soon enough, you can at least make those who launched them wish they hadn’t.

For that to work, we’ll need better eyes in the skies. In the United States, the military wants around $100 million for research on low-orbit space sensors to detect low-flying hypersonic missiles, Aviation Week reported in 2019.

Hardly any of the recent advances in hypersonic flight result from new scientific discoveries; almost all of it stems from changes in political will. Powers that challenge the international status quo—China and Russia—have found the resources and the will to shape the arms race to their benefit. Powers that benefit from the status quo—the United States, above all—are responding in kind. Politicians fired the starting pistol, and the technologists are gamely leaping forward.

And the point? There is no point. It’s an arms race.

This article appears in the December 2020 print issue as “Going Hypersonic.”

MIT Unveils a Roboat Big Enough to Stand on

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/cars-that-think/transportation/marine/mit-unveils-a-roboat-big-enough-to-stand-on

An early idea for making the most of autonomous technology was to put a human-driven lead car in front of a string of robocars. It’s called platooning, and it looks for all the world like a mama goose leading a gaggle of goslings.

You can play “follow the leader” on the water, too, but because boats can easily touch and move in tandem, you can have much more complex arrangements than simple caravans. The coordination between the lead boats and the followers allows you to go lighter on sensors and other hardware when designing those follower boats, which can rely on the lead boat to sense the wider environment. This all means that small boats can form and re-form in a variety of ways—“shapeshifting” into useful structures like a bridge or a platform. Presto! You’ve got yourself a lilypad fit to host a popup event on a canal or lake—say, a flower show or a concert.

“You could create on demand whatever is needed to shift human activities to the water,” Daniela Rus, a professor of electrical engineering and computer science at MIT, tells IEEE Spectrum. She is one of the leaders of a project jointly run by the university’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and its Senseable City Lab, to explore autonomous boats.

Five years ago the project began by moving meter-long boats around in pools and in a canal; now it is graduating to bigger boats. “Roboat II” measures 2 meters, which MIT calls “Covid-friendly” because it is sufficient for social isolation between passengers. It is being tested on Boston’s Charles River, and it has also braved the canals of Amsterdam, where it steered itself around for three hours and returned to within 17 centimeters (6.7 inches) of its origin. A full-size model, at 4 meters, is being built by the Amsterdam Institute for Advanced Metropolitan Solutions, for testing in Amsterdam’s canals.

The latest developments are described in a paper that is being presented today virtually, at the International Conference on Intelligent Robots and Systems. The lead author is Wei Wang, a postdoctoral fellow at MIT; Rus is among the co-authors.

“In order to be really accurate and to get situational awareness you have to put a rich set of sensors on every single node in the system—every single boat,” Rus says. “With the leader-follower arrangement you don’t have to have that. If you have a large swarm of vehicles, do they all need to be endowed with all the sensors?”

MIT’s smallest robo-boats haven’t made the jump from the lab to the marketplace, although they can in principle be used for mapping, water-pollution testing, and other highly localized work. The real promise lies in bigger boats, with their greater carrying capacity and longer run times between charges.

Autonomous charging is possible, though, for boats large or small. They can always plug themselves in, much as a Roomba vacuum cleaner does. Full autonomy is the ultimate goal, but there’s a lot that can be done even now. Just follow the leader.

A Battery That’s Tough Enough To Take Structural Loads

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/energywise/energy/batteries-storage/a-structural-battery-that-makes-up-the-machine-that-it-powers

Batteries can add considerable mass to any design, and they have to be supported using a sufficiently strong structure, which can add significant mass of its own. Now researchers at the University of Michigan have designed a structural zinc-air battery, one that integrates directly into the machine that it powers and serves as a load-bearing part. 

That feature saves weight and thus increases effective storage capacity, adding to the already hefty energy density of the zinc-air chemistry. And the very elements that make the battery physically strong help contain the chemistry’s longstanding tendency to degrade over many hundreds of charge-discharge cycles. 

The research is being published today in Science Robotics.

Nicholas Kotov, a professor of chemical engineering, is the leader of the project. He would not say how many watt-hours his prototype stores per gram, but he did note that zinc air—because it draws on ambient air for its electricity-producing reactions—is inherently about three times as energy-dense as lithium-ion cells. And, because using the battery as a structural part means dispensing with an interior battery pack, you could free up perhaps 20 percent of a machine’s interior. Along with other factors the new battery could in principle provide as much as 72 times the energy per unit of volume (not of mass) as today’s lithium-ion workhorses.

“It’s not as if we invented something that was there before us,” Kotov says. “I look in the mirror and I see my layer of fat—that’s for the storage of energy, but it also serves other purposes,” like keeping you warm in the wintertime. (A similar advance occurred in rocketry when designers learned how to make some liquid-propellant tanks load bearing, eliminating the mass penalty of having separate external hull and internal tank walls.)

Others have spoken of putting batteries, including the lithium-ion kind, into load-bearing parts in vehicles. Ford, BMW, and Airbus, for instance, have expressed interest in the idea. The main problem to overcome is the tradeoff in load-bearing batteries between electrochemical performance and mechanical strength.

The Michigan group gets both qualities by using a solid electrolyte (which can’t leak under stress) and by covering the electrodes with a membrane whose nanostructure of fibers is derived from Kevlar. That makes the membrane tough enough to suppress the growth of dendrites—branching fibers of metal that tend to form on an electrode with every charge-discharge cycle and which degrade the battery.

The Kevlar need not be purchased new but can be salvaged from discarded body armor. Other manufacturing steps should be easy, too, Kotov says. He has only just begun to talk to potential commercial partners, but he says there’s no reason why his battery couldn’t hit the market in the next three or four years.

Drones and other autonomous robots might be the most logical first application because their range is so severely limited by their battery capacity. Also, because such robots don’t carry people about, they face less of a hurdle from safety regulators leery of a fundamentally new battery type.

“And it’s not just about the big Amazon robots but also very small ones,” Kotov says. “Energy storage is a very significant issue for small and flexible soft robots.”

Here’s a video showing how Kotov’s lab has used batteries to form the “exoskeleton” of robots that scuttle like worms or scorpions.

Noise-Cancelling Smart Window Blocks Street Din

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/tech-talk/green-tech/buildings/canceling-street-noise-at-your-homes-window

During hot weather, it’s nice to open a window to let in a breeze. Maybe not, though, if the window also lets in a cacophony from cars and trucks roaring past.

Street noise is a nuisance, a health hazard, and often cited as a reason to abandon the city for quieter pastures. Why can’t technology ease the problem? We have noise-cancelling headphones; why not noise-cancelling windows as well? Now, researchers in Singapore have created just such a thing for a mockup room, and they are working on adapting their proof of principle to a real one.

The idea is simple: A sensor picks up a regularly repeating waveform, like the sound created by a rolling wheel or a turning propeller. Electronics characterizes the wave, generates a mirror image of it, and emits that second “antiwave” from a speaker, causing the two waves’ peaks and troughs to cancel out.
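The cancellation itself is just signal arithmetic: invert the measured waveform and the sum goes to zero. A toy sketch of the principle (the sample rate, tone frequency, and pure-sine noise model are simplifying assumptions for illustration, not details of the researchers’ system):

```python
import math

SAMPLE_RATE = 8000  # samples per second; an arbitrary choice for this sketch

def tone(freq_hz, n_samples):
    """A pure sine tone, standing in for a regularly repeating noise source."""
    return [math.sin(2 * math.pi * freq_hz * n / SAMPLE_RATE)
            for n in range(n_samples)]

noise = tone(300, 800)            # a 300 Hz rumble picked up by the sensor
antinoise = [-s for s in noise]   # the mirror-image wave sent to the speaker
residual = [a + b for a, b in zip(noise, antinoise)]

print(max(abs(s) for s in residual))  # 0.0: peaks and troughs cancel exactly
```

In practice the hard part is not the inversion but making sure wave and antiwave actually meet in phase throughout the listening space, which is why the technique favors confined geometries.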

Antinoise works best for frequencies above 300 Hertz and up to about 1000 Hz—so think about the rumble of traffic rather than the cracking of fireworks that has been plaguing some U.S. cities this summer. Antinoise also works best in limited spaces, where the wave and its antiwave are sure to meet up properly, as in the gap between a headphone and an ear. However, with careful engineering, the audio trick can help in an airplane’s cabin and even in a car. In airliners, the antinoise is conveyed through special “shakers” attached to the fuselage; in cars it’s channeled through the existing sound system.

In a paper published today in the British journal Nature, researchers at Nanyang Technological University describe how an array of 24 small speakers placed in a window, together with a sensor, can generate an antinoise signal strong enough to cut the room’s noise by 10 decibels as perceived by the human ear—that is, A-weighted decibels, or dBA. That’s about the difference between heavy traffic 90 meters away (60 dBA) and a quiet moment in a city (50 dBA).
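For a sense of scale: decibels are logarithmic, so a 10 dB cut corresponds to a tenfold drop in sound power, which listeners typically describe as roughly a halving of perceived loudness. A quick check of the arithmetic:

```python
def power_ratio(db_change):
    """Convert a change in decibels to a ratio of sound power.

    A decibel change of x corresponds to a power ratio of 10**(x/10).
    """
    return 10 ** (db_change / 10)

# The reported 10 dB cut (e.g., 60 dBA traffic down to 50 dBA)
# leaves only a tenth of the original sound power reaching the room.
print(power_ratio(-10))
```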

It’s a striking achievement to make wave and antiwave cancel out perfectly throughout an entire room. The key is that the noise all comes through a relatively small aperture—the window, explains Bhan Lam, an electrical engineer and the leader of the group. 

“In a way, we are treating the window opening as the noise source,” he tells IEEE Spectrum. “Effective control of the noise source will result in noise control everywhere in the room.” He adds that simulations show that it ought to work no matter how big the room is.

There are two engineering tradeoffs. First, as you move the speakers farther apart, the highest frequency they can cancel goes down. And as you make the speakers smaller, you reduce their maximum output power and their bass response. But if you really want to make the most of today’s speaker technology, Lam says, you can enlarge the window so that it can accommodate bigger speakers.

Years ago, noise from overflying airliners so ruffled people at the U.S. Open tennis tournament, in Queens, NY, that the city arranged to re-route air traffic to and from LaGuardia Airport for the duration of the event. Why can’t antinoise do that job instead?

“In an open space, if the noise source is far away—say, from an aircraft—it becomes a challenging problem,” Lam explains. “This type of control is termed spatial active noise control, and the research is still in the fundamental stage; only simulations have been reported thus far.”

Q&A: The Masterminds Behind Toyota’s Self-Driving Cars Say AI Still Has a Way to Go

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/transportation/self-driving/qa-the-masterminds-behind-toyotas-selfdriving-cars-say-ai-still-has-a-way-to-go

Getting a car to drive itself is undoubtedly the most ambitious commercial application of artificial intelligence (AI). The research project was kicked into life by the 2004 DARPA Grand Challenge and then taken up as a business proposition, first by Google, and later by the big automakers.

The industry-wide effort vacuumed up many of the world’s best roboticists and set rival companies on a multibillion-dollar acquisitions spree. It also launched a cycle of hype that paraded ever more ambitious deadlines—the most famous of which, made by Google cofounder Sergey Brin in 2012, was that full self-driving technology would be ready by 2017. Those deadlines have all been missed.

Much of the exhilaration was inspired by the seeming miracles that a new kind of AI—deep learning—was achieving in playing games, recognizing faces, and transcribing speech. Deep learning excels at tasks involving pattern recognition—a particular challenge for older, rule-based AI techniques. However, it now seems that deep learning will not soon master the other intellectual challenges of driving, such as anticipating what human beings might do.

Among the roboticists who have been involved from the start are Gill Pratt, the chief executive officer of Toyota Research Institute (TRI), formerly a program manager at the Defense Advanced Research Projects Agency (DARPA); and Wolfram Burgard, vice president of automated driving technology for TRI and president of the IEEE Robotics and Automation Society. The duo spoke with IEEE Spectrum’s Philip Ross at TRI’s offices in Palo Alto, Calif.

This interview has been condensed and edited for clarity.

IEEE Spectrum: How does AI handle the various parts of the self-driving problem?

Gill Pratt: There are three different systems that you need in a self-driving car: It starts with perception, then goes to prediction, and then goes to planning.

The one that by far is the most problematic is prediction. It’s not prediction of other automated cars, because if all cars were automated, this problem would be much more simple. How do you predict what a human being is going to do? That’s difficult for deep learning to learn right now.

Spectrum: Can you offset the weakness in prediction with stupendous perception?

Wolfram Burgard: Yes, that is what car companies basically do. A camera provides semantics, lidar provides distance, radar provides velocities. But all this comes with problems, because sometimes you look at the world from different positions—that’s called parallax. Sometimes you don’t know which range estimate that pixel belongs to. That might make the decision complicated as to whether that is a person painted onto the side of a truck or whether this is an actual person.

With deep learning there is this promise that if you throw enough data at these networks, it’s going to work—finally. But it turns out that the amount of data that you need for self-driving cars is far larger than we expected.

Spectrum: When do deep learning’s limitations become apparent?

Pratt: The way to think about deep learning is that it’s really high-performance pattern matching. You have input and output as training pairs; you say this image should lead to that result; and you just do that again and again, for hundreds of thousands, millions of times.

Here’s the logical fallacy that I think most people have fallen prey to with deep learning. A lot of what we do with our brains can be thought of as pattern matching: “Oh, I see this stop sign, so I should stop.” But it doesn’t mean all of intelligence can be done through pattern matching.

For instance, when I’m driving and I see a mother holding the hand of a child on a corner and trying to cross the street, I am pretty sure she’s not going to cross at a red light and jaywalk. I know from my experience being a human being that mothers and children don’t act that way. On the other hand, say there are two teenagers—with blue hair, skateboards, and a disaffected look. Are they going to jaywalk? I look at that, you look at that, and instantly the probability in your mind that they’ll jaywalk is much higher than for the mother holding the hand of the child. It’s not that you’ve seen 100,000 cases of young kids—it’s that you understand what it is to be either a teenager or a mother holding a child’s hand.

You can try to fake that kind of intelligence. If you specifically train a neural network on data like that, you could pattern-match that. But you’d have to know to do it.

Spectrum: So you’re saying that when you substitute pattern recognition for reasoning, the marginal return on the investment falls off pretty fast?

Pratt: That’s absolutely right. Unfortunately, we don’t have the ability to make an AI that thinks yet, so we don’t know what to do. We keep trying to use the deep-learning hammer to hammer more nails—we say, well, let’s just pour more data in, and more data.

Spectrum: Couldn’t you train the deep-learning system to recognize teenagers and to assign the category a high propensity for jaywalking?

Burgard: People have been doing that. But it turns out that these heuristics you come up with are extremely hard to tweak. Also, sometimes the heuristics are contradictory, which makes it extremely hard to design these expert systems based on rules. This is where the strength of the deep-learning methods lies, because somehow they encode a way to see a pattern where, for example, here’s a feature and over there is another feature; it’s about the sheer number of parameters you have available.

Our separation of the components of a self-driving AI eases the development and even the learning of the AI systems. Some companies even think about using deep learning to do the job fully, from end to end, not having any structure at all—basically, directly mapping perceptions to actions. 

Pratt: There are companies that have tried it; Nvidia certainly tried it. In general, it’s been found not to work very well. So people divide the problem into blocks, where we understand what each block does, and we try to make each block work well. Some of the blocks end up more like the expert system we talked about, where we actually code things, and other blocks end up more like machine learning.

Spectrum: So, what’s next—what new technique is in the offing?

Pratt: If I knew the answer, we’d do it. [Laughter]

Spectrum: You said that if all cars on the road were automated, the problem would be easy. Why not “geofence” the heck out of the self-driving problem, and have areas where only self-driving cars are allowed?

Pratt: That means putting in constraints on the operational design domain. This includes the geography—where the car should be automated; it includes the weather, it includes the level of traffic, it includes speed. If the car is going slow enough to avoid colliding without risking a rear-end collision, that makes the problem much easier. Street trolleys still operate in mixed traffic in some parts of the world, and that seems to work out just fine. People learn that this vehicle may stop at unexpected times. My suspicion is, that is where we’ll see Level 4 autonomy in cities. It’s going to be in the lower speeds.

That’s a sweet spot in the operational design domain, without a doubt. There’s another one at high speed on a highway, because access to highways is so limited. But unfortunately there is still the occasional debris that suddenly crosses the road, and the weather gets bad. The classic example is when somebody irresponsibly ties a mattress to the top of a car and it falls off; what are you going to do? And the answer is that terrible things happen—even for humans.

Spectrum: Learning by doing worked for the first cars, the first planes, the first steam boilers, and even the first nuclear reactors. We ran risks then; why not now?

Pratt: It has to do with the times. During the era where cars took off, all kinds of accidents happened, women died in childbirth, all sorts of diseases ran rampant; the expected characteristic of life was that bad things happened. Expectations have changed. Now the chance of dying in some freak accident is quite low because of all the learning that’s gone on, the OSHA [Occupational Safety and Health Administration] rules, UL code for electrical appliances, all the building standards, medicine.

Furthermore—and we think this is very important—we believe that empathy for a human being at the wheel is a significant factor in public acceptance when there is a crash. We don’t know this for sure—it’s a speculation on our part. I’ve driven, I’ve had close calls; that could have been me that made that mistake and had that wreck. I think people are more tolerant when somebody else makes mistakes, and there’s an awful crash. In the case of an automated car, we worry that that empathy won’t be there.

Spectrum: Toyota is building a system called Guardian to back up the driver, and a more futuristic system called Chauffeur, to replace the driver. How can Chauffeur ever succeed? It has to be better than a human plus Guardian!

Pratt: In the discussions we’ve had with others in this field, we’ve talked about that a lot. What is the standard? Is it a person in a basic car? Or is it a person with a car that has active safety systems in it? And what will people think is good enough?

These systems will never be perfect—there will always be some accidents, and no matter how hard we try there will still be occasions where there will be some fatalities. At what threshold are people willing to say that’s okay?

Spectrum: You were among the first top researchers to warn against hyping self-driving technology. What did you see that so many other players did not?

Pratt: First, in my own case, during my time at DARPA I worked on robotics, not cars. So I was somewhat of an outsider. I was looking at it from a fresh perspective, and that helps a lot.

Second, [when I joined Toyota in 2015] I was joining a company that is very careful—even though we have made some giant leaps, with the Prius hybrid drive system as an example. Even so, in general, the philosophy at Toyota is kaizen—making the cars incrementally better every single day. That care meant that I was tasked with thinking very deeply about this thing before making prognostications.

And the final part: It was a new job for me. The first night after I signed the contract I felt this incredible responsibility. I couldn’t sleep that whole night, so I started to multiply out the numbers, all using a factor of 10. How many cars do we have on the road? Cars on average last 10 years, though ours last 20, but let’s call it 10. They travel on an order of 10,000 miles per year. Multiply all that out and you get 10 to the 10th miles per year for our fleet on Planet Earth, a really big number. I asked myself, if all of those cars had automated drive, how good would they have to be to tolerate the number of crashes that would still occur? And the answer was so incredibly good that I knew it would take a long time. That was five years ago.
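Pratt’s back-of-the-envelope estimate is easy to extend. A minimal sketch, taking his 10-to-the-10th figure at face value and assuming a rough human benchmark of about one fatal crash per 100 million vehicle-miles (the benchmark is our assumption, not his):

```python
# Extend Pratt's estimate: take his ~1e10 fleet miles per year at
# face value and apply a rough human benchmark of ~1 fatal crash
# per 1e8 vehicle-miles (an assumed figure, close to the U.S. rate).
fleet_miles_per_year = 10**10
fatal_crashes_per_mile = 1 / 10**8

expected_fatal_crashes = fleet_miles_per_year * fatal_crashes_per_mile
print(expected_fatal_crashes)  # 100.0
```

Even human-level driving would leave on the order of 100 fatal crashes a year across such a fleet, which is why “so incredibly good” translates into such a long development timeline.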

Burgard: We are now in the age of deep learning, and we don’t know what will come after. We are still making progress with existing techniques, and they look very promising. But the gradient is not as steep as it was a few years ago.

Pratt: There isn’t anything that’s telling us that it can’t be done; I should be very clear on that. Just because we don’t know how to do it doesn’t mean it can’t be done.

Mercedes and Nvidia Announce the Advent of the Software-Defined Car

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/cars-that-think/transportation/self-driving/mercedes-and-nvidia-announce-the-advent-of-the-softwaredefined-car

The idea of tomorrow’s car as a computer on wheels is ripening apace. Today, Daimler and Nvidia announced that Daimler’s carmaking arm, Mercedes-Benz, will drop Nvidia’s newest computerized driving system into every car it sells, beginning in 2024.

The system, called the Drive AGX Orin, is a system-on-a-chip that was announced in December and is planned to ship in 2022. It’s an open system, but as adapted for Mercedes, it will be laden with specially designed software. The result, say the two companies, will be a software-defined car: Customers will buy a car, then periodically download new features, among them some that were not known at the time of purchase. This capability will come from replacing dedicated hardware—today’s constellation of electronic control units, or ECUs—with software.

“In modern cars, there can be 100, up to 125, ECUs,” said Danny Shapiro, Nvidia’s senior director of automotive. “Many of those will be replaced by software apps. That will change how different things function in the car—from windshield wipers to door locks to performance mode.”

The plan is to give cars a degree of self-driving competence comparable to Level 2 (where the car assists the driver) and Level 3 (where the driver can do other things as the car drives itself, while remaining ready to take back the wheel). The ability to park itself will be Level 4 (where there’s no need to mind the car at all, so long as it’s operating in a predefined comfort zone).

In all these matters, the Nvidia-Daimler partnership is following a trail that Tesla blazed and others have followed. Volkswagen plainly emulated Tesla in a project that is just now yielding its first fruit: an all-electric car that will employ a single master electronic architecture that will eventually power all of VW’s electric and self-driving cars. The rollout was slightly delayed because of glitches in software, as we reported last week.

Asked whether Daimler would hire a horde of software experts, as Volkswagen is doing, and thus become something of a software company, Bernhard Wardin, spokesman for Daimler’s autonomous driving and artificial intelligence division, said he had no comment. (Sounds like a “yes.”) He added that though a car model had been selected for the debut of the new system, its name was still under wraps.

One thing that strikes the eye is the considerable muscle of the system. The AGX Orin has 17 billion transistors, incorporating what Nvidia says are new deep learning and computer vision accelerators that “deliver 200 trillion operations per second—nearly 7 times the performance of Nvidia’s previous generation Xavier SoC.” 

It’s interesting to note that the Xavier itself began to ship only last year, after being announced toward the end of 2016. On its first big publicity tour, at the 2017 Consumer Electronics Show, the Xavier was touted by Jen-Hsun Huang, the CEO of Nvidia. He spoke together with the head of Audi, which was going to use the Xavier in a “Level 4” car that was supposed to hit the road in three years. Yes, that would be now.

So, a chip 7 times more powerful, with far more advanced AI software, is now positioned to power a car with mere Level 2/3 capabilities—in 2024.
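Nvidia’s “nearly 7 times” claim is easy to sanity-check, assuming Xavier’s widely cited rating of roughly 30 trillion operations per second (an assumption; the article quotes only the Orin figure):

```python
# Compare Orin's quoted 200 TOPS against Xavier's commonly cited
# ~30 TOPS (the Xavier number is an assumption, not from the article).
orin_tops = 200
xavier_tops = 30
print(f"{orin_tops / xavier_tops:.1f}x")  # 6.7x
```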

None of this is meant to single out Nvidia or Daimler for ridicule. It’s meant to ridicule the entire industry, both the tech people and the car people, who have been busy walking back expectations for self-driving for the past two years. And to ridicule us tech journalists, who have been backtracking right alongside them.

Cars that help the driver do the driving are here already. More help is on the way. Progress is real. But the future is still in the future: Robocar tech is harder than we thought.

Can Electric Cars on the Highway Emulate Air-to-Air Refueling?

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/cars-that-think/energy/batteries-storage/will-electric-cars-on-the-highway-emulate-airtoair-refueling

Jet fighters can’t carry a huge tank of fuel because it would slow them down. Instead they have recourse to air-to-air refueling, using massive tanker planes as their gas stations. 

What if electric cars could do the same thing, while zooming down the highway? One car with charge to spare could get in front of another that was short of juice, and the two could extend telescopic charging booms until they linked together magnetically. The charge-rich car would then share some of its largesse. And to complete the aerial analogy, just add a few big “tanker trucks” bearing enormous batteries to beef up the power reserves of an entire flotilla of EVs.

The advantages of the concept are clear: It uses a lot of battery capacity that would otherwise go untapped, and it allows cars to save time in transit by recharging on the go, without taking detours or sitting still while topping off.

Yeah, and the tooth fairy leaves presents under your pillow. We’re too far into the month for this kind of story. Right?

Maybe it’s no April Fools joke. Maybe sharing charge is the way forward, not just for electric cars and trucks on the highways but for other mobile vehicles. That’s the brief of professor Swarup Bhunia and his colleagues in the department of electrical and computer engineering at the University of Florida, in Gainesville.

Bhunia is no mere enthusiast: He has written three features for IEEE Spectrum. And his group has published its new proposal on arXiv, an online forum for preprints that have been vetted, if not yet fully peer-reviewed. The researchers call the concept peer-to-peer car charging.

The point is to make a given amount of battery go further and thus to solve the two main problems of electric vehicles—high cost and range anxiety. In 2019 batteries made up about a third of the cost of a mid-size electric car, down from half just a few years ago, but still a huge expense. And though most drivers typically cover only short distances, they usually want to be able to go very far if need be.

Mobile charging works by dividing a car’s battery pack into independent banks of cells. One bank runs the motor, the other one accepts charge. If the power source is a battery-bearing truck, you can move a great deal of power—enough for “an extra 20 miles of range,” Bhunia suggests. True, even a monster truck can charge only one car at a time, but each newly topped-off car will then be in a position to spare a few watt-hours for other cars it meets down the road.
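The arithmetic behind such a transfer is straightforward. In the sketch below, the 20-mile figure is Bhunia’s, but the efficiency and transfer rate are illustrative assumptions:

```python
# Rough sizing of an on-the-move charge transfer. The 20-mile range
# gain is from Bhunia; efficiency and transfer power are assumptions.
miles_transferred = 20
efficiency_mi_per_kwh = 4.0   # typical mid-size EV (assumption)
transfer_power_kw = 50.0      # assumed DC transfer rate

energy_kwh = miles_transferred / efficiency_mi_per_kwh
minutes = energy_kwh / transfer_power_kw * 60
print(f"{energy_kwh:.1f} kWh in {minutes:.0f} minutes of tandem driving")
```

At those assumed rates, two cars would need to hold formation for only a few minutes per top-off.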

We already have the semblance of such a battery truck. We recently wrote about a concept from Volkswagen to use mobile robots to haul batteries to stranded EVs. And even now you can buy a car-to-car charge-sharing system from the Italian company Andromeda, although so far no one seems to have tried using it while in motion.

If all the cars participated (yes, a big “if”), then you’d get huge gains. In computer modeling, done with the traffic simulator SUMO, the researchers found that EVs had to stop to recharge only about a third as often. What’s more, they could manage things with nearly a quarter less battery capacity.

A few disadvantages suggest themselves. First off, how do two EVs dock while barreling down the freeway? Bhunia says it would be no strain at all for a self-driving car, when that much-awaited creature finally comes. Nothing can hold cars in tandem more precisely than a robot. But even a human being, assisted by an automatic system, might be able to pull off the feat, just as pilots do during in-flight refueling.

Then there is the question of reciprocity. How many people are likely to lend their battery to a perfect stranger? 

“They wouldn’t donate power,” explains Bhunia. “They’d be getting the credit back when they are in need. If you’re receiving charge within a network”—like ride-sharing drivers for companies like Uber and Lyft—“one central management system can do it. But if you want to share between networks, that transaction can be stored in a bank as a credit and paid back later, in kind or in cash.”

In their proposal, the researchers envisage a central management system that operates in the cloud.
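The researchers don’t spell out an API for that management system, but the bookkeeping it implies is simple. A hypothetical sketch (the class name and methods are invented for illustration):

```python
from collections import defaultdict

class ChargeCreditLedger:
    """Toy version of the credit bookkeeping the proposal implies.
    The class name and methods are invented for illustration."""

    def __init__(self):
        self.balances = defaultdict(float)  # net kWh credit per vehicle

    def record_transfer(self, donor: str, recipient: str, kwh: float):
        # The donor earns credit; the recipient owes the same amount,
        # to be paid back later in kind or in cash.
        self.balances[donor] += kwh
        self.balances[recipient] -= kwh

ledger = ChargeCreditLedger()
ledger.record_transfer("EV-A", "EV-B", 5.0)
print(ledger.balances["EV-A"], ledger.balances["EV-B"])  # 5.0 -5.0
```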

Any mobile device that moves about on its own can benefit from a share-and-share-alike charging scheme. Delivery bots would make a lot more sense if they didn’t have to spend half their time searching for a wall socket. And being able to recharge a UAV from a moving truck would greatly benefit any operator of a fleet of cargo drones, as Amazon appears to want to do.

Sure, road-safety regulators would pop their corks at the very mention of high-speed energy swapping. And, yes, one big advance in battery technology would send the idea out the window, as it were. Still, if aviators could share fuel as early as 1923, drivers might well try their hand at it a century later.

Lucid Motors’ Peter Rawlinson Talks E-Car Efficiency

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/cars-that-think/transportation/advanced-cars/lucid-motors-says-ecar-design-all-about-efficiency

Two years ago, our own Lawrence Ulrich wrote that Lucid Motors “might just have a shot at being a viable, smaller-scale competitor to Tesla,” with series production of its fiendishly fast electric car, the Air, then slated for 2019.

Here we are in 2020, and you still can’t buy the Air. But now all systems are go, insists Peter Rawlinson, the chief executive and chief technology officer of Lucid and, in a former incarnation, the chief engineer for the Tesla Model S. He credits last year’s infusion of US $1 billion from the sovereign wealth fund of Saudi Arabia.

Is Coronavirus Speeding the Adoption of Driverless Technology?

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/cars-that-think/transportation/self-driving/will-coronavirus-speed-adoption-of-driverless-technology

Neolix, a maker of urban robo-delivery trucks, made an interesting claim recently. The Beijing-based company said orders for its self-driving delivery vehicles were soaring because the coronavirus epidemic had both cleared the roads of cars and opened the eyes of customers to the advantages of driverlessness. The idea is that when the epidemic is over, the new habits may well persist.

Neolix last week told Automotive News it had booked 200 orders in the past two months after having sold just 159 in the eight months before. And on 11 March, the company confirmed that it had raised US $29 million in February to fund mass production.

Of course, this flurry of activity could merely be coincidental to the epidemic, but Tallis Liu, the company’s manager of business development, maintains that it reflects changing attitudes in a time of plague.

“We’ve seen a rise in both acceptance and demand both from the general public and from the governmental institutions,” he tells IEEE Spectrum. The sight of delivery bots on the streets of Beijing is “educating the market” about “mobility as a service” and on “how it will impact people’s day-to-day lives during and after the outbreak.”

During the epidemic, Neolix has deployed 50 vehicles in 10 major cities in China to do mobile delivery and also disinfection service. Liu says that many of the routes were chosen because they include public roads that the lockdown on movement has left relatively empty. 

The company’s factory has a production capacity of 10,000 units a year, and most of the factory staff have returned to their positions, Liu adds. “Having said that, we are indeed facing some delays from our suppliers given the ongoing situation.”

Neolix’s delivery bots are adorable—a term this site once used to describe a strangely similar-looking rival bot from the U.S. firm Nuro. The bots are the size of a small car, and they’re each equipped with cameras, three 16-channel lidar laser sensors, and one single-channel lidar. The low-speed version also has 14 ultrasonic short-range sensors; on the high-speed version, the ultrasonic sensors are supplanted by radars.

If self-driving technology benefits from the continued restrictions on movement in China and around the world, it wouldn’t be the first time that necessity had been the mother of invention. An intriguing example is furnished by a mere two-day worker’s strike on the London Underground in 2014. Many commuters, forced to find alternatives, ended up sticking with those workarounds even after Underground service resumed, according to a 2015 analysis by three British economists.

One of the researchers, Tim Willems of Oxford University, tells Spectrum that disruptions can induce permanent changes when three conditions are met. First, “decision makers are lulled into habits and have not been able to achieve their optimum (close to our Tube strike example).” Second, “there are coordination failures that make it irrational for any one decision maker to deviate from the status quo individually” and a disruption “forces everybody away from the status quo at the same time.” And third, the reluctance to pay the fixed costs required to set up a new way of doing things can be overcome under crisis conditions.

By that logic, many workers sent home for months on end to telecommute will stay on their porches or in their pajamas long after the all-clear signal has sounded. And they will vastly accelerate the move to online shopping, with package delivery of both the human and the nonhuman kind.

On Monday, New York City’s mayor, Bill de Blasio, said he was suspending his long-running campaign against e-bikes. “We are suspending that enforcement for the duration of this crisis,” he said. And perhaps forever.

Gill Pratt on “Irrational Exuberance” in the Robocar World

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/cars-that-think/transportation/self-driving/gill-pratt-of-toyota-on-the-irrational-exuberance-in-the-robocar-world

A lot of people in the auto industry talked for way too long about the imminent advent of fully self-driving cars. 

In 2013, Carlos Ghosn, now very much the ex-chairman of Nissan, said it would happen in seven years. In 2016, Elon Musk, then chairman of Tesla, implied that his cars could basically do it already. In 2017, and right through early 2019, GM Cruise was promising 2019. And Waymo, the company with the most to show for its efforts so far, is speaking in more measured terms than it used just a year or two ago.

It’s all making Gill Pratt, CEO of the Toyota Research Institute in California, look rather prescient. A veteran roboticist who joined Toyota in 2015 with the task of developing robocars, Pratt from the beginning emphasized just how hard the task would be and how important it was to aim for intermediate goals—notably by making a car that could help drivers now, not merely replace them at some distant date.

That helpmate, called Guardian, is set to use a range of active safety features to coach a driver and, in the worst cases, to save him from his own mistakes. The more ambitious Chauffeur will one day really drive itself, though in a constrained operating environment. The constraints on the current iteration will be revealed at the first demonstration at this year’s Olympic games in Tokyo; they will certainly involve limits to how far afield and how fast the car may go.

Earlier this week, at TRI’s office in Palo Alto, Calif., Pratt and his colleagues gave Spectrum a walkaround look at the latest version of the Chauffeur, the P4; it’s a Lexus with a package of sensors neatly merging with the roof. Inside are two lidars from Luminar, a stereocamera, a mono-camera (just to zero in on traffic signs), and radar. At the car’s front and corners are small Velodyne lidars, hidden behind a grill or folded smoothly into small protuberances. Nothing more could be glimpsed, not even the electronics that no doubt filled the trunk.

Pratt and his colleagues had a lot to say on the promises and pitfalls of self-driving technology. The easiest to excerpt is their view on the difficulty of the problem.

“There isn’t anything that’s telling us it can’t be done; I should be very clear on that,” Pratt says. “Just because we don’t know how to do it doesn’t mean it can’t be done.”

That said, though, he notes that early successes (using deep neural networks to process vast amounts of data) led researchers to optimism. In describing that optimism, he does not object to the phrase “irrational exuberance,” made famous during the 1990s dot-com bubble.

It turned out that the early successes came in those fields where deep learning, as it’s known, was most effective, like artificial vision and other aspects of perception. Computers, long held to be particularly bad at pattern recognition, were suddenly shown to be particularly good at it—even better, in some cases, than human beings. 

“The irrational exuberance came from looking at the slope of the [graph] and seeing the seemingly miraculous improvement deep learning had given us,” Pratt says. “Everyone was surprised, including the people who developed it, that suddenly, if you threw enough data and enough computing at it, the performance would get so good. It was then easy to say that because we were surprised just now, it must mean we’re going to continue to be surprised in the next couple of years.”

The mindset was one of permanent revolution: The difficult, we do immediately; the impossible just takes a little longer. 

Then came the slow realization that AI not only had to perceive the world—a nontrivial problem, even now—but also to make predictions, typically about human behavior. That problem is more than nontrivial. It is nearly intractable. 

Of course, you can always use deep learning to do whatever it does best, and then use expert systems to handle the rest. Such systems use logical rules, input by actual experts, to handle whatever problems come up. That method also enables engineers to tweak the system—an option that the black box of deep learning doesn’t allow.

Putting deep learning and expert systems together does help, says Pratt. “But not nearly enough.”

Day-to-day improvements will continue no matter what new tools become available to AI researchers, says Wolfram Burgard, Toyota’s vice president for automated driving technology.

“We are now in the age of deep learning,” he says. “We don’t know what will come after—it could be a rebirth of an old technology that suddenly outperforms what we saw before. We are still in a phase where we are making progress with existing techniques, but the gradient isn’t as steep as it was a few years ago. It is getting more difficult.”

Velodyne Will Sell a Lidar for $100

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/cars-that-think/sensors/automotive-sensors/velodyne-will-sell-a-lidar-for-100

Velodyne claims to have broken the US $100 barrier for automotive lidar with its tiny Velabit, which it unveiled at CES earlier this month.

“Claims” is the mot juste because this nice, round dollar amount is an estimate based on the mass-manufacturing maturity of a product that has yet to ship. Such a factoid would hardly be worth mentioning had it come from one of the scores of lidar startups that haven’t shipped anything at all. But Velodyne created this industry back during the DARPA-funded competitions of the mid-2000s, and it has been the market leader ever since.

Damon’s Hypersport AI Boosts Motorcycle Safety

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/cars-that-think/transportation/self-driving/damons-hypersport-motorcycle-safety-ai

For all its pure-electric acceleration and range and its ability to shapeshift, the Hypersport motorcycle shown off last week at CES by Vancouver, Canada-based Damon Motorcycles matters for just one thing: It’s the first chopper swathed in active safety systems.

These systems don’t take control, not even in anticipation of a crash, as they do in many advanced driver assistance systems in cars. They leave a motorcyclist fully in command while offering the benefit of an extra pair of eyes.

Why drape high tech “rubber padding” over the motorcycle world? Because that’s where the danger is: Motorcyclists are 27 times more likely to die in a crash than are passengers in cars.

“It’s not a matter of if you’ll have an accident on a motorbike, but when,” says Damon chief executive Jay Giraud. “Nobody steps into motorbiking knowing that, but they learn.”

The Hypersport’s sensor suite includes cameras, radar, GPS, solid-state gyroscopes, and accelerometers. It does not include lidar—“it’s not there yet,” Giraud says—but it does open the door a crack to another way of seeing the world: wireless connectivity.

The bike’s brains note everything that happens when danger looms, including warnings issued and evasive maneuvers taken, then shunt the data to the cloud via 4G wireless. For now that data is processed in batches, to help Damon refine its algorithms, a practice common among self-driving car researchers. Someday, the bike will share such data with other vehicles in real time, a strategy known as vehicle-to-everything, or V2X.

But not today. “That whole world is 5 to 10 years away—at least,” Giraud grouses. “I’ve worked on this for over a decade—we’re no closer today than we were in 2008.”

The bike has an onboard neural net whose settings are fixed at any given time. When the net up in the cloud comes up with improvements, these are sent as over-the-air updates to each motorcycle. The updates have to be approved by each owner before going live onboard. 

When the AI senses danger it gives warning. If the car up ahead suddenly brakes, the handlebars shake, warning of a frontal collision. If a vehicle coming from behind enters the biker’s blind spot, LEDs flash. That saves the rider the trouble of constantly having to look back to check the blind spot.

Above all, it gives the rider time. A 2018 report by the National Highway Traffic Safety Administration found that from 75 to 90 percent of riders in accidents had less than three seconds to notice a threat and try to avert it; 10 percent had less than one second. Just an extra second or two could save a lot of lives.

The patterns the bike’s AI teases out from the data are not always comparable to those a self-driving car would care about. A motorcycle shifts from one half of a lane to the other; it leans down, sometimes getting fearsomely close to the pavement; and it is often hard for drivers in other vehicles to see.

One motorbike-centric problem is the high risk a biker takes just by entering an intersection. Some three-quarters of motorcycle accidents happen there, and of that number about two-thirds are caused by a car’s colliding from behind or from the side. The side collision, called a T-bone, is particularly bad because there’s nothing at all to shield the rider.

Certain traffic patterns increase the risk of such collisions. “Patterns that repeat allow our system to predict risk,” Giraud says. “As the cloud sees the tagged information again and again, we can use it to make predictions.”

Damon is taking pre-orders, but it expects to start shipping in mid-2021. Like Tesla, it will deliver straight to the customer, with no dealers to get in the way.

Seeing Around the Corner With Lasers—and Speckle

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/cars-that-think/sensors/automotive-sensors/seeing-around-corner-lasers-speckle

Researchers from Rice, Stanford, Princeton, and Southern Methodist University have developed a new way to use lasers to see around corners that beats the previous technique on resolution and scanning speed. The findings appear today in the journal Optica.

The U.S. military—which funded the work through DARPA grants—is interested for obvious reasons, and NASA wants to use it to image caves, perhaps doing so from orbit. The technique might one day also let rescue workers peer into earthquake-damaged buildings and help self-driving cars navigate tricky intersections.

One day. Right now it’s a science project, and any application is years away. 

The original way of corner peeping, dating to 2012, studies the time it takes laser light to go to a reflective surface, onward to an object and back again. Such time-of-flight measurement requires hours of scanning time to produce a resolution measured in centimeters. Other methods have since been developed that look at reflected light in an image to infer missing parts.
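The arithmetic behind a time-of-flight measurement is straightforward: light travels at a known speed, so a photon’s round-trip time fixes the total path length, and subtracting the known legs to and from the relay wall isolates the distance to the hidden object. A minimal sketch (the geometry and numbers here are illustrative assumptions, not from the article):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def hidden_distance(t_seconds, d_laser_wall, d_wall_detector):
    """Recover the wall-to-hidden-object distance from a round-trip time.

    The photon's path is: laser -> wall (d_laser_wall), wall -> hidden
    object and back (2 * d), wall -> detector (d_wall_detector). The two
    wall legs are known from the setup, so we solve for d.
    """
    total_path = C * t_seconds
    return (total_path - d_laser_wall - d_wall_detector) / 2.0

# Example: with 1 m legs to and from the wall, a photon arriving after
# t = 6 m / c (about 20 nanoseconds) implies a hidden object 2 m away.
t = 6.0 / C
d = hidden_distance(t, d_laser_wall=1.0, d_wall_detector=1.0)
```

Because timing jitter of even a few picoseconds corresponds to millimeters of path length, squeezing fine resolution out of this approach takes long scan times, which is the limitation the speckle method sidesteps.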

The latest method looks instead at speckle, a shimmering interference pattern that in many laser applications is a bug; here it is a feature because it contains a trove of spatial information. To get the image hidden in the speckle—a process called non-line-of-sight correlography—involves a bear of a calculation. The researchers used deep-learning methods to accelerate the analysis. 

“Image acquisition takes a quarter of a second, and we’re getting sub-millimeter resolution,” says Chris Metzler, the project’s leader and a postdoc in electrical engineering at Stanford. He did most of the work while completing a doctorate at Rice.

The problem is that the system can achieve these results only by greatly narrowing the field of view.

“Speckle encodes interference information, and as the area gets larger, the resolution gets worse,” says Ashok Veeraraghavan, an associate professor of electrical engineering and computer science at Rice. “It’s not ideal to image a room; it’s ideal to image an ID badge.”

The two methods are complementary: Time-of-flight gets you the room, the guy standing in that room, and maybe a hint of a badge. Speckle analysis reads the badge. Doing all that would require separate systems operating two lasers at different wavelengths, to avoid interference.

Today corner peeping by any method is still “lab bound,” says Veeraraghavan, largely because of interference from ambient light and other problems. To get the results that are being released today, the researchers had to work from just one meter away, under ideal lighting conditions. A possible way forward may be to try lasers that emit in the infrared.

Another intriguing possibility is to wait until the wireless world’s march toward ever-shorter wavelengths finally reaches the millimeter band, which is small enough to resolve the identifying details of cars and people, for instance.

Startup Offers a Silicon Solution for Lithium-Ion Batteries

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/energywise/green-tech/fuel-cells/startup-advano-silicon-lithium-ion-batteries

You could cram a lot more energy into a lithium-ion battery if you replaced the graphite in its anode with silicon, which has about 10 times the storage capacity.

But silicon bloats during charging, and that can break a battery’s innards. Graphite hardly swells at all because during charging it slips incoming lithium ions between its one-atom-thick layers. That process, called intercalation, involves no chemical change whatever. But it’s different with silicon: Silicon and lithium combine to form lithium silicide, a bulky alloy. 

A lot of companies are trying to sidestep the problem by fine-tuning the microstructure of silicon sheets or particles, typically by building things from the bottom up. We’ve written about a few of them, including Sila Nanotechnologies, Enovix, and XNRGI.

Now comes Advano, a startup in New Orleans that champions a top-down approach. The idea is to produce silicon particles that are perhaps not so fine-tuned but still good enough, and to produce those little dots in huge quantities.

Aeva Unveils Lidar on a Chip

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/cars-that-think/transportation/sensors/lidar-on-a-chip-at-last

There are scores of lidar startups. Why read about another one? 

It wouldn’t be enough to have the backing of a big car company, though Aeva has that. In April the company announced a partnership with Audi, and today it’s doing so with Porsche—the two upscale realms of the Volkswagen auto empire. 

Nor is it enough to claim that truly driverless cars are just around the bend. We’ve seen that promise made and broken many times. 

What makes this two-year-old company worth a look-see is its technology, which is unusual both for its miniaturization and for the way in which it modulates its laser beam. 

Lithium-Sulfur Battery Project Aims to Double the Range of Electric Airplanes

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/energywise/aerospace/aviation/lithiumsulfur-battery-project-aims-to-double-the-range-of-electric-airplanes

When General Motors first, and briefly, wowed the world with its EV-1 electric car, back in 1990, it relied on lead-acid batteries that packed a piddling 30 to 40 watt-hours per kilogram. The project eventually died, in part because that figure was so low (and the cost was so high).

It was the advent of new battery designs, above all the lithium-ion variant, that launched today’s wave of electric cars. The Tesla Model 3’s lithium-ion battery pack, for example, delivers an estimated 168 Wh/kg. And important as this energy-per-weight ratio is for electric cars, it’s more important still for electric aircraft.
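The two specific-energy figures quoted above make the point concrete: for the same battery mass, the usable energy scales linearly with Wh/kg. A small sketch (the 500 kg pack mass is a hypothetical illustration, not a figure from the article):

```python
# Specific energies quoted in the text
LEAD_ACID_WH_PER_KG = 35.0   # midpoint of the EV-1's 30-40 Wh/kg
LI_ION_WH_PER_KG = 168.0     # estimated for the Tesla Model 3 pack

def pack_energy_kwh(specific_energy_wh_per_kg, pack_mass_kg):
    """Total pack energy in kWh for a given specific energy and mass."""
    return specific_energy_wh_per_kg * pack_mass_kg / 1000.0

# Same hypothetical 500 kg mass budget, two chemistries:
ev1_era = pack_energy_kwh(LEAD_ACID_WH_PER_KG, 500)  # 17.5 kWh
modern = pack_energy_kwh(LI_ION_WH_PER_KG, 500)      # 84.0 kWh
```

For a car, a heavier pack mostly costs money and cargo space; for an aircraft, every kilogram of battery must also be lifted and kept aloft, which is why the ratio matters even more in the air.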

Now comes Oxis Energy, of Abingdon, UK, with a battery based on lithium-sulfur chemistry that it says can greatly increase the ratio, and do so in a product that’s safe enough for use even in an electric airplane. Specifically, a plane built by Bye Aerospace, in Englewood, Colo., whose founder, George Bye, described the project in this 2017 article for IEEE Spectrum.