Tag Archives: transportation

Boeing Sponsors $1 Million GoFly Prize for Best Personal Flying Machines

Post Syndicated from Ned Potter original https://spectrum.ieee.org/tech-talk/transportation/air/boeing-gofly-prize-best-personal-flying-machines-aviation-news

What would it take to be as free as a bird—flying above the treetops with the wind in your face and the world far beneath your feet?

For Mariah Cain and Jeff Elkins, the team behind DragonAir Aviation of Panama City Beach, Fla., the answer is a personal flying machine that makes the pilot look like a skier grasping two poles, standing atop an oversized hobby drone. For Stephen Tibbitts, who built the Zero-emissions Electric Vehicle Aircraft (ZEVA) ZERO in Tacoma, Wash., it's a bulbous eight-foot disc in which a pilot lies prone, sped across the sky by eight propellers.

Maybe you’d design something different—like a tiny helicopter, open to the breeze? Or a lounge chair surrounded by a ring of rotors? How about a gondola with two sets of blades at its base? Or your machine might resemble a flying motorcycle, with rotors clustered in front and back.

Those are just some of the machines, many of them conceived by startups, built for a competition started by GoFly, a New York-based tech incubator. It plans to offer a US $1 million grand prize to the winners of a fly-off at NASA’s Ames Research Center in California from 27 to 29 February.

Apex.OS: An Open Source Operating System for Autonomous Cars

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/cars-that-think/transportation/self-driving/apexos-operating-system-open-source-autonomous-cars

The facets of autonomous car development that automakers tend to get excited about are things like interpreting sensor data, decision making, and motion planning.

Unfortunately, if you want to make self-driving cars, there’s all kinds of other stuff that you need to get figured out first, and much of it is really difficult but also absolutely critical. Things like, how do you set up a reliable network inside of your vehicle? How do you manage memory and data recording and logging? How do you get your sensors and computers to all talk to each other at the same time? And how do you make sure it’s all stable and safe?

In robotics, the Robot Operating System (ROS) has offered an open-source solution for many of these challenges. ROS provides the groundwork for researchers and companies to build off of, so that they can focus on the specific problems that they’re interested in without having to spend time and money on setting up all that underlying software infrastructure first.

Apex.ai's Apex.OS, which has its version 1.0 release today, extends this idea from robotics to autonomous cars. It promises to help autonomous carmakers shorten their development timelines, and if it has the same effect on autonomous cars as ROS has had on robotics, it could help accelerate the entire autonomous car industry.

For more about what this 1.0 software release offers, we spoke with Apex.ai CEO Jan Becker.

IEEE Spectrum: What exactly can Apex.OS do, and what doesn’t it do? 

Jan Becker: Apex.OS is a fork of ROS 2 that has been made robust and reliable so that it can be used for the development and deployment of highly safety-critical systems such as autonomous vehicles, robots, and aerospace applications. Apex.OS is API-compatible with ROS 2. In a nutshell, Apex.OS is an SDK for autonomous driving software and other safety-critical mobility applications. The components enable customers to focus on building their specific applications without having to worry about message passing, reliable real-time execution, hardware integration, and more.
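
Since Apex.OS is API-compatible with ROS 2, a node written against the standard ROS 2 client library shows the style of interface Becker is describing. The sketch below is plain open-source ROS 2 (rclpy), not Apex.OS itself, and the topic names are invented:

```python
# Minimal ROS 2 (rclpy) node -- standard open-source ROS 2 code, shown only
# to illustrate the style of API that Apex.OS claims compatibility with.
# Topic names are invented for the example.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import PointCloud2


class LidarRelay(Node):
    """Subscribes to raw lidar data and republishes it on another topic."""

    def __init__(self):
        super().__init__('lidar_relay')
        # Message passing is handled by the framework; the application
        # only declares what it publishes and subscribes to.
        self.pub = self.create_publisher(PointCloud2, 'points_filtered', 10)
        self.sub = self.create_subscription(
            PointCloud2, 'points_raw', self.on_points, 10)

    def on_points(self, msg):
        # A real component would filter or downsample the cloud here.
        self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(LidarRelay())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```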

Apex.OS is not a full [self-driving software] stack. Apex.OS enables customers to build their full stack based on their needs. We have built an automotive-grade 3D point cloud/lidar object detection and tracking component and we are in the process of building a lidar-based localizer, which is available as Apex.Autonomy. In addition, we are starting to work with other algorithmic component suppliers to integrate Apex.OS APIs into their software. These components make use of Apex.OS APIs, but are available separately, which allows customers to assemble a customized full software stack from building blocks so that it exactly fits their needs. The algorithmic components reuse the open architecture currently being built in the open-source Autoware.Auto project.

So if every autonomous vehicle company started using Apex.OS, those companies would still be able to develop different capabilities?

Apex.OS is an SDK for autonomous driving software and other safety-critical mobility applications. Just as Apple's iOS SDK lets iPhone app developers focus on the application, Apex.OS provides an SDK to developers of safety-critical mobility applications.

Every autonomous mobility system deployed into a public environment must be safe. We enable customers to focus on their application without having to worry about the safety of the underlying components. Organizations will differentiate themselves through performance, discrete features, and other product capabilities. By adopting Apex.OS, we enable them to focus on developing these differentiators. 

What’s the minimum viable vehicle that I could install Apex.OS on and have it drive autonomously? 

In terms of compute hardware, we showed Apex.OS running on a Renesas R-Car H3 and on a Quanta V3NP at CES 2020. The R-Car H3 contains just four ARM Cortex-A57 cores and four ARM Cortex-A53 cores and is the smallest ECU for which our customers have requested support. You can install Apex.OS on much smaller systems, but this is the smallest one we have tested extensively so far, and it is also the one powering our vehicle.

We are currently adding support for the Renesas R-Car V3H, which contains four ARM Cortex-A53 cores (and no ARM Cortex-A57 cores) and an additional image processing processor. 

You suggest that Apex.OS is also useful for other robots and drones, in addition to autonomous vehicles. Can you describe how Apex.OS would benefit applications in these spaces?

Apex.OS provides a software framework that enables reading, processing, and outputting data on embedded real-time systems used in safety-critical environments. That pertains to robotics and aerospace applications just as much as to automotive applications. We simply started with automotive applications because of the stronger market pull. 

Industrial robots today often run ROS for the perception system and a non-ROS embedded controller for highly accurate position control, because ROS cannot run the real-time controller with the necessary precision. Drones often run PX4 for the autopilot and ROS for the perception stack. Apex.OS combines the capabilities of ROS with the requirements of mobility systems, specifically regarding real-time execution, reliability, and the ability to run on embedded compute systems.

How will Apex contribute back to the open-source ROS 2 ecosystem that it’s leveraging within Apex.OS?

We have contributed back to the ROS 2 ecosystem from day one. Any and all bugs that we find in ROS 2 get fixed in ROS 2 and thereby contributed back to the open-source codebase. We also provide a significant amount of funding to Open Robotics to do this. In addition, we are on the ROS 2 Technical Steering Committee to provide input and guidance to make ROS 2 more useful for automotive applications. Overall we have a great deal of interest in improving ROS 2 not only because it increases our customer base, but also because we strive to be a good open-source citizen.

The features we keep in house pertain to making ROS 2 real-time, deterministic, tested, and certified on embedded hardware. Our goals are therefore somewhat orthogonal to the goals of an open-source project aiming to address as many applications as possible. We therefore live in a healthy symbiosis with ROS 2.

[ Apex.ai ]

Bosch’s Smart Visor Tracks the Sun While You Drive

Post Syndicated from Lawrence Ulrich original https://spectrum.ieee.org/cars-that-think/transportation/sensors/boschs-smart-virtual-visor-tracks-sun

The automotive sun visor has been around for nearly a century, first affixed in 1924 as a “glare shield” on the outside of a Ford Model T. Yet despite modest advances—lighted vanity mirrors, anyone?—it’s still a crude, view-blocking slab that’s often as annoying as it is effective. 

Bosch, finally, has a better idea: an AI-enhanced liquid crystal display (LCD) screen that links with a driver-monitoring camera to keep the sun out of your eyes without blocking the outward view. The German supplier debuted the Bosch Virtual Visor at the recent CES show in Las Vegas.

Uh-oh Uber, Look Out Lyft: Here Comes Driverless Ride Sharing

Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/view-from-the-valley/transportation/self-driving/uhoh-uber-look-out-lyft-here-comes-driverless-ride-sharing

Yesterday I drove from Silicon Valley to San Francisco. It started raining on the way and I hadn’t thought to take an umbrella. No matter—I had the locations of two parking garages, just a block or so from my destination, preloaded into my navigation app. But both were full, and I found myself driving in stop-and-go traffic around crowded, wet, hilly, construction-heavy San Francisco, hunting for street parking or an open garage for nearly an hour. It was driving hell.

So when I finally arrived at a launch event hosted by Cruise, I couldn't have been more receptive to the company's pitch for Cruise Origin, a new vehicle that, Cruise executives say, is intended to make it so I won't need to drive or park in a city ever again.

Damon’s Hypersport AI Boosts Motorcycle Safety

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/cars-that-think/transportation/self-driving/damons-hypersport-motorcycle-safety-ai

For all its pure-electric acceleration and range and its ability to shapeshift, the Hypersport motorcycle shown off last week at CES by Vancouver, Canada-based Damon Motorcycles matters for just one thing: It’s the first chopper swathed in active safety systems.

These systems don’t take control, not even in anticipation of a crash, as they do in many advanced driver assistance systems in cars. They leave a motorcyclist fully in command while offering the benefit of an extra pair of eyes.

Why drape high tech “rubber padding” over the motorcycle world? Because that’s where the danger is: Motorcyclists are 27 times more likely to die in a crash than are passengers in cars.

“It’s not a matter of if you’ll have an accident on a motorbike, but when,” says Damon chief executive Jay Giraud. “Nobody steps into motorbiking knowing that, but they learn.”

The Hypersport's sensor suite includes cameras, radar, GPS, solid-state gyroscopes, and accelerometers. It does not include lidar ("it's not there yet," Giraud says), but it does open the door a crack to another way of seeing the world: wireless connectivity.

The bike's brains note everything that happens when danger looms, including warnings issued and evasive maneuvers taken, then shunt the data to the cloud via 4G wireless. For now that data is processed in batches, to help Damon refine its algorithms, a practice common among self-driving car researchers. Some day, it will share such data with other vehicles in real time, a strategy known as vehicle-to-everything, or V2X.

But not today. "That whole world is 5 to 10 years away—at least," Giraud grouses. "I've worked on this for over a decade—we're no closer today than we were in 2008."

The bike has an onboard neural net whose settings are fixed at any given time. When the net up in the cloud comes up with improvements, these are sent as over-the-air updates to each motorcycle. The updates have to be approved by each owner before going live onboard. 

When the AI senses danger it gives warning. If the car up ahead suddenly brakes, the handlebars shake, warning of a frontal collision. If a vehicle coming from behind enters the biker’s blind spot, LEDs flash. That saves the rider the trouble of constantly having to look back to check the blind spot.

Above all, it gives the rider time. A 2018 report by the National Highway Traffic Safety Administration found that from 75 to 90 percent of riders in accidents had less than three seconds to notice a threat and try to avert it; 10 percent had less than one second. Just an extra second or two could save a lot of lives.
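
Damon hasn't published its warning logic, but the behavior described above maps naturally onto a time-to-collision check. The sketch below is hypothetical: the thresholds, track format, and actuator names are invented, with the 3-second figure simply echoing the NHTSA statistic.

```python
# Hypothetical sketch of the Hypersport's alert behavior -- not Damon's code.
# Tracked objects would come from the bike's camera/radar fusion.
from dataclasses import dataclass


@dataclass
class Track:
    distance_m: float         # range to the other vehicle
    closing_speed_mps: float  # positive means it is approaching
    bearing_deg: float        # 0 = dead ahead, +/-180 = directly behind


FRONTAL_TTC_S = 3.0    # assumed threshold, echoing the NHTSA 3-second figure
BLIND_SPOT_DEG = 135.0  # assumed cone behind the rider


def warnings(tracks):
    alerts = set()
    for t in tracks:
        if t.closing_speed_mps <= 0:
            continue  # not converging with the bike
        ttc = t.distance_m / t.closing_speed_mps  # seconds to impact
        if abs(t.bearing_deg) < 45 and ttc < FRONTAL_TTC_S:
            alerts.add('shake_handlebars')       # frontal collision warning
        elif abs(t.bearing_deg) > BLIND_SPOT_DEG:
            alerts.add('flash_blind_spot_leds')  # vehicle in blind spot
    return alerts


# Example: a car 40 m ahead closing at 20 m/s trips the handlebar warning.
print(warnings([Track(40.0, 20.0, 0.0)]))
```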

The patterns the bike's AI teases out from the data are not always comparable to those a self-driving car would care about. A motorcycle shifts from one half of a lane to the other; it leans down, sometimes getting fearsomely close to the pavement; and it is often hard for drivers in other vehicles to see.

One motorbike-centric problem is the high risk a biker takes just by entering an intersection. Some three-quarters of motorcycle accidents happen there, and of that number about two-thirds are caused by a car’s colliding from behind or from the side. The side collision, called a T-bone, is particularly bad because there’s nothing at all to shield the rider.

Certain traffic patterns increase the risk of such collisions. “Patterns that repeat allow our system to predict risk,” Giraud says. “As the cloud sees the tagged information again and again, we can use it to make predictions.”

Damon is taking pre-orders, though it doesn't expect to start shipping until mid-2021. Like Tesla, it will deliver straight to the customer, with no dealers to get in the way.

Test of Complex Autonomous Vehicle Designs

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/test-of-complex-autonomous-vehicle-designs

Autonomous vehicles (AVs) combine multiple sensors, computers, and communication technology to make driving safer and improve the driver's experience. Learn about the design and test of the complex sensor and communication technologies being built into AVs from our white paper and posters.

Key points covered in our AV resources:

  • Comparison of dedicated short-range communications and C-V2X technologies
  • Definition of AV Levels 0 to 5
  • Snapshot of radar technology from 24 to 77 GHz

Panasonic’s Cloud Analytics Will Give Cars a Guardian Angel

Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/transportation/safety/panasonics-cloud-analytics-will-give-cars-a-guardian-angel


Vehicle-to-everything (V2X) technology—“everything” meaning other vehicles and road infrastructure—has long promised that a digital seatbelt would make cars safer. This year Panasonic expects to keep that promise by taking data to the cloud.

Car seatbelts, made mandatory in the United States in 1968, dramatically reduced the likelihood of death and serious injury. Airbags, becoming standard equipment some 20 years later, gave added protection. Roadway innovations—like rumble strips, better guardrail designs, and breakaway signposts—have also done their part.

In the past few years, however, the number of fatalities in U.S. car crashes has been creeping up—possibly (but not provably) because people are increasingly being distracted by their mobile devices. What's to be done, given how difficult it has been to get people off their cellphones?

Panasonic engineers, working with the departments of transportation in Colorado and Utah, think they can help turn the trend around. They are starting with what some call a digital seatbelt. This technology allows cars to talk to the transportation infrastructure, sending key information like speed and direction, and enables the infrastructure to talk back, alerting drivers about trouble ahead—a construction zone, perhaps, or a traffic jam.

This back-and-forth conversation is already happening on stretches of highway around the world, most notably in Europe. But these efforts use limited information—typically, speed, heading, and sometimes brake status—in limited areas. Panasonic thinks the digital seatbelt could do more for more drivers if it looked at a lot more data and processed it all centrally, no matter where it originates.

So, in the second half of this year, the company is launching Cirrus by Panasonic, a cloud-based system designed to make car travel a lot safer.

The Cirrus system takes the standard set of safety data that today's cars transmit along the controller-area network (CAN) bus—including antilock brake status, stability control status, wiper status, fog light and headlight status, ambient air temperature, and other details—and transmits it to receivers along the roadway. The receivers send the data to a cloud-based platform for analysis, where it can be used to generate personalized safety warnings for drivers.
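
As a rough sketch of what the in-vehicle collection side might look like, the snippet below listens on a CAN bus with the open-source python-can library and bundles a few safety signals into an upload payload. The arbitration IDs and one-byte decoding are hypothetical; real CAN signal maps are proprietary and vary by make and model, and Panasonic's actual module is not public.

```python
# Sketch only: gather a few safety-related CAN signals for upload.
# The arbitration IDs below are invented; real layouts require the
# manufacturer's DBC file. Uses the open-source python-can library.
import json
import time

import can  # pip install python-can

HYPOTHETICAL_IDS = {
    0x1A0: 'abs_status',
    0x1A4: 'stability_control_status',
    0x2B0: 'wiper_status',
    0x3C2: 'ambient_air_temp_raw',
}


def collect_snapshot(bus, window_s=1.0):
    """Listen for one window and bundle recognized signals as JSON."""
    snapshot = {'timestamp': time.time()}
    deadline = time.monotonic() + window_s
    while time.monotonic() < deadline:
        msg = bus.recv(timeout=0.1)
        if msg is None:
            continue
        name = HYPOTHETICAL_IDS.get(msg.arbitration_id)
        if name:
            snapshot[name] = msg.data[0]  # real decoding needs the DBC
    return json.dumps(snapshot)


if __name__ == '__main__':
    bus = can.interface.Bus(channel='can0', bustype='socketcan')
    print(collect_snapshot(bus))  # payload bound for a roadside receiver
```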

This data is already being used in a number of ways by auto companies and researchers, but Chris Armstrong, vice president for V2X technology at Panasonic Corp. of North America, says Panasonic is the first company to use so much of it in a commercially available safety system.

“We are building a central nervous system for connected cars,” he says.

Blaine Leonard, transportation technology engineer for the Utah Department of Transportation, says, “Right now, when you are driving down a road, you might see a static road sign that says, ‘Bridge ices before road does,’ or an electronic sign that says ‘Ice ahead, beware.’ ” Drivers generally don’t pay much attention to these very imprecise warnings, he notes. But with Cirrus, Leonard says, “the temperature gauge of a vehicle passing through the area, along with the slippage of its wheels, will give us the exact location of the ice, so we can send that as a message to be displayed on the dashboard of a subsequent vehicle: ‘Ice ahead, 325 feet.’ A driver will be more likely to pay attention to a message that really is just for him. And if he passes through the area and the ice is no longer there, his vehicle will report that back, so the next driver won’t get the alert.”
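
The behavior Leonard describes amounts to a geofenced hazard map that vehicles both populate and clear. Here is a minimal sketch of that cloud-side lifecycle; the grid size, temperature threshold, and message format are all assumptions for illustration, not Panasonic's design.

```python
# Illustrative sketch of the ice-alert lifecycle Leonard describes.
# Grid size, thresholds, and message format are assumptions.
hazards = {}  # grid cell -> hazard type


def grid_cell(lat, lon):
    return (round(lat, 4), round(lon, 4))  # roughly 10-meter cells


def ingest_report(lat, lon, air_temp_c, wheel_slip):
    cell = grid_cell(lat, lon)
    if wheel_slip and air_temp_c <= 1.0:
        hazards[cell] = 'ice'  # slippage near freezing: pin the hazard
    elif cell in hazards and not wheel_slip:
        del hazards[cell]      # a later car crossed without slipping: clear it


def dashboard_message(cell_ahead, distance_ft):
    if hazards.get(cell_ahead) == 'ice':
        return f'Ice ahead, {distance_ft:.0f} feet'
    return None


# A slipping car reports ice; the next driver gets a precise warning.
ingest_report(40.7608, -111.8910, -2.0, wheel_slip=True)
print(dashboard_message(grid_cell(40.7608, -111.8910), 325))
```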

Or consider an airbag deployment. “With this system,” Leonard says, “we will know within seconds if an airbag deployed, how fast the vehicle was traveling, and how many other cars in that area had airbag deployments. That information can allow us to get an emergency response out minutes faster than if the accident had been reported by a 911 call, and two to three minutes can save a life.”

The Utah Department of Transportation, along with its counterpart in Colorado, is acting as a test bed for the technology. The Utah people expect the data-gathering side of their system to be operational in May. It will start with 30 state-owned vehicles and 40 roadside receivers this year, with each receiver designed to transmit for 300 meters in all directions. (Receivers don’t have to be placed so that their individual ranges always meet; the system will also be able to briefly store data locally and share it with the cloud moments later.)

In the next few years, Utah plans to roll out 220 roadside sensors and equip thousands of vehicles to talk to them; although the department has a five-year plan to work with Panasonic, the exact pace of installation hasn’t been set. Last year, Colorado installed 100 receivers and equipped 94 vehicles; plans to go further are currently on hold pending evaluation by the state’s new administration.

Panasonic will feed data collected from these two implementations into machine-learning programs, which will make the algorithms better at predicting changing or hazardous road and traffic conditions. “For example, if we can build up historical data about weather events—ambient air temperature, status of control systems, windshield wipers—our systems will learn which data elements matter, understand the conditions as they develop, and potentially send out alerts proactively,” Panasonic’s Armstrong says.

Using the system today requires adding a module that collects the data from the CAN bus and sends it to the receivers, receives alerts, and displays the alerts to the driver. Panasonic's engineers expect that its system will soon have the capability to collect the data and send it out either via dedicated short-range communications (DSRC), a variant of Wi-Fi, or by cellular-based vehicle-to-everything (C-V2X). One or the other method of wireless communication will eventually be built into all cars. Volkswagen is incorporating DSRC in its latest Golf model, and Cadillac has announced plans to start offering it in its crossover vehicles.

And then Panasonic will be free to focus on running the cloud-based platform and making the system available to app developers. The company expects that those developers will find ways to enhance safety even further.

This article appears in the January 2020 print issue as “A Guardian Angel for Your Car.”

Volkswagen’s Concept Robot Would Bring Mobile EV Charging to Any Garage

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/cars-that-think/transportation/infrastructure/volkswagens-concept-robot-would-bring-mobile-ev-charging-to-any-garage

As the fraction of vehicles on the road that are electric increases, finding places to charge up away from home gets more complicated. Ideally, you want your car charging whenever it’s parked so that it’s always topped up, but in urban parking garages, typically just a few spaces (if any) are equipped for electric vehicle charging.

Ideally, the number of EV-friendly spaces would increase with demand, but wiring up a bajillion parking spaces with dedicated chargers isn’t likely to happen anytime soon. With that in mind, Volkswagen has come up with a concept for a way to charge any vehicle in a parking garage, using an autonomous mobile robot that can ferry battery packs around and plug them directly into your car.

AI System Warns Pedestrians Wearing Headphones About Passing Cars

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/cars-that-think/transportation/safety/ai-headphone-pedestrians-safety-warning-cars


How can headphone-wearing pedestrians tune out the chaotic world around them without compromising their own safety? One solution may come from the pedestrian equivalent of a vehicle collision warning system that aims to detect nearby vehicles based purely on sound.

Aeva Unveils Lidar on a Chip

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/cars-that-think/transportation/sensors/lidar-on-a-chip-at-last

There are scores of lidar startups. Why read about another one? 

It wouldn’t be enough to have the backing of a big car company, though Aeva has that. In April the company announced a partnership with Audi, and today it’s doing so with Porsche—the two upscale realms of the Volkswagen auto empire. 

Nor is it enough to claim that truly driverless cars are just around the bend. We’ve seen that promise made and broken many times. 

What makes this two-year-old company worth a look-see is its technology, which is unusual both for its miniaturization and for the way in which it modulates its laser beam. 

Hand-Tracking Tech Watches Riders in Self-Driving Cars to See If They’re Ready to Take the Wheel

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/cars-that-think/transportation/self-driving/handtracking-tech-for-the-next-generation-of-autonomous-cars


Researchers have developed a new technique for tracking the hand movements of a non-attentive driver, to calculate how long it would take the driver to assume control of a self-driving car in an emergency.

If manufacturers can overcome the final legal hurdles, cars with Level 3 autonomous vehicle technology will one day be chauffeuring people from A to B. These cars allow a driver to have his or her eyes off the road and the freedom to do minor tasks (such as texting or watching a movie). However, these cars need a way of knowing how quickly—or slowly—a driver can respond when taking control during an emergency.

To address this need, Kevan Yuen and Mohan Trivedi at the University of California, San Diego developed their new hand-tracking system, which is described in a study published 22 November in IEEE Transactions on Intelligent Vehicles.

An Internet of Tires? Pirelli Marries 5G And Automobile Wheels

Post Syndicated from Lawrence Ulrich original https://spectrum.ieee.org/cars-that-think/transportation/sensors/an-internet-of-tires-pirelli-marries-5g-and-automobile-wheels

The term “5G” typically makes people think of the smartphone in their hand, not the tires on their car. But Pirelli has developed the Cyber Tire, a smart tire that reads the road surface and transmits key data — including the potential risk of hydroplaning — along a 5G communications network. 

Pirelli demonstrated its Cyber Tire (also known as the Cyber Tyre) at a conference hosted by the 5G Automotive Association, atop the architect Renzo Piano's reworking of the landmark Lingotto Building in Turin, Italy. That's the former Fiat factory where classic models such as the Torpedo and 500 (the latter known as Topolino, or "Little Mouse") barreled around its banked, three-quarter-mile rooftop test track beginning in the 1920s. Engineers of that era, of course, couldn't begin to fathom how digital technology would transform automobiles, let alone the revolution in tires that has dramatically boosted their performance, durability, and safety.

Using an Audi A8 as its test car, Pirelli's network-enabled tires sent real-time warnings of slippery conditions to a following Audi Q8, taking advantage of the ultra-high bandwidth and low latency of 5G. Corrado Rocca, head of Cyber R&D for Pirelli, said that an accelerometer mounted within the tire itself—rather than the wheel rims that send familiar tire-pressure readouts in many modern cars—precisely measures handling forces along three axes. That includes the ability to sense water, ice, or other low-coefficient-of-friction roadway conditions.
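
Pirelli hasn't published its signal processing, so the following is purely an illustrative sketch of the idea: use the in-tire accelerometer to estimate how much friction the contact patch is actually transmitting relative to its normal load. The thresholds and classification logic are invented, and a production system would need far more sophisticated filtering.

```python
# Purely illustrative: a crude "utilized friction" proxy from in-tire
# accelerometer samples. Pirelli's actual processing is unpublished,
# and the thresholds below are invented.
import math


def friction_ratio(ax, ay, az):
    """In-plane force relative to normal load at the contact patch.

    ax, ay: longitudinal and lateral acceleration (m/s^2)
    az: vertical (normal) acceleration (m/s^2)
    """
    return math.hypot(ax, ay) / az if az > 0 else 0.0


def road_state(samples, dry_threshold=0.7):
    """Classify the surface from a window of (ax, ay, az) samples.

    Caveat: low utilized friction is also what gentle driving looks like;
    a real system must also know how much grip was being demanded.
    """
    peak = max(friction_ratio(*s) for s in samples)
    if peak < 0.15:
        return 'possible hydroplaning'  # tire barely coupled to the road
    if peak < dry_threshold:
        return 'low grip (water or ice?)'
    return 'normal'


print(road_state([(3.0, 2.0, 9.8), (2.5, 1.5, 9.7)]))  # -> low grip
```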

The sensor data can be used to the immediate benefit of safety and autonomous systems onboard a car. It can also be used in the growing realm of vehicle-to-vehicle (V2V) or vehicle-to-everything (V2X) communications, which means the once-humble tire could become a critical player in a wider ecosystem of networked safety and traffic management. Obvious scenarios include a car on the freeway that suddenly encounters ice, with tires that instantly send visual or audio hazard warnings not only to that car but also to nearby vehicles and pedestrians, as well as to networked roadway signs that announce the potential danger or adjust prevailing speed limits accordingly.

"No other element of a car is as connected to the road as the tire," Rocca reminds us. "There are many modern sensors: lidar, sonar, cameras. But nothing on the 'touching' side of the car."

Virtually every new car is equipped with anti-lock brakes (ABS) and electronic stability control (ESC) systems, which also spring into action when a car’s wheels begin to slip, or when a car begins to slide off the driver’s intended course. But the Cyber Tire could further improve those systems, Rocca said, allowing a car to proactively adjust those safety systems, or automatically slow itself down in response to changing roadway conditions. 

“Because we’re sensing the ground constantly, we can warn of the risk of hydroplaning well before you lose control,” Rocca says. “The warning could appear on a screen, or the car could automatically decide to correct it with ABS or ESC.” 

Aside from data on dynamic loads, the Cyber Tire’s internal sensor might also communicate in-car information specific to that tire model, or the kilometers of travel it has absorbed. 

Pirelli is also developing the technology for race circuits and driving enthusiasts, with its Italia Track Adrenaline tire. With tire temperatures dramatically affecting traction, wear, and safety, this version monitors temperatures, pressure, and handling forces in real time. That combines with onboard GPS and telemetry data to help drivers improve their on-track skills. The system could deliver simple real-time instructions—such as color-coded screen readouts as a tire rises to or beyond optimal operating temperature—or, using popular telemetry tools, a granular analysis of the tire's performance after a lapping session. (At the highest levels of Formula One racing, cars are equipped with roughly 140 sensors, which collect 20 to 30 megabytes of telemetry data every lap.)

With 5G, V2V, and V2X systems still in the development phase, Pirelli can't say when its sensor-enabled hunks of rubber will reach the market. Automakers ultimately lead the adoption of new tire technology, and many are leery of new tech until they're sure consumers will pay for it. Car companies are also cautious about ceding the networked space in their cars to outside suppliers—witness their glacial, grudging adoption of Apple CarPlay and Android Auto. But Pirelli says it's working with major automakers on integrating the technology. And Rocca says that, like ABS in its nascent stages, smart tires could become common on vehicles within a decade. It's almost enough to get us wishing for a winter storm to try them out.

GPS Manipulation

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/11/gps_manipulatio.html

Long article on the manipulation of GPS in Shanghai. It seems not to be some Chinese military program, but ships that are stealing sand.

The Shanghai “crop circles,” which somehow spoof each vessel to a different false location, are something new. “I’m still puzzled by this,” says Humphreys. “I can’t get it to work out in the math. It’s an interesting mystery.” It’s also a mystery that raises the possibility of potentially deadly accidents.

“Captains and pilots have become very dependent on GPS, because it has been historically very reliable,” says Humphreys. “If it claims to be working, they rely on it and don’t double-check it all that much.”

On June 5 this year, the Run 5678, a river cargo ship, tried to overtake a smaller craft on the Huangpu, about five miles south of the Bund. The Run avoided the small ship but plowed right into the New Glory (Chinese name: Tong Yang Jingrui), a freighter heading north.

Boing Boing article.

Self-Driving Cars Learn About Road Hazards Through Augmented Reality

Post Syndicated from Yiheng Feng original https://spectrum.ieee.org/transportation/self-driving/selfdriving-cars-learn-about-road-hazards-through-augmented-reality

For decades, anyone who wanted to know whether a new car was safe to drive could simply put it through its paces, using tests established through trial and error. Such tests might investigate whether the car can take a sharp turn while keeping all four wheels on the road, brake to a stop over a short distance, or survive a collision with a wall while protecting its occupants.

But as cars take an ever greater part in driving themselves, such straightforward testing will no longer suffice. We will need to know whether the vehicle has enough intelligence to handle the same kind of driving conditions that humans have always had to manage. To do that, automotive safety-assurance testing has to become less like an obstacle course and more like an IQ test.

One obvious way to test the brains as well as the brawn of autonomous vehicles would be to put them on the road along with other traffic. This is necessary if only because the self-driving cars will have to share the road with the human-driven ones for many years to come. But road testing brings two concerns. First, the safety of all concerned can’t be guaranteed during the early stages of deployment; self-driving test cars have already been involved in fatal accidents. Second is the sheer scale that such direct testing would require.

That’s because most of the time, test vehicles will be driven under typical conditions, and everything will go as it normally does. Only in a tiny fraction of cases will things take a different turn. We call these edge cases, because they concern events that are at the edge of normal experience. Example: A truck loses a tire, which hops the median and careens into your lane, right in front of your car. Such edge cases typically involve a concurrence of failures that are hard to conceive of and are still harder to test for. This raises the question, How long must we road test a self-driving, connected vehicle before we can fairly claim that it is safe?

The answer to that question may never be truly known. What’s clear, though, is that we need other strategies to gauge the safety of self-driving cars. And the one we describe here—a mixture of physical vehicles and computer simulation—might prove to be the most effective way there is to evaluate self-driving cars.

A fatal crash occurs only once in about 160 million kilometers of driving, according to statistics compiled by the U.S. National Highway Traffic Safety Administration [PDF]. That’s a bit more than the distance from Earth to the sun (and 10 times as much as has been logged by the fleet of Google sibling Waymo, the company that has the most experience with self-driving cars). To travel that far, an autonomous car driving at highway speeds for 24 hours a day would need almost 200 years. It would take even longer to cover that distance on side streets, passing through intersections and maneuvering around parking lots. It might take a fleet of 500 cars 10 years to finish the job, and then you’d have to do it all over again for each new design.
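
The arithmetic behind those figures is easy to check:

```python
# Checking the figures in the paragraph above.
km_per_fatal_crash = 160e6  # NHTSA: roughly one fatal crash per 160 million km

# One car driving at highway speed (say 100 km/h) around the clock:
years_one_car = km_per_fatal_crash / 100 / 24 / 365
print(f'{years_one_car:.0f} years')  # ~183, i.e., "almost 200 years"

# A 500-car fleet covering the same distance in 10 years:
km_per_car_per_year = km_per_fatal_crash / (500 * 10)
print(f'{km_per_car_per_year:,.0f} km per car per year')  # 32,000 km/year
```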

Clearly, the industry must augment road testing with other strategies to bring out as many edge cases as possible. One method now in use is to test self-driving vehicles in closed test facilities where known edge cases can be staged again and again. Take, as an example, the difficulties posed by cars that run a red light at high speed. An intersection can be built as if it were a movie set, and self-driving cars can be given the task of crossing when the light turns green while at the same time avoiding vehicles that illegally cross in front of them.

While this approach is helpful, it also has limitations. Multiple vehicles are typically needed to simulate edge cases, and professional drivers may well have to pilot them. All this can be costly and difficult to coordinate. More important, no one can guarantee that the autonomous vehicle will work as desired, particularly during the early stages of such testing. If something goes wrong, a real crash could happen and damage the self-driving vehicle or even hurt people in other vehicles. Finally, no matter how ingenious the set designers may be, they cannot be expected to create a completely realistic model of the traffic environment. In real life, a tree’s shadow can confuse an autonomous car’s sensors and a radar reflection off a manhole cover can make the radar see a truck where none is present.

Computer simulation provides a way around the limitations of physical testing. Algorithms generate virtual vehicles and then move them around on a digital map that corresponds to a real-world road. If the data thus generated is then broadcast to an actual vehicle driving itself on the same road, the vehicle will interpret the data exactly as if it had come from its own sensors. Think of it as augmented reality tuned for use by a robot.

Although the physical test car is driving on empty roads, it “thinks” that it is surrounded by other vehicles. Meanwhile, it sends information that it is gathering—both from augmented reality and from its sensing of the real-world surroundings—back to the simulation platform. Real vehicles, simulated vehicles, and perhaps other simulated objects, such as pedestrians, can thus interact. In this way, a wide variety of scenarios can be tested in a safe and cost-effective way.

The idea for automotive augmented reality came to us by the back door: Engineers had already improved certain kinds of computer simulations by including real machines in them. As far back as 1999, Ford Motor Co. used measurements of an actual revving engine to supply data for a computer simulation of a power train. This hybrid simulation method was called hardware-in-the-loop, and engineers resorted to it because mimicking an engine in software can be very difficult. Knowing this history, it occurred to us that it would be possible to do the opposite—generate simulated vehicles as part of a virtual environment for testing actual cars.

In June 2017, we implemented an augmented-reality environment in Mcity, the world’s first full-scale test bed for autonomous vehicles. It occupies 32 acres on the North Campus of the University of Michigan, in Ann Arbor. Its 8 lane-kilometers (5 lane-miles) of roadway are arranged in sections having the attributes of a highway, a multilane arterial road, or an intersection.

Here’s how it works. The autonomous test car is equipped with an onboard device that can broadcast vehicle status, such as location, speed, acceleration, and heading, doing so every tenth of a second. It does this wirelessly, using dedicated short-range communications (DSRC), a standard similar to Wi-Fi that has been earmarked for mobile users. Roadside devices distributed around the testing facility receive this information and forward it to a traffic-simulation model, one that can simulate the testing facility by boiling it down to an equivalent network geometry that incorporates the actions of traffic signals. Once the computer model receives the test car’s information, it creates a virtual twin of that car. Then it updates the virtual car’s movements based on the movements of the real test car.

Feeding data from the real test vehicle into the computer simulation constitutes only half of the loop. We complete the other half by sending information about the various vehicles the computer has simulated to the test car. This is the essence of the augmented-reality environment. Every simulated vehicle also generates vehicle-status messages at a frequency of 10 hertz, which we forward to the roadside devices, which in turn broadcast them in real time. When the real test car receives that data, its vehicle-control system uses it to "see" all the virtual vehicles. To the car, these simulated entities are indistinguishable from the real thing.
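
A minimal sketch of that 10-hertz status stream appears below. The message fields follow the article; the JSON-over-UDP wire format and the addresses are assumptions for illustration (production DSRC stacks instead encode SAE J2735 Basic Safety Messages).

```python
# Sketch of the 10 Hz vehicle-status broadcast. The JSON-over-UDP format
# and the roadside address are assumptions; real DSRC systems encode
# SAE J2735 Basic Safety Messages.
import json
import socket
import time

BROADCAST_HZ = 10  # every tenth of a second, as described above


def status_message(vehicle_id, lat, lon, speed_mps, heading_deg, accel_mps2):
    return json.dumps({
        'id': vehicle_id, 'lat': lat, 'lon': lon, 'speed': speed_mps,
        'heading': heading_deg, 'accel': accel_mps2, 'ts': time.time(),
    }).encode()


def broadcast(get_state, roadside_addr=('192.0.2.10', 4000), n=10):
    """Send n status messages; get_state() returns the six fields above."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for _ in range(n):
        sock.sendto(status_message(*get_state()), roadside_addr)
        time.sleep(1.0 / BROADCAST_HZ)


if __name__ == '__main__':
    # Example with a fixed (simulated) vehicle state:
    broadcast(lambda: ('veh42', 42.2995, -83.6990, 12.0, 90.0, 0.0))
```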

By having vehicles pass messages through the roadside devices—that is, by substituting “vehicle-to-infrastructure” connections for direct “vehicle-to-vehicle” links—real vehicles and virtual vehicles can sense one another and interact accordingly. In the same fashion, traffic-signal status is also synchronized between the real and the simulated worlds. That way, real and virtual vehicles can each “look” at a given light and see whether it is green or red.

The status messages passed between real and simulated worlds include, of course, vehicle positions. This allows actual vehicles to be mapped onto the simulated road network, and simulated vehicles to be mapped into the actual road network. The positions of actual vehicles are represented with GPS coordinates—latitude, longitude, and elevation—and those of simulated vehicles with local coordinates—x, y, and z. An algorithm transforms one system of coordinates into the other.
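
One standard way to implement that transformation is to project GPS coordinates onto a local tangent plane anchored at a fixed reference point on the test site. The sketch below uses an equirectangular approximation, which is ample over a site of Mcity's size; the reference coordinates are placeholders, not the facility's actual survey datum.

```python
# Tangent-plane (equirectangular) projection between GPS and local x/y/z.
# The reference origin below is a placeholder, not Mcity's survey datum.
import math

EARTH_RADIUS_M = 6_371_000
REF_LAT, REF_LON, REF_ELEV = 42.2995, -83.6990, 280.0  # hypothetical origin


def gps_to_local(lat, lon, elev):
    """Project GPS to local x (east), y (north), z (up), in meters."""
    x = math.radians(lon - REF_LON) * EARTH_RADIUS_M * math.cos(math.radians(REF_LAT))
    y = math.radians(lat - REF_LAT) * EARTH_RADIUS_M
    return x, y, elev - REF_ELEV


def local_to_gps(x, y, z):
    """Invert the projection; adequate over a site a few hundred meters across."""
    lat = REF_LAT + math.degrees(y / EARTH_RADIUS_M)
    lon = REF_LON + math.degrees(x / (EARTH_RADIUS_M * math.cos(math.radians(REF_LAT))))
    return lat, lon, REF_ELEV + z


x, y, z = gps_to_local(42.3001, -83.6984, 281.2)
print(round(x, 1), round(y, 1), round(z, 1))  # ~49.3 66.7 1.2
```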

But that mathematical transformation isn’t all that’s needed. There are small GPS and map errors, and they sometimes prevent a GPS position, forwarded from the actual test car and translated to the local system of coordinates, from appearing on a simulated road. We correct these errors with a separate mapping algorithm. Also, when the test car stops, we must lock it in place in the simulation, so that fluctuations in its GPS coordinates do not cause it to drift [PDF] out of position in the simulation.

Everything here depends on wireless communication. To ensure that it was reliable, we installed four roadside radios in Mcity, enough to cover the entire testing facility. The DSRC wireless standard, which operates in the 5.9-gigahertz band, gives us high data-transmission rates and very low latency. These are critical to safety at high speeds and during stop-on-a-dime maneuvers. DSRC is in wide use in Japan and Europe; it hasn’t yet gained much traction in the United States, although Cadillac is now equipping some of its cars with DSRC devices.

Whether DSRC will be the way cars communicate with one another is uncertain, though. Some people have argued that cellular communications, particularly in the coming 5G implementation, might offer equally low latency with a greater range. Whichever standard wins out, the communications protocols used in our system can easily be adapted to it.

We expect that the software framework we used to build our system will also endure, at least for a few years. We constructed our simulation with PTV Vissim, a commercial package developed in Germany to model traffic flow “microscopically,” that is, by simulating the behavior of each individual vehicle.

One thing that can be expected to change is the test vehicle, as other companies begin to use our system to put their own autonomous vehicles through their paces. For now, our one test vehicle is a Lincoln MKZ Hybrid, which is equipped with DSRC and thus fully connected. Drive-by-wire controls that we added to the car allow software to command the steering wheel, throttle, brake, and transmission. The car also carries multiple radars, lidars, cameras, and a GPS receiver with real-time kinematic positioning, which improves resolution by referring to a signal from a ground-based radio station.

We have implemented two testing scenarios. In the first one, the system generates a virtual train and projects it into the augmented reality perceived by the test car as the train approaches a mock-up of a rail crossing in Mcity. The point is to see whether the test car can stop in time and then wait for the train to pass. We also throw in other virtual vehicles, such as cars that follow the test car. These strings of cars—actual and virtual—can be formally arranged convoys (known as platoons) or ad hoc arrangements: perhaps cars queuing to get onto an entry ramp.

The second, more complicated testing scenario involves the case we mentioned earlier—running a red light. In the United States, cars running red lights cause more than a quarter of all the fatalities that occur at an intersection, according to the American Automobile Association. This scenario serves two purposes: to see how the test car reacts to traffic signals and also how it reacts to red-light-running scofflaws.

Our test car should be able to tell whether the signal is red or green and decide accordingly whether to stop or to go. It should also be able to notice that the simulated red-light runner is coming, predict its trajectory, and calculate when and where the test car might be when it crosses that trajectory. The test car ought to be able to do all these things well enough to avoid a collision.

Because the computer running the simulation can fully control the actions of the red-light runner, it can generate a wide variety of testing parameters in successive iterations of the experiment. This is precisely the sort of thing a computer can do much more accurately than any human driver. And of course, the entire experiment can be done in complete safety because the lawbreaker is merely a virtual car.
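
As a sketch of how such a parameter sweep might look, the toy code below varies the virtual runner's speed and starting distance, computes when each vehicle reaches the conflict point, and flags iterations that force an evasive response. The geometry and margins are invented for illustration.

```python
# Illustrative parameter sweep for the red-light-runner scenario.
# Speeds, offsets, and the conflict margin are invented values.
import itertools


def eta(offset_m, speed_mps):
    """Seconds until a vehicle reaches the conflict point."""
    return offset_m / speed_mps


def scenarios():
    runner_speeds = [10, 15, 20, 25]        # m/s
    runner_offsets = [20, 40, 60, 80, 100]  # m from the conflict point
    for v, d in itertools.product(runner_speeds, runner_offsets):
        yield {'speed': v, 'offset': d, 'runner_eta': eta(d, v)}


def demands_evasion(test_car_eta, runner_eta, margin_s=1.5):
    # Both vehicles would occupy the conflict point within the margin.
    return abs(test_car_eta - runner_eta) < margin_s


test_car_eta = eta(45, 15)  # test car 45 m out at 15 m/s
for s in scenarios():
    if demands_evasion(test_car_eta, s['runner_eta']):
        print('evasion required:', s)
```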

There is a lot more of this kind of edge-case simulation that can be done. For example, we can use the augmented-reality environment to evaluate the ability of test cars to handle complex driving situations, like turning left from a stop sign onto a major highway. The vehicle needs to seek gaps in traffic going in both directions, meanwhile watching for pedestrians who may cross at the sign. The car can decide to make a stop in the median first, or instead simply drive straight into the desired lane. This involves a decision-making process of several stages, all while taking into account the actions of a number of other vehicles (including predicting how they will react to the test car’s actions).

Another example involves maneuvers at roundabouts—entering, exiting, and negotiating for position with other cars—without help from a traffic signal. Here the test car needs to predict what other vehicles will do, decide on an acceptable gap to use to merge, and watch for aggressive vehicles. We can also construct augmented-reality scenarios with bicyclists, pedestrians, and other road users, such as farm machinery. The less predictable such alternative actors are, the more intelligence the self-driving car will need.

Ultimately, we would like to put together a large library of test scenarios including edge cases, then use the augmented-reality testing environment to run the tests repeatedly. We are now building up such a library with data scoured from reports of actual crashes, together with observations by sensor-laden vehicles of how people drive when they don’t know they’re part of an experiment. By putting together disparate edge conditions, we expect to create artificial edge cases that are particularly challenging for the software running in self-driving cars.

Thus armed, we ought to be able to see just how safe a given autonomous car is without having to drive it to the sun and back.

This article appears in the December 2019 print issue as “Augmented Reality for Robocars.”

About the Authors

Henry X. Liu is a professor of civil and environmental engineering at the University of Michigan, Ann Arbor, and a research professor at the University of Michigan Transportation Research Institute. Yiheng Feng is an assistant research scientist in the university’s Engineering Systems Group.

Should Yellow Traffic Lights Last Longer?

Post Syndicated from Michelle V. Rafter original https://spectrum.ieee.org/cars-that-think/transportation/safety/transportation-engineers-consider-whether-yellow-traffic-lights-should-last-longer

Mats Järlström’s six-year crusade to make yellow traffic lights safer for drivers could finally be paying off.

In mid-October, an Institute of Transportation Engineers appeals panel agreed with the Oregon consultant’s claims that a long-standing, widely used formula for setting the timing of yellow traffic lights doesn’t adequately account for the extra time a driver might need to safely and comfortably make a turn through an intersection.

The three-person ITE panel findings [PDF] didn’t suggest what the timing should be. A separate ITE committee will propose recommended practice for so-called “dilemma-zone situations for left-turn and right-turn movements” that the organization’s board must then approve. According to ITE Chief Technical Director Jeff Lindley, that process is underway and ITE could publish guidelines during the first quarter of 2020.

“It’s a historic moment,” Järlström said of the appeal panel’s decision. “This is a very conservative area of technology. There are many traffic signals that need to be changed. We want to change it so all of them are consistent, not only in the U.S. but through the world.”

NTSB Investigation of Fatal Driverless Car Accident

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/11/ntsb_investigat.html

Autonomous systems are going to have to do much better than this.

The Uber car that hit and killed Elaine Herzberg in Tempe, Ariz., in March 2018 could not recognize all pedestrians, and was being driven by an operator likely distracted by streaming video, according to documents released by the U.S. National Transportation Safety Board (NTSB) this week.

But while the technical failures and omissions in Uber’s self-driving car program are shocking, the NTSB investigation also highlights safety failures that include the vehicle operator’s lapses, lax corporate governance of the project, and limited public oversight.

The details of what happened in the seconds before the collision are worth reading. They describe a cascading series of issues that led to the collision and the fatality.

As computers continue to become part of things, and affect the world in a direct physical manner, this kind of thing will become even more important.

Driving Tests Coming for Autonomous Cars

Post Syndicated from Jeff Hecht original https://spectrum.ieee.org/cars-that-think/transportation/self-driving/driving-tests-coming-for-autonomous-cars

The three laws of robotic safety in Isaac Asimov’s science fiction stories seem simple and straightforward, but the ways the fictional tales play out reveal unexpected complexities. Writers of safety standards for self-driving cars express their goals in similarly simple terms. But several groups now developing standards for how autonomous vehicles will interact with humans and with each other face real-world issues much more complex than science fiction.

Advocates of autonomous cars claim that turning the wheel over to robots could slash the horrific toll of 1.3 million people killed around the world each year by motor vehicles. Yet the public has become wary because robotic cars also can kill. Documents released last week by the U.S. National Transportation Safety Board blame the March 2018 death of an Arizona pedestrian struck by a self-driving Uber on safety failures by the car’s safety driver, the company, and the state of Arizona. Even less-deadly safety failures are damning, like the incident where a Tesla in Autopilot mode wasn’t smart enough to avoid crashing into a stopped fire engine whose warning lights were flashing.

Safety standards for autonomous vehicles “are absolutely critical” for public acceptance of the new technology, says Greg McGuire, associate director of the Mcity autonomous vehicle testing lab at the University of Michigan. “Without them, how do we know that [self-driving cars] are safe, and how do we gain public trust?” Earning that trust requires developing standards through an open process that the public can scrutinize, and may even require government regulation, he adds.

Companies developing autonomous technology have taken notice. Earlier this year, representatives from 11 companies including Aptiv, Audi, Baidu, BMW, Daimler, Infineon, Intel, and Volkswagen collaborated to write a wide-ranging whitepaper titled “Safety First for Automated Driving.” They urged designing safety features into the automated driving function, and using heightened cybersecurity to assure the integrity of vital data including the locations, movement, and identification of other objects in the vehicle environment. They also urged validating and verifying the performance of robotic functions in a wide range of operating conditions.

On 7 November, the International Telecommunication Union announced the formation of a focus group called AI for Autonomous and Assisted Driving. Its aim: to develop performance standards for artificial intelligence (AI) systems that control self-driving cars. (The ITU has come a long way since its 1865 founding as the International Telegraph Union, with a mandate to standardize the operations of telegraph services.)

ITU intends the standards to be “an equivalent of a Turing Test for AI on our roads,” says focus group chairman Bryn Balcombe of the Autonomous Drivers Alliance. A computer passes a Turing Test if it can fool a person into thinking it’s a human. The AI test is vital, he says, to assure that human drivers and the AI behind self-driving cars understand each other and predict each other’s behaviors and risks.

A planning document says AI development should match public expectations so:

• AI never engages in careless, dangerous, or reckless driving behavior

• AI remains aware, willing, and able to avoid collisions at all times

• AI meets or exceeds the performance of a competent, careful human driver

 

These broad goals for automotive AI algorithms resemble Asimov’s laws, insofar as they bar hurting humans and demand that they obey human commands and protect their own existence. But the ITU document includes a list of 15 “deliverables” including developing specifications for evaluating AIs and drafting technical reports needed for validating AI performance on the road.

A central issue is convincing the public to entrust the privilege of driving—a potentially life-and-death activity—to a technology which has suffered embarrassing failures like the misidentification of minorities that led San Francisco to ban the use of facial recognition by police and city agencies.

Testing how well an AI can drive is vastly complex, says McGuire. Human adaptability makes us fairly good drivers. "We're not perfect, but we are very good at it, with typically a hundred million miles between fatal traffic crashes," he says. Racking up that much distance in real-world testing is impractical—and it is but a fraction of the billions of vehicle miles needed for statistical significance. That's a big reason developers have turned to simulations. Computers can help them run up the virtual mileage needed to find potential safety flaws that might arise only in rare situations, like in a snowstorm or heavy rain, or on a road under construction.

It’s not enough for an automotive AI to assure the vehicle’s safety, says McGuire. “The vehicle has to work in a way that humans would understand.” Self-driving cars have been rear-ended when they stopped in situations where most humans would not have expected a driver to stop. And a truck can be perfectly safe even when close enough to unnerve a bicyclist.

Other groups are also developing standards for robotic vehicles. ITU is covering both automated driver assistance and fully autonomous vehicles. Underwriters Laboratories is working on a standard for fully autonomous vehicles. The Automated Vehicle Safety Consortium, a group including auto companies, plus Lyft, Uber, and SAE International (formerly the Society of Automotive Engineers), is developing safety principles for SAE Level 4 and 5 autonomous vehicles. The BSI Group (formerly the British Standards Institution) developed a strategy for British standards for connected and autonomous vehicles and is now working on the standards themselves.

How long will it take to develop standards? “This is a research process,” says McGuire. “It takes as long as it takes” to establish public trust and social benefit. In the near term, Mcity has teamed with the city of Detroit, the U.S. Department of Transportation, and Verizon to test autonomous vehicles for transporting the elderly on city streets. But he says the field “needs to be a living thing that continues to evolve” over a longer period.

Meet the VoloCity

Post Syndicated from LEMO original https://spectrum.ieee.org/transportation/alternative-transportation/meet-the-volocity

Presented last August, the VoloCity is the latest solution proposed by Volocopter, and the first designed for actual commercial use. All of its features (number of seats, range, speed…) are related to its mission: to be an inner-city flying taxi and nothing else. The choice of simplicity (such as direct-drive motors and fixed-pitch rotors) makes the solution less costly to manufacture, more reliable (less expensive maintenance and easier to certify), and lighter, so more economical and less noisy. Everything is closely linked.

1. Drive

The wide span and the large number of battery-powered motors and rotors (18 of each) reduce the noise level and generate a frequency that is softer and more pleasant on the ear. The design also improves safety: the VoloCity is capable of flying even if several motors are inoperative. The aircraft will fly at "only" 110 km/h, which is safer (better collision avoidance) and less noisy than faster eVTOLs.

2. Cabin

The passenger and the pilot have easy access and are seated comfortably (Volocopter's analyses show that the large majority of intra-urban passengers travel alone). There is space for hand luggage, plus air conditioning, silence, and a stunning view. Once regulations authorise it, the VoloCity will also be able to fly autonomously.

3. Batteries

The VoloCity carries nine exchangeable lithium-ion battery packs, which are recharged at the vertiports. Whenever the aircraft lands between two flights, the batteries can be swapped for fresh ones in five minutes and the aircraft can take off again. Its 35 km range makes it possible to connect the most popular destinations (city centres, airports, business centres…).

4. Skids

Vertical take-off and landing means there is no need for wheels or retractable landing gear. The skids are part of the rationalisation process to reduce weight, breakdowns, and production and maintenance costs. Ground operations are handled by conveyor belts or platforms.


LEMO: The Sky Is the Limit

Post Syndicated from LEMO original https://spectrum.ieee.org/transportation/alternative-transportation/lemo-the-sky-is-the-limit

Eight years ago, when Alex Zosel cofounded his start-up, the idea of electric air taxis seemed completely crazy. Today, however, some cities are already testing them. Volocopter’s founder explains why urban transport should turn to the sky and how his start-up wants to become more than just an aircraft manufacturer.

Alex Zosel, how will urban mobility evolve?

Urban mobility will undergo a huge transformation in the next 20 to 30 years. Following an already perceptible trend, private means of transport will increasingly be chosen for environmental reasons (efficiency, low emissions…) rather than for ego-boosting (power, speed…). Many new devices will appear or co-exist: we will see cable cars, e-scooters, conveyor belts, streets dedicated to fast bikes, etc. This evolution will certainly also depend on how society and work evolve. New technologies could very well reduce working hours and the workforce, or increase teleworking – all of this will have an impact on traffic.

How does Volocopter position itself in this evolution?

We believe that in megacities with saturated roads and major traffic flow issues, the only viable solution will be the sky! We want to be one of the big players, one of the drivers in urban air mobility. This is why we talk a lot about the transformation of cities with architects and infrastructure specialists. On the one hand, we are thinking in the long term – imagining a future with thousands of aircraft in the city skies. On the other hand, in the short term, the first solutions, the first missions: where are the current needs for flying taxis? Where would they really make a positive impact in terms of mobility? It is on these short-term solutions that we will gradually build up the ecosystem of urban air mobility.

Which missions are you focusing on first?

On combating bottlenecks in megacities. They represent a highly critical situation that flying taxis could rapidly improve. For instance, by transporting people between airports and city centres, or between tourist attractions – an issue in a large number of cities. It wouldn’t be necessary to transport everyone through the skies, but relieving some roads could rapidly help the whole system to flow better.

Short hops and no long-distance commuting, unlike the market targeted by the Uber Elevate system?

We want to fly between cities one day, but we believe that flying in inner cities would fix a more urgent problem. There is also a technological reason for this choice: we want a reliable solution now, using tried-and-tested, already existing batteries, which do not allow for long-distance flights. The limited range of our aircraft – 35km – is not a problem: our analysis shows that 90% of the megacities we are targeting have a major airport within 30km of the city centre. Therefore, millions of journeys are within our reach.

Your targeted mission also influences some other design aspects of your solution…

Our aircraft is fully designed for this mission. Its extensive propulsion system, with its 18 engines, makes it extremely quiet and reliable (it can fly even if several engines aren’t functioning). Noise and safety are the major criteria for being approved in cities. Our speed of 110km/h is also important: flying twice as fast would be terrifying, dangerous, much noisier and, considering the short distances, it wouldn’t really be a time-saver.
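
[A quick back-of-the-envelope check of that last claim, using the 30km airport-to-centre distance cited above; the fixed door-to-door overhead is an assumed figure, not one from Volocopter.]

```python
# How much time would doubling the cruise speed save on a 30 km hop?
# The 10-minute fixed overhead for boarding, take-off, and landing is
# an ASSUMPTION for illustration.

OVERHEAD_MIN = 10

for speed_kmh in (110, 220):
    door_to_door = 30 / speed_kmh * 60 + OVERHEAD_MIN
    print(f"{speed_kmh} km/h: about {door_to_door:.0f} min door to door")

# ~26 min vs ~18 min: roughly 8 minutes saved, at the price of far more
# noise and energy.
```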

Are eVTOLs appropriate from the environmental point of view?

They are zero-emission, which is already great. Obviously, they require more energy to take off than a car on a flat surface. On the other hand, there is no need to build roads, bridges, tunnels – all these costly infrastructures which then generate huge maintenance costs. If you add the impact of these infrastructures to the impact of the vehicles, it is clearly more efficient to go up into the sky! Slowing down or scaling back the development of the road network also makes it possible to preserve or reintroduce nature in cities.

Launching innovative new means of transport is often hindered by the rules which have to be created or adopted…

Integrating this aspect is part of Volocopter’s DNA: we have been discussing how to integrate eVTOLs with the authorities for over 7 years. In general, the authorities are fairly open: they want to make flights safer, which is exactly what the sensors and automated systems built into the heart of eVTOLs provide. We have been partners of the European Aviation Safety Agency (EASA) for two years. We have helped to integrate air taxis flying in inner cities – which could also fly autonomously – into these regulations. Similarly, we have been working with the US Federal Aviation Administration, whose processes are somewhat more complex and extensive. The EASA’s “Special Condition for VTOL” was presented in early July, and we are very proud to have contributed to it. The VoloCity, our new aircraft, will be the first commercially licensed Volocopter accredited to these high standards and requirements. Simply put, the safety requirements are that flying taxis will have to be as safe as commercial aircraft. They shall be.

How about integration in air traffic?

This is more of a local matter: the selection of routes, weather conditions, altitudes, coexistence with drones and helicopters… Again, we have a lot of experience, notably thanks to our cooperation with the Dubai government’s Roads and Transport Authority. They asked us to participate in a project for autonomous flying devices in the airspace over their city. The feasibility of such a service was proven by a successful test with our Volocopter 2X in September 2017 – the first-ever public flight of an autonomous urban air taxi. We are currently working with the Singapore authorities, who are also interested.

So, it is city by city that Volocopter develops its projects?

Yes. Many cities have already asked us to become technical partners and advise them on how to develop urban air mobility infrastructures and services. They are therefore motivated and very much involved. They dedicate the necessary resources, so things can move forward quickly – we have no fear of not getting approvals. We start off with a first route from A to B. Whatever happens afterwards, Volocopter always learns a lot, and it helps us to extend our offer.

Do you also work with airport operators, such as Skyport?

Yes, indeed. Not only for the integration of air taxis in their territory, but also for integrating our services with theirs. We could imagine, for instance, that check-in for an international flight would be possible upon embarking at a vertiport in the city. It would relieve traffic in their departure areas.

The way you describe it, it would seem that Volocopter will provide services, rather than just aircraft…

You are exactly right. From the beginning, we never wanted to be a manufacturer of vehicles. We want to be a mobility provider, to sell the tickets. Everything beneath that is based on very strong local partnerships, with the authorities, real estate players, airport operators or helicopter companies, whose launch pads we could use. In a nutshell, we have to associate with all of those to ensure the smooth integration of flying taxi services in a city. At the same time, our vertiport projects are not exclusive: other eVTOLs could also use them. Volocopter could even buy or rent aircraft from other manufacturers. We are really open – we wouldn’t be able to create an air-mobility system if we were exclusive. Our eVTOLs have a head-start, so we will start with them to be the first on the market. Then we will see how to increase the scale.

When will these air taxis be part of our daily lives?

They have been part of my daily life for years, since I am one of the test pilots! I think the first regular route will be opened in 2 to 3 years. The speed of their implementation will depend on several factors, including the production capacity of eVTOL manufacturers! By the time I retire in 13 years, we should have a system of flying taxis in at least 10 megacities. If not, well, in that case I will have to retire five years late!

From a small drone to flying taxis, the Volocopter history

In 2010, Stephan Wolf, a specialist in industrial automation software development, bought a quadcopter for his son. It was one of the first that even a child could easily control. Impressed by its manoeuvrability, stability and easy handling, Wolf immediately started dreaming: what if it were scaled up to transport people? After some research and calculations, he developed a project. He contacted Alex Zosel, a childhood friend who had become an entrepreneur. A passionate hobby pilot and paragliding teacher himself, Zosel didn’t hesitate for a second. The two men and Thomas Senkel founded e-Volo in 2011. The startup, located near Karlsruhe, Germany, subsequently became Volocopter.

In October 2011, the VC1 proved that an electric multicopter was capable of taking off vertically with a passenger. Then came the first prototype, the VC200, capable of carrying two people. It made its first manned flight – with Zosel himself as the pilot – in March 2016. The VC200 had been financed through an instantly successful online crowdfunding campaign (500,000 euros in 2 hours and 35 minutes!).

This model also marked a major milestone: it received a permit from the German authorities to fly anywhere in the country. The Volocopter 2X, first displayed in 2017, has since been built in a small series for extensive test flights in manned, unmanned (which includes remote-controlled) and automated (following a preset route with no human interference) configurations.

In only 8 years, the seemingly absurd idea of a “drone for humans” has become a certified aircraft. In the meantime, eVTOL (electric vertical take-off and landing) aircraft have set off an industrial race. Volocopter, a visionary? In any case, that is how the World Economic Forum seems to regard the startup: it was among the winners of the 2019 “Technology Pioneer” award, which recognises companies that are “poised to have a significant impact on business and society”.

Many investors, large and small, have contributed to financing Volocopter so far, including Daimler and Intel. Lukasz Gadowski, a German serial entrepreneur and founder of the micromobility electric-scooter company Circ (formerly known as Flash), is one of the personalities who have been won over. Last September, Geely – the Chinese group that owns Volvo and Lotus cars, as well as Terrafugia (hybrid cars/aircraft) – led the Series C round of private funding, amounting to 50 million euros.

This summer, Volocopter presented its next model, the VoloCity (described in detail above). Slightly larger and twice as heavy and powerful as the 2X, this multicopter is not a prototype, but rather the first model to be marketed.

The startup, with a current staff of about 150, aims at becoming a mobility provider (see the interview with Alex Zosel above). It has already been working for several years with Dubai and Singapore, which have requested feasibility studies on such services in their skies.

At the end of October, Volocopter conducted a public proof-of-concept flight in Singapore. It also displayed a vertiport prototype there – not for launching eVTOLs (the necessary authorisations are still pending), but for testing and improving various services (reception, booking, check-in). This full-scale proof of concept is another step towards the imminent launch of a brand new type of urban mobility.
