Tag Archives: Aerospace

The Networks That Aim to Track GPS Interference Around the World

Post Syndicated from Mark Harris original https://spectrum.ieee.org/tech-talk/aerospace/aviation/the-networks-that-aim-to-track-gps-interference-around-the-world

When Kevin Wells’ private jet lost its GPS reception after taking off from Hayward Executive Airport in California’s Bay Area in February 2019, he whipped out his phone and started filming the instrument panel. For a few seconds, the signals from over a dozen GPS satellites can be seen blinking away, before slowly returning as Wells continues his ascent. Wells was able to continue his flight safely, but may have accidentally flown too high, into commercial airspace near Oakland.

Wells was quick to film the incident because this was not the first time he had suffered GPS problems. Less than a month earlier, his Cessna had been hit by a similar outage, at almost the identical location. “When I asked the Hayward Tower about the loss of signal, they did not know of [it],” he wrote later in a report to NASA’s Aviation Safety Reporting System.

“It wasn’t a big event for me because I came out of the clouds at the beginning of the second event,” says Wells of the 2019 interference.

Wells had fallen victim to a GPS interference event, in which rogue or malicious signals drown out the faint signals from navigation satellites in orbit. Such events, often linked to U.S. military tests, can cause dangerous situations and near-misses. A recent Spectrum investigation discovered that they are far more prevalent, particularly in the western United States, than had previously been thought.

Luckily, Wells was in a position to do something about it. As Executive Director of the Stanford Institute for Theoretical Physics, Wells knew many of the university’s researchers. He took his video to the GPS Lab at Stanford Engineering, where Professor Todd Walter was already studying the problem of GPS jamming.

The Federal Aviation Administration (FAA) and the Federal Communications Commission (FCC) do have procedures for finding people deliberately or accidentally interfering with GPS signals. When pilots reported mysterious, sporadic GPS jamming near Wilmington Airport in North Carolina, the FAA eventually identified a poorly designed antenna on a utility company’s wireless control system. “But this took many weeks, maybe months,” Walter tells Spectrum. “Our goal is to track down the culprit in days.”

Walter’s team was working on a drone that would autonomously sniff out local signals in the GPS band, without having to rely on GPS for its own navigation. “But we didn’t have permission to fly drones in Hayward’s airspace and it wasn’t quite at the point where we could just launch it and seek out the source of the interference,” says Walter.

Instead, Walter had a different idea: Why not use the GPS receivers in other aircraft to crowdsource a solution? All modern planes carry ADS-B transponders—devices that continually broadcast their GPS location, speed and heading to aid air traffic control and avoid potential collisions. These ADS-B signals are collected by nearby aircraft but also by many terrestrial sensors, including a network of open-access receivers organized by OpenSky, a Swiss nonprofit.

With OpenSky’s data in hand, the Stanford researchers’ first task was accurately identifying interference events. They found that the vast majority of times that ADS-B receivers lost data had nothing to do with interference. Some receivers were unreliable; others were obscured from planes overhead by buildings or trees.

Being able to link the loss of Wells’ GPS signals with data from the OpenSky database was an important step in characterizing genuine jamming. Integrity and accuracy indicators built into the ADS-B data stream also helped the researchers. “I think certain interference events have a characteristic that we can now recognize,” says Walter. “But I’d be concerned that there would be other interference events that we don’t have the pattern for. We need more resources and more data.”
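To make that concrete, here is a minimal sketch, in Python, of the kind of first-pass screening such a study might use. This is our illustration, not the Stanford pipeline, and the field names are hypothetical stand-ins for OpenSky’s actual schema. The idea: if an aircraft keeps broadcasting but its reported Navigation Integrity Category (NIC) collapses for several consecutive reports, the likely culprit is degraded GPS rather than a blocked receiver.

```python
# A minimal sketch (not the Stanford team's pipeline) of flagging possible
# GPS interference in ADS-B data. Field names ('time', 'nic') are
# hypothetical stand-ins for OpenSky's real schema.

def flag_possible_jamming(track, nic_floor=7, min_run=3):
    """track: time-ordered list of dicts with 'time' and 'nic' keys.
    Returns (start, end) spans where the broadcast integrity value
    stayed below nic_floor for at least min_run consecutive reports."""
    spans, run = [], []
    for report in track:
        if report["nic"] < nic_floor:
            run.append(report)
        else:
            if len(run) >= min_run:
                spans.append((run[0]["time"], run[-1]["time"]))
            run = []
    if len(run) >= min_run:
        spans.append((run[0]["time"], run[-1]["time"]))
    return spans

# Example: a flight whose reported integrity collapses mid-track.
track = [{"time": t, "nic": 8 if t < 3 or t > 6 else 2} for t in range(10)]
print(flag_possible_jamming(track))  # [(3, 6)]
```

A real screen would also have to rule out the receiver-side failures described above, for example by checking whether other aircraft in the same area kept reporting normally.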

For the Hayward incidents, Walter’s team managed to identify 17 days of potential jamming between January and March 2019. Of the 265 aircraft that flew near Hayward during that time, the ADS-B data showed that 25 had probably experienced GPS interference. This intermittent jamming was not enough to narrow down the source of the signals, says Walter: “We can say it’s in this region of Hayward but you don’t want to go searching ten city blocks. We want to localize it to a house or building.”

The FAA eventually issued a warning to pilots flying in and out of Hayward and much of the southern Bay Area, and the interference seemed to quiet down. However, Walter recently looked at 2020 ADS-B data near Hayward and found 13 more potentially jammed flights.

Walter now hopes to expand on the Hayward study by tapping into the FAA’s own network of ADS-B receivers to help uncover more interference signals hidden in the data.

Leveraging ADS-B signals is not the only way researchers can troubleshoot GPS reception. Earlier this month, John Stader and Sanjeev Gunawardena at the U.S. Air Force Institute of Technology presented a paper at the Institute of Navigation’s PTTI 2021 conference detailing an alternative interference detection system.

The AFIT system also uses free and open-source data, but this time from a network of continuously-operating terrestrial GPS receivers across the globe. These provide high-quality data on GPS signals in their vicinity, which the AFIT researchers then mined to identify interference events. The AFIT team was able to identify 30 possible jamming events in one 24-hour period, detecting each within minutes of its occurrence. “The end goal of this research effort is to create a worldwide, automated system for detecting interference events using publicly available data,” wrote the authors.
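The underlying signal processing can be gestured at with a toy example. The sketch below is our illustration, not AFIT’s published method: jamming raises a receiver’s noise floor, so the carrier-to-noise density (C/N0) of every satellite it tracks sags at once, and a sudden drop below a rolling baseline is a cheap, fast flag for interference.

```python
# Toy interference detector for a fixed GPS station (our illustration, not
# AFIT's method): flag epochs where the station-wide mean C/N0 falls well
# below the median of the preceding window.

import statistics

def detect_cn0_drops(cn0_by_epoch, drop_db=6.0, window=30):
    """cn0_by_epoch: per-epoch mean C/N0 across tracked satellites (dB-Hz)."""
    flagged = []
    for i in range(window, len(cn0_by_epoch)):
        baseline = statistics.median(cn0_by_epoch[i - window:i])
        if cn0_by_epoch[i] < baseline - drop_db:
            flagged.append(i)
    return flagged

# 60 quiet epochs at ~45 dB-Hz, then a 10 dB sag.
series = [45.0] * 60 + [35.0] * 5
print(detect_cn0_drops(series))  # [60, 61, 62, 63, 64]
```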

A small system like this is already up and running around Madrid Airport in Spain, where 11 GPS receivers constantly monitor for jamming or accidental interference. “This is costly and takes a while to install,” says Walter. “I feel like major airports probably do want something like that to protect their airspace, but it’s never going to be possible for smaller airports like Hayward.”

Another problem facing both these systems is the sparse distribution of sensors, says Todd Humphreys, Director of the Radionavigation Laboratory at UT Austin: “There are fewer than 3000 GPS reference stations with publicly-accessible data across the globe; these can be separated by hundreds of miles. Likewise, global coverage by ships and planes is still sparse enough to make detection challenging, and localization nearly impossible, except around ports and airports.”

Humphreys favors using other satellites to monitor the GPS satellites, and published a paper last year that zeroed in on a GPS jamming system in Syria. Aireon is already monitoring ADS-B signals worldwide from Iridium’s medium earth orbit satellites, while HawkEye 360 is building out its own radio-frequency sensing satellites in low earth orbit (LEO).

“That’s very exciting,” says Walter. “If you have dedicated equipment on these LEO satellites, that can be very powerful and maybe in the long term much lower cost than a whole bunch of terrestrial sensors.”

Until then, pilots like Kevin Wells will have to keep their wits about them as they navigate areas prone to GPS interference. The source of the jamming near Hayward Airport was never identified.

No Sleep Till Jezero: Touchdown Time for Mars Perseverance Rover?

Post Syndicated from Ned Potter original https://spectrum.ieee.org/automaton/aerospace/space-flight/nasa-perseverance-rover-lands-at-mars-jezero-crater

They used to call it “Seven Minutes of Terror”—a NASA probe would slice into the atmosphere of Mars at more than 20,000 kilometers per hour; slow itself with a heat shield, parachute, and rocket engines; and somehow land intact on the surface, just six or seven minutes later, while its makers waited helplessly on Earth. The computer-animated landing videos NASA produced before previous Mars missions—in 2004, 2008, and 2012—became online sensations. “If any one thing doesn’t work just right,” said NASA engineer Tom Rivellini in the last one, “it’s game over.”

NASA is now trying again, with the Perseverance rover and the tiny Ingenuity drone bolted to its undercarriage. NASA will be live-streaming the landing (across many video and social media platforms as well as in a Spanish language feed and in an immersive, 360-degree view) beginning at 11:15 a.m. PST/2:15 p.m. EST/19:15 UTC on Thursday, 18 February 2021. 

While this year’s animated landing video is as dramatic as ever, the tone has changed. “The models and simulations of landing at Jezero crater have assessed the probability of landing safely to be above 99 percent,” says Swati Mohan, the guidance, navigation and controls operations lead for the mission.

There isn’t a trace of arrogance in her voice as she says this. She’s been working on this mission for five years, has teammates who were around for NASA’s first Mars rover in 1997, and knows what they’re up against. Yes, they say, 99 percent reliability is realistic. 

The biggest advance over past missions is a system called Terrain Relative Navigation—TRN for short. In essence, it gives the spacecraft a way to know precisely where it’s headed, so it can steer clear of hazards on the very jagged landscapes that scientists most want to explore. If all goes as planned, Perseverance will image the Martian surface in rapid sequence as it plows toward its landing site, and compare what it sees to onboard maps of the ground below. The onboard database is primarily based on high-resolution images from NASA’s Mars Reconnaissance Orbiter, which has been mapping the planet from an altitude of 250 kilometers since 2006. Its images have a resolution of 30 cm per pixel. 

“This is kind of along the same lines as what the Apollo astronauts did with people in the loop, back in the day. Those guys looked out the window,” says Allen Chen, the mission’s entry, descent, and landing lead. “For the first time here on Mars, we’re automating that.”

There will still be plenty of anxious controllers at NASA’s Jet Propulsion Laboratory in California. After all, the spacecraft will be on its own, about 209 million kilometers from Earth, far enough away that its radio signals will take more than 11 minutes to reach home. The ship should reach the surface four minutes before engineers even know it has entered the Martian atmosphere. “Landing on Mars is hard enough,” says Thomas Zurbuchen, NASA’s associate administrator for science missions. “It is not guaranteed that we will be successful.” 

But the new navigation technology makes a very risky landing possible. Jezero crater, which was probably once a lake at the end of a river delta, has been on scientists’ shortlist since the 1990s as a place to look for signs of past life on Mars. But engineers voted against it until this mission. Previous landers used radar, which Mohan likens to “closing your eyes and holding your hands out in front of you. You can use that to slow down and to stop. But with your eyes closed you can’t really control where you’re coming down.”

Everything happens fast as Perseverance comes in, following a long arcing path. Fewer than 90 seconds before scheduled touchdown, and about 2,100 meters above the Martian surface, the TRN system makes its calculations. Its rapid-fire imaging should by then have told it where it is relative to the ground below, and from that it can project its likely touchdown spot. If the ship is headed for a ridge, a crevice, or a dangerous outcropping of rock, the computer will send commands to eight downward-facing rocket engines to change the descent trajectory. 
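The heart of TRN is map matching. The toy sketch below is our illustration of the underlying idea, not JPL’s flight code: slide the descent-camera view across the onboard map and take the offset with the highest normalized cross-correlation.

```python
# Toy map-relative localization by normalized cross-correlation (an
# illustration of the idea, not JPL's flight software).

import numpy as np

def locate(image, onboard_map):
    """Return the (row, col) offset where `image` best matches the map."""
    ih, iw = image.shape
    mh, mw = onboard_map.shape
    img = (image - image.mean()) / image.std()
    best_score, best_rc = -np.inf, (0, 0)
    for r in range(mh - ih + 1):
        for c in range(mw - iw + 1):
            patch = onboard_map[r:r + ih, c:c + iw]
            p = (patch - patch.mean()) / (patch.std() + 1e-12)
            score = (img * p).mean()   # normalized cross-correlation
            if score > best_score:
                best_score, best_rc = score, (r, c)
    return best_rc

rng = np.random.default_rng(0)
terrain = rng.standard_normal((64, 64))   # stand-in for the onboard map
view = terrain[20:36, 30:46]              # what the camera "sees"
print(locate(view, terrain))              # (20, 30)
```

The flight system must do this under a hard deadline and with real, noisy imagery, which is what makes automating it such an engineering feat.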

In that final minute, as the spacecraft slows from 300 kilometers per hour to zero, the TRN system can shift the touchdown spot by up to 330 meters. The safe targets map in Perseverance’s memory is detailed enough, the team says, that the ship should be able to reach a suitable location for a safe landing. 

“It’s able to thread the needle of all these different hazards to land in the safe spots in between these hazards,” says Mohan, “and by landing amongst the hazards it’s also landing amongst the scientific features of interest.”

How NASA Designed a Helicopter That Could Fly Autonomously on Mars

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/aerospace/robotic-exploration/nasa-designed-perseverance-helicopter-rover-fly-autonomously-mars

Tucked under the belly of the Perseverance rover that will be landing on Mars in just a few days is a little helicopter called Ingenuity. Its body is the size of a box of tissues, slung underneath a pair of 1.2-meter carbon-fiber rotors on top of four spindly legs. It weighs just 1.8 kilograms, but the importance of its mission is massive. If everything goes according to plan, Ingenuity will become the first aircraft to fly on Mars.

In order for this to work, Ingenuity has to survive frigid temperatures, manage merciless power constraints, and attempt a series of 90-second flights while separated from Earth by 10 light minutes, which means that real-time communication or control is impossible. To understand how NASA is making this happen, below is our conversation with Tim Canham, Mars Helicopter Operations Lead at NASA’s Jet Propulsion Laboratory (JPL).

NASA’s Mars Perseverance Rover Should Leave Past Space Probes in the Dust

Post Syndicated from Ned Potter original https://spectrum.ieee.org/tech-talk/aerospace/space-flight/nasa-mars-perseverance-rover-should-leave-past-space-probes-in-dust

If you could stand next to NASA’s Perseverance rover on the Martian surface… well, you’d be standing on Mars. Which would be a pretty remarkable thing unto itself. But if you were waiting for the rover to go boldly exploring, your mind might soon wander. You’d be forgiven for thinking this is like watching paint dry. The Curiosity rover, which landed on Mars in 2012 and has the same chassis, often goes just 50 or 60 meters in a day. 

Which is why Rich Rieber and his teammates have been at work for five years, building a new driving system for Perseverance that they hope will set some land-speed records for Mars. 

“The reason we’re so concerned with speed is that if we’re driving, we’re not doing science,” he said. “If you’re on a road trip and you drive to Disneyland, you want to get to Disneyland. It’s not about driving, you want to be there.”

Rieber is the lead mobility systems engineer for the Perseverance mission. He ran the development of the drivetrain, suspension, engineering cameras, machine vision, and path-planning algorithms that should allow the rover to navigate the often-treacherous landscape around Perseverance’s destination, called Jezero crater. With luck, Perseverance will leave past rovers in the dust. 

“Perseverance is going to drive three times faster than any previous Mars rover,” said Matt Wallace, the deputy mission manager at NASA’s Jet Propulsion Lab. “We have added a lot of surface autonomy, a lot of new AI if you will, to this vehicle so that we can complete the mission on the surface.”

The rover, if everything works, will still have a maximum speed of only 4.4 cm per second, which is one-thirtieth as fast as human walking speed. It would travel the length of a football field in 45 minutes. But, says Rieber, “It is head and shoulders the fastest rover on Mars, and that’s not because we are driving the vehicle faster. It’s because we’re spending less time thinking about how.” 

Perseverance bases much of its navigational ability on an onboard map created from images taken by NASA’s Mars Reconnaissance Orbiter, detailed enough that it can show features less than 30 cm across. That helps tell the rover where it is. It then adds stereo imagery from two navigational cameras on its top mast and six hazard-detection cameras on its body. Each camera has a 20-megapixel color sensor. The so-called Navcams have a 90-degree field of view. They can pick out a golf-ball-sized object 25 meters away.
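That golf-ball figure passes a back-of-envelope check, under assumptions of ours: roughly 5,120 pixels spanning the 90-degree field of view (a plausible width for a 20-megapixel sensor) and a regulation 42.7-millimeter ball.

```python
# Back-of-envelope Navcam resolution check; the sensor width is our
# assumption, not a published spec.
import math

pixels_across = 5120                   # assumed pixels across the 90-degree FOV
fov_rad = math.radians(90.0)
range_m = 25.0

m_per_pixel = range_m * fov_rad / pixels_across   # one pixel's footprint
print(f"{m_per_pixel * 1000:.1f} mm per pixel")   # ~7.7 mm
print(f"{0.0427 / m_per_pixel:.1f} pixels across a golf ball")  # ~5.6
```

A handful of pixels is enough for an obstacle detector to register an object, consistent with the claim.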

These numbers add up: The technology should allow the rover to pick out obstacles as it goes—a ridge, an outcropping of rock, a risky-looking depression—and steer around many of them without help from Earth. Mission managers plan to send the rover its marching orders each morning, Martian time, and then wait for it to report its progress the next time it can communicate with Earth. 

Earlier rovers often had to image where they were and stop for the day to await new instructions from Earth. Curiosity, on days it’s been ordered to drive, only spends 13 percent of its time actually in motion. Perseverance may more than triple that. 

There are, however, still myriad complexities to driving on Mars. For instance, mission engineers can calculate how far Perseverance will go with each revolution of its six wheels. But what if the wheels on one side slip because they’re driving through sand? How far behind or off its planned path might it be? The rover’s computers can figure that out, but their processing capacity is limited by the cosmic radiation that bombards spacecraft outside the Earth’s protective magnetosphere. “Our computer is like top of the line circa 1994,” said Rieber, “and that’s because of radiation. The closer [together] you have your transistors, the more susceptible they are.”

Matt Wallace, the deputy project manager, has been on previous missions when—sometimes only in hindsight—engineers realized they had barely escaped disaster. “There’s never a no-risk proposition here when you’re trying to do something new,” he said. 

But the payoff would come if the rover came across chemical signatures of life on Mars from billions of years ago. If Perseverance finds that, it could change our view of life on Earth.

Is there a spot somewhere at Jezero crater that could offer such an incredible scientific breakthrough? The first step is to drive to it.

Everything You Need to Know About NASA’s Perseverance Rover Landing on Mars

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/aerospace/robotic-exploration/nasa-perseverance-rover-landing-on-mars-overview

Just before 4PM ET on February 18 (this Thursday), NASA’s Perseverance rover will attempt to land on Mars. Like its predecessor Curiosity, which has been exploring Mars since 2012, Perseverance is a semi-autonomous mobile science platform the size of a small car. It’s designed to spend years roving the red planet, looking for (among other things) any evidence of microbial life that may have thrived on Mars in the past.

This mission to Mars is arguably the most ambitious one ever launched, combining technically complex science objectives with borderline craziness that includes the launching of a small helicopter. Over the next two days, we’ll be taking an in-depth look at both that helicopter and how Perseverance will be leveraging autonomy to explore farther and faster than ever before, but for now, we’ll quickly go through all the basics about the Perseverance mission to bring you up to speed on everything that will happen later this week.

3D-Printed Thruster Boosts Range of CubeSat Applications

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/aerospace/satellites/3dprint-thruster

A 3-D printed electric thruster could one day help make advanced miniature satellites significantly easier and more affordable to build, a new study finds.

Conventional rockets use chemical reactions to generate propulsion. In contrast, electric thrusters produce thrust by using electric fields to accelerate electrically charged propellants away from a spacecraft.

The main weakness of electric propulsion is that it generates much less thrust than chemical rockets, making it too weak to launch a spacecraft from Earth’s surface. On the other hand, electric thrusters are extremely efficient at generating thrust, given the small amount of propellant they carry. This makes them very useful where every bit of weight matters, as in the case of a satellite that’s already in orbit.
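The Tsiolkovsky rocket equation makes that tradeoff concrete. The exhaust velocities below are generic textbook-scale values, not figures from the study: for the same velocity change, a tenfold higher exhaust velocity cuts the propellant budget roughly tenfold.

```python
# Propellant needed for a fixed velocity change, from the rocket equation:
# m_prop / m0 = 1 - exp(-dv / ve). Exhaust velocities are illustrative.
import math

def propellant_fraction(dv, ve):
    return 1 - math.exp(-dv / ve)

dv = 100.0  # m/s of orbit-keeping maneuvers (illustrative)
for name, ve in [("chemical", 3_000.0), ("electric", 30_000.0)]:
    print(f"{name}: {100 * propellant_fraction(dv, ve):.2f}% of wet mass")
# chemical: 3.28% of wet mass
# electric: 0.33% of wet mass
```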

Test and measurement solutions & how to control test equipment remotely while respecting social distancing

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/test-and-measurement-solutions-how-to-control-test-equipment-remotely-while-respecting-socialdistancing

Introducing social distancing

The impact of COVID-19 has disrupted global satellite production in an unprecedented way.

Many of the satellite industry’s manufacturing processes came to a halt when staff lockdowns and social-distancing measures had to be employed. Recovery is slowly underway, but the effects of the disruption are far from over.

The industry is now looking for new ways of making satellite production more resilient to the effects of the pandemic. In the test and measurement domain especially, new technologies and solutions offer manufacturers the opportunity to remain productive and operational while respecting social-distancing measures.

Much of the equipment used to test satellite electronics can be operated remotely and test procedures can be created, automated and controlled by the same engineers from their homes.

This webinar provides an overview of related test and measurement solutions from Rohde & Schwarz and explains how engineers can control test equipment remotely to continue producing while respecting social distancing.

In this webinar you will learn:

The interfaces/standards used to command test equipment remotely
How to maintain production while using social-distanced testing
Solutions from Rohde & Schwarz for cloud-based testing and cybersecurity

Presenters:

Sascha Laumann

Sascha Laumann, Product Manager, Rohde & Schwarz

Sascha Laumann is a product owner for digital products at Rohde & Schwarz. His main activities are the definition, development, and marketing of test and measurement products that address future challenges of the ever-changing market. Sascha is an alumnus of the Technical University of Munich, having majored in EE with a special focus on efficient data acquisition, transfer, and processing. His previous professional background includes developing solutions for aerospace testing applications.

Rajan Bedi

Dr. Rajan Bedi, CEO & Founder, Spacechips

Dr. Rajan Bedi is the CEO and founder of Spacechips, a UK SME disrupting the global space industry with its award-winning on-board processing and transponder products, space-electronics design-consultancy, technical-marketing, training, and business-intelligence services. Dr. Rajan Bedi has previously taught at Oxford University, was featured in Who’s Who in the World, is a winner of a Royal Society fellowship, and is a highly sought-after keynote speaker.


The content of this webcast is provided by the sponsor and may contain a commercial message

Don’t Believe the Hype About Hypersonic Missiles

Post Syndicated from Cameron L. Tracy original https://spectrum.ieee.org/tech-talk/aerospace/military/hypersonic-missiles-are-being-hyped

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

Hypersonic weapons are one of the hottest trends in military hardware. These missiles, which fly through the atmosphere at more than five times the speed of sound, are commonly depicted as a revolutionary new tool of war: ostensibly faster, less detectable, and harder to intercept than currently-deployed long-range ballistic missiles. The major military powers—China, Russia, and the United States—have bought into this belief, with each investing vast sums in a hypersonic arms race.

But there is a glaring problem with this narrative: Many claims regarding the purported advantages of hypersonic weapons are false. These missiles are, in reality, an old technology with a massive price tag and few meaningful advantages over existing ballistic missiles.

An old technology

While commonly touted as an “emerging technology,” these weapons were first designed nearly a century ago, in late 1930s Germany. The Treaty of Versailles, signed at the end of World War I, prohibited most weapons development in Germany. But a loophole allowed work on rockets, prompting an intense German interest in missiles. Out of this grew the Amerikabomber project, aimed at developing a weapon that could strike the United States from Germany. To this end, German engineers designed the Silbervogel (Silver Bird), the first conception of a hypersonic boost-glide missile.

Yet Germany declined to pursue this design, in part because it judged the costs of hypersonic weapons to be disproportionate to their meager benefits. In subsequent decades, ballistic missiles achieved the same goal of swift warhead delivery to distant targets. Flying primarily through the vacuum of outer space, rather than the comparatively dense atmosphere, ballistic missiles were seen as superior to the hypersonic concept in many ways.

While research on hypersonic weapons continued throughout the 20th century, no nation deployed them, opting instead for ballistic and cruise missile technologies. Even after spending more than US $5 billion (in current year dollars) developing the Dyna-Soar hypersonic glider, the United States abandoned the design in 1963 when it could not identify any clear objective for the weapon.

Interest in hypersonic weapons was rekindled in the early 2000s, but not on the basis of a performance advantage over existing missiles. Rather, the United States sought to use conventionally-armed missiles for rapid strikes, including against non-state targets such as terrorist groups. But it was concerned that nearby nations could mistake launches for nuclear-armed ballistic missile attacks. Hypersonic gliders fly low-altitude trajectories that could be easily discriminated from those of ballistic missiles.

As this history shows, nations have consistently judged hypersonic weapons to be an unnecessary addition to existing arsenals, offering at best niche capabilities. Why today, after decades of disinterest, has the idea that hypersonic weapons outperform existing missiles taken hold? Was some unique performance advantage overlooked?

While the strategic environment has evolved, the physics of hypersonic flight—and the limitations it places on performance—remain unchanged. In a recently-published article, we report the results of computational modeling of hypersonic missile flight, showing that many common claims regarding their capabilities range from overblown to simply false.

Technical reassessment

First, consider the claim that hypersonic weapons can reach their targets faster than existing ballistic missiles. The fastest hypersonic vehicles are launched on rocket boosters, like those that launch intercontinental-range ballistic missiles. Thus, both types of missile reach the same initial speeds. But hypersonic weapons fly through the atmosphere where they are subjected to substantial drag forces. Ballistic missiles, on the other hand, fly high into outer space where they are free from these drag effects. Thus, while hypersonic weapons fly a more direct path to their targets, they lose much of their speed throughout flight, ultimately taking longer to reach their targets than comparable ballistic missiles.
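A toy calculation shows how quickly that speed bleeds off. The constant-altitude model below uses assumed round numbers for air density and ballistic coefficient, so it is a trend illustration rather than a trajectory simulation: a glider entering at 6 kilometers per second loses more than half its speed over a 20-minute glide, while a ballistic warhead coasting in vacuum loses essentially none.

```python
# Toy constant-altitude drag model (assumed numbers, for illustration):
# dv/dt = -rho * v^2 / (2 * beta), with beta = m / (Cd * A).

rho = 0.004       # kg/m^3, rough air density near 40 km altitude (assumed)
beta = 10_000.0   # kg/m^2, assumed ballistic coefficient
v, dt = 6_000.0, 1.0   # initial speed (m/s), time step (s)

distance = 0.0
for _ in range(1200):              # a 20-minute glide
    v -= rho * v * v / (2 * beta) * dt
    distance += v * dt
print(f"speed after glide: {v:.0f} m/s over {distance / 1e3:.0f} km")
# speed after glide: ~2460 m/s over ~4460 km
```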

Gliding through the dense atmosphere at hypersonic speeds subjects these gliders to more than just drag forces. As they slow down, hypersonic vehicles deposit large quantities of energy into the surrounding air, a portion of which is transferred back to the vehicle as thermal energy. Their surfaces commonly reach temperatures of thousands of degrees Celsius.

This extreme heating limits performance in two ways. First, it constrains glider geometry, as features like sharp noses and wings may be unable to withstand aerothermal heating. Because sharp leading edges decrease drag, these constraints degrade glider aerodynamics.

Second, this heating renders hypersonic missiles vulnerable to detection by the satellite-mounted sensors that the United States and Russia currently possess, and that China is reportedly developing. Hot objects emit light in proportion to their temperature. These satellites watch for light in the infrared band to warn of missile strikes. Ballistic missiles are visible to them during launch, when fiery rocket plumes emit a great deal of infrared light, but become harder to see after rocket burn-out, when the warhead arcs through outer space. Hypersonic missiles, on the other hand, stay hot throughout most of their glide. Our calculations indicate that incoming gliders would remain visible to existing space-based sensors not only during launch, but for nearly their entire flight.
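Wien’s displacement law shows why those sensors are well placed. At skin temperatures of 1,000 to 2,000 kelvin (illustrative values consistent with “thousands of degrees Celsius”), a glider’s thermal emission peaks in the short-wave infrared, squarely within the bands such satellites monitor.

```python
# Peak thermal emission wavelength via Wien's law: lambda_max = b / T.
b = 2898.0  # micrometer-kelvins, Wien's displacement constant
for temp_k in (1000, 1500, 2000):   # illustrative skin temperatures
    print(f"{temp_k} K -> peak emission near {b / temp_k:.1f} micrometers")
# 1000 K -> peak emission near 2.9 micrometers
# 1500 K -> peak emission near 1.9 micrometers
# 2000 K -> peak emission near 1.4 micrometers
```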

Finally, it is often claimed that hypersonic weapons will upend the strategic balance between adversaries because they can bypass missile defenses. The reality is more complex. As discussed, the effects of atmospheric drag and heating mean that hypersonic weapons will have few, if any, advantages over existing missiles when it comes to outpacing interceptors or evading detection. Still, their low-altitude flight would allow them to fly under the reach of defenses designed to intercept ballistic missiles in outer space. Their maneuverability could also allow them to dodge interceptors in the atmosphere.

Yet the performance of hypersonic weapons against missile defenses is strategically meaningful only if it offers a new capability (i.e., if these weapons do something existing missiles cannot). This is not the case. Existing long-range missiles could easily overcome defensive systems by fairly simple means, such as firing more missiles than the adversary has interceptors, or by using countermeasures, like decoys.

Hypersonic hype

In short, technical analysis of the performance of hypersonic weapons shows that many claims as to their supposedly “revolutionary” performance are misinformed. So why has an arms race coalesced around a weapon that does not work as advertised?

Development and acquisition of strategic missiles are complex, bureaucratic processes involving a broad range of actors, from the military organizations that use these weapons to the legislators who fund them. In this context, decisions commonly serve the diverse interests (financial, professional, etc.) of broad coalitions, rather than narrow technical criteria. Even if a missile does not perform to the specifications by which it is marketed, proponents are often keen to convince others of its “revolutionary” nature—thus securing funding, prestige, and other benefits that accompany the successful development of a new weapon.

But now is not the time to let spending on these weapons go unscrutinized. Federal budgets are under enormous strain, and every dollar spent on hypersonic weapon development is one that does not go to other pressing needs. Beyond these financial concerns, arms racing could increase tensions and the likelihood of conflict between nations.

To be sure, research on hypersonic flight is a worthwhile endeavor, with broad applications from commercial transportation to space exploration. But when it comes to costly, ineffective hypersonic missiles, the technical basis fails to justify the price.

Cameron L. Tracy is a Kendall Fellow in the Global Security Program at the Union of Concerned Scientists. He has a PhD in materials science.

David Wright is a research associate in the Laboratory for Nuclear Security and Policy in the Department of Nuclear Science and Engineering at the Massachusetts Institute of Technology. He has a PhD in physics.

Breakthrough Listen Is Searching a Million Stars for One Sign of Intelligent Life

Post Syndicated from Danny Price original https://spectrum.ieee.org/aerospace/astrophysics/breakthrough-listen-seti

In 1960, astronomer Frank Drake turned the radio telescope at the National Radio Astronomy Observatory in Green Bank, W.Va., toward two nearby sunlike stars to search for transmissions from intelligent extraterrestrial societies. With a pair of headphones, Drake listened to Tau Ceti and Epsilon Eridani for a total of 150 hours across multiple sessions. He heard nothing, except for a single false positive that turned out to be an airplane.

Statistically speaking, Drake’s pioneering project had little chance of success. Estimates from both NASA and the European Space Agency (ESA) suggest that our galaxy alone contains roughly 100 billion stars. (And there are perhaps 2 trillion galaxies in the universe).

Nevertheless, Drake’s effort, called Project Ozma, launched the modern search for extraterrestrial intelligence (SETI), which continues to this day. But given the universe’s size, a SETI project trying to make even the smallest dent in searching for intriguing signals needs to be much more than one astronomer with a pair of headphones. The project will need more telescopes, more data, and more computational power to find such signals. Breakthrough Listen has all three.

Breakthrough Listen is the most comprehensive and expensive SETI effort ever undertaken, costing US $100 million, using 13 telescopes, and bringing together 50 researchers from institutions around the globe, including me as part of the Parkes telescope team. The project is making use of the newest advances in radio astronomy to listen to more of the electromagnetic spectrum; cutting-edge data processing to analyze petabytes of data across billions of frequency channels; and artificial intelligence (AI) to detect, amid the universe’s cacophony, even one signal that might indicate an extraterrestrial intelligence. And even if we don’t hear anything by the time the project ends, we’ll still have a wealth of data that will set the stage for future SETI efforts and astronomical research.

Breakthrough Listen is a 10-year initiative, launched in July 2015, and headquartered at the University of California, Berkeley. As my colleague there, J. Emilio Enriquez, likes to put it, Breakthrough Listen is the Apollo program of SETI projects.

It was the first of several Breakthrough Initiatives founded by investor Yuri Milner and his wife, Julia. The initiatives are a group of programs investigating the origin, extent, and nature of life in the universe. Aside from Listen, there’s also Message, which is exploring whether and how to communicate with extraterrestrial life; Starshot, which is designing a tiny probe and a ground-based laser to propel it to the star Alpha Centauri; and Watch, which is looking for Earth-like planets around other stars. There are also projects in earlier stages of development that will explore Venus and Saturn’s moon Enceladus for signs of life.

Listen was started with three telescopes. Two are radio telescopes: the 100-meter Robert C. Byrd Green Bank Telescope, in Green Bank, W.Va., and the 64-meter Parkes Observatory telescope, in New South Wales, Australia, for which I’m the project scientist. The third is an optical telescope: the Automated Planet Finder at Lick Observatory, in California. We’re using these telescopes to survey millions of stars and galaxies for unexpected signals, and powerful data processors to comb through the collected data at a very fine resolution.

In 2021, Listen will also begin using the MeerKAT array in South Africa. MeerKAT, inaugurated in July 2018, is an array of 64 parabolic dish antennas, each 13.5 meters in diameter. MeerKAT is located deep in the Karoo Desert, in a federally protected radio-quiet zone where other transmissions are restricted. Even better, while at the other three telescopes we have to share time with other research efforts, at MeerKAT our Listen data recorders can “eavesdrop” on antenna data 24/7. Listen has also partnered over the last couple of years with several other observatories around the globe.

The anomalous signals we’re searching for fall under a broad umbrella called “technosignatures.” A technosignature is any sort of indication that technology exists or existed somewhere beyond our solar system. That includes both intentional and unintentional radio signals, but it can also include things like an abundance of artificial molecules in the atmosphere. For example, Earth’s atmosphere has plenty of hydrofluorocarbons because they were used as refrigerants before scientists realized they were damaging the ozone layer. Breakthrough Listen isn’t searching for anything like artificial molecules, but we are looking for both intentional communications, like a “We’re here!” message, and unintentional signals, like the ones produced over the years by the Arecibo Observatory in Puerto Rico.

Arecibo, before cable breaks destroyed the dish in November 2020, was a telescope with four radio transmitters, the most powerful of which effectively transmitted at 22 terawatts at a frequency of 2,380 megahertz (similar to Wi-Fi). The telescope used those transmitters to reflect signals off of objects like asteroids to make measurements. However, an artificial signal from Arecibo that made it into interstellar space would be a clear indication of our existence, even though it wouldn’t be an intentional communication. In fact, Arecibo’s signal was so tightly focused and powerful that another Arecibo-like radio telescope could pick up that signal from halfway across the galaxy.

Looking for technosignatures allows us to be much broader in our search than limiting ourselves to detecting only intentional communications. In contrast, Drake limited his search to a narrow range of frequencies around 1.42 gigahertz called the 21-centimeter line. He picked those frequencies because 1.42 GHz is the frequency at which atomic hydrogen gas, the most abundant gas in the universe, spontaneously radiates photons. Drake hypothesized that an intelligent society would select this frequency for deliberate transmissions, as it would be noticed by any astronomer mapping the galaxy’s hydrogen.

Drake’s reasoning was sound. But without a comprehensive search across a wider chunk of the electromagnetic spectrum, it’s impossible to rule out signals elsewhere in the spectrum. An Arecibo-like 2,380-MHz signal, for example, would have gone completely unnoticed by Project Ozma.

Ground-based radio telescopes can’t scan the entire electromagnetic spectrum. Earth’s atmosphere blocks large swaths of bandwidth at both high and low frequencies. For this reason, gamma ray, X-ray, and ultraviolet astronomy all require space-based telescopes. However, a range of frequencies from roughly 10 MHz to 100 GHz, called the terrestrial microwave window, passes easily through Earth’s atmosphere to reach ground-based telescopes. Breakthrough Listen is focused on these frequencies because they’re probable carriers for technosignatures from other planets with Earth-like atmospheres. But it’s still a huge range of frequencies to cover: Drake searched a spectrum band of only about 100 hertz, or one-billionth of that window. In contrast, Breakthrough Listen is covering the entire range of frequencies from 1 GHz to 10 GHz.

Each radio telescope is unique, but they all function according to the same basic principles. A large reflective dish gathers radio signals and focuses them on the same point. At that point, a receiver converts those radio signals into signals on a coaxial cable, just like a well-aligned TV antenna. Then computers digitize and process the signals.

The telescopes we’re using for Breakthrough Listen are wideband telescopes, which means that they can record signals from across a wide range of frequencies. Parkes, for example, covers 3.3 GHz of spectrum, while Green Bank can detect signals across 10 GHz of spectrum. Wideband telescopes allow us to avoid making difficult choices about which frequencies we want to focus on.

However, expanding the search band also greatly increases the amount of data collected. Fortunately, after the receivers at the telescopes have digitized the incoming signals, we can turn to computers—rather than astronomers with headphones—for the data processing. Even so, the amount of data we’re dealing with required substantial new hardware.

Each observatory has only enough memory to store Listen’s raw data for about 24 hours before running out of room. For example, the Parkes telescope generates 215 gigabits per second of data, enough to fill up a laptop hard drive in a few seconds. The MeerKAT array produces even more data, with its 64 antennas producing an aggregate of 2.2 terabits per second. That’s roughly the equivalent of downloading the entirety of Wikipedia 10 times every second. To make such quantities of data manageable, we have to compress that raw data to about one-hundredth of its initial size in real time.
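Those rates are easy to sanity-check; the drive and Wikipedia sizes below are rough assumptions of ours, used only for scale.

```python
# Rough data-rate arithmetic; drive and Wikipedia sizes are our rough
# assumptions, used for scale only.
parkes_gigabytes_per_s = 215 / 8     # 215 gigabits/s
meerkat_gigabytes_per_s = 2200 / 8   # 2.2 terabits/s
print(f"256 GB drive fills in ~{256 / parkes_gigabytes_per_s:.0f} s at Parkes")
print(f"~{meerkat_gigabytes_per_s / 25:.0f} Wikipedias per second at MeerKAT "
      f"(assuming ~25 GB of text)")
# 256 GB drive fills in ~10 s at Parkes
# ~11 Wikipedias per second at MeerKAT (assuming ~25 GB of text)
```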

After compression, we split the frequency-band data into sub-bands, and then process the sub-bands in parallel. Individual servers each process a single sub-band. Thus the number of servers at each telescope varies, depending on how much data each one produces. Parkes, for example, has a complement of 27 servers, and Green Bank needs 64, while MeerKAT requires 128.

Each sub-band is further split into discrete frequency channels, each about 2 Hz wide. This resolution allows us to spot any signals that span a very narrow range of frequencies. We can also detect wider-frequency signals that are short in duration by averaging the power of each channel over a few seconds or minutes. Simultaneous spikes for those seconds or minutes over a contiguous band of channels would indicate a wide-frequency, short-duration signal. We’re focused on these two types of signals because concentrating signal power continuously into a single frequency or into short pulses over a range of frequencies are the two most effective ways to send a transmission over interstellar distances while minimizing dissipation. Therefore, these signal types are the most likely candidates for extraterrestrial technosignatures.
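The sketch below is a greatly simplified stand-in for those two searches, not the Breakthrough Listen pipeline: channelize digitized voltage samples with an FFT, then flag channels whose time-averaged power stands out (narrowband candidates) and time chunks whose band-averaged power spikes (broadband pulse candidates).

```python
# Simplified narrowband + pulse search (our sketch, not Listen's pipeline).
import numpy as np

def detect(voltages, n_chan=1024, threshold=5.0):
    n_chunks = len(voltages) // n_chan
    # Channelize: one FFT per time chunk, power in each frequency channel.
    spec = np.abs(np.fft.rfft(
        voltages[:n_chunks * n_chan].reshape(n_chunks, n_chan), axis=1)) ** 2
    chan_power = spec.mean(axis=0)   # each channel, averaged over time
    time_power = spec.mean(axis=1)   # each chunk, averaged over frequency
    narrowband = np.where(chan_power > threshold * np.median(chan_power))[0]
    pulses = np.where(time_power > threshold * np.median(time_power))[0]
    return narrowband, pulses

# Synthetic data: noise + a weak steady tone + one strong broadband pulse.
rng = np.random.default_rng(1)
x = rng.standard_normal(1024 * 256)
x += 0.2 * np.sin(2 * np.pi * 0.123 * np.arange(x.size))  # narrowband tone
x[50 * 1024:51 * 1024] += 3 * rng.standard_normal(1024)   # broadband pulse
print(detect(x))  # roughly (array([126]), array([50]))
```

The real pipeline does the same thing at vastly larger scale, with channels about 2 Hz wide as described above.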

Fortunately for us, these kinds of signals are the easiest to spot, too. Narrowband signals are easily distinguishable from ones caused by natural astrophysical processes, because such processes do not produce signals narrower than a few thousand hertz. To understand why, let’s look at the example of a cloud of hydrogen gas in space emitting radiation in the 21-cm line. When Drake searched that band, how would he have distinguished an intentional signal from the noise created by hydrogen gas?

Here’s how: Even though all the hydrogen in a cloud radiates at that frequency, each atom is moving in a different, random direction. This movement causes each emitted photon to arrive at a radio telescope with a slightly different frequency. It’s the Doppler effect at work: An emitted photon will have a higher frequency if the hydrogen atom was moving toward you and a lower frequency if the atom was moving away. In the end, the hydrogen cloud’s signal is smeared across a frequency band centered around 1.42 GHz. If Drake had heard a narrowband signal at that frequency, it would have meant he had detected a physical impossibility: a hydrogen cloud in which every atom was stationary relative to Earth. Alternatively, the signal was artificial.
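The arithmetic behind that argument is short. Assuming a typical random speed of about 1 kilometer per second for the gas (a rough, illustrative figure), the Doppler smearing is thousands of hertz wide:

```python
# Doppler smearing of the 21-cm line: delta_f ~ f0 * v / c.
f0 = 1.42e9   # Hz, hydrogen-line frequency
v = 1.0e3     # m/s, assumed typical random speed of the gas
c = 3.0e8     # m/s, speed of light
print(f"smearing: ~{f0 * v / c / 1e3:.0f} kHz")  # ~5 kHz
```

Anything much narrower than that, down at the 2-hertz scale of Listen’s channels, has no natural explanation.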

We can make a similar assumption about short-duration signals. While there are some natural short-duration signals, namely fast radio bursts and pulsars, these have other characteristics that single them out. A signal emitted by a pulsar, for example, has a long “tail” caused by the lower frequencies lagging behind higher frequencies over interstellar distances.

Also, I should mention that artificial signals generally don’t turn out to be extraterrestrial. In 2019, we published a paper in the Astronomical Journal detailing our search across 1,327 nearby stars. While we detected tens of millions of narrowband signals, we were able to attribute all of them to satellites, aircraft, and other terrestrial sources. To date, our most promising signal is one at 982 MHz collected in April 2019 from the nearby star Proxima Centauri. The signal is too narrow for any known natural phenomenon and isn’t in a commonly used terrestrial band. That said, there remains a lot of double-checking to do before we can rule out all possibilities other than an extraterrestrial technosignature of some kind.

It’s also entirely possible that an alien society might give off radio technosignatures that are neither narrowband nor short duration. We really don’t know what kinds of communications such a society might use, or what kinds of signals it would give off as a by-product of its technologies. Regardless, it’s likely we couldn’t find such signals simply by finely chopping up the telescope data and analyzing it. So we’re also using AI to hunt for more complicated technosignatures.

Specifically, we’re using anomaly-detection AI, in order to find signals that don’t look like anything else astronomers have found. Anomaly detection is used in many fields to find rare events. For example, if your bank has ever asked you to confirm an expensive purchase like airline tickets, odds are its system determined the transaction was unusual and wanted to be sure your credit card information hadn’t been stolen.

We’re training our AI to spot unexpected signals using a method called unsupervised learning. The method involves giving the AI tons of data like radio signals and asking it to figure out how to classify the data into different categories. In other words, we might give the AI signals coming from quasars, pulsars, supernovas, main-sequence stars like the sun, and dozens of other astronomical sources. Crucially, we do not tell the AI what any of these sources are. That requires the AI to figure out what kinds of characteristics make certain signals similar to one another as it pores through the data, and then to lump like signals into groups.

Unsupervised learning causes an AI classifying radio signals to create a category for pulsar signals, another category for supernova signals, a third for quasar signals, and so on, without ever needing to know what a pulsar, supernova, or quasar is. After the AI has created and populated its groups, we then assign labels based on astronomical objects: quasars, pulsars, and so on. What’s most important, though, is that the AI creates a new category for signals that don’t fit neatly into the categories it has already developed. In other words, it will automatically classify an anomalous signal in a category of its own.
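Here is one minimal way (our sketch, using off-the-shelf clustering rather than Listen’s actual models) to give odd signals a category of their own: cluster feature vectors of familiar signal types, set a distance cutoff from the familiar data itself, and call anything beyond that cutoff an anomaly. The three features here are invented for illustration.

```python
# Toy anomaly detection by clustering (our sketch, not Listen's models).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Invented features (say, bandwidth, duration, drift rate) for three
# familiar signal populations.
familiar = np.vstack([rng.normal(center, 0.3, size=(200, 3))
                      for center in ([0, 0, 0], [4, 0, 1], [0, 5, 2])])
km = KMeans(n_clusters=3, n_init=10).fit(familiar)

# Cutoff: almost all familiar signals sit this close to some cluster center.
dist_to_nearest = km.transform(familiar).min(axis=1)
cutoff = np.percentile(dist_to_nearest, 99.5)

candidate = np.array([[8.0, 8.0, 8.0]])  # resembles none of the groups
print(km.transform(candidate).min(axis=1) > cutoff)  # [ True]
```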

This technique is how we hope to spot any technosignatures that are more complicated than a straightforward narrow­band or short-duration signal. That’s precisely why we needed an AI that can spot anything that’s out of the ordinary, without having any preconceived notions about what an odd signal should look like.

The Listen program is dramatically increasing the number of stars searched through SETI projects, by listening in on thousands of nearby sunlike stars over the program’s duration. We’re also surveying millions of more-distant stars across the galactic plane at a lower sensitivity. And we’re even planning to observe nearby galaxies for any exceptionally energetic signals that have crossed intergalactic space. That last category is a bit of a long shot, but after all, the project’s purpose is to listen more than ever before.

Breakthrough Listen is a hugely ambitious program and the most comprehensive search for intelligent life beyond Earth ever undertaken. Even so, odds are we won’t find a single promising signal. Jill Tarter, the former director of the SETI Institute, has often likened the cumulative SETI efforts so far to scooping a glass of water out of the ocean, seeing there are no fish in the glass, and concluding no fish exist. By the time Breakthrough Listen is finished, it will be more like having examined a swimming pool’s worth of ocean water. But if no compelling candidate signals are found, we haven’t failed. We’ll still be able to place statistical constraints on how common it is for intelligent life to evolve. We may also spot new natural phenomena that warrant more study. Then it’s on to the next swimming pool.

In the meantime, our data will help provide insights, at least, into some of the most profound questions in all of science. We still know almost nothing about how often life emerges from nonliving matter, or how often life develops into an intelligent society, or how long intelligent societies last. So while we may not find any of the signs we’re looking for, we can come closer to putting boundaries on how often these things occur. Even if the stars we search are silent, at least we’ll have learned that we need to look farther afield for any cosmic company.

This article appears in the February 2021 print issue as “Seti’s Million Star Search.”

About the Author

Danny Price is the Breakthrough Listen initiative’s project scientist for the Parkes radio telescope, in Australia.

FAA Files Reveal a Surprising Threat to Airline Safety: the U.S. Military’s GPS Tests

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/aerospace/aviation/faa-files-reveal-a-surprising-threat-to-airline-safety-the-us-militarys-gps-tests

Early one morning last May, a commercial airliner was approaching El Paso International Airport, in West Texas, when a warning popped up in the cockpit: “GPS Position Lost.” The pilot contacted the airline’s operations center and received a report that the U.S. Army’s White Sands Missile Range, in South Central New Mexico, was disrupting the GPS signal. “We knew then that it was not an aircraft GPS fault,” the pilot wrote later.

The pilot missed an approach on one runway due to high winds, then came around to try again. “We were forced to Runway 04 with a predawn landing with no access to [an instrument landing] with vertical guidance,” the pilot wrote. “Runway 04…has a high CFIT threat due to the climbing terrain in the local area.”

CFIT stands for “controlled flight into terrain,” and it is exactly as serious as it sounds. The pilot considered diverting to Albuquerque, 370 kilometers away, but eventually bit the bullet and tackled Runway 04 using only visual aids. The plane made it safely to the ground, but the pilot later logged the experience on NASA’s Aviation Safety Reporting System, a forum where pilots can anonymously share near misses and safety tips.

This is far from the most worrying ASRS report involving GPS jamming. In August 2018, a passenger aircraft in Idaho, flying in smoky conditions, reportedly suffered GPS interference from military tests and was saved from crashing into a mountain only by the last-minute intervention of an air traffic controller. “Loss of life can happen because air traffic control and a flight crew believe their equipment are working as intended, but are in fact leading them into the side of the mountain,” wrote the controller. “Had [we] not noticed, that flight crew and the passengers would be dead. I have no doubt.”

There are some 90 ASRS reports detailing GPS interference in the United States over the past eight years, the majority of which were filed in 2019 and 2020. Now IEEE Spectrum has new evidence that GPS disruption to commercial aviation is much more common than even the ASRS database suggests. Previously undisclosed Federal Aviation Administration (FAA) data for a few months in 2017 and 2018 detail hundreds of aircraft losing GPS reception in the vicinity of military tests. On a single day in March 2018, 21 aircraft reported GPS problems to air traffic controllers near Los Angeles. These included a medevac helicopter, several private planes, and a dozen commercial passenger jets. Some managed to keep flying normally; others required help from air traffic controllers. Five aircraft reported making unexpected turns or navigating off course. In all likelihood, there are many hundreds, possibly thousands, of such incidents each year nationwide, each one a potential accident. The vast majority of this disruption can be traced back to the U.S. military, which now routinely jams GPS signals over wide areas on an almost daily basis somewhere in the country.

The military is jamming GPS signals to develop its own defenses against GPS jamming. Ironically, though, the Pentagon’s efforts to safeguard its own troops and systems are putting the lives of civilian pilots, passengers, and crew at risk. In 2013, the military essentially admitted as much in a report, saying that “planned EA [electronic attack] testing occasionally causes interference to GPS based flight operations, and impacts the efficiency and economy of some aviation operations.”

In the early days of aviation, pilots would navigate using road maps in daylight and follow bonfires or searchlights after dark. By World War II, radio beacons had become common. From the late 1940s, ground stations began broadcasting omnidirectional VHF signals that planes could lock on to, while shorter-range systems indicated safe glide slopes to help pilots land. At their peak, in 2000, there were more than a thousand very high frequency (VHF) navigation stations in the United States. However, in areas with widely spaced stations, pilots were forced to take zigzag routes from one station to the next, and reception of the VHF signals could be hampered by nearby buildings and hills.

Everything changed with the advent of global navigation satellite systems (GNSS), first devised by the U.S. military in the 1960s. The arrival in the mid-1990s of the civilian version of the technology, called the Global Positioning System, meant that aircraft could navigate by satellite and take direct routes from point to point; GPS location and altitude data was also accurate enough to help them land.

The FAA is about halfway through its NextGen effort, which is intended to make flying safer and more efficient through a wholesale switch from ground-based navigation aids like radio beacons to a primarily satellite-enabled navigation system. Along with that switch, the agency began decommissioning VHF navigation stations a decade ago. The United States is now well on its way to having a minimal backup network of fewer than 600 ground stations.

Meanwhile, the reliance on GPS is changing the practice of flying and the habits of pilots. As GPS receivers have become cheaper, smaller, and more capable, they have become more common and more widely integrated. Most airplanes must now carry Automatic Dependent Surveillance-Broadcast (ADS-B) transponders, which use GPS to calculate and broadcast their altitude, heading, and speed. Private pilots use digital charts on tablet computers, while GPS data underpins autopilot and flight-management computers. Pilots should theoretically still be able to navigate, fly, and land without any GPS assistance at all, using legacy radio systems and visual aids. Commercial airlines, in particular, have a range of backup technologies at their disposal. But because GPS is so widespread and reliable, pilots are in danger of forgetting these manual techniques.

When an Airbus passenger jet suddenly lost GPS near Salt Lake City in June 2019, its pilot suffered “a fair amount of confusion,” according to the pilot’s ASRS report. “To say that my raw data navigation skills were lacking is an understatement! I’ve never done it on the Airbus and can’t remember having done it in 25 years or more.”

“I don’t blame pilots for getting a little addicted to GPS,” says Todd E. Humphreys, director of the Radionavigation Laboratory at the University of Texas at Austin. “When something works well 99.99 percent of the time, humans don’t do well in being vigilant for that 0.01 percent of the time that it doesn’t.”

Losing GPS completely is not the worst that can happen. It is far more dangerous when accurate GPS data is quietly replaced by misleading information. The ASRS database contains many accounts of pilots belatedly realizing that GPS-enabled autopilots had taken them many kilometers in the wrong direction, into forbidden military areas, or dangerously close to other aircraft.

In December 2012, an air traffic controller noticed that a westbound passenger jet near Reno, Nev., had veered 16 kilometers (10 miles) off course. The controller confirmed that military GPS jamming was to blame and gave new directions, but later noted: “If the pilot would have noticed they were off course before I did and corrected the course, it would have caused [the] aircraft to turn right into [an] opposite direction, eastbound [jet].”

So why is the military interfering so regularly with such a safety-critical system? Although most GPS receivers today are found in consumer smartphones, GPS was designed by the U.S. military, for the U.S. military. The Pentagon depends heavily on GPS to locate and navigate its aircraft, ships, tanks, and troops.

For such a vital resource, GPS is exceedingly vulnerable to attack. By the time GPS signals reach the ground, they are so faint they can be easily drowned out by interference, whether accidental or malicious. Building a basic electronic warfare setup to disrupt these weak signals is trivially easy, says Humphreys: “Detune the oscillator in a microwave oven and you’ve got a superpowerful jammer that works over many kilometers.” Illegal GPS jamming devices are widely available on the black market, some of them marketed to professional drivers who may want to avoid being tracked while working.
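
Some back-of-the-envelope arithmetic shows why the contest is so lopsided. GPS signals arrive at the ground at roughly -130 dBm, while a jammer’s signal falls off with ordinary free-space path loss. Here is a minimal sketch in Python; the 1-watt jammer and the distances are hypothetical illustrations, not figures from Humphreys.

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (standard formula; d in km, f in MHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

GPS_L1_MHZ = 1575.42        # GPS civilian L1 carrier frequency
GPS_SIGNAL_DBM = -130.0     # approximate minimum specified GPS power at the ground

jammer_dbm = 30.0           # a hypothetical 1-watt jammer
for d_km in (1, 10, 50):
    received = jammer_dbm - fspl_db(d_km, GPS_L1_MHZ)
    print(f"{d_km:>2} km away: jammer arrives at {received:.0f} dBm, "
          f"{received - GPS_SIGNAL_DBM:.0f} dB louder than GPS")
```

Even 50 kilometers out, this hypothetical 1-watt jammer arrives roughly 30 decibels (a factor of 1,000) stronger than the satellite signal, which is why a single cheap device can blanket a wide area.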

Other GNSS constellations, such as Russia’s GLONASS, China’s BeiDou, and Europe’s Galileo, use slightly different frequencies but share similar vulnerabilities; whether a given system is hit depends on exactly who is conducting the test or attack. In China, mysterious attacks have “spoofed” ships with GPS receivers toward fake locations, while vessels relying on BeiDou reportedly remained unaffected. Similarly, GPS signals are regularly jammed in the eastern Mediterranean, Norway, and Finland, while Galileo signals emerge untouched from the same attacks.

The Pentagon uses its more remote military bases, many in the American West, to test how its forces operate under GPS denial, and presumably to develop its own electronic warfare systems and countermeasures. The United States has carried out experiments in spoofing GPS signals on at least one occasion, reportedly taking great care not to affect civilian aircraft.

Despite such precautions, many ASRS reports describe GPS units delivering incorrect positions rather than failing outright, a pattern consistent with spoofing, though it can also occur when satellite signals are merely degraded. Whatever the nature of its tests, the military’s GPS jamming can end up disrupting service for civilian users, particularly high-altitude commercial aircraft, even at a considerable distance.

The military issues Notices to Airmen (NOTAM) to warn pilots of upcoming tests. Many of these notices cover hundreds of thousands of square kilometers. There have been notices that warn of GPS disruption over all of Texas or even the entire American Southwest. Such a notice doesn’t mean that GPS service will be disrupted throughout the area, only that it might be disrupted. And that uncertainty creates its own problems.

In 2017, the FAA commissioned the nonprofit Radio Technical Commission for Aeronautics to look into the effects of intentional GPS interference on civilian aircraft. Its report, issued the following year by the RTCA’s GPS Interference Task Group, found that the number of military GPS tests had almost tripled from 2012 to 2017. Unsurprisingly, ASRS safety reports referencing GPS jamming are also on the rise. There were 38 such ASRS narratives in 2019—nearly a tenfold increase over 2018.

New internal FAA materials obtained by Spectrum from a member of the task group and not previously made public indicate that the ASRS accounts represent only the tip of the iceberg. The FAA data consists of pilots’ reports of GPS interference to the Los Angeles Air Route Traffic Control Center, one of 22 air traffic control centers in the United States. Controllers there oversee air traffic across central and Southern California, southern Nevada, southwestern Utah, western Arizona, and portions of the Pacific Ocean—areas heavily affected by military GPS testing.

This data includes 173 instances of lost or intermittent GPS during a six-month period of 2017 and another 60 over two months in early 2018. These reports are less detailed than those in the ASRS database, but they show aircraft flying off course, accidentally entering military airspace, being unable to maneuver, and losing their ability to navigate when close to other aircraft. Many pilots required the assistance of air traffic control to continue their flights. The affected aircraft included a pet rescue shuttle, a hot-air balloon, multiple medical flights, and many private planes and passenger jets.

In at least a handful of episodes, the loss of GPS was deemed an emergency. Pilots of five aircraft, including a Southwest Airlines flight from Las Vegas to Chicago, invoked the “stop buzzer,” a request routed through air traffic control for the military to immediately cease jamming. According to the Aircraft Owners and Pilots Association, pilots must use this phrase only when a safety-of-flight issue is encountered.

To be sure, many other instances in the FAA data were benign. In early March 2017, for example, Jim Yoder was flying a Cessna jet owned by entrepreneur and space tourist Dennis Tito between Las Vegas and Palm Springs, Calif., when both onboard GPS devices were jammed. “This is the only time I’ve ever had GPS go out, and it was interesting because I hadn’t thought about it really much,” Yoder told Spectrum. “I asked air traffic control what was going on and they were like, ‘I don’t really know.’ But we didn’t lose our ability to navigate, and I don’t think we ever got off course.”

Indeed, one of the RTCA task group’s conclusions was that the Notice to Airmen system was part of the problem: Most pilots who fly through affected areas experience no ill effects, causing some to simply ignore such warnings in the future.

“We call the NOTAMs ‘Chicken Little,’ ” says Rune Duke, an FAA official who was cochair of the RTCA’s task group. “They say the sky is falling over large areas…and it’s not realistic. There are mountains and all kinds of things that would prevent GPS interference from making it 500 nautical miles [926 km] from where it is initiated.”

GPS interference can be affected by the terrain, aircraft altitude and attitude, direction of flight, angle to and distance from the center of the interference, equipment aboard the plane, and many other factors, concluded the task group, which included representatives of the FAA, airlines, pilots, aircraft manufacturers, and the U.S. military. One aircraft could lose all GPS reception, even as another one nearby is completely unaffected. One military test might pass unnoticed while another causes chaos in the skies.

This unreliability has consequences. In 2014, a passenger plane approaching El Paso had to abort its landing after losing GPS reception. “This is the first time in my flying career that I have experienced or even heard of GPS signal jamming,” wrote the pilot in an ASRS report. “Although it was in the NOTAMs, it still caught us by surprise as we really did not expect to lose all GPS signals at any point. It was a good thing the weather was good or this could have become a real issue.”

Sometimes air traffic controllers are as much in the dark as pilots. “They are the last line of defense,” the FAA’s Duke told Spectrum. “And in many cases, air traffic control was not even aware of the GPS interference taking place.”

The RTCA report made many recommendations. The Department of Defense could improve coordination with the FAA, and it could refrain from testing GPS during periods of high air traffic. The FAA could overhaul its data collection and analysis, match anecdotal reports with digital data, and improve documentation of adverse events. The NOTAM system could be made easier to interpret, with warnings that more accurately match the experiences of pilots and controllers.

Remarkably, until the report came out, the FAA had been instructing pilots to report GPS anomalies only when they needed assistance from air traffic control. “The data has been somewhat of a challenge because we’ve somewhat discouraged reporting,” says Duke. “This has led the FAA to believe it’s not been such a problem.”

NOTAMs now encourage pilots to report all GPS interference, but many of the RTCA’s other recommendations are languishing within the Office of Accident Investigation and Prevention at the FAA.

New developments are making the problem worse. The NextGen project is accelerating the move of commercial aviation to satellite-enabled navigation. Emerging autonomous air systems, such as drones and air taxis, will put even more weight on GPS’s shaky shoulders.

Any new aircraft can pose new challenges to the system. The Embraer EMB-505 Phenom 300, for instance, entered service in 2009 and has since become the world’s best-selling light jet. In 2016, the FAA warned that if the Phenom 300 encountered an unreliable or unavailable GPS signal, it could enter a Dutch roll (named for a Dutch skating technique), a dangerous combination of yawing and rolling oscillations that could cause pilots to lose control. The FAA instructed Phenom 300 owners to avoid all areas of GPS interference.

As GPS assumes an ever more prominent role, the military is naturally taking a stronger interest in it. “Year over year, the military’s need for GPS interference-event testing has increased,” says Duke. “There was an increase again in 2019, partly because of counter-UAS [drone] activity. And they’re now doing GPS interference where they previously had not, like Michigan, Wisconsin, and the Dakotas, because it adds to the realism of any type of military training.”

So there are ever more GPS-jamming tests, more aircraft navigating by satellite, and more pilots utterly reliant on GPS. It is a feedback loop, and it constantly raises the chances that one of these near misses and stop buzzers will end in catastrophe.

When asked to comment, the FAA said it has established a resilient navigation and surveillance infrastructure to enable aircraft to continue safe operations during a GPS outage, including radio beacons and radars. It also noted that it and other agencies are working to create a long-term GPS backup solution that will provide position, navigation, and timing—again, to minimize the effects of a loss of GPS.

However, in a report to Congress in April 2020, the agency coordinating this effort, the U.S. Department of Homeland Security, wrote: “DHS recommends that responsibility for mitigating temporary GPS outages be the responsibility of the individual user and not the responsibility of the Federal Government.” In short, the problem of GPS interference is not going away.

In September 2019, the pilot of a small business jet reported experiencing jamming on a flight into New Mexico. He could hear that aircraft all around him were also affected, with some being forced to descend for safety. “Since the FAA is deprecating [ground-based radio aids], we are becoming dependent upon an unreliable navigation system,” wrote the pilot upon landing. “This extremely frequent [interference with] critical GPS navigation is a significant threat to aviation safety. This jamming has to end.”

The same pilot was jammed again on his way home.

This article appears in the February 2021 print issue as “Lost in Airspace.”

10 Exciting Engineering Milestones to Look for in 2021

Post Syndicated from Eliza Strickland original https://spectrum.ieee.org/aerospace/space-flight/10-exciting-engineering-milestones-to-look-for-in-2021

  • A Shining Light

    Last year, germicidal ultraviolet light found a place in the arsenal of weapons used to fight the coronavirus, with upper-air fixtures placed near hospital-room ceilings and sterilization boxes used to clean personal protective equipment. But a broader rollout was prevented by the dangers posed by UV-C light, which damages the genetic material of viruses and humans alike. Now, the Tokyo-based lighting company Ushio thinks it has the answer: lamps that produce 222-nanometer wavelengths that still kill microbes but don’t penetrate human eyes or skin. Ushio’s Care222 lamp modules went into mass production at the end of 2020, and in 2021 they’ll be integrated into products from other companies—such as Acuity Brands’ lighting fixtures for offices, classrooms, stores, and other occupied spaces.

  • Quantum Networking

    Early this year, photons will speed between Stony Brook University and Brookhaven National Laboratory, both in New York, in an ambitious demonstration of quantum communication. This next-gen communication concept may eventually offer unprecedented security, as a trick of quantum mechanics will make it obvious if anyone has tapped into a transmission. In the demo, “quantum memory buffers” from the startup Qunnect will be placed at each location, and photons within those buffers will be entangled with each other over a snaking 70-kilometer network.

  • Winds of Change

    Developers of offshore wind power quickly find themselves in deep water off the coast of California; one prospective site near Humboldt Bay ranges from 500 to 1,100 meters deep. These conditions call for a new breed of floating wind turbine that’s tethered to the seafloor with strong cables. Now, that technology has been demonstrated in pilot projects off the coasts of Scotland and Portugal, and wind power companies are eager to gain access to three proposed sites off the California coast. They’re expecting the U.S. Bureau of Ocean Energy Management to begin the process of auctioning leases for at least some of those sites in 2021.

  • Driverless Race Cars

    The Indianapolis Motor Speedway, the world’s most famous auto racetrack, will host an unprecedented event in October: the first high-speed race of self-driving race cars. About 30 university teams from around the world have signed up to compete in the Indy Autonomous Challenge, in which souped-up race cars will reach speeds of up to 320 kilometers per hour (200 miles per hour). To win the US $1 million top prize, a team’s autonomous Dallara speedster must be first to complete 20 laps in 25 minutes or less. The deep-learning systems that control the cars will be tested under conditions they’ve never experienced before; both sensors and navigation tools will have to cope with extreme speed, which leaves no margin for error.

  • Robots Below

    The robots participating in DARPA’s Subterranean Challenge have already been tested on three different courses; they’ve had to navigate underground tunnels, urban environments, and cave networks (although the caves portion was switched to an all-virtual course due to the pandemic). Late this year, SubT teams from across the world will put it all together at the final event, which will combine elements of all three subdomains into a single integrated challenge course. The robots will have to demonstrate their versatility and endurance in an obstacle-filled environment where communication with the world above ground is limited. DARPA expects pandemic conditions to improve enough in 2021 to make a physical competition possible.

  • Mars or Bust

    In February, no fewer than three spacecraft are due to rendezvous with the Red Planet. It’s not a coincidence—the orbits of Earth and Mars have brought the planets relatively close together this year, making the journey between them faster and cheaper. China’s Tianwen-1 mission plans to deliver an orbiter and rover to search for water beneath the surface; the United Arab Emirates’ Hope orbiter is intended to study the Martian climate; and a shell-like capsule will deposit NASA’s Perseverance rover, which aims to seek signs of past life, while also testing out a small helicopter drone named Ingenuity. There’s no guarantee that the spacecraft will reach their destinations safely, but millions of earthlings will be rooting for them.

  • Stopping Deepfakes

    By the end of the year, some Android phones may include a new feature that’s intended to strike a blow against the ever-increasing problem of manipulated photos and videos. The anti-deepfake tech comes from a startup called Truepic, which is collaborating with Qualcomm on chips for smartphones that will enable phone cameras to capture “verified” images. The raw pixel data, the time stamp, and location data will all be sent over secure channels to isolated hardware processors, where they’ll be put together with a cryptographic seal. Photos and videos produced this way will carry proof of their authenticity with them. “Fakes are never going to go away,” says Sherif Hanna, Truepic’s vice president of R&D. “Instead of trying to prove what’s fake, we’re trying to protect what’s real.” A simplified sketch of the sealing idea appears after this list.

  • Faster Data

    In a world reshaped by the COVID-19 pandemic, in which most office work, conferences, and entertainment have gone virtual, cloud data centers are under more stress than ever before. To ensure that people have all the bandwidth they need whenever they need it, service providers are increasingly using networks of smaller data centers within metropolitan areas instead of the traditional massive data center on a single campus. This approach offers higher resilience and availability for end users, but it requires sending torrents of data between facilities. That need will be met in 2021 with new 400ZR fiber optics, which can send 400 gigabits per second over data center interconnects of between 80 and 100 kilometers. Verizon completed a trial run last September, and experts believe we’ll see widespread deployment toward the end of this year.

  • Your Next TV

    While Samsung won’t confirm it, consumer-electronics analysts say the South Korean tech giant will begin mass production of a new type of TV in late 2021. Samsung ended production of liquid crystal display (LCD) TVs in 2020, and has been investing heavily in organic light-emitting diode (OLED) display panels enhanced with quantum dot (QD) technology. OLED TVs include layers of organic compounds that emit light in response to an electric current, and they currently receive top marks for picture quality. In Samsung’s QD-OLED approach, the TV will use OLEDs to create blue light, and a QD layer will convert some of that blue light into the red and green light needed to make images. This hybrid technology is expected to create displays that are brighter, higher contrast, and longer lasting than today’s best models.

  • Brain Scans Everywhere

    Truly high-quality data about brain activity is hard to come by; today, researchers and doctors typically rely either on big and expensive machines like MRI and CT scanners or on invasive implants. In early 2021, the startup Kernel is launching a wearable device called Kernel Flow that could change the game. The affordable low-power device uses a type of near-infrared spectroscopy to measure changes in blood flow within the brain. Within each headset, 52 lasers fire precise pulses, and reflected light is picked up by 312 detectors. Kernel will distribute its first batch of 50 devices to “select partners” in the first quarter of the year, but company founder Bryan Johnson hopes the portable technology will one day be ubiquitous. At a recent event, he described how consumers could eventually use Kernel Flow in their own homes.
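
The deepfake item above comes down to content binding: hash the raw pixels, attach the metadata, and seal the bundle cryptographically so that any later edit is detectable. Below is a deliberately simplified sketch of that general idea. The function names are invented for illustration, a shared-secret HMAC stands in for what is presumably hardware-backed signing on the Qualcomm chips, and nothing here reflects Truepic’s actual implementation.

```python
import hashlib
import hmac
import json

# Hypothetical stand-in: the real system keeps keys in isolated hardware.
DEVICE_KEY = b"hypothetical-device-secret"

def seal_capture(pixels: bytes, timestamp: str, location: str) -> dict:
    """Bind pixel data and metadata together under one cryptographic seal."""
    record = {
        "pixels_sha256": hashlib.sha256(pixels).hexdigest(),
        "timestamp": timestamp,
        "location": location,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["seal"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_capture(record: dict, pixels: bytes) -> bool:
    """Recompute the seal; any change to pixels or metadata breaks it."""
    claimed = {k: v for k, v in record.items() if k != "seal"}
    claimed["pixels_sha256"] = hashlib.sha256(pixels).hexdigest()
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record["seal"], expected)

photo = b"raw sensor bytes"
rec = seal_capture(photo, "2021-01-01T12:00:00Z", "37.77N,122.42W")
assert verify_capture(rec, photo)             # untouched image verifies
assert not verify_capture(rec, photo + b"!")  # any edit breaks the seal
```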

At Last, First Light for the James Webb Space Telescope

Post Syndicated from David Schneider original https://spectrum.ieee.org/aerospace/astrophysics/at-last-first-light-for-the-james-webb-space-telescope

Back in 1990, after significant cost overruns and delays, the Hubble Space Telescope was finally carried into orbit aboard the space shuttle. Astronomers rejoiced, but within a few weeks, elation turned to dread. Hubble wasn’t able to achieve anything like the results anticipated, because, simply put, its 2.4-meter mirror was made in the wrong shape.

For a time, it appeared that the US $5 billion spent on the project had been a colossal waste. Then NASA came up with a plan: Astronauts would compensate for Hubble’s distorted main mirror by adding small secondary mirrors shaped in just the right way to correct the flaw. Three years later, that fix was carried out, and Hubble’s mission to probe some of the faintest objects in the sky was saved.

Fast-forward three decades. Later this year, the James Webb Space Telescope is slated to be launched. Like Hubble, the Webb telescope project has been plagued by cost overruns and delays. At the turn of the millennium, the cost was thought to be $1 billion. But the final bill will likely be close to $10 billion.

Unlike Hubble, the Webb telescope won’t be orbiting Earth. Instead, it will be sent to the Earth-Sun L2 Lagrange point, about 1.5 million kilometers away in the direction opposite the sun. This point is cloaked in semidarkness, with Earth shadowing much of the sun. Such positioning is good for observing but bad for gathering solar power, so the telescope will circle around L2 instead of sitting exactly at it.
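
That 1.5-million-kilometer figure is not arbitrary; it falls out of a standard approximation for the distance to L2, about one astronomical unit multiplied by the cube root of m_Earth/(3 m_Sun). A quick check of the arithmetic, using rounded textbook constants:

```python
AU_KM = 1.496e8      # mean Earth-sun distance, in kilometers
M_EARTH = 5.972e24   # mass of Earth, in kilograms
M_SUN = 1.989e30     # mass of the sun, in kilograms

# Standard approximation: L2 lies about a * (m / (3M))**(1/3) beyond Earth.
r_l2_km = AU_KM * (M_EARTH / (3 * M_SUN)) ** (1 / 3)
print(f"L2 sits about {r_l2_km / 1e6:.1f} million km beyond Earth")  # ~1.5
```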

The downside will be that the telescope will be too distant to service, at least with any kind of spacecraft available now. “Webb’s peril is that it will be parked in space at a place that we can’t get back to if something goes wrong,” says Eric Chaisson, an astrophysicist at Harvard who was a senior scientist at the Space Telescope Science Institute, at Johns Hopkins University, when Hubble was commissioned. “Given its complexity, it’s hard to believe something won’t, though we can all hope, as I do, that all goes right.”

Why then send the Webb telescope so far away?

The answer is that the Webb telescope is intended to gather images of stars and galaxies created soon after the big bang. And to look back that far in time, the telescope must view objects that are very distant. These objects will be extremely faint, sure. More important, the visible light they gave off will be shifted into the infrared.

For this reason, the telescope’s detectors are sensitive to comparatively long infrared wavelengths. And those detectors would be blinded if the body of the telescope itself was giving off a lot of radiation at these wavelengths due to its heat. To avoid that, the telescope will be kept far from the sun at L2 and will be shaded by a tennis-court-size sun shield composed of five layers of aluminum- and silicon-coated DuPont Kapton. This shielding will keep the mirror of the telescope at a chilly 40 to 50 kelvins, considerably colder than Hubble’s mirror, which is kept essentially at room temperature.
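
Two short calculations make the design logic concrete. Light from a source at redshift z arrives stretched by a factor of (1 + z), and Wien’s displacement law gives the wavelength at which a warm body glows brightest. A minimal sketch, in which the z = 10 source is a hypothetical illustration rather than a specific Webb target:

```python
WIEN_UM_K = 2898.0  # Wien's displacement constant, micrometer-kelvins

def redshifted_um(emitted_nm: float, z: float) -> float:
    """Wavelength observed from a source at redshift z, in micrometers."""
    return emitted_nm * (1 + z) / 1000

def thermal_peak_um(temp_k: float) -> float:
    """Wien's law: wavelength at which a body of temperature T glows brightest."""
    return WIEN_UM_K / temp_k

print(f"500 nm visible light from z = 10 arrives at {redshifted_um(500, 10):.1f} um")
print(f"a room-temperature (290 K) telescope glows near {thermal_peak_um(290):.0f} um")
print(f"Webb's ~45 K optics glow near {thermal_peak_um(45):.0f} um")
```

A room-temperature telescope would glow near 10 micrometers, right on top of the redshifted starlight it is trying to detect; at 45 K, the telescope’s own glow peaks far beyond the band its detectors watch.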

Clearly, for the Webb telescope to succeed, everything about the mission has to go right. Gaining confidence in that outcome has taken longer than anyone involved in the project had envisioned. A particularly troubling episode occurred in 2018, when a shake test of the sun shield jostled some screws loose. That and other problems, attributed to human error, shook legislators’ confidence in the prime contractor, Northrop Grumman. The government convened an independent review board, which uncovered fundamental issues with how the project was being managed.

In 2018 testimony to the House Committee on Science, Space, and Technology, Wesley Bush, then the CEO of Northrop Grumman, came under fire when the chairman of the committee, Rep. Lamar Smith (R-Texas), asked him whether Northrop Grumman would agree to pay for $800 million of unexpected expenditures beyond the nominal final spending limit.

Naturally, Bush demurred. He also marshalled an argument that has been used to justify large expenditures on space missions since Sputnik: the need to demonstrate technological prowess. “It is especially important that we take on programs like Webb to demonstrate to the world that we can lead,” said Bush.

During the thoroughgoing reevaluation in 2018, launch was postponed to March of 2021. Then the pandemic hit, delaying work up and down the line. In July of 2020, launch was postponed yet again, to 31 October 2021.

Whether Northrop Grumman will really hit that target is anyone’s guess: The company did not respond to requests from IEEE Spectrum for information about how the pandemic is affecting the project timeline. But if this massive, quarter-century-long undertaking finally makes it into space this year, astronomers will no doubt be elated. Let’s just hope that elation over a space telescope doesn’t again turn into dread.

This article appears in the January 2021 print issue as “Where No One Has Seen Before.”

SpaceX, Blue Origin, and Dynetics Compete to Build the Next Moon Lander

Post Syndicated from Jeff Foust original https://spectrum.ieee.org/aerospace/space-flight/spacex-blue-origin-and-dynetics-compete-to-build-the-next-moon-lander

In March 2019, Vice President Mike Pence instructed NASA to do something rarely seen in space projects: move up a schedule. Speaking at a meeting of the National Space Council, Pence noted that NASA’s plans for returning humans to the moon called for a landing in 2028. “That’s just not good enough. We’re better than that,” he said. Instead, he announced that the new goal was to land humans on the moon by 2024.

That decision wasn’t as radical as it might have seemed. The rocket that is the centerpiece of NASA’s lunar exploration plans, the Space Launch System, has been in development for nearly a decade and is scheduled to make its first flight in late 2021. The Orion, a crewed spacecraft that will be launched on that rocket, is even older, dating back to the previous NASA initiative to send humans to the moon, the Constellation program in the mid-2000s.

What’s missing, though, is the lander that astronauts will use to go from the Orion in lunar orbit down to the surface and back. NASA was just starting to consider new ideas for lunar landers when Pence gave his speech. Indeed, proposals for an initial round of studies were due to NASA just the day before he spoke. Those studies have since progressed, and NASA may settle on a design for its upcoming moon lander as soon as next month.

To meet Pence’s 2024 deadline, it was clear that NASA would have to move faster than normal. That meant moving differently.

The Lunar Module, or LM, used for the Apollo program, was developed through a standard government contract with Grumman Corp. But in the half century since Apollo, not only has the technology of spaceflight changed, but so has the business. Commercial programs to ferry cargo and crew to the space station demonstrated there were opportunities for government to partner with industry, with companies covering some of the development costs in exchange for using the vehicles they designed to serve other customers.

That approach, NASA argued, would allow it to support more companies in the early phases of the program and perhaps through full-scale lander development. “We want to have numerous suppliers that are competing against each other on cost, and on innovation, and on safety,” NASA Administrator Jim Bridenstine told the Senate Commerce Committee in September of 2020.

That was the approach NASA decided to follow to develop what it calls the Human Landing System (HLS), part of its upcoming Artemis program of lunar exploration. NASA would award up to four contracts for less than a year of initial work to refine lander designs. Based on the quality of those lander designs as well as the available funding—including the share of the costs companies offered to pay—NASA would select one or more for continued development.

Proposals for the HLS program were due to NASA in November of 2019. On 30 April 2020, NASA announced the winners of three contracts with a combined US $967 million for those initial studies: Blue Origin, Dynetics, and SpaceX. “Let’s make no mistake about it: We are now on our way,” said Douglas Loverro, then a NASA official, at the briefing announcing the contracts. Loverro, who was the NASA associate administrator responsible for human spaceflight, added, “We’ve got all the pieces we need.” Later this year, NASA will choose which of these three to pursue.

The three landers are as different as the companies NASA selected. Blue Origin, in Kent, Wash., has the significant financial resources of its founder, Amazon.com’s Jeff Bezos, but chose to partner with three other major companies on its lander: Draper, Lockheed Martin, and Northrop Grumman. “What we’re trying to do is take the best of what all the partners bring to build a truly integrated lander vehicle,” says John Couluris, program manager for the HLS at Blue Origin.

Its design is the most similar of the three to the original Apollo LM. Blue Origin is leading the development of the descent stage, which brings the lander down to the surface, using a version of an uncrewed lunar lander called Blue Moon that it had previously been working on. Lockheed Martin is building the ascent stage, which will carry astronauts back into lunar orbit. Its crew cabin is based on the one the company designed for the Orion spacecraft. Northrop Grumman will provide a transfer stage, which will move the combined ascent and descent stages from a high lunar orbit to a low one, from which the descent stage takes over for the landing. Draper Laboratory, in Cambridge, Mass., is providing the avionics for the combined lander.

That modular approach, Couluris argues, has advantages over building a larger integrated lander. “The three elements provide a lot of flexibility in launch-vehicle selection,” he says, allowing NASA to use any number of vehicles to send the modules to the moon. “It also allows three independent efforts to go in parallel as Blue Origin leads the overall integrated effort.”

The early work on the lander has included building full-scale models of the ascent and descent modules and installing them in a training facility at NASA’s Johnson Space Center. This gives engineers and astronauts a hands-on way to see how the modules work together and determine the best places to put major components before the design is fixed.

“Understanding things that affect the primary structure are important. For example, are the windows in the right spot?” says Kirk Shireman, a former NASA space-station program manager who joined Lockheed Martin recently as vice president of lunar campaigns. “They impact big pieces of the structure that are long-lead and need to be finalized so we can begin manufacturing.”

Dynetics, a Huntsville, Ala.–based engineering company, is similarly working on its lander with other firms—more than two dozen. These include Sierra Nevada Corp., which is building the Dream Chaser cargo vehicle for the space station; Thales Alenia Space, which built pressurized module structures for the station and other spacecraft; and launch-vehicle company United Launch Alliance (ULA).

Dynetics took a very different approach with its lander, though, creating a single module ringed by thrusters and propellant tanks. That results in a low-slung design that puts the crew cabin just a couple of meters off the ground, making it easy for astronauts to get from the module to the surface and back.

“We felt like it was important to get the crew module as close to the lunar surface as we could to ease the operations of getting in and out,” says Robert Wright, a program manager for space systems at Dynetics.

To make that approach work—and also be carried into space using available launch vehicles—Dynetics proposes to fuel the lander only after it is orbiting the moon. The lander will be launched with empty propellant tanks on a ULA Vulcan Centaur rocket and placed in lunar orbit. Two more Vulcan Centaur launches will follow, each about two to three weeks apart, carrying the liquid oxygen and methane propellants needed to land on the moon and return to orbit.

Refueling in space using cryogenic propellants requires new technology, but Dynetics believes it can demonstrate it in time to support a 2024 landing. “We worked closely with NASA on our concept of operations, and the Orion plans, to ensure that our operational scenario is viable and feasible,” says Kim Doering, vice president of space systems at Dynetics.

The third contender, SpaceX, is offering a lunar lander based on Starship, the reusable launch vehicle it is developing and testing at a site on the Texas coast near Brownsville. Starship, in many respects, looks oversized for the job, resembling the giant rocket that the graphic-novel hero Tintin used in his fictional journey to the moon. Starship’s crew cabin is so high off the lunar surface that SpaceX plans to use an elevator to transport astronauts down to the surface and back.

“Starship works well for the HLS mission,” says Nick Cummings, director of civil-space advanced development at SpaceX. “We did look at alternative architectures, multi-element architectures, but we came back because Starship provides a lot of capability.” That capability includes the ability to carry what he described as “extraordinarily large cargo” to the surface of the moon, for example large rovers.

The Starship used for lunar missions will be different from those flying to and from Earth. The lunar Starship lacks the flaps and heat shield needed for reentering Earth’s atmosphere. It will have several large Raptor engines, but also a smaller set of thrusters used for landing and taking off on the moon because the Raptors are too powerful.

Starship, like the Dynetics lander, will require in-space refueling. The lunar Starship will be launched into Earth orbit, and several Starships will follow with propellant to transfer to it before it heads to the moon.

SpaceX CEO Elon Musk said at a conference in October that he expects to demonstrate Starship-to-Starship refueling in 2022. “As soon as you’ve got orbital refilling, you can send significant payload to the moon,” he said.

All three companies face significant obstacles to overcome in their lander development. The most obvious ones are technical: New rocket engines need to be built and new technologies, like in-space refueling, need to be demonstrated, all on a tight schedule to meet the 2024 deadline.

NASA acknowledged that crunch in a document explaining why it chose those three companies. Blue Origin, for example, has “a very significant amount of development work” that needs to be completed “on what appears to be an aggressive timeline.” Dynetics’s lander requires technologies that “need to be developed at an unprecedented pace.” And SpaceX’s concept “requires numerous, highly complex launch, rendezvous, and fueling operations which all must succeed in quick succession in order to successfully execute on its approach.”

The companies nevertheless remain optimistic about staying on schedule. “It certainly is an aggressive timeline,” acknowledged SpaceX’s Cummings. But, based on the company’s experience with other programs, “we think this is very doable.”

Congress may be harder to convince. Some members remain skeptical that NASA’s approach to returning humans to the moon by 2024, including its use of partnerships with industry, is the right one.

NASA’s Bridenstine, in many public comments, said he appreciates that Congress is willing to provide at least some funding for the lunar lander program. “Accelerating it to 2024 requires a $3.2 billion budget for 2021,” he told senators last September. That funding decision, he added, needs to come by February, when NASA will select the companies that will continue development work for the HLS program. “If we get to February of 2021 without [a $3.2 billion] appropriation, that’s really going to put the brakes on our ability to achieve a moon landing by as early as 2024,” he warned. And the current Senate proposal, now being negotiated with the House, budgets only $1 billion.

In space, as on Earth, the most important fuel is money.

This article appears in the January 2021 print issue as “Three Ways to the Moon.”

Boom Supersonic’s XB-1 Test Aircraft Promises a New Era of Faster-Than-Sound Travel

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/aerospace/aviation/boom-supersonics-xb-1-test-aircraft-promises-a-new-era-of-faster-than-sound-travel

The Concorde, the world’s first supersonic airliner, landed for the last time 17 years ago. It was a beautiful feat of engineering, but it never turned a profit. So why should we believe that Boom Supersonic can do better now?

Blake Scholl, the founder and chief executive, turns the question on its head: How could it not do better? The Concorde was designed in the 1960s using slide rules and wind tunnels, whereas Boom Supersonic’s planned airliner, the Overture, is being modeled in software and built out of ultralight materials, using superefficient jet engines. That should reduce the consumption of fuel per passenger-mile well below the Concorde’s gluttonous rate, low enough to make possible fares no pricier than today’s business class.

“The greatest challenge—it’s not technology,” says Scholl. “It’s not that people don’t want them. And it’s not that the money isn’t out there. It’s the complex execution, and it’s getting the right people together to do it, building a startup in an industry that doesn’t usually have startups.”

In October, Boom rolled out a one-third-scale model of the Overture called the XB-1, the point of which is to prove the design principles, cockpit ergonomics, flight envelope, even the experience of flight itself. The first flight will take place in the “near future” over the Mojave Desert, says Scholl. There’s just one seat, which means that the experience will be the pilot’s alone. The Overture, with 65 seats, is slated to fly in 2025, and allowing for the time it takes to certify it with regulators, it is planned for commercial flight by 2029.

Test planes are hardly standard in the commercial aviation industry. This one is necessary, though, because the company started with a clean sheet of paper.

“There’s nothing more educational than flying hardware,” Brian Durrence, senior vice president in charge of developing the ­Overture, tells IEEE Spectrum. “You’re usually using a previous aircraft—if you’re a company that has a stable of aircraft. But it’s probably not a great business plan to jump straight into building your first commercial aircraft brand new.”

The XB-1 is slender and sleek, with swept-back delta wings and three jet engines, two slung under the wings and a third in the tail. These are conventional jet engines, coupled to afterburners—thrust-enhancing, fuel-sucking devices that were used by the Concorde, contributing to its high cost of operation. But the future airliner will use advanced turbofan engines that won’t need afterburners.

Another difference from yesteryear is in the liberal use of virtual views of the aircraft’s surroundings. The Concorde, optimized as it was for speed, was awkward during takeoff and landing, when the wings had to be angled upward to provide sufficient lift at low speeds. To make sure the pilot had a good view of the runway, the plane thus needed a “drooping” nose, which was lowered at these times. But the XB-1 can keep a straight face because it relies instead on cameras and cockpit monitors to make the ground visible.

To speed the design process, the engineers had recourse not only to computer modeling but also to 3D printing, which lets them prototype a part in hours, revise it, and reprint it. Boom uses parts made by Stratasys, an additive-­manufacturing company, which uses a new laser system from Velo 3D to fuse titanium powder into objects with intricate internal channels, both to save weight and allow the passage of cooling air.

Perhaps Scholl’s most debatable claim is that there’s a big potential market for Mach-plus travel out there. If speed were really such a selling point, why have commercial airliners actually slowed down a bit since the 1960s? It’s not that today’s planes are pokier—even now, a pilot will routinely make up lost time simply by opening the throttle. No, today’s measured pace is a business decision: Carriers traded speed against fuel consumption because the market demanded low fares.

Scholl insists there are people who have a genuine need for speed. He cites internal market surveys suggesting that most business travelers would buy a supersonic flight at roughly the same price as business class, and he argues that still more customers might pop up once supersonic travel allowed you to, say, fly from New York to London in 3.5 hours, meet with associates, then get back home in time to sleep in your own bed.

Why not go after the market for business jets, then? Because Scholl has Elon Musk–like ambitions: He says that once business travel gets going, the unit cost of manufacturing will fall, further lowering fares and inducing even mere tourists to break the sound barrier. That, he argues, is why he’s leaving the market for supersonic business planes—those having 19 seats or fewer—to the likes of Virgin Galactic and Aerion Supersonic.

Boom’s flights will be between coastal cities, because overwater routes inflict no supersonic booms on anyone except the fish. So we’re talking about a niche within a niche. Still, overwater flight is a big and profitable slice of the pie, and it’s precisely on such long routes that speed matters the most.

This article appears in the January 2021 print issue as “Supersonic Travel Returns.”

Nuclear-Powered Rockets Get a Second Look for Travel to Mars


Post Syndicated from Prachi Patel original https://spectrum.ieee.org/aerospace/space-flight/nuclear-powered-rockets-get-a-second-look-for-travel-to-mars

For all the controversy they stir up on Earth, nuclear reactors can produce the energy and propulsion needed to rapidly take large spacecraft to Mars and, if desired, beyond. The idea of nuclear rocket engines dates back to the 1940s. This time around, though, plans for interplanetary missions propelled by nuclear fission and fusion are being backed by new designs that have a much better chance of getting off the ground.

Crucially, the nuclear engines are meant for interplanetary travel only, not for use in the Earth’s atmosphere. Chemical rockets launch the craft out beyond low Earth orbit. Only then does the nuclear propulsion system kick in.

The challenge has been making these nuclear engines safe and lightweight. New fuels and reactor designs appear up to the task, as NASA is now working with industry partners for possible future nuclear-fueled crewed space missions. “Nuclear propulsion would be advantageous if you want to go to Mars and back in under two years,” says Jeff Sheehy, chief engineer in NASA’s Space Technology Mission Directorate. To enable that mission capability, he says, “a key technology that needs to be advanced is the fuel.”

Specifically, the fuel needs to endure the superhigh temperatures and volatile conditions inside a nuclear thermal engine. Two companies now say their fuels are sufficiently robust for a safe, compact, high-performance reactor. In fact, one of these companies has already delivered a detailed conceptual design to NASA.

Nuclear thermal propulsion uses energy released from nuclear reactions to heat liquid hydrogen to about 2,430 °C—some eight times the temperature of nuclear-power-plant cores. The propellant expands and jets out the nozzles at tremendous speeds. This can produce roughly twice the thrust per unit mass of propellant of chemical rockets, in effect doubling the specific impulse, allowing nuclear-powered ships to travel longer and faster. Plus, once at the destination, be it Saturn’s moon Titan or Pluto, the nuclear reactor could switch from propulsion system to power source, enabling the craft to send back high-quality data for years.
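
That factor of two matters more than it might sound, because propellant requirements follow the exponential Tsiolkovsky rocket equation: doubling specific impulse takes the square root of the required mass ratio. A minimal sketch, where the 6 km/s delta-v budget and the round Isp values (about 450 seconds for a LOX/LH2 chemical engine, about 900 seconds for nuclear thermal) are illustrative assumptions rather than mission figures:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def mass_ratio(delta_v_ms: float, isp_s: float) -> float:
    """Tsiolkovsky rocket equation: initial-to-final mass for a given delta-v."""
    return math.exp(delta_v_ms / (isp_s * G0))

DELTA_V_MS = 6_000.0  # hypothetical interplanetary maneuver budget
for label, isp_s in (("chemical LOX/LH2, ~450 s", 450.0),
                     ("nuclear thermal, ~900 s", 900.0)):
    r = mass_ratio(DELTA_V_MS, isp_s)
    print(f"{label}: mass ratio {r:.2f}, "
          f"propellant {1 - 1 / r:.0%} of initial mass")
```

Under these assumptions, propellant drops from about three-quarters of the ship’s initial mass to about half, which is what makes a fast Mars round-trip plausible.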

Getting enough thrust out of a nuclear rocket used to require weapons-grade, highly enriched uranium. Low-enriched uranium fuels, used in commercial power plants, would be safer to use, but they can become brittle and fall apart under the blistering temperatures and chemical attacks from the extremely reactive hydrogen.

However, Ultra Safe Nuclear Corp. Technologies (USNC-Tech), based in Seattle, uses a uranium fuel enriched to below 20 percent, which is a higher grade than that of power reactors but “can’t be diverted for nefarious purposes, so it greatly reduces proliferation risks,” says director of engineering Michael Eades. The company’s fuel contains microscopic ceramic-coated uranium fuel particles dispersed in a zirconium carbide matrix. The microcapsules keep radioactive fission by-products inside while letting heat escape.

Lynchburg, Va.–based BWX Technologies is working under a NASA contract to look at designs using a similar ceramic composite fuel—and also examining an alternate fuel form encased in a metallic matrix. “We’ve been working on our reactor design since 2017,” says Joe Miller, general manager for the company’s advanced technologies group.

The two companies’ designs rely on different moderators, the materials that slow down energetic neutrons produced during fission so they can sustain a chain reaction instead of striking and damaging the reactor structure. BWX intersperses its fuel blocks between hydride elements, while USNC-Tech’s design integrates a beryllium metal moderator into the fuel. “Our fuel stays in one piece, survives the hot hydrogen and radiation conditions, and does not eat all the reactor’s neutrons,” Eades says.

Scientists at the Princeton Plasma Physics Laboratory are using an experimental reactor to heat fusion plasmas up to one million degrees C—on the long journey to developing fusion-powered rockets for interplanetary travel.

There is another route to small, safe nuclear-powered rockets, says Samuel Cohen at Princeton Plasma Physics Laboratory: fusion reactors. Mainline fusion uses deuterium and tritium fuels, but Cohen is leading efforts to make a reactor that relies on fusion between deuterium atoms and helium-3 in a high-temperature plasma, which produces very few neutrons. “We don’t like neutrons because they can change structural material like steel to something more like Swiss cheese and can make it radioactive,” he says. The Princeton lab’s concept, called Direct Fusion Drive, also needs much less fuel than conventional fusion, and the device could be one-thousandth as large, Cohen says.

Fusion propulsion could in theory far outperform fission-based propulsion, because fusion reactions release up to four times as much energy, says NASA’s Sheehy. However, the technology isn’t as far along and faces several challenges, including generating and containing the plasma and efficiently converting the energy released into directed jet exhaust. “It could not be ready for Mars missions in the late 2030s,” he says.

USNC-Tech, by contrast, has already made small hardware prototypes based on its new fuel. “We’re on track to meet NASA’s goal to have a half-scale demonstration system ready for launch by 2027,” says Eades. The next step would be to build a full-scale Mars flight system, one that could very well drive a 2035 Mars mission.

This article appears in the January 2021 print issue as “Nuclear-Powered Rockets Get a Second Look.”

Team CoSTAR Trains Robots for Exploring Caves on Earth and Space

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/aerospace/robotic-exploration/team-costar-robots-earth-space

Another DARPA SubT team that’s been doing its own in-cave practice to prepare for the final event next year is Team CoSTAR, from NASA JPL and Caltech. CoSTAR, of course, won the SubT Urban Circuit earlier this year with their team of wheeled and legged robots, which was awesome—but you’d maybe expect that for a group developing planetary exploration robots, places like urban environments and man-made tunnels wouldn’t necessarily be their top priority, right? Unless there’s something they’re not telling us, and I’m sure it’s aliens.*

NASA’s been working on robotic cave exploration for a long, long time, and Team CoSTAR (and the SubT Challenge) fit right in with that. The team and its robots have been spending some time in lava tubes, and we asked some folks from NASA JPL how it’s been going.

Japan Prepares to Welcome Home Asteroid Explorer Hayabusa2

Post Syndicated from John Boyd original https://spectrum.ieee.org/tech-talk/aerospace/space-flight/japan-prepares-to-welcome-home-asteroid-explorer-hayabusa2

Okaerinasai is Japanese for “Welcome back.” It’s a word everyone at the Japan Aerospace Exploration Agency (JAXA) will be shouting together around 2 a.m. Tokyo time on December 6, when a capsule ejected from the Hayabusa2 space probe is due to land in Woomera, South Australia, after a 5.2-billion-kilometer round-trip.

The reentry capsule is expected to contain precious particles scooped from the rock-strewn surface and subsurface of Ryugu, a diamond-shaped asteroid less than a kilometer in diameter. This difficult trick was pulled off as Ryugu traveled on its 16-month orbit around the sun between Earth and Mars. What’s more, Hayabusa2 was able to land twice on the spinning asteroid and complete a series of missions. Perhaps the most difficult and spectacular of these was using an impactor to form a crater on the asteroid for gathering particles from below its surface—one of a number of firsts in space exploration Hayabusa2 has achieved.

Provided all goes as planned, the capsule will separate from the spacecraft on December 5, some 220,000 km from Earth, after which Hayabusa2 will enter an escape trajectory to depart Earth and commence an extended mission. So long as the craft is still operational and has 50 percent of its xenon fuel remaining to drive its ion thrusters, JAXA has set it the goal of visiting and observing an asteroid of a type never explored before. If successful, Hayabusa2 will reach its target in 2031.

In a press briefing a week before the landing, the project’s mission manager, Makoto Yoshikawa, explained how a JAXA team, working with the Australian Space Agency, will locate and retrieve the capsule. Ground stations and an airplane flying above the clouds will triangulate the capsule fireball’s progress through the Earth’s atmosphere by measuring its light trail. Then, at an altitude of 10 km, the capsule will deploy a parachute and land somewhere (depending on weather conditions) within a 100-square-kilometer area of the Woomera desert.

Yoshikawa also described how JAXA will track and trace the capsule. Four marine radar units have been set up around the predicted landing area. Their rotating fan-beam antennas will track an umbrella of radar-reflective cloth attached to the top of the descending parachute. At the same time, a radio beacon in the capsule will begin signaling its location.

“A helicopter and a team on the ground will track the beacon’s signal that will continue transmitting after landing,” Yoshikawa added. “We’re also bringing drones to help us find it as quickly as possible.”

The capsule will be taken by helicopter to a quick-look facility established in the Woomera Prohibited Area. There, the instrument module and the sealed containers holding the Ryugu samples, including any gas released by the particles, will be removed, stored in special sealed containers, and airlifted to Japan. The aim is to complete all this in less than 100 hours after the landing to minimize any risk of contamination.

Once at JAXA, the samples will be removed in a vacuum environment inside a cleanroom. Over the next two years, each particle will be analyzed, described, and curated, with some of the particles being sent to international organizations, including NASA, for further study.

In a separate press briefing in Australia on December 1, Masaki Fujimoto, a deputy director at JAXA, explained why the samples taken from Ryugu are so important. Earth, being relatively close to the sun, was created dry, without much water present; something must have brought the H2O here. Ryugu is a primordial asteroid born outside the inner solar system, just the kind of body that could have delivered the water and organic materials that enabled the creation of life on our planet.

Analysis of the collected samples, says Fujimoto, “could help answer the fundamental question of how our planet became habitable.”  

He said that the asteroid samples gathered will likely amount to around one gram. “One gram may sound small to some of you,” said Fujimoto, “but for us, it is huge and enough to address the science questions we have in mind.”

Update (Dec. 6): JAXA confirmed the reentry capsule entered the Earth’s atmosphere at 2:28 a.m. Japan standard time on December 6. A helicopter located it in the Woomera desert at 4:47 a.m. and flew it to the quick-look facility in the Woomera Prohibited Area, arriving just after 8 a.m. The JAXA recovery team is expected to extract any gas from the captured Ryugu samples. Beginning on December 5, Hayabusa2 performed several trajectory maneuvers to depart Earth’s orbit, and it has now set out on its extended mission to observe an asteroid it hopes to reach in 2031.

Russia, China, the U.S.: Who Will Win the Hypersonic Arms Race?

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/aerospace/aviation/russia-china-the-us-who-will-win-the-hypersonic-arms-race

It’s obvious why the militaries of the world want missiles that can follow erratic paths at low altitude while flying at five times the speed of sound, eluding any chance at detection or interception.

“Think of it as delivering a pizza, except it’s not a pizza,” says Bradley Wheaton, a specialist in hypersonics at the Johns Hopkins University Applied Physics Laboratory (APL), in Maryland. “In the United States, just 15 minutes can cover the East Coast; a really fast missile takes 20 minutes to get to the West Coast. At these speeds, you have a factor of 50 increase in the area covered per unit of time.”
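
Wheaton’s factor of 50 is straightforward geometry: the area a missile can threaten within a fixed flight time is a disk whose radius scales with speed, so the area scales with the square of the speed. A minimal sketch, assuming a Mach 0.7 subsonic cruise missile as the baseline:

```python
import math

SOUND_MS = 343.0  # approximate speed of sound at sea level, m/s

def reachable_area_km2(mach: float, minutes: float) -> float:
    """Disk of radius v * t that a missile can cover in the given time."""
    radius_km = mach * SOUND_MS * minutes * 60 / 1000
    return math.pi * radius_km ** 2

ratio = reachable_area_km2(5.0, 15) / reachable_area_km2(0.7, 15)
print(f"Mach 5 covers {ratio:.0f}x the area of a Mach 0.7 missile")  # ~51
```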

So the question isn’t why the great powers are pursuing hypersonic arms, but why they are doing so now. Quick answer: They are once again locked in an arms race.

The wider world first heard of this type of weaponry in March 2018, when Russian president Vladimir Putin gave a speech describing his country’s plans for a nuclear-powered cruise missile that could fly around the world at blinding speed, then snake around hills and dales to a target. His bold assertions have been questioned, particularly the part about nuclear power. Even so, a year later a nuclear accident killed seven people near a testing range off the northern coast of Russia, and U.S. intelligence officials speculated that it involved hypersonic experiments.

The nature of that accident is still shrouded in mystery, but it’s clear there’s been a huge increase in the research effort in hypersonics. Here’s a roundup of what the superpowers of the 21st century are doing to pursue what is, in fact, an old concept.

The hypersonic missiles in use or in testing in China and Russia can apparently carry either conventional warheads, aimed at ships and other small military targets, or nuclear ones, aimed at cities and government centers. These ship killers could deprive the United States of its preeminence at sea, which is more than enough reason for China, for instance, to develop hypersonics. But a nuclear-armed version that leaves the defender too little time to launch a retaliatory strike would do even more to shift the balance of power, because it would dismantle the painstakingly constructed system of deterrence known as mutually assured destruction, or by the jocular acronym MAD.

“The nuclear side is very destabilizing, which is why the Russians are going after it,” says Christopher Combs, a professor of mechanical engineering at the University of Texas at San Antonio. “But on the U.S. side we see no need for that, so we’re going conventional.”

That is indeed the official U.S. policy. But in August, some months after Combs spoke with IEEE Spectrum, an Aviation Week article pointed out that an Air Force agency charged with nuclear weapons requested that companies submit ideas for a “thermal protection system that can support [a] hypersonic glide to ICBM ranges.” Soon after that, the request was hastily taken down, and the U.S. Air Force felt compelled to restate its policy not to pursue nuclear-capable hypersonic weapons.

Today’s forays into hypersonic research have deep roots, reaching back to the late 1950s, in both the United States and the Soviet Union. Although this work continued for decades, in 1994, a few years after the Cold War ended with the dissolution of the Soviet Union, the United States pulled the plug on research into hypersonic flight, including its last and biggest program, the Rockwell X-30. Nicknamed the “Orient Express,” the X-30 was to have been a crewed transport that would top out at 25 times the speed of sound, Mach 25—enough to take off from Washington, D.C., and land in Tokyo 2 hours later. Russia also discontinued research in this area during the 1990s, when its economy was in tatters.

Today’s test vehicles just pick up where the old ones left off, explains Alexander Fedorov, a professor at Moscow Institute of Physics and Technology and an expert on hypersonic flow at the boundary layer, which is right next to the vehicle’s skin. “What’s flying now is just a demonstration of technology—the science is 30 years old,” he says.

Fedorov has lectured in the United States; he even helps U.S. graduate students with their research. He laments how the arms race has stifled international cooperation, adding that he himself has “zero knowledge” about the military project Putin touted two years ago. “But I know that people are working on it,” he adds.

In the new race, Fedorov says, Russia has experience without much money, China has money without much experience, and the United States has both, although it revived its efforts later than did Russia or China and is now playing catch-up. For fiscal 2021, U.S. research agencies have budgeted US $3.2 billion for all hypersonic weapons research, up from $2.6 billion in the previous year.

Other programs are under way in India and Australia; even Israel and Iran are in the game, if on the sidelines. But Fedorov suggests that the Chinese are the ones to watch: They used to talk at international meetings, he says, but now they mostly just listen, which is what you’d expect if they had started working on truly new ideas—of which, he reiterates, there are very few on display. All the competing powers have shown vehicles that are “very conservative,” he says.

One good reason for the rarity of radical designs is the enormous expense of the research. Engineers can learn only so much by running tests on the ground, using computational fluid-flow models and hypersonic wind tunnels, which themselves cost a pretty penny (and simulate only some limited aspects of hypersonic flight). Engineers really need to fly their creations, and usually when they do, they use up the test vehicle. That makes design iteration very costly.

It’s no wonder hypersonic prototypes fail so often. In mere supersonic flight, passing Mach 1 is a clear-cut thing: The plane outdistances the sound waves that it imparts to the air to produce a shock wave, which forms the familiar two-beat sonic boom. But as the vehicle exceeds Mach 5, the density of the air just behind the shock wave diminishes, allowing the wave to nestle along the surface of the vehicle. That in-your-face layer poses no aerodynamic problems, and it could even be an advantage, when it’s smooth. But it can become turbulent in a heartbeat.

“Predicting when it’s going turbulent is hard,” says Wheaton, of Johns Hopkins APL. “And it’s important because when it does, heating goes up, and it affects how control surfaces can steer. Also, there’s more drag.”

The pioneers of hypersonic flight learned about turbulence the hard way. On one of its many flights, in 1967, the U.S. Air Force’s X-15 experimental hypersonic plane went into a spin, killing the pilot, Michael J. Adams. The right stuff, indeed.

Hypersonic missiles come in two varieties. The first kind, launched into space on the tip of a ballistic missile, punches down into the atmosphere, then uses momentum to maneuver. Such “boost-glide” missiles have no jet engines and thus need no air inlets, so it’s easy to make them symmetrical, typically a tube with a cone-shape tip. Every part of the skin gets equal exposure to the air, which at these speeds breaks down into a plume of plasma, like the one that puts astronauts in radio silence during reentry.

Boost-glide missiles are now operational. China appears to have deployed the first one, called the Dongfeng-17, a ballistic missile that carries glide vehicles. Some of those gliders are billed as capable of knocking out U.S. Navy supercarriers. For such a mission the glider need not pack a nuclear or even a conventional warhead; it can instead rely on its enormous kinetic energy to destroy its target. And there's nothing that any country can now do to defend against it.

“Those things are going so fast, you’re not going to get it,” General Mark Milley, chairman of the Joint Chiefs of Staff, said in March, in testimony before Congress.

You might think that you give up the element of surprise by starting with a ballistic trajectory. But not completely. Once the hypersonic missile comes out of its dive to fly horizontally, it becomes invisible to sparsely spaced radars, particularly the handful based in the Pacific Ocean. And that flat flight path can swerve a lot. That’s not because of any AI-managed magic—the vehicle just follows a randomized, preprogrammed set of turns. But the effect on those playing defense is the same: The pizza arrives before they can find their wallets.

The second kind of hypersonic missile gets the bulk of its impulse from a jet engine that inhales air really fast, whirls it together with fuel, and burns the mixture in the instant that it tarries in the combustion chamber before blowing out the back as exhaust. Because these engines don't need compressors but simply use the force of forward movement to ram air inside, and because that combustion proceeds supersonically, they are called supersonic-combustion ramjets—scramjets, for short.

One advantage the scramjet has over the boost-glide missile is its ability to stay below radar and continue to maneuver over great distances, all the way to its target. And because it never enters outer space, it doesn’t need to ride a rocket booster, although it does need some powerful helper to get it up to the speed at which first a ramjet, and then a scramjet, can work.

Another advantage of the scramjet is that it can, in principle, be applied for civilian purposes, moving people or packages that absolutely, positively have to be there quickly. The Europeans have such a project. So do the Chinese, and Boeing has shown a concept. Everyone talks up this possibility because, frankly, it’s the only peaceable talking point there is for hypersonics. Don’t forget, though, that supersonic commercial flight happened long ago, made no money, and ended—and supersonic flight is way easier.

The scramjet has one big disadvantage: It's a lot harder technically. Any hypersonic vehicle must fend off the rapidly moving air outside, which can heat the leading edges to as high as 3,000 °C. But that heat and stress are nothing like the hellfire inside a scramjet engine. There, the heat cannot radiate away, it's hard to keep the flame lit, and the insides can come apart second by second, affecting airflow and stability. Five minutes is a long time in this business.

That’s why scramjets, though conceived in the 1950s, still remain a work in progress. In the early 2000s, NASA’s X-43 used scramjets for about 10 seconds in flight. In 2013, Boeing’s X-51 Waverider flew at hypersonic speed for 210 seconds while under scramjet power.

Tests on the ground have fared better. In May, workers at the Beijing Academy of Sciences ran a scramjet for 10 minutes, according to a report in the South China Morning Post. Two years earlier, the leader of the project, Fan Xuejun, told the same newspaper that a factory was being built to construct a variety of scramjets, some supposedly for civilian application. One engine would use a combined cycle, with a turbojet to get off the ground, a ramjet to accelerate to near-hypersonic speed, a scramjet to blow past Mach 5, and maybe even a rocket to top off the thrust. That’s a lot of moving parts—and an ambition worthy of Elon Musk. But even Musk might hesitate to follow Putin’s proposal to use a nuclear reactor for energy.

The cost of developing a scramjet capability is only one part of the economic challenge. The other is making the engine cheap enough to deploy and use in a routine way. To do that, you need fuel you can rely on. Early researchers worked with a class of highly energetic fuels that would react on contact with air, like triethylaluminum.

“It’s a fantastic scramjet engine fuel, but very toxic, a bit like the hydrazine fuels used in rockets nowadays, and this became an inhibitor,” says David Van Wie, of Johns Hopkins APL, explaining why triethylaluminum was dropped from serious consideration.

Next up was liquid hydrogen, which is also very reactive. But it needs elaborate cooling. Worse, it packs a rather low amount of energy into a given volume, and as a cryogenic fuel it is inconvenient to store and transport. It has been and still is used in experimental missiles, such as the X-43.

Today’s choice for practical missiles is hydrocarbons, of the same ilk as jet fuel, but fancier. The Chinese scramjet that burned for 10 minutes—like others on the drawing board around the world—burns hydrocarbons. Here the problem lies in breaking down the hydrocarbon’s long molecular chains fast so the shards can bind with oxygen in the split second when the substances meet and mate. And a split second isn’t enough—you have to do it continuously, one split second after another, “like keeping a match lit in a hurricane,” in the oft-quoted words of NASA spokesman Gary Creech, back in 2004.

Scramjet designs try to protect the flame by shaping the inflow geometry to create an eddy, forming a calm zone not unlike the eye of a hurricane. Flameouts are particularly worrisome when the missile starts jinking about, thus disrupting the airflow. “It’s the ‘unstart’ phenomenon, where the shock wave at the air inlets stops the engine, and the vehicle will be lost,” says John D. Schmisseur, a researcher at the University of Tennessee Space Institute, in Tullahoma. And you really only get to meet such gremlins in actual flight, he adds.

There are other problems besides flameout that arise when you’re inhaling a tornado. One expert, who requested anonymity, puts it this way: “If you’re ingesting air, it’s no longer air; it’s a complex mix of ionized atmosphere,” he says. “There’s no water anymore; it’s all hydrogen and oxygen, and the nitrogen is to some fraction elemental, not molecular. So combustion isn’t air and fuel—it’s whatever you’re taking in, whatever junk—which means chemistry at the inlet matters.”

Simulating the chemistry is what makes hypersonic wind-tunnel tests problematic. It’s fairly simple to see how an airfoil responds aerodynamically to Mach 5—just cool the air so that the speed of sound drops, giving a higher Mach number for a given airspeed. But blowing cold air tells you only a small part of the story because it heads off all the chemistry you want to study. True, you can instead run your wind tunnel fast, hot, and dense—at “high enthalpy,” to use the term of art—but it’s hard to keep that maelstrom going for more than a few milliseconds.
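
The trick works because the speed of sound in an ideal gas scales with the square root of temperature. Here is a quick sanity check of that arithmetic, with illustrative numbers rather than any particular facility's settings:

```python
# Speed of sound in an ideal gas: a = sqrt(gamma * R * T).
# Cooling the flow lowers a, so a fixed airspeed yields a higher Mach number.
import math

GAMMA = 1.4    # ratio of specific heats for air
R_AIR = 287.0  # specific gas constant for air, J/(kg*K)

def speed_of_sound(temp_k: float) -> float:
    return math.sqrt(GAMMA * R_AIR * temp_k)

airspeed = 1000.0  # m/s, an assumed tunnel flow speed
for temp_k in (288.0, 100.0):  # room temperature vs. a chilled flow
    mach = airspeed / speed_of_sound(temp_k)
    print(f"T = {temp_k:5.1f} K -> a = {speed_of_sound(temp_k):5.1f} m/s, Mach {mach:.1f}")
# At 288 K the flow is about Mach 2.9; chilled to 100 K it is about Mach 5.
```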

“Get the airspeed high enough to start up the chemistry and the reactions sap the energy,” says Mark Gragston, an aerospace expert who’s also at the UT Space Institute. Getting access to such monster machines isn’t easy, either. “At Arnold Air Force Base, across the street from me, the Air Force does high-enthalpy wind-tunnel experiments,” he says. “They’re booked up three years in advance.”

Other countries have more of the necessary wind tunnels; even India has about a dozen. Right now, the United States is spending loads of money building these machines in an effort to catch up with Russia and China. You could say there is a wind-tunnel gap—one more reason U.S. researchers are keen for test flights.

Another thing about cooling the air: It does wonders for any combustion engine, even the kind that pushes pistons. Reaction Engines, in Abingdon, England, appears to be the first to try to apply this phenomenon in flight, with a special precooling unit. In its less-ambitious scheme, the precooler sits in front of the air inlet of a standard turbojet, adding power and efficiency. In its more-ambitious concept, called SABRE (Synergetic Air Breathing Rocket Engine), the engine operates in combined mode: It takes off as a turbojet assisted by the precooler and accelerates until a ramjet can switch on, adding enough thrust to reach (but not exceed) Mach 5. Then, as the vehicle climbs and the atmosphere thins out, the engine switches to pure rocket mode, finally launching a payload into orbit.
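
As a toy illustration of that mode sequencing (the thresholds below are guesses for the sake of the sketch, not Reaction Engines' actual switching logic):

```python
# Hypothetical combined-cycle mode selection for a SABRE-like engine.
# Thresholds are illustrative assumptions, not published specifications.
def engine_mode(mach: float, altitude_km: float) -> str:
    if altitude_km > 28.0:   # air too thin to breathe; burn onboard oxidizer
        return "rocket"
    if mach < 1.5:           # takeoff and low-speed climb
        return "precooled turbojet"
    return "ramjet"          # accelerates toward, but not past, about Mach 5

for mach, alt in [(0.3, 1.0), (3.0, 20.0), (5.0, 35.0)]:
    print(f"Mach {mach}, {alt} km -> {engine_mode(mach, alt)}")
```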

In principle, a precooler could work in a scramjet. But if anyone’s trying that, they’re not talking about it.

Fast forward five years and boost-glide missiles will no doubt be deployed in the service of multiple countries. Jump ahead another 15 or 20 years, and the world’s superpowers will have scramjet missiles.

So what? Won’t these things always play second fiddle to ballistic missiles? And won’t the defense also have its say, by unveiling superfast antimissiles and Buck Rogers–style directed-energy weapons?

Perhaps not. The defense always has the harder job. As President John F. Kennedy noted in an interview way back in 1962, when talking about antiballistic missile defense, what you are trying to do is shoot a bullet with a bullet. And, he added, you have to shoot down not just one but many, including a bodyguard of decoys.

Today there are indeed antimissile defenses that can protect particular targets against ballistic missiles, at least when they’re not being fired in massive salvos. But you can’t defend everything, which is why the great powers still count on deterrence through mutually assured destruction. By that logic, if you can detect cruise missiles soon enough, you can at least make those who launched them wish they hadn’t.

For that to work, we’ll need better eyes in the skies. In the United States, the military wants around $100 million for research on low-orbit space sensors to detect low-flying hypersonic missiles, Aviation Week reported in 2019.

Hardly any of the recent advances in hypersonic flight result from new scientific discoveries; almost all of them stem from changes in political will. Powers that challenge the international status quo—China and Russia—have found the resources and the will to shape the arms race to their benefit. Powers that benefit from the status quo—the United States, above all—are responding in kind. Politicians fired the starting pistol, and the technologists are gamely leaping forward.

And the point? There is no point. It’s an arms race.

This article appears in the December 2020 print issue as “Going Hypersonic.”

How JPL’s Team CoSTAR Won the DARPA SubT Challenge: Urban Circuit Systems Track

Post Syndicated from Edward Terry original https://spectrum.ieee.org/automaton/aerospace/robotic-exploration/nasa-jpl-team-costar-darpa-subt-urban-circuit-systems-track

In 2017, a team at NASA's Jet Propulsion Laboratory in Pasadena, Calif., was prototyping small autonomous robots capable of exploring caves and subsurface voids on the Moon, Mars, and Titan, Saturn's largest moon. Our goal was to develop new technologies to help answer one of humanity's most significant questions: Is there, or has there been, life beyond Earth?

The more we study the surfaces of planetary bodies in our solar system, the more we are compelled to voyage underground to seek answers to this question. Planetary subsurface voids are not only among the most likely places to find signs of life, past and present, but, thanks to the shelter they provide, are also leading candidates for future human habitation. While we were working on various technologies for cave exploration at JPL, DARPA launched the latest in its series of Grand Challenges: the Subterranean Challenge, or SubT. Compared with earlier events, which focused on on-road driving and on humanoid robots in predefined disaster-relief scenarios, SubT focuses on the exploration of unknown and extreme underground environments. Even though SubT is about exploring such environments on Earth, we can use the competition as an analog to help us learn how to explore unknown environments on other planetary bodies.

Spotting Mystery Methane Leaks From Space


Post Syndicated from Jason McKeever original https://spectrum.ieee.org/aerospace/satellites/spotting-mystery-methane-leaks-from-space

Something new happened in space in January 2019. For the first time, a previously unknown leak of natural gas was spotted from orbit by a microsatellite, and then, because of that detection, plugged.

The microsatellite, Claire, had been flying since 2016. That day, Claire was monitoring the output of a mud volcano in Central Asia when it spied a plume of methane where none should be. Our team at GHGSat, in Montreal, instructed the spacecraft to pan over and zero in on the origin of the plume, which turned out to be a facility in an oil and gas field in Turkmenistan.

The need to track down methane leaks has never been more important. In the slow-motion calamity that is climate change, methane emissions get less public attention than the carbon dioxide coming from smokestacks and tailpipes. But methane—which mostly comes from fossil-fuel production but also from livestock farming and other sources—has an outsize impact. Molecule for molecule, methane traps 84 times as much heat in the atmosphere as carbon dioxide does, and it accounts for about a quarter of the rise in atmospheric temperatures. Worse, research from earlier this year shows that we might be enormously underestimating the amount released—by as much as 25 to 40 percent.

Satellites have been able to see greenhouse gases like methane and carbon dioxide from space for nearly 20 years, but it took a confluence of need and technological innovation to make such observations practical and accurate enough to do them for profit. Through some clever engineering and a more focused goal, our company has managed to build a 15-kilogram microsatellite and perform feats of detection that previously weren’t possible, even with a US $100 million, 1,000-kg spacecraft. Those scientific behemoths do their job admirably, but they view things on a kilometer scale. Claire can resolve methane emissions down to tens of meters. So a polluter (or anybody else) can determine not just what gas field is involved but which well in that field.

Since launching Claire, our first microsatellite, we’ve improved on both the core technology—a miniaturized version of an instrument known as a wide-angle Fabry-Pérot imaging spectrometer—and the spacecraft itself. Our second methane-seeking satellite, dubbed Iris, launched this past September, and a third is scheduled to go up before the end of the year. When we’re done, there will be nowhere on Earth for methane leaks to hide.

The creation of Claire and its siblings was driven by a business case and a technology challenge. The business part was born in mid-2011, when Quebec (GHGSat’s home province) and California each announced that they would implement a market-based “cap and trade” system. The systems would attribute a value to each ton of carbon emitted by industrial sites. Major emitters would be allotted a certain number of tons of carbon—or its equivalent in methane and other greenhouse gases—that they could release into the atmosphere each year. Those that needed to emit more could then purchase emissions credits from those that needed less. Over time, governments could shrink the total allotment to begin to reduce the drivers of climate change.

Even in 2011, there was a wider, multibillion-dollar market for carbon emissions, which was growing steadily as more jurisdictions imposed taxes or implemented carbon-trading mechanisms. By 2019, these carbon markets covered 22 percent of global emissions and earned governments $45 billion, according to the World Bank’s State and Trends of Carbon Pricing 2020.

Despite those billions, it’s methane, not carbon dioxide, that has become the focus of our systems. One reason is technological—our original instrument was better tuned for methane. But the business reason is the simpler one: Methane has value whether there’s a greenhouse-gas trading system or not.

Markets for greenhouse gases motivate the operators of industrial sites to better measure their emissions so they can control and ultimately reduce them. Existing, mostly ground-based methods using systems like flux chambers, eddy covariance towers, and optical gas imaging were fairly expensive, of limited accuracy, and varied as to their geographic availability. Our company’s bet was that industrial operators would flock to a single, less expensive, more precise solution that could spot greenhouse-gas emissions from individual industrial facilities anywhere in the world.

Once we’d decided on our business plan, the only question was: Could we do it?

One part of the question had already been answered, to a degree, by pioneering space missions such as Europe's Envisat (which operated from 2002 to 2012) and Japan's GOSat (launched in 2009). These satellites measure surface-level trace gases using spectrometers that collect sunlight scattering off Earth. The spectrometers break down the incoming light by wavelength. Molecules in the light's path will absorb a certain pattern of wavelengths, leaving dark bands in the spectrum. The greater the concentration of those molecules, the darker the bands. This method can measure methane concentrations from orbit with a precision that's better than 1 percent of background levels.
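
In its simplest form, ignoring scattering and instrument effects, that dimming follows the textbook Beer-Lambert law:

$$I(\lambda) = I_0(\lambda)\, e^{-\sigma(\lambda)\, N\, L}$$

Here $I_0$ is the light entering the gas, $\sigma(\lambda)$ is the molecule's wavelength-dependent absorption cross section, $N$ is its number density, and $L$ is the path length. The product $NL$, the column density, is what a retrieval ultimately infers from the depth of the dark bands.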

While those satellites proved the concept of methane tracking, their technology was far from what we needed. For one thing, the instruments are huge. The spectrometer portion of Envisat, called SCIAMACHY (SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY), contained nearly 200 kg of complex optics; the entire spacecraft carried eight other scientific instruments and weighed 8.2 metric tons. GOSat, which is dedicated to greenhouse-gas sensing, weighs 1.75 metric tons.

Furthermore, these systems were designed to measure gas concentrations across the whole planet, quickly and repeatedly, in order to inform global climate modeling. Their instruments scan huge swaths of land and then average greenhouse-gas levels over tens or hundreds of square kilometers. And that is far too coarse to pinpoint an industrial site responsible for rogue emissions.

To achieve our goals, we needed to design something that was the first of its kind—an orbiting hyperspectral imager with spatial resolution in the tens of meters. And to make it affordable enough to launch, we had to fit it in a 20-by-20-by-20-centimeter package.

The most critical enabling technology to meet those constraints was our spectrometer—the wide-angle Fabry-Pérot etalon (WAF-P). (An etalon is an interferometer made from two partially reflective plates.) To help you understand what that is, we’ve first got to explain a more common type of spectrometer and how it works in a hyperspectral imaging system.

Hyperspectral imaging detects a wide range of wavelengths, some of which, of course, are beyond the visible. To achieve such detection, you need both a spectrometer and an imager.

The spectrometers in SCIAMACHY are based on diffraction gratings. A diffraction grating disperses the incoming light as a function of its wavelength—just as a prism spreads out the spectrum of white light into a rainbow. In space-based hyperspectral imaging systems, one dimension of the imager is used for spectral dispersion, and the other is used for spatial imaging. By imaging a narrow slit of a scene at the correct orientation, you get a spectrum at each point along that thin strip of land. As the spacecraft travels, sequential strips can be imaged to form a two-dimensional array of points, each of which has a full spectrum associated with it.
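
To make the geometry concrete, here is a minimal sketch of how such a pushbroom imager assembles its data cube. The dimensions and function are invented for illustration and do not reflect SCIAMACHY's actual formats:

```python
# Pushbroom hyperspectral imaging: each detector frame images one narrow
# ground strip, with the slit direction giving spatial position and the
# dispersion direction giving wavelength. Stacking frames as the
# spacecraft advances yields a cube with a full spectrum per ground point.
import numpy as np

ACROSS_TRACK = 512  # pixels along the slit (spatial axis)
N_BANDS = 256       # pixels along the dispersion direction (spectral axis)
N_FRAMES = 1000     # strips recorded as the spacecraft moves along track

rng = np.random.default_rng(42)

def read_detector_frame(frame_index: int) -> np.ndarray:
    """Stand-in for one (spatial x spectral) detector readout."""
    return rng.random((ACROSS_TRACK, N_BANDS))

cube = np.stack([read_detector_frame(i) for i in range(N_FRAMES)], axis=0)
print(cube.shape)             # (1000, 512, 256)
spectrum = cube[500, 256, :]  # the full spectrum of one spot on the ground
```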

If the incoming light has passed through a gas—say, Earth’s atmosphere—in a region tainted with methane, certain bands in the infrared part of that spectrum should be dimmer than otherwise in a pattern characteristic of that chemical.

Such a spectral-imaging system works well, but making it compact is challenging for several reasons. One challenge is the need to minimize optical aberrations to achieve a sharp image of ground features and emission plumes. However, in remote sensing, the signal strength (and hence signal-to-noise ratio) is driven by the aperture size, and the larger this is, the more difficult it is to minimize aberrations. Adding a dispersive grating to the system leads to additional complexity in the optical system.

A Fabry-Pérot etalon can be much more compact without the need for a complex imaging system, despite certain surmountable drawbacks. It is essentially two partially mirrored pieces of glass held very close together to form a reflective cavity. Imagine a beam of light of a certain wavelength entering the cavity at a slight angle through one of the mirrors. A fraction of that beam would zip across the cavity, squeak straight through the other mirror, and continue on to a lens that focuses it onto a pixel on an imager placed a short distance away. The rest of that beam of light would bounce back to the front mirror and then across to the back mirror. Again, a small fraction would pass through, the rest would continue to bounce between the mirrors, and the process would repeat. All that bouncing around adds distance to the light’s paths toward the pixel. If the light’s angle and its wavelength obey a particular relationship to the distance between the mirrors, all that light will constructively interfere with itself. Where that relation holds, a set of bright concentric rings forms. Different wavelengths and different angles would produce a different set of rings.
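
The textbook condition for that constructive interference (a standard optics result, not anything specific to our design) is

$$2\, n\, d \cos\theta = m\,\lambda, \qquad m = 1, 2, 3, \ldots$$

where $d$ is the gap between the mirrors, $n$ is the refractive index of whatever fills the gap, $\theta$ is the ray angle inside the cavity, and $m$ is the interference order. For a fixed gap, each wavelength is transmitted only at particular angles; because the lens maps angle to radial position on the imager (roughly $r \approx f\theta$ for small angles, with $f$ the focal length), the transmitted light lands in concentric rings.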

In an imaging system with a Fabry-Pérot etalon like the ones in our satellites, the radius of the ring on the imager is roughly proportional to the ray angle. What this means for our system is that the etalon acts as an angle-dependent filter. So rather than dispersing the light by wavelength, we filter the light to specific wavelengths, depending on the light’s radial position within the scene. Since we’re looking at light transmitted through the atmosphere, we end up with dark rings at specific radii corresponding to molecular absorption lines.

The etalon can be miniaturized more easily than a diffraction-grating spectrometer, because the spectral discrimination arises from interference that happens within a very small gap of tens to hundreds of micrometers; no large path lengths or beam separation is required. Furthermore, since the etalon consists of substrates that are parallel to one another, it doesn’t add significantly to aberrations, so you can use relatively straightforward optical-design techniques to obtain sufficient spatial resolution.

However, there are complications associated with the WAF-P imaging spectrometer. For example, the imager behind the etalon picks up both the image of the scene (where the gas well is) and the interference pattern (the methane spectrum). That is, the spectral rings are embedded in—and corrupted by—the actual image of the patch of Earth the satellite is pointing at. So, from a single camera frame, you can’t distinguish variability in how much light reflects off the surface from changes in the amount of greenhouse gases in the atmosphere. Separating spatial and spectral information, so that we can pinpoint the origin of a methane plume, took some innovation.

The computational process used to extract gas concentrations from spectral measurements is called a retrieval. The first step in getting this to work for the WAF-P was characterizing the instrument properly before launch. That produces a detailed model that can help predict precisely the spectral response of the system for each pixel.

But that’s just the beginning. Separating the etalon’s mixing of spectral and spatial information took some algorithmic magic. We overcame this issue by designing a protocol that captures a sequence of 200 overlapping images as the satellite flies over a site. At our satellite’s orbit, that means maximizing the time we have to acquire images by continuously adjusting the satellite’s orientation. In other words, we have the satellite stare at the site as it passes by, like a rubbernecking driver on a highway passing a car wreck.

The next step in the retrieval procedure is to align the images, basically tracking all the ground locations within the scene through the sequence of images. This gives us a collection of up to 200 readings where a feature, say, a leaking gas well, passes across the complete interference pattern. In effect, we measure the same spot on Earth at decreasing infrared wavelengths as that spot moves outward from the center of the image. If the methane concentration is anomalously high, this leads to small but predictable changes in signal level at specific positions on the image. Our retrievals software then compares these changes to its internal model of the system's spectral response to extract methane levels in parts per million.
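
A heavily simplified sketch of that last step: fit the sequence of samples for one ground point against a forward model of the spectral response. The forward model below and all of its numbers are hypothetical stand-ins, not our production software:

```python
# Toy retrieval: one ground point has been sampled ~200 times at different
# radii (hence different filtered wavelengths). Fit the observed sequence
# against a forward model to estimate the methane level. Illustrative only.
import numpy as np
from scipy.optimize import least_squares

def forward_model(ch4_ppm: float, radii: np.ndarray) -> np.ndarray:
    """Predicted relative signal vs. ring radius for a given CH4 level
    (stand-in for the per-pixel spectral-response model built before launch)."""
    absorption = 0.002 * ch4_ppm * np.exp(-((radii - 0.6) / 0.1) ** 2)
    return 1.0 - absorption

def retrieve_ch4(observed: np.ndarray, radii: np.ndarray) -> float:
    """Least-squares fit of the forward model to one point's sample sequence."""
    residuals = lambda x: forward_model(x[0], radii) - observed
    return least_squares(residuals, x0=[2.0]).x[0]

radii = np.linspace(0.0, 1.0, 200)   # one point tracked across the rings
observed = forward_model(2.5, radii) + np.random.normal(0, 1e-3, radii.size)
print(f"retrieved CH4 ~ {retrieve_ch4(observed, radii):.2f} ppm")
```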

At this point, the WAF-P’s drawbacks become an advantage. Some other satellites use separate instruments to visualize the ground and sense the methane or CO2 spectra. They then have to realign those two. Our system acquires both at once, so the gas plume automatically aligns with its point of origin down to the level of tens of meters. Then there’s the advantage of high spatial resolution. Other systems, such as Tropomi (TROPOspheric Monitoring Instrument, launched in 2017), must average methane density across a 7-kilometer-wide pixel. The peak concentration of a plume that Claire could spot would be so severely diluted by Tropomi’s resolution that it would seem only 1/200th as strong. So high-spatial-resolution systems like Claire can detect weaker emitters, not just pinpoint their location.
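
That 1/200 figure is essentially area averaging. A back-of-the-envelope check, assuming a plume that fills a patch roughly 500 meters on a side:

```python
# A concentrated plume averaged into a 7-km-wide pixel is spread over
# roughly 200x the area. The 500 m plume scale is an assumed value.
tropomi_pixel_m = 7000.0
plume_scale_m = 500.0
dilution = (tropomi_pixel_m / plume_scale_m) ** 2
print(dilution)  # 196.0, i.e., the plume appears ~1/200th as strong
```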

Just handing a customer an image of their methane plume on a particular day is useful, but it’s not a complete picture. For weaker emitters, measurement noise can make it difficult to detect methane point sources from a single observation. But temporal averaging of multiple observations using our analytics tools reduces the noise: Even with a single satellite we can make 25 or more observations of a site per year, cloud cover permitting.
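
The gain from averaging follows the standard statistics of independent noise: the error of the mean shrinks with the square root of the number of observations. A quick synthetic demonstration (the numbers are invented):

```python
# With independent noise, averaging N visits cuts the error ~sqrt(N)-fold,
# so 25 observations reduce noise about 5x. Synthetic illustration only.
import numpy as np

rng = np.random.default_rng(0)
true_level, sigma, n_obs = 2.0, 0.5, 25
trials = true_level + rng.normal(0.0, sigma, size=(10_000, n_obs))
print(trials[:, 0].std())         # single-visit error, ~0.5
print(trials.mean(axis=1).std())  # averaged error, ~0.5/sqrt(25) = ~0.1
```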

Using that average, we then produce an estimation of the methane emission rate. The process takes snapshots of methane density measurements of the plume column and calculates how much methane must be leaking per hour to generate that kind of plume. Retrieving the emission rate requires knowledge of local wind conditions, because the excess methane density depends not only on the emission rate but also on how quickly the wind transports the emitted gas out of the area.
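
One standard way to turn those snapshots into a rate is the cross-sectional flux method: integrate the excess column mass along a transect perpendicular to the wind, then multiply by the wind speed. Whether this matches our production algorithm in detail is an assumption; the sketch below is illustrative:

```python
# Cross-sectional flux estimate: Q = wind_speed * integral(excess_column dy)
# across the plume, perpendicular to the wind. All values are illustrative.
import numpy as np

def emission_rate_kg_per_hour(excess_column_kg_m2: np.ndarray,
                              pixel_width_m: float,
                              wind_speed_m_s: float) -> float:
    # kg/m^2 summed across the transect, times pixel width, gives kg per
    # meter of along-wind distance; times wind speed gives kg per second.
    line_density_kg_m = excess_column_kg_m2.sum() * pixel_width_m
    return line_density_kg_m * wind_speed_m_s * 3600.0

transect = np.full(20, 2.0e-5)  # assumed excess CH4 column, kg/m^2, per pixel
print(f"{emission_rate_kg_per_hour(transect, 25.0, 4.0):.0f} kg/h")  # 144 kg/h
```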

We’ve learned a lot in the four years since Claire started its observations. And we’ve managed to put some of those lessons into practice in our next generation of microsatellites, of which Iris is the first. The biggest lesson is to focus on methane and leave carbon dioxide for later.

If methane is all we want to measure, we can adjust the design of the etalon so that it better measures methane’s corner of the infrared spectrum, instead of being broad enough to catch CO2’s as well. This, coupled with better optics that keep out extraneous light, should result in a 10-fold increase in methane sensitivity. So Iris and the satellites to follow will be able to spot smaller leaks than Claire can.

We also discovered that our next satellites would need better radiation shielding. Radiation in orbit is a particular problem for the satellite’s imaging chip. Before launching Claire, we’d done careful calculations of how much shielding it needed, which were then balanced with the increased cost of the shielding’s weight. Nevertheless, Claire’s imager has been losing pixels more quickly than expected. (Our software partially compensates for the loss.) So Iris and the rest of the next generation sport heavier radiation shields.

Another improvement involves data downloads. Claire has made about 6,000 observations in its first four years. The data is sent to Earth by radio as the satellite streaks past a single ground station in northern Canada. We don’t want future satellites to run into limits in the number of observations they make just because they don’t have enough time to download the data before their next appointment with a methane leak. So Iris is packed with more memory than Claire has, and the new microsatellite carries an experimental laser downlink in addition to its regular radio antenna. If all goes to plan, the laser should boost download speeds 1,000-fold, to 1 gigabit per second.

In its polar orbit, 500 kilometers above Earth, Claire passes over every part of the planet once every two weeks. With Iris, the frequency of coverage effectively doubles. And the addition in December of Hugo and three more microsatellites due to launch in 2021 will give us the ability to check in on any site on the planet almost daily—depending on cloud cover, of course.

With our microsatellites’ resolution and frequency, we should be able to spot the bigger methane leaks, which make up about 70 percent of emissions. Closing off the other 30 percent will require a closer look. For example, with densely grouped facilities in a shale gas region, it may not be possible to attribute a leak to a specific facility from space. And a sizable leak detectable by satellite might be an indicator of several smaller leaks. So we have developed an aircraft-mounted version of the WAF-P instrument that can scan a site with 1-meter resolution. The first such instrument took its test flights in late 2019 and is now in commercial use monitoring a shale oil and gas site in British Columbia. Within the next year we expect to deploy a second airplane-mounted instrument and expand that service to the rest of North America.

By providing our customers with fine-grained methane surveys, we’re allowing them to take the needed corrective action. Ultimately, these leaks are repaired by crews on the ground, but our approach aims to greatly reduce the need for in-person visits to facilities. And every source of fugitive emissions that is spotted and stopped represents a meaningful step toward mitigating climate change.

This article appears in the November 2020 print issue as “Microsatellites Spot Mystery Methane Leaks.”

About the Author

Jason McKeever, Dylan Jervis, and Mathias Strupler are with GHGSat, a remote-sensing company in Montreal. McKeever is the company’s science and systems lead, Jervis is a systems specialist, and Strupler is an optical systems specialist.