Tag Archives: Aerospace

No Antenna Could Survive Europa’s Brutal, Radioactive Environment—Until Now

Post Syndicated from Nacer E. Chahat original https://spectrum.ieee.org/aerospace/space-flight/no-antenna-could-survive-europas-brutal-radioactive-environment-until-now

Europa, one of Jupiter’s Galilean moons, has twice as much liquid water as Earth’s oceans, if not more. An ocean estimated to be anywhere from 40 to 100 miles (60 to 150 kilometers) deep spans the entire moon, locked beneath an icy surface over a dozen kilometers thick. The only direct evidence for this ocean is the plumes of water that occasionally erupt through cracks in the ice, jetting as high as 200 km above the surface.

The endless, sunless, roiling ocean of Europa might sound astoundingly bleak. Yet it’s one of the most promising candidates for finding extraterrestrial life. Designing a robotic lander that can survive such harsh conditions will require rethinking all of its systems to some extent, including arguably its most important: communications. After all, even if the rest of the lander works flawlessly, if the radio or antenna breaks, the lander is lost forever.

Ultimately, when NASA’s Jet Propulsion Laboratory (JPL), where I am a senior antenna engineer, began to seriously consider a Europa lander mission, we realized that the antenna was the limiting factor. The antenna needs to maintain a direct-to-Earth link across more than 550 million miles (900 million km) when Earth and Jupiter are at their point of greatest separation. The antenna must be radiation-hardened enough to survive an onslaught of ionizing particles from Jupiter, and it cannot be so heavy or so large that it would imperil the lander during takeoff and landing. One colleague, when we laid out the challenge in front of us, called it impossible. We built such an antenna anyway—and although it was designed for Europa, it is a revolutionary enough design that we’re already successfully implementing it in future missions for other destinations in the solar system.

Currently, the only planned mission to Europa is the Clipper orbiter, a NASA mission that will study the moon’s chemistry and geology and will likely launch in 2024. Clipper will also conduct reconnaissance for a potential later mission to put a lander on Europa. At this time, any such lander is conceptual. NASA has still funded a Europa lander concept, however, because there are crucial new technologies that we need to develop for any successful mission on the icy world. Europa is unlike anywhere else we’ve attempted to land before.

For context, so far the only lander to explore the outer solar system is the European Space Agency’s Huygens lander. It successfully descended to Saturn’s moon Titan in 2005 after being carried by the Cassini orbiter. Much of our frame of reference for designing landers—and their antennas—comes from Mars landers.

Traditionally, landers (and rovers) designed for Mars missions rely on relay orbiters with high data rates to get scientific data back to Earth in a timely manner. These orbiters, such as the Mars Reconnaissance Orbiter and Mars Odyssey, have large, parabolic antennas that use considerable power, on the order of 100 watts, to communicate with Earth. While the Perseverance and Curiosity rovers also have direct-to-Earth antennas, they are small, use less power (about 25 W), and are not very efficient. These antennas are mostly used for transmitting the rover’s status and other low-data updates. These existing direct-to-Earth antennas simply aren’t up to the task of communicating all the way from Europa.

Additionally, Europa, unlike Mars, has virtually no atmosphere, so landers can’t use parachutes or air resistance to slow down. Instead, the lander will depend entirely on rockets to brake and land safely. This necessity limits how big it can be—too heavy and it will require far too much fuel to both launch and land. A modestly sized 400-kilogram lander, for example, requires a rocket and fuel that together weigh between 10 and 15 tonnes. The lander then needs to survive six or seven years of deep space travel before finally landing and operating within the intense radiation produced by Jupiter’s powerful magnetic field.

We also can’t assume a Europa lander would have an orbiter overhead to relay signals, because adding an orbiter could very easily make the mission too expensive. Even if Clipper is miraculously still functional by the time a lander arrives, we won’t assume that will be the case, as the lander would arrive well after Clipper’s official end-of-mission date.

I’ve mentioned previously that the antenna will need to transmit signals up to 900 million km. As a general rule, less efficient antennas need a larger surface area to transmit farther. But as the lander won’t have an orbiter overhead with a large relay antenna, and it won’t be big enough itself for a large antenna, it needs a small antenna with a transmission efficiency of 80 percent or higher—much more efficient than most space-bound antennas.
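As a back-of-the-envelope illustration of what that distance does to a signal, here is a standard free-space (Friis-style) link budget in Python. The transmit power and both antenna gains below are illustrative assumptions, not the mission’s real numbers:

```python
import math

def received_power_dbw(p_tx_w, g_tx_dbi, g_rx_dbi, freq_hz, dist_m):
    """Friis free-space link budget: P_rx = P_tx + G_tx + G_rx - path loss."""
    wavelength = 3e8 / freq_hz
    fspl_db = 20 * math.log10(4 * math.pi * dist_m / wavelength)
    return 10 * math.log10(p_tx_w) + g_tx_dbi + g_rx_dbi - fspl_db

# Illustrative numbers only: 100 W radiated at the 8.4 GHz downlink over
# 900 million km, a ~37 dBi lander antenna, and a large ground dish (~68 dBi).
p_rx = received_power_dbw(100, 37, 68, 8.4e9, 9.0e11)
print(f"received power: {p_rx:.1f} dBW")
```

Even with tens of dBi of gain on both ends, the roughly 290 dB of free-space path loss at that distance leaves only a whisper of a signal, which is why every percentage point of antenna efficiency matters.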

So, to reiterate the challenge: The antenna cannot be large, because then the lander will be too heavy. It cannot be inefficient for the same reason, because requiring more power would necessitate bulky power systems instead. And it needs to survive exposure to a brutal amount of radiation from Jupiter. This last point requires that the antenna must be mostly, if not entirely, made out of metal, because metals are more resistant to ionizing radiation.

The antenna we ultimately developed depends on a key innovation: The antenna is made up of circularly polarized, aluminum-only unit cells—more on this in a moment—that can each send and receive on X-band frequencies (specifically, 7.145 to 7.19 gigahertz for the uplink and 8.4 to 8.45 GHz for the downlink). The entire antenna is an array of these unit cells, 32 on a side or 1,024 in total. The antenna is 32.5 by 32.5 inches (82.5 by 82.5 centimeters), allowing it to fit on top of a modestly sized lander, and it can achieve a downlink rate to Earth of 33 kilobits per second at 80 percent efficiency.
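For a rough sanity check of those figures, the gain of an ideal square aperture follows G = η·4πA/λ². The sketch below plugs in the article’s dimensions and efficiency; treat the result as an estimate only, since the real array’s gain also depends on element patterns and losses not given here:

```python
import math

def aperture_gain_dbi(side_m, efficiency, freq_hz):
    """Gain of a square aperture: G = efficiency * 4 * pi * A / lambda**2."""
    wavelength = 3e8 / freq_hz
    gain_linear = efficiency * 4 * math.pi * side_m ** 2 / wavelength ** 2
    return 10 * math.log10(gain_linear)

# The article's 82.5 cm square aperture at 80 percent efficiency,
# evaluated at the 8.4 GHz downlink band.
gain = aperture_gain_dbi(0.825, 0.80, 8.4e9)
print(f"{gain:.1f} dBi")
```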

Let’s take a closer look at the unit cells I mentioned, to better understand how this antenna does what it does. Circular polarization is commonly used for space communications. You might be more familiar with linear polarization, which is often used for terrestrial wireless signals; you can imagine such a signal propagating across a distance as a 2D sine wave that’s oriented, say, vertically or horizontally relative to the ground. Circular polarization instead propagates as a 3D helix. This helix pattern makes circular polarization useful for deep space communications because the helix’s larger “cross section” doesn’t require that the transmitter and receiver be as precisely aligned. As you can imagine, a superprecise alignment across 900 million km is all but impossible. Circular polarization has the added benefit of being less sensitive to Earth’s weather when it arrives. Rain, for example, causes linearly polarized signals to attenuate more quickly than circularly polarized ones.
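The difference between the two polarizations can be sketched numerically: a linearly polarized field periodically swings through zero along its one axis, while the two quadrature components of a circularly polarized field keep the total magnitude constant as the helix rotates. A minimal NumPy illustration:

```python
import numpy as np

# One cycle of the carrier. A linearly polarized field dips through zero
# along its axis; circular polarization's two quadrature components keep
# the field magnitude constant as the helix rotates.
t = np.linspace(0, 2 * np.pi, 1000)
linear = np.abs(np.cos(t))                 # dips to zero twice per cycle
circular = np.hypot(np.cos(t), np.sin(t))  # cos^2 + sin^2 = 1, always

print(linear.min())          # ~0: a misaligned receiver can sit in a null
print(np.ptp(circular))      # ~0: magnitude is orientation-independent
```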

Each unit cell, as mentioned, is entirely made of aluminum. Earlier antenna arrays that similarly use smaller component cells include dielectric materials like ceramic or glass to act as insulators. Unfortunately, dielectric materials are also vulnerable to Jupiter’s ionizing radiation. The radiation builds up a charge on the materials over time, and precisely because they’re insulators there’s nowhere for that charge to go—until it’s ultimately released in a hardware-damaging electrostatic discharge. So we can’t use them.

As mentioned before, metals are more resilient to ionizing radiation. The problem is they’re not insulators, and so an antenna constructed entirely out of metal is ­­still at risk of an electrostatic discharge damaging its components. We worked around this problem by designing each unit cell to be fed at a single point. The “feed” is the connection between an antenna and the radio’s transmitter and receiver. Typically, circularly polarized antennas require two perpendicular feeds to control the signal generation. But with a bit of careful engineering and the use of a type of automated optimization called a genetic algorithm, we developed a precisely shaped single feed that could get the job done. Meanwhile, a comparatively large metal post acts as a ground to protect each feed from electrostatic discharges.
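JPL’s actual optimizer isn’t described in detail here, but the flavor of a genetic algorithm—a population of candidate designs evolving through selection, crossover, and mutation—can be shown with a toy Python version that tunes three abstract “shape parameters” toward a made-up target:

```python
import random

random.seed(0)  # reproducible toy run

def genetic_optimize(fitness, n_params, pop_size=40, generations=60,
                     mutation_rate=0.1):
    """Toy genetic algorithm: keep the fittest half each generation, then
    refill the population with crossover children and occasional mutations."""
    pop = [[random.uniform(-1, 1) for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # crossover
            if random.random() < mutation_rate:
                child[random.randrange(n_params)] += random.gauss(0, 0.1)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Made-up objective: drive three abstract "shape parameters" to a target.
target = [0.5, -0.25, 0.1]
best = genetic_optimize(
    lambda p: -sum((x - t) ** 2 for x, t in zip(p, target)), 3)
print(best)
```

The real design problem scores candidate feed shapes by simulated electromagnetic performance rather than a simple distance-to-target, but the evolutionary loop has the same structure.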

The unit cells are placed in small 16-by-16 subarrays, four subarrays in total. Each of these subarrays is fed with something we call a suspended air stripline, in which the transmission line is suspended between two ground planes, turning the gap in between into a dielectric insulator. We can then safely transmit power through the stripline while still protecting the line from electric discharges that would build up on a dielectric like ceramic or glass. Additionally, suspended air striplines are low loss, which is perfect for the highly efficient antenna design we wanted.

Put together, the new antenna design accomplishes three things: It’s highly efficient, it can handle a large amount of power, and it’s not very sensitive to temperature fluctuations. Removing traditional dielectric materials in favor of air striplines and an aluminum-only design gives us high efficiency. It’s also a phased array, which means it uses a cluster of smaller antennas to create steerable, tightly focused signals. The nature of such an array is that each individual cell needs to handle only a fraction of the total transmission power. So while each individual cell can handle only a minuscule amount of power, each subarray can handle more than 6 kilowatts, which is plenty for the modest downlink rates I mentioned above. And finally, because the antenna is made of a single metal, it expands and contracts uniformly as the temperature changes. In fact, one of the reasons we picked aluminum is that the metal does not expand or contract much as temperatures change.
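The electronic steering works by applying a progressive phase shift across the elements. This toy uniform-linear-array factor (not the flight antenna’s actual beamforming network) shows the beam peak following the commanded steering angle with no mechanical motion:

```python
import numpy as np

def array_factor(n, spacing_wl, steer_deg, look_deg):
    """Normalized array factor of a uniform linear array of n elements.

    A progressive phase shift across the elements steers the main beam
    electronically, with no gimbal required."""
    k = 2 * np.pi                        # wavenumber, spacing in wavelengths
    m = np.arange(n)
    steer_phase = -k * spacing_wl * m * np.sin(np.radians(steer_deg))
    response = np.exp(1j * (k * spacing_wl * m * np.sin(np.radians(look_deg))
                            + steer_phase))
    return abs(response.sum()) / n       # 1.0 exactly on the main beam

# Steer a 32-element row (half-wavelength spacing) 20 degrees off boresight:
peak = array_factor(32, 0.5, 20, 20)    # looking where we steered
off = array_factor(32, 0.5, 20, 0)      # looking at boresight instead
print(peak, off)
```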

When I originally proposed this antenna concept to the Europa lander project, I was met with skepticism. Space exploration is typically a very risk-averse endeavor, for good reason—the missions are expensive, and a single mistake can end one prematurely. For this reason, new technologies may be dismissed in favor of tried-and-true methods. But this situation was different because without a new antenna design, there would never be a Europa mission. The rest of my team and I were given the green light to prove the antenna could work.

Designing, fabricating, and testing the antenna took only six months. To put that in context, the typical development cycle for a new space technology is measured in years. The results were outstanding. Our antenna achieved the 80 percent efficiency threshold on both the send and receive frequency bands, despite being smaller and lighter than other antennas. It also doesn’t require a delicate gimbal to point it toward Earth. Instead, the antenna’s subarrays act as a phased array capable of shaping the direction of the signal without reorienting the antenna.

To prove that our antenna could survive the mission, we subjected it to a battery of extreme environmental tests, including a handful of tests specific to Europa’s atypical environment.

One test is what we call thermal cycling. For this test, we place the antenna in a room called a thermal chamber and adjust the temperature over a large range—as low as –170 ℃ and as high as 150 ℃. We put the antenna through multiple temperature cycles, measuring its transmitting capabilities before, during, and after each cycle. The antenna passed this test without any issues.

The antenna also needed to demonstrate, like any piece of hardware that goes into space, resilience against vibrations. Rockets—and everything they’re carrying into space—shake intensely during launch, which means we need to be sure that anything that goes up doesn’t come apart on the trip. For the vibration test, we loaded the entire antenna onto a vibrating table. We used accelerometers at different locations on the antenna to determine if it was holding up or breaking apart under the vibrations. Over the course of the test, we ramped up the vibrations to the point where they approximate a launch.

Thermal cycling and vibration tests are standard for the hardware on any spacecraft, but as I mentioned, Europa’s challenging environment required a few additional nonstandard tests. For antennas, we typically run some tests in anechoic chambers. You may recognize anechoic chambers as those rooms with wedge-covered surfaces to absorb any signal reflections. An anechoic chamber makes it possible for us to determine the antenna’s signal propagation over extremely long distances by eliminating interference from local reflections. One way to think about it is that the anechoic chamber simulates a wide open space, so we can measure the signal’s propagation and extrapolate how it will look over a longer distance.

What made this particular anechoic chamber test interesting is that it was also conducted at ultralow temperatures. We couldn’t make the entire chamber that cold, so we instead placed the antenna in a sealed foam box. The foam is transparent to the antenna’s radio transmissions, so from the point of view of the actual test, it wasn’t there. But by connecting the foam box to a heat exchange plate filled with liquid nitrogen, we could lower the temperature inside it to –170 ℃. To our delight, we found that the antenna had robust long-range signal propagation even at that frigid temperature.

The last unusual test for this antenna was to bombard it with electrons in order to simulate Jupiter’s intense radiation. We used JPL’s Dynamitron electron accelerator to subject the antenna to the entire ionizing radiation dose the antenna would see during its lifetime in a shortened time frame. In other words, in the span of two days in the accelerator, the antenna was exposed to the same amount of radiation as it would be during the six- or seven-year trip to Europa, plus up to 40 days on the surface. Like the anechoic chamber testing, we also conducted this test at cryogenic temperatures that were as close to those of Europa’s surface conditions as possible.
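The acceleration factor implied by those figures is easy to estimate. The durations below are the article’s; the constant-dose-rate assumption is mine:

```python
# Durations are the article's figures; assuming a constant dose rate in the
# accelerator, the acceleration factor is simply the ratio of exposure times.
cruise_days = 6.5 * 365.25      # midpoint of the six-to-seven-year cruise
surface_days = 40               # up to 40 days operating on Europa
test_days = 2                   # time spent in the Dynamitron

factor = (cruise_days + surface_days) / test_days
print(f"~{factor:.0f}x accelerated dose rate")
```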

The reason for the electron bombardment test was our concern that Jupiter’s ionizing radiation would cause a dangerous electrostatic discharge at the antenna’s port, where it connects to the rest of the lander’s communications hardware. Theoretically, the danger of such a discharge grows as the antenna spends more time exposed to ionizing radiation. If a discharge happens, it could damage not just the antenna but also hardware deeper in the communications system and possibly elsewhere in the lander. Thankfully, we didn’t measure any discharges during our test, which confirms that the antenna can survive both the trip to and work on Europa.

We designed and tested this antenna for Europa, but we believe it can be used for missions elsewhere in the solar system. We’re already tweaking the design for the joint JPL/ESA Mars Sample Return mission that—as the name implies—will bring Martian rocks, soil, and atmospheric samples back to Earth. The mission is currently slated to launch in 2026. We see no reason why our antenna design couldn’t be used on every future Mars lander or rover as a more robust alternative, one that could also increase data rates by 4 to 16 times over current antenna designs. We also could use it on future moon missions to provide high data rates.

Although there isn’t an approved Europa lander mission yet, we at JPL will be ready if and when it happens. Other engineers have pursued different projects that are also necessary for such a mission. For example, some have developed a new, multilegged landing system to touch down safely on uncertain or unstable surfaces. Others have created a “belly pan” that will protect vulnerable hardware from Europa’s cold. Still others have worked on an intelligent landing system, radiation-tolerant batteries, and more. But the antenna remains perhaps the most vital system, because without it there will be no way for the lander to communicate how well any of these other systems are working. Without a working antenna, the lander will never be able to tell us whether we could have living neighbors on Europa.

This article appears in the August 2021 print issue as “An Antenna Made for an Icy, Radioactive Hell.”

About the Author

Nacer E. Chahat is a senior antenna engineer at NASA’s Jet Propulsion Laboratory. In a previous article for IEEE Spectrum, he described the antennas that allowed microsatellites to fly to Mars.

Stratospheric Balloons Take Monitoring and Surveillance to New Heights

Post Syndicated from Mark Harris original https://spectrum.ieee.org/aerospace/satellites/stratospheric-balloons-take-monitoring-and-surveillance-to-new-heights

Alphabet’s enthusiasm for balloons deflated earlier this year, when it announced that its high-altitude Internet company, Loon, could not become commercially viable.

But while the stratosphere might not be a great place to put a cellphone tower, it could be the sweet spot for cameras, argue a host of high-tech startups.

The market for Earth-observation services from satellites is expected to top US $4 billion by 2025, as orbiting cameras, radars, and other devices monitor crops, assess infrastructure, and detect greenhouse gas emissions. Low-altitude observations from drones, meanwhile, constitute a sizable and fast-growing market of their own.

Neither platform is perfect. Satellites can cover huge swaths of the planet but remain expensive to develop, launch, and operate. Their cameras are also hundreds of kilometers from the things they are trying to see, and often moving at tens of thousands of kilometers per hour.

Drones, on the other hand, can take supersharp images, but only over a relatively small area. They also need careful human piloting to coexist with planes and helicopters.

Balloons in the stratosphere, 20 kilometers above Earth (and 10 km above most jets), split the difference. They are high enough not to bother other aircraft and yet low enough to observe broad areas in plenty of detail. For a fraction of the price of a satellite, an operator can launch a balloon that lasts for weeks (even months), carrying large, capable sensors.

Unsurprisingly, perhaps, the U.S. military has funded stratospheric balloon tests across six Midwest states to “provide a persistent surveillance system to locate and deter narcotic trafficking and homeland security threats.”

But the Pentagon is far from the only organization flying high. An IEEE Spectrum analysis of applications filed with the U.S. Federal Communications Commission reveals at least six companies conducting observation experiments in the stratosphere. Some are testing the communications, navigation, and flight infrastructure required for such balloons. Others are running trials for commercial, government, and military customers.

The illustration above depicts experimental test permits granted by the FCC from January 2020 to June 2021, together covering much of the continental United States. Some tests were for only a matter of hours; others spanned days or more.

2022 U.S. Budget Funds New ICBMs—A Reckless Diversion?

Post Syndicated from Natasha Bajema original https://spectrum.ieee.org/tech-talk/aerospace/military/2022-united-states-budget-funds-new-icbms-reckless-diversion

With the Biden administration’s 2022 defense budget coming in at US $753 billion, it’s easy to get diverted by the megaton-sized sum that the United States plans to spend on modernizing its nuclear forces over the coming decades. But a bigger question about the future of nuclear deterrence arguably looms—namely, how might intercontinental ballistic missiles (ICBMs) affect national security in an era of emerging tech threats? Some of those tech threats are not even typically associated with warfare: social media, deepfakes, cyber weapons, machine learning, commercial satellites, and autonomous systems, to name a few.

To the surprise of many, President Biden decided in May 2021 to push ahead with a strategic-weapons modernization plan proposed by past administrations. The centerpiece is a planned replacement for the aging Minuteman III ICBMs called the Ground Based Strategic Deterrent (GBSD), for which a whopping $2.6 billion has been pledged to begin development. Given Biden’s campaign promise to take a closer look at reducing the size of the U.S. nuclear arsenal, many arms control advocates were stunned by the administration’s full endorsement of the GBSD in the budget. The new land-based missiles are scheduled to replace, over the next sixteen years, the 400 Minuteman III missiles deployed under the New START treaty in the states of Colorado, Montana, Nebraska, North Dakota, and Wyoming; they will be in active service until sometime in the 2080s, at least.

In the lead-up to the budget request, experts on both sides of the issue engaged in a spirited debate about the necessity, or lack thereof, of maintaining all three legs of the U.S. nuclear triad—nuclear-armed bombers, land-based ICBMs, and submarine-launched missiles. A recurring theme in these arguments revolves around the role that ICBMs might play in the deterrence equation in the 21st century. And yet, conspicuously missing from the discussion was sufficient consideration of the dangerous, destabilizing implications for ICBMs and other strategic weapons created by the categories of emerging technologies indicated above: cyber weapons, autonomous systems, and so on.

Proponents view ICBMs as a key component of a sound U.S. nuclear deterrent in the future, raising the threshold for nuclear war—and thereby reducing any likelihood of a nuclear attack by an adversary. In an interview, Dr. Brad Roberts, Former Deputy Assistant Secretary of Defense for Nuclear and Missile Defense Policy, suggests two scenarios, one without any ICBMs and the other with the current stockpile: “In one, the adversary has the means to eliminate most of the U.S. nuclear force with preemptive attacks on a few submarine and bomber bases, reserving the bulk of its nuclear force for punishment of the U.S. if it retaliates. In the other, the adversary must launch hundreds of nuclear weapons into the American heartland, depleting its arsenal while killing millions. In which scenario can U.S. leaders be expected by enemy leaders to have the political will to retaliate? The latter. The ICBM force helps adversaries to understand that the U.S. will defend its interests if attacked—and thereby to avoid a serious miscalculation.”

Other experts vehemently disagree. Former Secretary of Defense Bill Perry has argued that ICBMs in particular are highly unstable, increasing the risk of miscalculation, accidental launch, and thus nuclear war. As such, an enormous program to make new ICBMs is a dangerous enterprise. In Perry’s view it’s also an unnecessary expense, especially when the lifetime of the existing Minuteman III missiles can be extended until 2030, allowing for these missiles to be phased out in the course of future arms treaties or other weapons-reduction initiatives. However, it still remains unclear whether such a life extension would result in any cost savings. Meanwhile, the total costs for the new land-based missiles could reach more than $264 billion over the course of their development and eventual deployment.

Given their perceived instability, ICBMs would likely face more risks from emerging technologies that distort the information landscape (including deep fakes) or that compress decision-making timeframes (including autonomous systems). Roberts is sanguine about risks, suggesting that “new technologies such as social media, deepfakes, and cyber weapons make a complex situation even more complex. But this is an old problem in new form,” he says. “Nuclear decision-makers have faced the challenges of information overload, the risk of unreliable information, and the pressure to act for decades, and they will continue to adapt their practices to a changing technology environment.”

Oddly, the debate surrounding ICBMs has mostly failed to take emerging technologies into account. But Marina Favaro, a Research Fellow at the Institute for Peace Research and Security Policy (IFSH) at the University of Hamburg, warns in a new report that in a nuclear crisis, several of the new technologies have the potential to distort the information landscape and force leaders to make hasty, badly informed decisions. “When you consider the strategic environment today, we see a number of emerging technologies that may contract decision-making windows or disrupt information flows,” Favaro said in an interview. That, she added, “could lead to uncertainty, miscalculation, and escalation in a nuclear crisis.”

Biden’s budget decision appears to reflect a deep bipartisan consensus among a majority of experts within the defense community about the need for a nuclear triad—a notion that has been deeply embedded in American strategic thinking for many decades. The U.S. and Russia have maintained a strong nuclear triad since the height of the Cold War in the 1960s, and were recently joined by both China and India. The Nuclear Posture Reviews of both the Obama and Trump administrations strongly support the triad as a means of providing options for reducing incentives for nuclear war and for hedging against possible changes in nuclear threats. 

The desire to hedge against the future makes the absence of a thorough debate about the relative impact of emerging technologies on the different legs of the nuclear triad all the more striking. The Biden administration plans to begin deliberations on its Nuclear Posture Review in the coming months. Early indications suggest that the President will seek to reduce the role of nuclear weapons in U.S. national security. Perhaps the administration will also take another look at plans for ICBM modernization and examine the new risks posed by emerging technologies.

Dr. Natasha Bajema is the Director of the Converging Risks Lab at the Council on Strategic Risks and an IEEE Spectrum contributor. She has held long-term assignments at the National Defense University, in the U.S. Office of the Secretary of Defense, and at the U.S. Department of Energy’s National Nuclear Security Administration.

At Least 2,034 Ways Earth Has Blown Its Cover

Post Syndicated from Michael Dumiak original https://spectrum.ieee.org/tech-talk/aerospace/astrophysics/2034-ways-earth-has-blown-its-cover

We search the stars for signs of intelligent life. What if the stars were looking back, wondering the same thing about us?

Austrian astrophysicist Lisa Kaltenegger has an idea of what that might mean, or at least where that perspective might be coming from: 2,034 stars, seven of them with known and confirmed exoplanets, either are, have been, or will one day be positioned so they could spot Earth using techniques currently known to us.

This would cover a period starting roughly at the beginning of recorded history (a time when people still spoke Proto-Indo-European, and the first Pharaonic dynasty sprouted in the Nile Valley) and going 5,000 years into the future. And at some point during this timeline, beings orbiting one of those 2,034 stars might have a chance to look Earthward and see our pale blue dot transiting the sun.

The observation comes out of a nifty bit of trajectory mining on a giant catalog of nearby stars.

Kaltenegger, director of the Carl Sagan Institute at Cornell University, and Jackie Faherty, astrophysicist and star catalog expert at the American Museum of Natural History, teamed up to explore the question. They used analytical software to comb through a cosmic chart of observed star positions; this data comes via the European Space Agency craft Gaia. Gaia is a space observatory, now eight years in orbit, delivering increasingly robust snapshots in its quest to plot, when all is said and done, a three-dimensional map of perhaps two billion stars in the Milky Way and beyond.

Kaltenegger and Faherty use motion calculations to plot linear star trajectories backwards and forwards in time, filtering the observed stars to focus on the region of the sky through which, from our perspective, the sun appears to pass during a year. Projected out into space as a narrow band, it is a place from which an observer would be able to detect the transit of our planet across the sun.

In our long-running search for extraterrestrial life, we use transits to examine exoplanets, or planets outside our solar system. (We’ve even used transits of Venus to study the solar system.) Starlight passes through the atmosphere of a planet, or is reflected off of it. We can use spectrometry to analyze and gain an understanding of the chemical composition of planetary atmospheres, and so whether they might be friendly to life (as we know it). 

Kaltenegger and Faherty show the perspective from the other end of the telescope—where we’d be the aliens—and show the potential, at least, for advanced life forms to have noticed us. Spectrometric sensing from far away might have picked up our Great Oxidation Event and marked us as a living world.

As published in the journal Nature, Faherty and Kaltenegger sifted through a Gaia catalog of nearby stars, starting with 330,000 star positions within 100 parsecs of us (that’s 326 light years and a common celestial distance benchmark). They converted these positions into three-dimensional Cartesian coordinates and back using scripts and a classification algorithm; in going through these steps in increments of time, backwards and forwards, they could see which stars were entering or exiting a position where they’d have a view on Earth. (For a closer look at Faherty’s code subroutines, check her GitHub repository.) 
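A stripped-down version of that in-or-out test can be sketched in a few lines of Python. This is not the authors’ actual pipeline (see Faherty’s GitHub repository for that); the roughly 0.264-degree half-width of the band and the sample star below are assumptions for illustration:

```python
import math

ZONE_HALF_WIDTH_DEG = 0.264   # half of the ~0.528-degree-wide transit band

def in_earth_transit_zone(pos_pc, vel_pc_per_yr, years):
    """Linearly propagate a star and test whether its ecliptic latitude
    falls inside the thin band from which Earth's transits are visible."""
    x, y, z = (p + v * years for p, v in zip(pos_pc, vel_pc_per_yr))
    ecliptic_lat = math.degrees(math.atan2(z, math.hypot(x, y)))
    return abs(ecliptic_lat) <= ZONE_HALF_WIDTH_DEG

# Hypothetical star 20 parsecs out, drifting slowly toward the ecliptic plane
star_pos = (20.0, 0.0, 0.5)       # parsecs, ecliptic Cartesian frame
star_vel = (0.0, 0.0, -0.0001)    # parsecs per year
now = in_earth_transit_zone(star_pos, star_vel, 0)        # today
later = in_earth_transit_zone(star_pos, star_vel, 4500)   # 4,500 years on
print(now, later)
```

Run over a catalog of hundreds of thousands of stars at many time steps, this kind of membership test is what turns Gaia’s positions and motions into lists of past, present, and future observers.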

They found that 313 stars had been in the zone at some point in the past 5,000 years, that 1,402 stars have a view of Earth’s transits right now, and that 319 will gain one at some point in the next 5,000 years (Teegarden’s Star will come into the zone in 29 years). Faherty then wrote code to ingest the list into the OpenSpace visualization software.

The star data Kaltenegger and Faherty worked with is a recent download, as researchers get ready for Gaia’s next full data delivery sometime in 2022. Kaltenegger’s been pondering how we look from space for some time, though. She published the first “Alien ID Chart” in 2007, showing how Earth might appear as seen through geological time. “That’s interesting,” says Leiden Observatory’s Anthony Brown, who heads Gaia’s data processing and analysis consortium. “Thinking about these stars that in principle could see Earth transit across the Sun.”

Brown has also worked with Gaia astrometry data to build a map of his own: an image of star trails projected across the sky for the next 400,000 years. As Gaia takes more and more census observations over time, the projections will get more precise. Gaia’s set to deliver other surprises, too: spectral data from Gaia’s blue and red photometers means astronomers are in the process of characterizing the astrophysics of hundreds of millions of stars.

Sorting through all this and getting to findings will take data processing power and creative thinking, says Minia Manteiga, a member of Gaia’s data processing consortium and astronomer at the University of A Coruña. “Gaia is a paradigmatic example of big data astronomy,” she says. It will require inferential statistics as well as unsupervised algorithms and machine learning techniques, she adds.

Kaltenegger’s role-reversing perspective—where we become the observed instead of the observer—prompts questions for her students, who generally say they’d visit advanced planets, given the means. “But while I love Earth—it is my favorite planet—in terms of technology and evolution we are not that far along yet,” Kaltenegger says, pointing out that we have only been using radio waves for 100 years. “Assume the cosmos is teeming with life, would we really be the place everyone would want to contact and visit? Or maybe…not yet?”

China Tries To Solve Its Rocket Debris Problem

Post Syndicated from Andrew Jones original https://spectrum.ieee.org/tech-talk/aerospace/space-flight/chinas-rocket-debris-problem

On the morning of 17 June, China launched the first astronauts to its Tianhe space station module, with a Long March 2F rocket sending the crewed Shenzhou-12 spacecraft into orbit.

Overlooked amid this major success, however, was that downrange from its Gobi Desert launch site, empty rocket stages fell to the ground. Because getting to orbit requires reaching a velocity high enough to overcome gravity (roughly 7.8 kilometers per second), rockets consist of stages, with the tanks and engines optimized for flying through the atmosphere dumped once empty to reduce mass. A video posted on the Twitter-like Sina Weibo appears to show part of the recovery process, with hazardous residual propellant leaking from the broken boosters. According to the source, road closures and evacuations allowed a safe cleanup.
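The staging logic can be made concrete with the Tsiolkovsky rocket equation. The masses and specific impulse below are illustrative round numbers, not figures for any Long March vehicle:

```python
import math

def delta_v(isp_s, m0, mf, g0=9.81):
    """Tsiolkovsky rocket equation: delta-v from specific impulse and mass ratio."""
    return isp_s * g0 * math.log(m0 / mf)

isp = 300  # seconds, typical for kerosene/LOX engines

# Single stage: all 50 t of structure is carried to burnout.
single = delta_v(isp, m0=500, mf=50)

# Two stages: burn 300 t, drop 25 t of empty first-stage tanks and
# engines, then burn the remaining 150 t in the lighter second stage.
stage1 = delta_v(isp, m0=500, mf=200)
stage2 = delta_v(isp, m0=175, mf=25)

print(f"single stage: {single / 1000:.1f} km/s")          # ~6.8 km/s
print(f"two stages:   {(stage1 + stage2) / 1000:.1f} km/s")  # ~8.4 km/s
```

With these assumed numbers, only the staged vehicle clears the roughly 7.8 km/s needed for orbit, which is why the empty first stage has to come down somewhere downrange.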

Stunningly, this is not unusual for Chinese space launches. China’s three main spaceports were built during the Cold War, when security was paramount and the U.S. and Soviet Union were considering the possibility of preemptive strikes on facilities linked to China’s fledgling nuclear weapons capabilities. Only the Wenchang launch center, established in 2016 for new rockets, is on the coast. With China aiming to launch 40 or more times annually in recent years, the issue is growing, with some stages falling on or near inhabited areas, including close to a school.

However, where the stages fall and how they are dealt with is not as haphazard as it appears. The areas in which stages and side boosters fall are calculated to avoid major population centers, and local warnings and evacuation orders are issued and implemented in advance. The array of amateur footage of falling rocket stages backs up the claim that the events are known and expected. No casualties from these events have so far been reported.

Yet this approach is costly, disruptive and not without continued risks and occasional damage.

To mitigate and eventually solve the problem, the China Aerospace Science and Technology Corp. (CASC), the country’s main space contractor, is developing controllable parafoils to constrain the areas in which stages fall. These were most recently tested on a launch from Xichang, in southwest China, of the Long March 3B, one of China’s workhorse launchers and the one that most frequently threatens inhabited areas. Grid fins, of the kind that help guide Falcon 9 first stages to their landing areas, have been tested on smaller Long March 2C and 4B launch vehicles.

The latter step is part of attempts to develop rockets that can launch, land and be reused like the Falcon 9, thus controlling the fall of large first stages.

The Long March 8, which had a debut (expendable) flight in December, is expected to be CASC’s first such launcher. Chinese commercial companies, including Landspace, iSpace, Galactic Energy, and Deep Blue Aerospace, are also working on reusable rockets, with low-altitude hop tests expected this year. Though, as SpaceX has demonstrated in a self-deprecating compilation, landing rockets vertically is no mean feat.

Additionally, the Long March 7A rocket, which launches from the coast, is expected to eventually replace the aging Long March 3B rocket for launches to geostationary orbit. As a bonus the 7A uses relatively clean kerosene and liquid oxygen propellant instead of the toxic fuel and oxidizer mix used by the 3B. The Long March 2F used for crewed launches could also be replaced by a Long March 7 or a low-Earth orbit version of a new-generation rocket being developed to eventually send Chinese astronauts to the moon.

China has also developed the ability to conduct launches at sea (though the most recent such launch flew directly over Taiwan). Its new Long March 5B, which launches from the coast, has its own particular issue: its large first stage actually (and unusually) reaches orbit, then returns to Earth wherever and whenever the atmosphere drags it back down.

Despite these measures, launching over land and population centers is fraught with risk. The debris issues and events described above occur when things go well; failures could bring yet greater danger. China’s space industry and activities have expanded greatly in recent decades, including lunar and planetary exploration, human spaceflight, remote sensing, spy, weather and other satellites, as well as a new commercial space sector. Some areas of space operations are playing catch-up.

The Networks That Aim to Track GPS Interference Around the World

Post Syndicated from Mark Harris original https://spectrum.ieee.org/tech-talk/aerospace/aviation/the-networks-that-aim-to-track-gps-interference-around-the-world

When Kevin Wells’ private jet lost its GPS reception after taking off from Hayward Executive Airport in California’s Bay Area in February 2019, he whipped out his phone and started filming the instrument panel. For a few seconds, the signals from over a dozen GPS satellites can be seen to blink away, before slowly returning as Wells continues his ascent. Wells was able to continue his flight safely, but may have accidentally flown too high, into commercial airspace near Oakland.

Wells was quick to film the incident because this was not the first time he had suffered GPS problems. Less than a month earlier, his Cessna had been hit by a similar outage, at almost the identical location. “When I asked the Hayward Tower about the loss of signal, they did not know of [it],” he wrote later in a report to NASA’s Aviation Safety Reporting System.

“It wasn’t a big event for me because I came out of the clouds at the beginning of the second event,” says Wells of the 2019 interference.

Wells had fallen victim to a GPS interference event, where rogue or malicious signals drown out the faint signals from navigation satellites in orbit. Such events, often linked to U.S. military tests, can cause dangerous situations and near-misses.  A recent Spectrum investigation discovered that they are far more prevalent, particularly in the western United States, than had previously been thought.

Luckily, Wells was in a position to do something about it. As executive director of the Stanford Institute for Theoretical Physics, Wells knew many of the university’s researchers. He took his video to the GPS Lab at Stanford Engineering, where Professor Todd Walter was already studying the problem of GPS jamming.

The Federal Aviation Administration (FAA) and the Federal Communications Commission (FCC) do have procedures for finding people deliberately or accidentally interfering with GPS signals. When pilots reported mysterious, sporadic GPS jamming near Wilmington Airport in North Carolina, the FAA eventually identified a poorly designed antenna on a utility company’s wireless control system. “But this took many weeks, maybe months,” Walter tells Spectrum. “Our goal is to track down the culprit in days.”

Walter’s team was working on a drone that would autonomously sniff out local signals in the GPS band, without having to rely on GPS for its own navigation. “But we didn’t have permission to fly drones in Hayward’s airspace and it wasn’t quite at the point where we could just launch it and seek out the source of the interference,” says Walter.

Instead, Walter had a different idea: Why not use the GPS receivers in other aircraft to crowdsource a solution? All modern planes carry ADS-B transponders—devices that continually broadcast their GPS location, speed and heading to aid air traffic control and avoid potential collisions. These ADS-B signals are collected by nearby aircraft but also by many terrestrial sensors, including a network of open-access receivers organized by OpenSky, a Swiss nonprofit.

With OpenSky’s data in hand, the Stanford researchers’ first task was accurately identifying interference events. They found that the vast majority of times that ADS-B receivers lost data had nothing to do with interference. Some receivers were unreliable, others were obscured from planes overhead by buildings or trees.

Being able to link the loss of Wells’ GPS signals with data from the OpenSky database was an important step in characterizing genuine jamming. Integrity and accuracy indicators built into the ADS-B data stream also helped the researchers. “I think certain interference events have a characteristic that we can now recognize,” says Walter. “But I’d be concerned that there would be other interference events that we don’t have the pattern for. We need more resources and more data.”
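The core of the crowdsourcing idea can be sketched in a few lines. In this made-up example, each aircraft track is a list of (time, NIC) pairs, where NIC is the ADS-B Navigation Integrity Category (0 means no GPS integrity; higher is better). Real OpenSky data carries many more fields, and the threshold and window below are illustrative guesses, not the Stanford team’s actual criteria:

```python
def flag_interference(track, nic_floor=6, min_bad_seconds=30):
    """Flag a track if its NIC stays below nic_floor for a sustained run,
    which is more suggestive of jamming than a one-off dropped report."""
    run_start = None
    for t, nic in track:
        if nic < nic_floor:
            if run_start is None:
                run_start = t
            if t - run_start >= min_bad_seconds:
                return True
        else:
            run_start = None
    return False

# One healthy track, and one with a sustained integrity dropout mid-flight.
healthy = [(t, 8) for t in range(0, 120, 10)]
jammed = [(t, 8 if t < 40 or t > 90 else 2) for t in range(0, 120, 10)]

print(flag_interference(healthy))  # False
print(flag_interference(jammed))   # True
```

Requiring a sustained run of low-integrity reports is one simple way to screen out the unreliable receivers and momentary obstructions that the researchers found account for most data loss.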

For the Hayward incidents, Walter’s team managed to identify 17 days of potential jamming between January and March 2019. Of the 265 aircraft that flew near Hayward during that time, the ADS-B data showed that 25 had probably experienced GPS interference. This intermittent jamming was not enough to narrow down the source of the signals, says Walter: “We can say it’s in this region of Hayward but you don’t want to go searching ten city blocks. We want to localize it to a house or building.”

The FAA eventually issued a warning to pilots flying in and out of Hayward and much of the southern Bay Area, and the interference seemed to quiet down. However, Walter recently looked at 2020 ADS-B data near Hayward and found 13 more potentially jammed flights.

Walter now hopes to expand on his Hayward study by tapping into the FAA’s own network of ADS-B receivers to help uncover more interference signals hidden in the data.

Leveraging ADS-B signals is not the only way researchers can troubleshoot GPS reception. Earlier this month, John Stader and Sanjeev Gunawardena at the U.S. Air Force Institute of Technology presented a paper at the Institute of Navigation’s PTTI 2021 conference detailing an alternative interference detection system.

The AFIT system also uses free and open-source data, but this time from a network of continuously-operating terrestrial GPS receivers across the globe. These provide high-quality data on GPS signals in their vicinity, which the AFIT researchers then mined to identify interference events. The AFIT team was able to identify 30 possible jamming events in one 24-hour period, with detection within a matter of minutes of each jamming incident occurring. “The end goal of this research effort is to create a worldwide, automated system for detecting interference events using publicly available data,” wrote the authors.

A small system like this is already up and running around Madrid Airport in Spain, where 11 GPS receivers constantly monitor for jamming or accidental interference. “This is costly and takes a while to install,” says Walter, “I feel like major airports probably do want something like that to protect their airspace, but it’s never going to be possible for smaller airports like Hayward.”

Another problem facing both these systems is the sparse distribution of sensors, says Todd Humphreys, Director of the Radionavigation Laboratory at UT Austin: “There are fewer than 3000 GPS reference stations with publicly-accessible data across the globe; these can be separated by hundreds of miles. Likewise, global coverage by ships and planes is still sparse enough to make detection challenging, and localization nearly impossible, except around ports and airports.”

Humphreys favors using other satellites to monitor the GPS satellites, and published a paper last year that zeroed in on a GPS jamming system in Syria. Aireon is already monitoring ADS-B signals worldwide from Iridium’s medium earth orbit satellites, while HawkEye 360 is building out its own radio-frequency sensing satellites in low earth orbit (LEO).

“That’s very exciting,” says Walter. “If you have dedicated equipment on these LEO satellites, that can be very powerful and maybe in the long term much lower cost than a whole bunch of terrestrial sensors.”

Until then, pilots like Kevin Wells will have to keep their wits about them as they navigate areas prone to GPS interference. The source of the jamming near Hayward Airport was never identified.

No Sleep Till Jezero: Touchdown Time for Mars Perseverance Rover?

Post Syndicated from Ned Potter original https://spectrum.ieee.org/automaton/aerospace/space-flight/nasa-perseverance-rover-lands-at-mars-jezero-crater

They used to call it “Seven Minutes of Terror”—a NASA probe would slice into the atmosphere of Mars at more than 20,000 kilometers per hour; slow itself with a heat shield, parachute, and rocket engines; and somehow land intact on the surface, just six or seven minutes later, while its makers waited helplessly on Earth. The computer-animated landing videos NASA produced before previous Mars missions—in 2004, 2008, and 2012—became online sensations. “If any one thing doesn’t work just right,” said NASA engineer Tom Rivellini in the last one, “it’s game over.”

NASA is now trying again, with the Perseverance rover and the tiny Ingenuity drone bolted to its undercarriage. NASA will be live-streaming the landing (across many video and social media platforms as well as in a Spanish language feed and in an immersive, 360-degree view) beginning at 11:15 a.m. PST/2:15 p.m. EST/19:15 UTC on Thursday, 18 February 2021. 

While this year’s animated landing video is as dramatic as ever, the tone has changed. “The models and simulations of landing at Jezero crater have assessed the probability of landing safely to be above 99 percent,” says Swati Mohan, the guidance, navigation and controls operations lead for the mission.

There isn’t a trace of arrogance in her voice as she says this. She’s been working on this mission for five years, has teammates who were around for NASA’s first Mars rover in 1997, and knows what they’re up against. Yes, they say, 99 percent reliability is realistic. 

The biggest advance over past missions is a system called Terrain Relative Navigation—TRN for short. In essence, it gives the spacecraft a way to know precisely where it’s headed, so it can steer clear of hazards on the very jagged landscapes that scientists most want to explore. If all goes as planned, Perseverance will image the Martian surface in rapid sequence as it plows toward its landing site, and compare what it sees to onboard maps of the ground below. The onboard database is primarily based on high-resolution images from NASA’s Mars Reconnaissance Orbiter, which has been mapping the planet from an altitude of 250 kilometers since 2006. Its images have a resolution of 30 cm per pixel. 
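The map-matching step at the heart of TRN can be illustrated with a toy example: correlate a small "descent image" against an onboard orbital map to estimate position. Perseverance’s actual Lander Vision System is far more sophisticated; this sketch, using synthetic terrain rather than real imagery, only shows the core idea of template matching by normalized cross-correlation:

```python
import numpy as np

rng = np.random.default_rng(0)
orbital_map = rng.random((120, 120))          # synthetic terrain texture
true_row, true_col = 70, 33                   # where the "camera view" was cropped
descent_img = orbital_map[true_row:true_row + 24, true_col:true_col + 24]

def locate(template, search_map):
    """Return the (row, col) in search_map best matching the template,
    by exhaustive normalized cross-correlation."""
    th, tw = template.shape
    t = (template - template.mean()) / template.std()
    best_score, best_rc = -np.inf, None
    for r in range(search_map.shape[0] - th + 1):
        for c in range(search_map.shape[1] - tw + 1):
            w = search_map[r:r + th, c:c + tw]
            score = np.sum(t * (w - w.mean()) / w.std())
            if score > best_score:
                best_score, best_rc = score, (r, c)
    return best_rc

print(locate(descent_img, orbital_map))  # (70, 33)
```

Once the lander knows its position on the map, projecting the likely touchdown spot and checking it against a database of hazards is a lookup rather than a search.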

“This is kind of along the same lines as what the Apollo astronauts did with people in the loop, back in the day. Those guys looked out the window,” says Allen Chen, the mission’s entry, descent, and landing lead. “For the first time here on Mars, we’re automating that.”

There will still be plenty of anxious controllers at NASA’s Jet Propulsion Laboratory in California. After all, the spacecraft will be on its own, about 209 million kilometers from Earth, far enough away that its radio signals will take more than 11 minutes to reach home. The ship should reach the surface four minutes before engineers even know it has entered the Martian atmosphere. “Landing on Mars is hard enough,” says Thomas Zurbuchen, NASA’s associate administrator for science missions. “It is not guaranteed that we will be successful.” 

But the new navigation technology makes a very risky landing possible. Jezero crater, which was probably once a lake at the end of a river delta, has been on scientists’ shortlist since the 1990s as a place to look for signs of past life on Mars. But engineers voted against it until this mission. Previous landers used radar, which Mohan likens to “closing your eyes and holding your hands out in front of you. You can use that to slow down and to stop. But with your eyes closed you can’t really control where you’re coming down.”

Everything happens fast as Perseverance comes in, following a long arcing path. Fewer than 90 seconds before scheduled touchdown, and about 2,100 meters above the Martian surface, the TRN system makes its calculations. Its rapid-fire imaging should by then have told it where it is relative to the ground below, and from that it can project its likely touchdown spot. If the ship is headed for a ridge, a crevice, or a dangerous outcropping of rock, the computer will send commands to eight downward-facing rocket engines to change the descent trajectory. 

In that final minute, as the spacecraft slows from 300 kilometers per hour to zero, the TRN system can shift the touchdown spot by up to 330 meters. The safe targets map in Perseverance’s memory is detailed enough, the team says, that the ship should be able to reach a suitable location for a safe landing. 

“It’s able to thread the needle of all these different hazards to land in the safe spots in between these hazards,” says Mohan, “and by landing amongst the hazards it’s also landing amongst the scientific features of interest.”

How NASA Designed a Helicopter That Could Fly Autonomously on Mars

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/aerospace/robotic-exploration/nasa-designed-perseverance-helicopter-rover-fly-autonomously-mars

Tucked under the belly of the Perseverance rover that will be landing on Mars in just a few days is a little helicopter called Ingenuity. Its body is the size of a box of tissues, slung underneath a pair of 1.2-meter carbon-fiber rotors on top of four spindly legs. It weighs just 1.8 kilograms, but the importance of its mission is massive. If everything goes according to plan, Ingenuity will become the first aircraft to fly on Mars.

In order for this to work, Ingenuity has to survive frigid temperatures, manage merciless power constraints, and attempt a series of 90-second flights while separated from Earth by 10 light-minutes, which means that real-time communication or control is impossible. To understand how NASA is making this happen, below is our conversation with Tim Canham, Mars Helicopter Operations Lead at NASA’s Jet Propulsion Laboratory (JPL).

NASA’s Mars Perseverance Rover Should Leave Past Space Probes in the Dust

Post Syndicated from Ned Potter original https://spectrum.ieee.org/tech-talk/aerospace/space-flight/nasa-mars-perseverance-rover-should-leave-past-space-probes-in-dust

If you could stand next to NASA’s Perseverance rover on the Martian surface… well, you’d be standing on Mars. Which would be a pretty remarkable thing unto itself. But if you were waiting for the rover to go boldly exploring, your mind might soon wander. You’d be forgiven for thinking this is like watching paint dry. The Curiosity rover, which landed on Mars in 2012 and has the same chassis, often goes just 50 or 60 meters in a day. 

Which is why Rich Rieber and his teammates have been at work for five years, building a new driving system for Perseverance that they hope will set some land-speed records for Mars. 

“The reason we’re so concerned with speed is that if we’re driving, we’re not doing science,” he said. “If you’re on a road trip and you drive to Disneyland, you want to get to Disneyland. It’s not about driving, you want to be there.”

Rieber is the lead mobility systems engineer for the Perseverance mission. He ran the development of the drivetrain, suspension, engineering cameras, machine vision, and path-planning algorithms that should allow the rover to navigate the often-treacherous landscape around Perseverance’s destination, called Jezero crater. With luck, Perseverance will leave past rovers in the dust. 

“Perseverance is going to drive three times faster than any previous Mars rover,” said Matt Wallace, the deputy mission manager at NASA’s Jet Propulsion Lab. “We have added a lot of surface autonomy, a lot of new AI if you will, to this vehicle so that we can complete the mission on the surface.”

The rover, if everything works, will still have a maximum speed of only 4.4 cm per second, which is one-thirtieth as fast as human walking speed. It would travel the length of a football field in 45 minutes. But, says Rieber, “It is head and shoulders the fastest rover on Mars, and that’s not because we are driving the vehicle faster. It’s because we’re spending less time thinking about how.” 

Perseverance bases much of its navigational ability on an onboard map created from images taken by NASA’s Mars Reconnaissance Orbiter–detailed enough that it can show features less than 30 cm across. That helps tell the rover where it is. It then adds stereo imagery from two navigational cameras on its top mast and six hazard-detection cameras on its body. Each camera has a 20-megapixel color sensor. The so-called Navcams have a 90-degree field of view. They can pick out a golf ball-sized object 25 meters away.
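The golf-ball claim survives a back-of-envelope check. The sensor layout below (5,120 pixels across the 90-degree horizontal field of view) is an assumption consistent with a 20-megapixel sensor, not JPL’s actual spec:

```python
import math

pixels_across = 5120                       # assumed horizontal pixel count
fov_rad = math.radians(90)                 # Navcam horizontal field of view
pixel_angle = fov_rad / pixels_across      # ~0.31 milliradian per pixel

golf_ball_m = 0.043                        # golf ball diameter, meters
distance_m = 25
ball_angle = golf_ball_m / distance_m      # ~1.7 milliradians subtended

print(f"ball spans ~{ball_angle / pixel_angle:.0f} pixels")
```

A target spanning roughly half a dozen pixels is comfortably detectable, so the stated performance is plausible under these assumptions.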

These numbers add up: The technology should allow the rover to pick out obstacles as it goes—a ridge, an outcropping of rock, a risky-looking depression—and steer around many of them without help from Earth. Mission managers plan to send the rover its marching orders each morning, Martian time, and then wait for it to report its progress the next time it can communicate with Earth. 

Earlier rovers often had to image where they were and stop for the day to await new instructions from Earth. Curiosity, on days it’s been ordered to drive, only spends 13 percent of its time actually in motion. Perseverance may more than triple that. 

There are, however, still myriad complexities to driving on Mars. For instance, mission engineers can calculate how far Perseverance will go with each revolution of its six wheels. But what if the wheels on one side slip because they were driving through sand? How far behind or off its planned path might it be? The rover’s computers can figure that out, but their processing capacity is limited by the cosmic radiation that bombards spacecraft outside the Earth’s protective magnetosphere. “Our computer is like top of the line circa 1994,” said Rieber, “and that’s because of radiation. The closer [together] you have your transistors, the more susceptible they are.”

Matt Wallace, the deputy project manager, has been on previous missions when—sometimes only in hindsight—engineers realized they had barely escaped disaster. “There’s never a no-risk proposition here when you’re trying to do something new,” he said. 

But the payoff would come if the rover came across chemical signatures of life on Mars from billions of years ago. If Perseverance finds that, it could change our view of life on Earth.

Is there a spot somewhere at Jezero crater that could offer such an incredible scientific breakthrough? The first step is to drive to it.

Everything You Need to Know About NASA’s Perseverance Rover Landing on Mars

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/aerospace/robotic-exploration/nasa-perseverance-rover-landing-on-mars-overview

Just before 4PM ET on February 18 (this Thursday), NASA’s Perseverance rover will attempt to land on Mars. Like its predecessor Curiosity, which has been exploring Mars since 2012, Perseverance is a semi-autonomous mobile science platform the size of a small car. It’s designed to spend years roving the red planet, looking for (among other things) any evidence of microbial life that may have thrived on Mars in the past.

This mission to Mars is arguably the most ambitious one ever launched, combining technically complex science objectives with borderline craziness that includes the launching of a small helicopter. Over the next two days, we’ll be taking an in-depth look at both that helicopter and how Perseverance will be leveraging autonomy to explore farther and faster than ever before, but for now, we’ll quickly go through all the basics about the Perseverance mission to bring you up to speed on everything that will happen later this week.

3D-Printed Thruster Boosts Range of CubeSat Applications

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/aerospace/satellites/3dprint-thruster

A new 3-D printed electric thruster could one day help make advanced miniature satellites significantly easier and more affordable to build, a new study finds.

Conventional rockets use chemical reactions to generate propulsion. In contrast, electric thrusters produce thrust by using electric fields to accelerate electrically charged propellants away from a spacecraft.

The main weakness of electric propulsion is that it generates much less thrust than chemical rockets, making it too weak to launch a spacecraft from Earth’s surface. On the other hand, electric thrusters are extremely efficient at generating thrust, given the small amount of propellant they carry. This makes them very useful where every bit of weight matters, as in the case of a satellite that’s already in orbit.
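The efficiency argument can be put in numbers with the rocket equation solved for propellant mass. The specific impulses here are textbook-style figures (roughly 300 s for chemical thrusters, roughly 3,000 s for ion thrusters), used purely for illustration and not tied to the thruster in this article:

```python
import math

def propellant_mass(dry_kg, dv_ms, isp_s, g0=9.81):
    """Rocket equation solved for propellant, given dry mass and delta-v."""
    return dry_kg * (math.exp(dv_ms / (isp_s * g0)) - 1)

dry = 200       # kg, satellite dry mass
budget = 1000   # m/s of station-keeping delta-v over the mission

chem = propellant_mass(dry, budget, 300)    # chemical thruster
ion = propellant_mass(dry, budget, 3000)    # electric (ion) thruster
print(f"chemical: {chem:.0f} kg, electric: {ion:.0f} kg")
```

For the same maneuvering budget, the electric thruster needs an order of magnitude less propellant, which is exactly the trade that matters once a satellite is already in orbit.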

Test and measurement solutions & how to control test equipment remotely, while respecting social-distancing.

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/test-and-measurement-solutions-how-to-control-test-equipment-remotely-while-respecting-socialdistancing

Introducing social distancing

The impact of COVID-19 has disrupted the global satellite production in an unprecedented way.

Many of the satellite industry’s manufacturing processes came to a halt when staff lockdowns and social-distancing measures had to be employed. Recovery is slowly underway, but the effects of the disruption are far from over.

The industry is now looking for new ways to make its satellite production more resilient to the effects of the pandemic. In the test and measurement domain especially, new technologies and solutions offer manufacturers the opportunity to remain productive and operational while respecting social-distancing measures.

Much of the equipment used to test satellite electronics can be operated remotely and test procedures can be created, automated and controlled by the same engineers from their homes.

This webinar provides an overview on related test and measurement solutions from Rohde & Schwarz and explains how engineers can control test equipment remotely to continue producing, while respecting social-distancing.

In this webinar you will learn:

The interfaces/standards used to command test equipment remotely
How to maintain production while using social-distanced testing
Solutions from Rohde & Schwarz for cloud-based testing and cybersecurity


Sascha Laumann

Sascha Laumann, Product Manager, Rohde & Schwarz

Sascha Laumann is a product owner for digital products at Rohde & Schwarz. His main activities are the definition, development, and marketing of test and measurement products that address future challenges of the ever-changing market. Sascha is an alumnus of the Technical University of Munich, having majored in EE with a special focus on efficient data acquisition, transfer, and processing. His previous professional background comprises developing solutions for aerospace testing applications.

Rajan Bedi

Dr. Rajan Bedi, CEO & Founder, Spacechips

Dr. Rajan Bedi is the CEO and founder of Spacechips, a UK SME disrupting the global space industry with its award-winning on-board processing and transponder products, space-electronics design consultancy, technical marketing, training, and business-intelligence services. Dr. Bedi has previously taught at Oxford University, was featured in Who’s Who in the World, is a winner of a Royal Society fellowship, and is a highly sought-after keynote speaker.

The content of this webcast is provided by the sponsor and may contain a commercial message

Don’t Believe the Hype About Hypersonic Missiles

Post Syndicated from Cameron L. Tracy original https://spectrum.ieee.org/tech-talk/aerospace/military/hypersonic-missiles-are-being-hyped

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

Hypersonic weapons are one of the hottest trends in military hardware. These missiles, which fly through the atmosphere at more than five times the speed of sound, are commonly depicted as a revolutionary new tool of war: ostensibly faster, less detectable, and harder to intercept than currently-deployed long-range ballistic missiles. The major military powers—China, Russia, and the United States—have bought into this belief, with each investing vast sums in a hypersonic arms race.

But there is a glaring problem with this narrative: Many claims regarding the purported advantages of hypersonic weapons are false. These missiles are, in reality, an old technology with a massive price tag and few meaningful advantages over existing ballistic missiles.

An old technology

While commonly touted as an “emerging technology,” these weapons were first designed nearly a century ago, in late 1930s Germany. The Treaty of Versailles, signed at the end of World War I, prohibited most weapons development by the Nazi government. But a loophole allowed work on rockets, prompting an intense German interest in missiles. Out of this grew the Amerikabomber project, aimed at developing a weapon that could strike the United States from Germany. To this end, German engineers designed the Silbervogel (Silver Bird), the first conception of a hypersonic boost-glide missile.

Yet Germany declined to pursue this design, in part because it judged the costs of hypersonic weapons to be disproportionate to their meagre benefits. In subsequent decades, ballistic missiles achieved the same goal of swift warhead delivery to distant targets. Flying primarily through the vacuum of outer space, rather than the comparatively dense atmosphere, ballistic missiles were seen as superior to the hypersonic concept in many ways.

While research on hypersonic weapons continued throughout the 20th century, no nation deployed them, opting instead for ballistic and cruise missile technologies. Even after spending more than US $5 billion (in current year dollars) developing the Dyna-Soar hypersonic glider, the United States abandoned the design in 1963 when it could not identify any clear objective for the weapon.

Interest in hypersonic weapons was rekindled in the early 2000s, but not on the basis of a performance advantage over existing missiles. Rather, the United States sought to use conventionally-armed missiles for rapid strikes, including against non-state targets such as terrorist groups. But it was concerned that nearby nations could mistake launches for nuclear-armed ballistic missile attacks. Hypersonic gliders fly low-altitude trajectories that could be easily discriminated from those of ballistic missiles.

As this history shows, nations have consistently judged hypersonic weapons to be an unnecessary addition to existing arsenals, offering at best niche capabilities. Why today, after decades of disinterest, has the idea that hypersonic weapons outperform existing missiles taken hold? Was some unique performance advantage overlooked?

While the strategic environment has evolved, the physics of hypersonic flight—and the limitations it places on performance—remain unchanged. In a recently-published article, we report the results of computational modeling of hypersonic missile flight, showing that many common claims regarding their capabilities range from overblown to simply false.

Technical reassessment

First, consider the claim that hypersonic weapons can reach their targets faster than existing ballistic missiles. The fastest hypersonic vehicles are launched on rocket boosters, like those that launch intercontinental-range ballistic missiles. Thus, both types of missile reach the same initial speeds. But hypersonic weapons fly through the atmosphere where they are subjected to substantial drag forces. Ballistic missiles, on the other hand, fly high into outer space where they are free from these drag effects. Thus, while hypersonic weapons fly a more direct path to their targets, they lose much of their speed throughout flight, ultimately taking longer to reach their targets than comparable ballistic missiles.
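The scale of the drag losses can be illustrated with a crude numerical integration. All parameters (mass, drag area, a constant air density representative of roughly 45 km altitude) are rough assumptions for a generic glider, not any real weapon:

```python
rho = 0.001      # kg/m^3, approximate air density near 45 km altitude
cd_a = 0.3       # drag coefficient times reference area, m^2
mass = 500.0     # kg, assumed glider mass
v = 6000.0       # m/s, speed after rocket boost

dt = 1.0
for _ in range(600):                    # ten minutes of glide
    drag = 0.5 * rho * cd_a * v**2      # drag force, newtons
    v -= (drag / mass) * dt             # simple Euler step

print(f"speed after 10 min of glide: {v:.0f} m/s")
```

Even with these forgiving numbers, the glider sheds roughly half its speed over the glide, while a ballistic warhead coasting through vacuum on a comparable trajectory would lose essentially none until reentry.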

Gliding through the dense atmosphere at hypersonic speeds subjects these gliders to more than just drag forces. As they slow down, hypersonic vehicles deposit large quantities of energy into the surrounding air, a portion of which is transferred back to the vehicle as thermal energy. Their surfaces commonly reach temperatures of thousands of degrees Celsius.

This extreme heating limits performance in two ways. First, it constrains glider geometry, as features like sharp noses and wings may be unable to withstand aerothermal heating. Because sharp leading edges decrease drag, these constraints degrade glider aerodynamics.

Second, this heating renders hypersonic missiles vulnerable to detection by the satellite-mounted sensors that the United States and Russia currently possess, and that China is reportedly developing. Hot objects glow, radiating more light the hotter they get. These satellites watch for light in the infrared band to warn of missile strikes. Ballistic missiles are visible to them during launch, when fiery rocket plumes emit a great deal of infrared light, but become harder to see after rocket burn-out, when the warhead arcs through outer space. Hypersonic missiles, on the other hand, stay hot throughout most of their glide. Our calculations indicate that incoming gliders would remain visible to existing space-based sensors not only during launch, but for nearly their entire flight.
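The sensors' advantage can be sketched with the Stefan-Boltzmann law (radiated power per unit area scales as T⁴) and Wien's displacement law. The two temperatures below are illustrative assumptions, not figures from the article.

```python
# Rough black-body comparison: an assumed ~1,500 K glider skin versus an
# assumed ~250 K background. Constants are standard physical values.
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
WIEN_B = 2.898e-3         # Wien displacement constant, m*K

def radiant_exitance(t_kelvin):
    """Total power radiated per unit area of a black body (W/m^2)."""
    return SIGMA * t_kelvin**4

glider, background = 1500.0, 250.0
ratio = radiant_exitance(glider) / radiant_exitance(background)
peak_um = WIEN_B / glider * 1e6

print(f"glider radiates ~{ratio:.0f}x more per unit area")
print(f"emission peaks near {peak_um:.1f} micrometers (short-wave infrared)")
```

Because power grows as the fourth power of temperature, even a modestly hot skin outshines its surroundings by orders of magnitude in the infrared band these satellites monitor.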

Finally, it is often claimed that hypersonic weapons will upend the strategic balance between adversaries because they can bypass missile defenses. The reality is more complex. As discussed, the effects of atmospheric drag and heating mean that hypersonic weapons will have few, if any, advantages over existing missiles when it comes to outpacing interceptors or evading detection. Still, their low-altitude flight would allow them to fly under the reach of defenses designed to intercept ballistic missiles in outer space. Their maneuverability could also allow them to dodge interceptors in the atmosphere.

Yet the performance of hypersonic weapons against missile defenses is strategically meaningful only if it offers a new capability (i.e., if these weapons do something existing missiles cannot). This is not the case. Existing long-range missiles could easily overcome defensive systems by fairly simple means, such as firing more missiles than the adversary has interceptors, or by using countermeasures, like decoys.

Hypersonic hype

In short, technical analysis of the performance of hypersonic weapons shows that many claims as to their supposedly “revolutionary” performance are misinformed. So why has an arms race coalesced around a weapon that does not work as advertised?

Development and acquisition of strategic missiles are complex, bureaucratic processes involving a broad range of actors, from the military organizations that use these weapons to the legislators who fund them. In this context, decisions commonly serve the diverse interests (financial, professional, etc.) of broad coalitions, rather than narrow technical criteria. Even if a missile does not perform to the specifications by which it is marketed, proponents are often keen to convince others of its “revolutionary” nature—thus securing funding, prestige, and other benefits that accompany the successful development of a new weapon.

But now is not the time to let spending on these weapons go unscrutinized. Federal budgets are under enormous strain, and every dollar spent on hypersonic weapon development is one that does not go to other pressing needs. Beyond these financial concerns, arms racing could increase tensions and the likelihood of conflict between nations.

To be sure, research on hypersonic flight is a worthwhile endeavor, with broad applications from commercial transportation to space exploration. But when it comes to costly, ineffective hypersonic missiles, the technical basis fails to justify the price.

Cameron L. Tracy is a Kendall Fellow in the Global Security Program at the Union of Concerned Scientists. He has a PhD in materials science.

David Wright is a research associate in the Laboratory for Nuclear Security and Policy in the Department of Nuclear Science and Engineering at the Massachusetts Institute of Technology. He has a PhD in physics.

Breakthrough Listen Is Searching a Million Stars for One Sign of Intelligent Life

Post Syndicated from Danny Price original https://spectrum.ieee.org/aerospace/astrophysics/breakthrough-listen-seti

In 1960, astronomer Frank Drake turned the radio telescope at the National Radio Astronomy Observatory in Green Bank, W.Va., toward two nearby sunlike stars to search for transmissions from intelligent extraterrestrial societies. With a pair of headphones, Drake listened to Tau Ceti and Epsilon Eridani for a total of 150 hours across multiple sessions. He heard nothing, except for a single false positive that turned out to be an airplane.

Statistically speaking, Drake’s pioneering project had little chance of success. Estimates from both NASA and the European Space Agency (ESA) suggest that our galaxy alone contains roughly 100 billion stars. (And there are perhaps 2 trillion galaxies in the universe).

Nevertheless, Drake’s effort, called Project Ozma, launched the modern search for extraterrestrial intelligence (SETI), an effort that continues to this day. But given the universe’s size, a SETI project trying to make even the smallest dent in the search for intriguing signals needs to be much more than one astronomer with a pair of headphones. It needs more telescopes, more data, and more computational power. Breakthrough Listen has all three.

Breakthrough Listen is the most comprehensive and expensive SETI effort ever undertaken, costing US $100 million, using 13 telescopes, and bringing together 50 researchers from institutions around the globe, including me as part of the Parkes telescope team. The project is making use of the newest advances in radio astronomy to listen to more of the electromagnetic spectrum; cutting-edge data processing to analyze petabytes of data across billions of frequency channels; and artificial intelligence (AI) to detect, amid the universe’s cacophony, even one signal that might indicate an extraterrestrial intelligence. And even if we don’t hear anything by the time the project ends, we’ll still have a wealth of data that will set the stage for future SETI efforts and astronomical research.

Breakthrough Listen is a 10-year initiative, launched in July 2015, and headquartered at the University of California, Berkeley. As my colleague there, J. Emilio Enriquez, likes to put it, Breakthrough Listen is the Apollo program of SETI projects.

It was the first of several Breakthrough Initiatives founded by investor Yuri Milner and his wife, Julia. The initiatives are a group of programs investigating the origin, extent, and nature of life in the universe. Aside from Listen, there’s also Message, which is exploring whether and how to communicate with extraterrestrial life; Starshot, which is designing a tiny probe and a ground-based laser to propel it to the star Alpha Centauri; and Watch, which is looking for Earth-like planets around other stars. There are also projects in earlier stages of development that will explore Venus and Saturn’s moon Enceladus for signs of life.

Listen was started with three telescopes. Two are radio telescopes: the 100-meter Robert C. Byrd Green Bank Telescope, in Green Bank, W.Va., and the 64-meter Parkes Observatory telescope, in New South Wales, Australia, for which I’m the project scientist. The third is an optical telescope: the Automated Planet Finder at Lick Observatory, in California. We’re using these telescopes to survey millions of stars and galaxies for unexpected signals, and powerful data processors to comb through the collected data at a very fine resolution.

In 2021, Listen will also begin using the MeerKAT array in South Africa. MeerKAT, inaugurated in July 2018, is an array of 64 parabolic dish antennas, each 13.5 meters in diameter. MeerKAT is located deep in the Karoo Desert, in a federally protected radio-quiet zone where other transmissions are restricted. Even better, while at the other three telescopes we have to share time with other research efforts, at MeerKAT our Listen data recorders can “eavesdrop” on antenna data 24/7. Listen has also partnered over the last couple of years with several other observatories around the globe.

The anomalous signals we’re searching for fall under a broad umbrella called “technosignatures.” A technosignature is any sort of indication that technology exists or existed somewhere beyond our solar system. That includes both intentional and unintentional radio signals, but it can also include things like an abundance of artificial molecules in the atmosphere. For example, Earth’s atmosphere has plenty of hydrofluorocarbons because they were used as refrigerants before scientists realized they were damaging the ozone layer. Breakthrough Listen isn’t searching for anything like artificial molecules, but we are looking for both intentional communications, like a “We’re here!” message, and unintentional signals, like the ones produced over the years by the Arecibo Observatory in Puerto Rico.

Arecibo, before cable breaks destroyed the dish in November 2020, was a telescope with four radio transmitters, the most powerful of which had an effective transmitted power of 22 terawatts at a frequency of 2,380 megahertz (similar to Wi-Fi). The telescope used those transmitters to reflect signals off of objects like asteroids to make measurements. However, an artificial signal from Arecibo that made it into interstellar space would be a clear indication of our existence, even though it wouldn’t be an intentional communication. In fact, Arecibo’s signal was so tightly focused and powerful that another Arecibo-like radio telescope could pick up that signal from halfway across the galaxy.

Looking for technosignatures allows us to be much broader in our search than limiting ourselves to detecting only intentional communications. In contrast, Drake limited his search to a narrow range of frequencies around 1.42 gigahertz called the 21-centimeter line. He picked those frequencies because 1.42 GHz is the frequency at which atomic hydrogen gas, the most abundant gas in the universe, spontaneously radiates photons. Drake hypothesized that an intelligent society would select this frequency for deliberate transmissions, as it would be noticed by any astronomer mapping the galaxy’s hydrogen.

Drake’s reasoning was sound. But without a comprehensive search across a wider chunk of the electromagnetic spectrum, it’s impossible to say whether there aren’t any signals out there. An Arecibo-like 2,380-MHz signal, for example, would have gone completely unnoticed by Project Ozma.

Ground-based radio telescopes can’t scan the entire electromagnetic spectrum. Earth’s atmosphere blocks large swaths of bandwidth at both high and low frequencies. For this reason, gamma ray, X-ray, and ultraviolet astronomy all require space-based telescopes. However, a range of frequencies from roughly 10 MHz to 100 GHz, known as the terrestrial microwave window, passes easily through Earth’s atmosphere to ground-based telescopes. Breakthrough Listen is focused on these frequencies because they’re probable carriers for technosignatures from other planets with Earth-like atmospheres. But it’s still a huge range of frequencies to cover: Drake searched a spectrum band of only about 100 hertz, or one-billionth of that window. In contrast, Breakthrough Listen is covering the entire range of frequencies from 1 GHz to 10 GHz.
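A quick back-of-envelope check of the bandwidth comparison, using the figures quoted above:

```python
# Comparing the search bandwidths mentioned in the text.
window = 100e9 - 10e6        # terrestrial microwave window: ~10 MHz to 100 GHz
ozma = 100.0                 # Project Ozma: ~100 Hz around the 21-cm line
listen = 10e9 - 1e9          # Breakthrough Listen: 1 GHz to 10 GHz

print(f"Ozma covered about {ozma / window:.1e} of the window")   # ~one-billionth
print(f"Listen covers about {listen / window:.0%} of the window")
```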

Each radio telescope is unique, but they all function according to the same basic principles. A large reflective dish gathers radio signals and focuses them on the same point. At that point, a receiver converts those radio signals into signals on a coaxial cable, just like a well-aligned TV antenna. Then computers digitize and process the signals.

The telescopes we’re using for Breakthrough Listen are wideband telescopes, which means that they can record signals from across a wide range of frequencies. Parkes, for example, covers 3.3 GHz of spectrum, while Green Bank can detect signals across 10 GHz of spectrum. Wideband telescopes allow us to avoid making difficult choices about which frequencies we want to focus on.

However, expanding the search band also greatly increases the amount of data collected. Fortunately, after the receivers at the telescopes have digitized the incoming signals, we can turn to computers—rather than astronomers with headphones—for the data processing. Even so, the amount of data we’re dealing with required substantial new hardware.

Each observatory has only enough storage to hold Listen’s raw data for about 24 hours before running out of room. For example, the Parkes telescope generates 215 gigabits per second of data, enough to fill up a laptop hard drive in a few seconds. The MeerKAT array produces even more data, with its 64 antennas producing an aggregate of 2.2 terabits per second. That’s roughly the equivalent of downloading the entirety of Wikipedia 10 times every second. To make such quantities of data manageable, we have to compress that raw data to about one-hundredth of its initial size in real time.
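A rough sanity check of these rates (the 1 TB drive size is an assumption chosen for illustration):

```python
# Back-of-envelope arithmetic on the data rates quoted in the text.
parkes_bps = 215e9           # bits per second at Parkes
meerkat_bps = 2.2e12         # aggregate bits per second at MeerKAT
laptop_bytes = 1e12          # assume a 1 TB laptop drive

fill_seconds = laptop_bytes * 8 / parkes_bps
print(f"Parkes fills a 1 TB drive in about {fill_seconds:.0f} s")

# Even after ~100x real-time compression, a day of Parkes data is sizable:
day_bytes = parkes_bps / 8 * 86_400 / 100
print(f"one compressed day at Parkes: roughly {day_bytes / 1e12:.0f} TB")
```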

After compression, we split the frequency-band data into sub-bands, and then process the sub-bands in parallel. Individual servers each process a single sub-band. Thus the number of servers at each telescope varies, depending on how much data each one produces. Parkes, for example, has a complement of 27 servers, and Green Bank needs 64, while MeerKAT requires 128.

Each sub-band is further split into discrete frequency channels, each about 2 Hz wide. This resolution allows us to spot any signals that span a very narrow range of frequencies. We can also detect wider-frequency signals that are short in duration by averaging the power of each channel over a few seconds or minutes. Simultaneous spikes for those seconds or minutes over a contiguous band of channels would indicate a wide-frequency, short-duration signal. We’re focused on these two types of signals because concentrating signal power continuously into a single frequency or into short pulses over a range of frequencies are the two most effective ways to send a transmission over interstellar distances while minimizing dissipation. Therefore, these signal types are the most likely candidates for extraterrestrial technosignatures.
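The two detection strategies just described can be sketched in a few lines. This is a toy version only: the sample rate, tone frequency, and thresholds below are invented for illustration and are far simpler than Listen's real pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Chop a simulated baseband stream into 0.5-second blocks, FFT each block
# into 2-Hz-wide channels, then average channel power over all blocks and
# flag channels that stand well above the noise floor.
fs = 8192                                   # samples per second (toy rate)
n = fs // 2                                 # 0.5 s blocks -> 2 Hz channels
blocks = 64                                 # ~32 s of data in total

t = np.arange(blocks * n) / fs
stream = rng.normal(size=t.size)                      # background noise
stream += 0.2 * np.sin(2 * np.pi * 1000.0 * t)        # weak narrowband "tone"

spectra = np.abs(np.fft.rfft(stream.reshape(blocks, n), axis=1)) ** 2
avg_power = spectra.mean(axis=0)                      # average over all blocks

freqs = np.fft.rfftfreq(n, d=1 / fs)
hits = np.flatnonzero(avg_power > 10 * np.median(avg_power))
print("narrowband hits at", freqs[hits], "Hz")        # recovers the 1 kHz tone
```

Averaging over time buries the tone's random noise while its steady power accumulates in one 2-Hz channel, which is why narrowband signals pop out so cleanly.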

Fortunately for us, these kinds of signals are the easiest to spot, too. Narrowband signals are easily distinguishable from ones caused by natural astrophysical processes, because such processes do not produce signals narrower than a few thousand hertz. To understand why, consider a cloud of hydrogen atoms in space emitting radiation in the 21-cm line. When Drake searched that band, how would he have known the difference between an intentional signal and the noise created by hydrogen gas?

Here’s how: Even though all the hydrogen in a cloud radiates at that frequency, each atom is moving in a different, random direction. This movement causes each emitted photon to arrive at a radio telescope with a slightly different frequency. It’s the Doppler effect at work: An emitted photon will have a higher frequency if the atom was moving toward you and a lower frequency if it was moving away. In the end, the hydrogen cloud’s signal is smeared across a frequency band centered around 1.42 GHz. If Drake had heard a narrowband signal at that frequency, it would have meant he had detected a physical impossibility: a hydrogen cloud in which every atom was stationary relative to Earth. Alternatively, the signal was artificial.
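The amount of smearing follows from the thermal Doppler shift, Δf = f₀·v/c. Assuming an illustrative cloud temperature of about 100 K (real interstellar clouds span a wide range), the typical atomic speed broadens the line by a few kilohertz, matching the few-thousand-hertz floor for natural signals:

```python
import math

# Thermal Doppler broadening of the 21-cm hydrogen line.
K_B = 1.381e-23        # Boltzmann constant, J/K
M_H = 1.674e-27        # mass of a hydrogen atom, kg
C = 2.998e8            # speed of light, m/s
F0 = 1.420e9           # 21-cm line rest frequency, Hz

temp = 100.0                                  # assumed cloud temperature, K
v_thermal = math.sqrt(K_B * temp / M_H)       # typical line-of-sight speed
delta_f = F0 * v_thermal / C                  # Doppler shift at that speed

print(f"typical atom speed: {v_thermal:.0f} m/s")
print(f"line smeared over roughly +/- {delta_f / 1e3:.1f} kHz")
```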

We can make a similar assumption about short-duration signals. While there are some natural short-duration signals, namely fast radio bursts and pulsars, these have other characteristics that single them out. A signal emitted by a pulsar, for example, has a long “tail” caused by the lower frequencies lagging behind higher frequencies over interstellar distances.
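The "tail" arises because interstellar dispersion delays a signal in proportion to 1/f², so lower frequencies arrive late. The standard radio-astronomy delay formula makes the effect concrete; the dispersion measure (DM) of 100 pc/cm³ below is an illustrative value for a fairly distant pulsar.

```python
# Dispersion delay: lower frequencies lag higher ones after crossing the
# interstellar medium. DM is the electron column density in pc/cm^3.
def dispersion_delay_ms(dm, f_low_ghz, f_high_ghz):
    """Arrival lag (ms) of f_low behind f_high for dispersion measure dm."""
    return 4.149 * dm * (f_low_ghz**-2 - f_high_ghz**-2)

lag = dispersion_delay_ms(dm=100.0, f_low_ghz=1.2, f_high_ghz=1.6)
print(f"1.2 GHz arrives about {lag:.0f} ms after 1.6 GHz")
```

A narrowband artificial transmission shows no such frequency-dependent lag, which is one of the characteristics that lets us tell pulsars apart from candidate technosignatures.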

Also, I should mention that artificial signals generally don’t turn out to be extraterrestrial. In 2019, we published a paper in the Astronomical Journal, detailing our search across 1,327 nearby stars. While we detected tens of millions of narrowband signals, we were able to attribute all of them to satellites, aircraft, and other terrestrial sources. To date, our most promising signal is one at 982 MHz collected in April 2019 from the nearby star Proxima Centauri. The signal is too narrow for any known natural phenomenon and isn’t in a commonly used terrestrial band. That said, there remains a lot of double-checking before we rule out all possibilities other than an extraterrestrial technosignature of some kind.

It’s also entirely possible that an alien society might give off radio technosignatures that are neither narrowband nor short duration. We really don’t know what kinds of communications such a society might use, or what kinds of signals it would give off as a by-product of its technologies. Regardless, it’s likely we couldn’t find such signals simply by finely chopping up the telescope data and analyzing it. So we’re also using AI to hunt for more complicated technosignatures.

Specifically, we’re using anomaly-detection AI, in order to find signals that don’t look like anything else astronomers have found. Anomaly detection is used in many fields to find rare events. For example, if your bank has ever asked you to confirm an expensive purchase like airline tickets, odds are its system determined the transaction was unusual and wanted to be sure your credit card information hadn’t been stolen.

We’re training our AI to spot unexpected signals using a method called unsupervised learning. The method involves giving the AI tons of data like radio signals and asking it to figure out how to classify the data into different categories. In other words, we might give the AI signals coming from quasars, pulsars, supernovas, main-sequence stars like the sun, and dozens of other astronomical sources. Crucially, we do not tell the AI what any of these sources are. That requires the AI to figure out what kinds of characteristics make certain signals similar to one another as it pores through the data, and then to lump like signals into groups.

Unsupervised learning causes an AI classifying radio signals to create a category for pulsar signals, another category for supernova signals, a third for quasar signals, and so on, without ever needing to know what a pulsar, supernova, or quasar is. After the AI has created and populated its groups, we then assign labels based on astronomical objects: quasars, pulsars, and so on. What’s most important, though, is that the AI creates a new category for signals that don’t fit neatly into the categories it has already developed. In other words, it will automatically classify an anomalous signal in a category of its own.

This technique is how we hope to spot any technosignatures that are more complicated than a straightforward narrowband or short-duration signal. That’s precisely why we needed an AI that can spot anything that’s out of the ordinary, without having any preconceived notions about what an odd signal should look like.
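A minimal sketch of the idea, using plain k-means on synthetic two-dimensional "features." The blob positions, cluster count, and injected outlier are all invented for illustration; Listen's actual pipeline uses far richer features and models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Cluster unlabeled "signal features," then flag any point that sits far from
# every cluster center as an anomaly. The three Gaussian blobs stand in for
# familiar source classes (pulsars, quasars, and so on); the outlier stands
# in for a signal that fits no known class.
blob_centers = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 6.0]])
features = np.vstack([c + rng.normal(scale=0.4, size=(100, 2))
                      for c in blob_centers])
features = np.vstack([features, [[12.0, -3.0]]])      # one anomalous point

k = 3
cents = features[rng.choice(300, k, replace=False)]   # init from the bulk
for _ in range(50):                                   # plain k-means updates
    dists = np.linalg.norm(features[:, None] - cents[None], axis=2)
    labels = dists.argmin(axis=1)
    for i in range(k):
        members = features[labels == i]
        if len(members):                              # skip empty clusters
            cents[i] = members.mean(axis=0)

resid = np.linalg.norm(features - cents[labels], axis=1)
anomalies = np.flatnonzero(resid > resid.mean() + 5 * resid.std())
print("anomalous rows:", anomalies)                   # the injected outlier
```

No label ever tells the algorithm what a "pulsar" or "quasar" blob is; the outlier is flagged purely because it is far from every structure the data itself revealed.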

The Listen program is dramatically increasing the number of stars searched through SETI projects, by listening in on thousands of nearby sunlike stars over the program’s duration. We’re also surveying millions of more-distant stars across the galactic plane at a lower sensitivity. And we’re even planning to observe nearby galaxies for any exceptionally energetic signals that have crossed intergalactic space. That last category is a bit of a long shot, but after all, the project’s purpose is to listen more than ever before.

Breakthrough Listen is a hugely ambitious program and the most comprehensive search for intelligent life beyond Earth ever undertaken. Even so, odds are we won’t find a single promising signal. Jill Tarter, the former director of the SETI Institute, has often likened the cumulative SETI efforts so far to scooping a glass of water out of the ocean, seeing there are no fish in the glass, and concluding no fish exist. By the time Breakthrough Listen is finished, it will be more like having examined a swimming pool’s worth of ocean water. But if no compelling candidate signals are found, we haven’t failed. We’ll still be able to place statistical constraints on how common it is for intelligent life to evolve. We may also spot new natural phenomena that warrant more study. Then it’s on to the next swimming pool.

In the meantime, our data will help provide insights, at least, into some of the most profound questions in all of science. We still know almost nothing about how often life emerges from nonliving matter, or how often life develops into an intelligent society, or how long intelligent societies last. So while we may not find any of the signs we’re looking for, we can come closer to putting boundaries on how often these things occur. Even if the stars we search are silent, at least we’ll have learned that we need to look farther afield for any cosmic company.

This article appears in the February 2021 print issue as “Seti’s Million Star Search.”

About the Author

Danny Price is the Breakthrough Listen initiative’s project scientist for the Parkes radio telescope, in Australia.

FAA Files Reveal a Surprising Threat to Airline Safety: the U.S. Military’s GPS Tests

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/aerospace/aviation/faa-files-reveal-a-surprising-threat-to-airline-safety-the-us-militarys-gps-tests

Early one morning last May, a commercial airliner was approaching El Paso International Airport, in West Texas, when a warning popped up in the cockpit: “GPS Position Lost.” The pilot contacted the airline’s operations center and received a report that the U.S. Army’s White Sands Missile Range, in South Central New Mexico, was disrupting the GPS signal. “We knew then that it was not an aircraft GPS fault,” the pilot wrote later.

The pilot missed an approach on one runway due to high winds, then came around to try again. “We were forced to Runway 04 with a predawn landing with no access to [an instrument landing] with vertical guidance,” the pilot wrote. “Runway 04…has a high CFIT threat due to the climbing terrain in the local area.”

CFIT stands for “controlled flight into terrain,” and it is exactly as serious as it sounds. The pilot considered diverting to Albuquerque, 370 kilometers away, but eventually bit the bullet and tackled Runway 04 using only visual aids. The plane made it safely to the ground, but the pilot later logged the experience on NASA’s Aviation Safety Reporting System, a forum where pilots can anonymously share near misses and safety tips.

This is far from the most worrying ASRS report involving GPS jamming. In August 2018, a passenger aircraft in Idaho, flying in smoky conditions, reportedly suffered GPS interference from military tests and was saved from crashing into a mountain only by the last-minute intervention of an air traffic controller. “Loss of life can happen because air traffic control and a flight crew believe their equipment are working as intended, but are in fact leading them into the side of the mountain,” wrote the controller. “Had [we] not noticed, that flight crew and the passengers would be dead. I have no doubt.”

There are some 90 ASRS reports detailing GPS interference in the United States over the past eight years, the majority of which were filed in 2019 and 2020. Now IEEE Spectrum has new evidence that GPS disruption to commercial aviation is much more common than even the ASRS database suggests. Previously undisclosed Federal Aviation Administration (FAA) data for a few months in 2017 and 2018 detail hundreds of aircraft losing GPS reception in the vicinity of military tests. On a single day in March 2018, 21 aircraft reported GPS problems to air traffic controllers near Los Angeles. These included a medevac helicopter, several private planes, and a dozen commercial passenger jets. Some managed to keep flying normally; others required help from air traffic controllers. Five aircraft reported making unexpected turns or navigating off course. In all likelihood, there are many hundreds, possibly thousands, of such incidents each year nationwide, each one a potential accident. The vast majority of this disruption can be traced back to the U.S. military, which now routinely jams GPS signals over wide areas on an almost daily basis somewhere in the country.

The military is jamming GPS signals to develop its own defenses against GPS jamming. Ironically, though, the Pentagon’s efforts to safeguard its own troops and systems are putting the lives of civilian pilots, passengers, and crew at risk. In 2013, the military essentially admitted as much in a report, saying that “planned EA [electronic attack] testing occasionally causes interference to GPS based flight operations, and impacts the efficiency and economy of some aviation operations.”

In the early days of aviation, pilots would navigate using road maps in daylight and follow bonfires or searchlights after dark. By World War II, radio beacons had become common. From the late 1940s, ground stations began broadcasting omnidirectional very high frequency (VHF) signals that planes could lock on to, while shorter-range systems indicated safe glide slopes to help pilots land. At their peak, in 2000, there were more than a thousand VHF navigation stations in the United States. However, in areas with widely spaced stations, pilots were forced to take zigzag routes from one station to the next, and reception of the VHF signals could be hampered by nearby buildings and hills.

Everything changed with the advent of global navigation satellite systems (GNSS), first devised by the U.S. military in the 1960s. The arrival in the mid-1990s of the civilian version of the technology, called the Global Positioning System, meant that aircraft could navigate by satellite and take direct routes from point to point; GPS location and altitude data was also accurate enough to help them land.

The FAA is about halfway through its NextGen effort, which is intended to make flying safer and more efficient through a wholesale switch from ground-based navigation aids like radio beacons to a primarily satellite-enabled navigation system. Along with that switch, the agency began decommissioning VHF navigation stations a decade ago. The United States is now well on its way to having a minimal backup network of fewer than 600 ground stations.

Meanwhile, the reliance on GPS is changing the practice of flying and the habits of pilots. As GPS receivers have become cheaper, smaller, and more capable, they have become more common and more widely integrated. Most airplanes must now carry Automatic Dependent Surveillance-Broadcast (ADS-B) transponders, which use GPS to calculate and broadcast their altitude, heading, and speed. Private pilots use digital charts on tablet computers, while GPS data underpins autopilot and flight-management computers. Pilots should theoretically still be able to navigate, fly, and land without any GPS assistance at all, using legacy radio systems and visual aids. Commercial airlines, in particular, have a range of backup technologies at their disposal. But because GPS is so widespread and reliable, pilots are in danger of forgetting these manual techniques.

When an Airbus passenger jet suddenly lost GPS near Salt Lake City in June 2019, its pilot suffered “a fair amount of confusion,” according to the pilot’s ASRS report. “To say that my raw data navigation skills were lacking is an understatement! I’ve never done it on the Airbus and can’t remember having done it in 25 years or more.”

“I don’t blame pilots for getting a little addicted to GPS,” says Todd E. Humphreys, director of the Radionavigation Laboratory at the University of Texas at Austin. “When something works well 99.99 percent of the time, humans don’t do well in being vigilant for that 0.01 percent of the time that it doesn’t.”

Losing GPS completely is not the worst that can happen. It is far more dangerous when accurate GPS data is quietly replaced by misleading information. The ASRS database contains many accounts of pilots belatedly realizing that GPS-enabled autopilots had taken them many kilometers in the wrong direction, into forbidden military areas, or dangerously close to other aircraft.

In December 2012, an air traffic controller noticed that a westbound passenger jet near Reno, Nev., had veered 16 kilometers (10 miles) off course. The controller confirmed that military GPS jamming was to blame and gave new directions, but later noted: “If the pilot would have noticed they were off course before I did and corrected the course, it would have caused [the] aircraft to turn right into [an] opposite direction, eastbound [jet].”

So why is the military interfering so regularly with such a safety-critical system? Although most GPS receivers today are found in consumer smartphones, GPS was designed by the U.S. military, for the U.S. military. The Pentagon depends heavily on GPS to locate and navigate its aircraft, ships, tanks, and troops.

For such a vital resource, GPS is exceedingly vulnerable to attack. By the time GPS signals reach the ground, they are so faint they can be easily drowned out by interference, whether accidental or malicious. Building a basic electronic warfare setup to disrupt these weak signals is trivially easy, says Humphreys: “Detune the oscillator in a microwave oven and you’ve got a superpowerful jammer that works over many kilometers.” Illegal GPS jamming devices are widely available on the black market, some of them marketed to professional drivers who may want to avoid being tracked while working.

Other GNSS systems, such as Russia’s GLONASS, China’s BeiDou, and Europe’s Galileo constellations, use slightly different frequencies but have similar vulnerabilities; whether a given system is affected depends on exactly who is conducting the test or attack. In China, mysterious attacks have successfully “spoofed” ships with GPS receivers toward fake locations, while vessels relying on BeiDou reportedly remain unaffected. Similarly, GPS signals are regularly jammed in the eastern Mediterranean, Norway, and Finland, while the Galileo system is left untargeted by the same attacks.

The Pentagon uses its more remote military bases, many in the American West, to test how its forces operate under GPS denial, and presumably to develop its own electronic warfare systems and countermeasures. The United States has carried out experiments in spoofing GPS signals on at least one occasion, during which it was reported to have taken great care not to affect civilian aircraft.

Despite such precautions, many ASRS reports record GPS units delivering incorrect positions rather than failing altogether, though this can also happen when satellite signals are merely degraded. Whatever the nature of its tests, the military’s GPS jamming can end up disrupting service for civilian users, particularly high-altitude commercial aircraft, even at a considerable distance.

The military issues Notices to Airmen (NOTAM) to warn pilots of upcoming tests. Many of these notices cover hundreds of thousands of square kilometers. There have been notices that warn of GPS disruption over all of Texas or even the entire American Southwest. Such a notice doesn’t mean that GPS service will be disrupted throughout the area, only that it might be disrupted. And that uncertainty creates its own problems.

In 2017, the FAA commissioned the nonprofit Radio Technical Commission for Aeronautics to look into the effects of intentional GPS interference on civilian aircraft. Its report, issued the following year by the RTCA’s GPS Interference Task Group, found that the number of military GPS tests had almost tripled from 2012 to 2017. Unsurprisingly, ASRS safety reports referencing GPS jamming are also on the rise. There were 38 such ASRS narratives in 2019—nearly a tenfold increase over 2018.

New internal FAA materials obtained by Spectrum from a member of the task group and not previously made public indicate that the ASRS accounts represent only the tip of the iceberg. The FAA data consists of pilots’ reports of GPS interference to the Los Angeles Air Route Traffic Control Center, one of 22 air traffic control centers in the United States. Controllers there oversee air traffic across central and Southern California, southern Nevada, southwestern Utah, western Arizona, and portions of the Pacific Ocean—areas heavily affected by military GPS testing.

This data includes 173 instances of lost or intermittent GPS during a six-month period of 2017 and another 60 over two months in early 2018. These reports are less detailed than those in the ASRS database, but they show aircraft flying off course, accidentally entering military airspace, being unable to maneuver, and losing their ability to navigate when close to other aircraft. Many pilots required the assistance of air traffic control to continue their flights. The affected aircraft included a pet rescue shuttle, a hot-air balloon, multiple medical flights, and many private planes and passenger jets.

In at least a handful of episodes, the loss of GPS was deemed an emergency. Pilots of five aircraft, including a Southwest Airlines flight from Las Vegas to Chicago, invoked the “stop buzzer,” a request routed through air traffic control for the military to immediately cease jamming. According to the Aircraft Owners and Pilots Association, pilots must use this phrase only when a safety-of-flight issue is encountered.

To be sure, many other instances in the FAA data were benign. In early March 2017, for example, Jim Yoder was flying a Cessna jet owned by entrepreneur and space tourist Dennis Tito between Las Vegas and Palm Springs, Calif., when both onboard GPS devices were jammed. “This is the only time I’ve ever had GPS go out, and it was interesting because I hadn’t thought about it really much,” Yoder told Spectrum. “I asked air traffic control what was going on and they were like, ‘I don’t really know.’ But we didn’t lose our ability to navigate, and I don’t think we ever got off course.”

Indeed, one of the RTCA task group’s conclusions was that the Notice to Airmen system was part of the problem: Most pilots who fly through affected areas experience no ill effects, causing some to simply ignore such warnings in the future.

“We call the NOTAMs ‘Chicken Little,’ ” says Rune Duke, an FAA official who was cochair of the RTCA’s task group. “They say the sky is falling over large areas…and it’s not realistic. There are mountains and all kinds of things that would prevent GPS interference from making it 500 nautical miles [926 km] from where it is initiated.”

GPS interference can be affected by the terrain, aircraft altitude and attitude, direction of flight, angle to and distance from the center of the interference, equipment aboard the plane, and many other factors, concluded the task group, which included representatives of the FAA, airlines, pilots, aircraft manufacturers, and the U.S. military. One aircraft could lose all GPS reception, even as another one nearby is completely unaffected. One military test might pass unnoticed while another causes chaos in the skies.

This unreliability has consequences. In 2014, a passenger plane approaching El Paso had to abort its landing after losing GPS reception. “This is the first time in my flying career that I have experienced or even heard of GPS signal jamming,” wrote the pilot in an ASRS report. “Although it was in the NOTAMs, it still caught us by surprise as we really did not expect to lose all GPS signals at any point. It was a good thing the weather was good or this could have become a real issue.”

Sometimes air traffic controllers are as much in the dark as pilots. “They are the last line of defense,” the FAA’s Duke told Spectrum. “And in many cases, air traffic control was not even aware of the GPS interference taking place.”

The RTCA report made many recommendations. The Department of Defense could improve coordination with the FAA, and it could refrain from testing GPS during periods of high air traffic. The FAA could overhaul its data collection and analysis, match anecdotal reports with digital data, and improve documentation of adverse events. The NOTAM system could be made easier to interpret, with warnings that more accurately match the experiences of pilots and controllers.

Remarkably, until the report came out, the FAA had been instructing pilots to report GPS anomalies only when they needed assistance from air traffic control. “The data has been somewhat of a challenge because we’ve somewhat discouraged reporting,” says Duke. “This has led the FAA to believe it’s not been such a problem.”

NOTAMs now encourage pilots to report all GPS interference, but many of the RTCA’s other recommendations are languishing within the Office of Accident Investigation and Prevention at the FAA.

New developments are making the problem worse. The NextGen project is accelerating the move of commercial aviation to satellite-enabled navigation. Emerging autonomous air systems, such as drones and air taxis, will put even more weight on GPS’s shaky shoulders.

When any new aircraft is adopted, it risks posing new challenges to the system. The Embraer EMB-505 Phenom 300, for instance, entered service in 2009 and has since become the world’s best-selling light jet. In 2016, the FAA warned that if the Phenom 300 encountered an unreliable or unavailable GPS signal, it could enter a Dutch roll (named for a Dutch skating technique), a dangerous combination of wagging and rocking that could cause pilots to lose control. The FAA instructed Phenom 300 owners to avoid all areas of GPS interference.

As GPS assumes an ever more prominent role, the military is naturally taking a stronger interest in it. “Year over year, the military’s need for GPS interference-event testing has increased,” says Duke. “There was an increase again in 2019, partly because of counter-UAS [drone] activity. And they’re now doing GPS interference where they previously had not, like Michigan, Wisconsin, and the Dakotas, because it adds to the realism of any type of military training.”

So there are ever more GPS-jamming tests, more aircraft navigating by satellite, and more pilots utterly reliant on GPS. It is a feedback loop, and it constantly raises the chances that one of these near misses and stop buzzers will end in catastrophe.

When asked to comment, the FAA said it has established a resilient navigation and surveillance infrastructure to enable aircraft to continue safe operations during a GPS outage, including radio beacons and radars. It also noted that it and other agencies are working to create a long-term GPS backup solution that will provide position, navigation, and timing—again, to minimize the effects of a loss of GPS.

However, in a report to Congress in April 2020, the agency coordinating this effort, the U.S. Department of Homeland Security, wrote: “DHS recommends that responsibility for mitigating temporary GPS outages be the responsibility of the individual user and not the responsibility of the Federal Government.” In short, the problem of GPS interference is not going away.

In September 2019, the pilot of a small business jet reported experiencing jamming on a flight into New Mexico. He could hear that aircraft all around him were also affected, with some being forced to descend for safety. “Since the FAA is deprecating [ground-based radio aids], we are becoming dependent upon an unreliable navigation system,” wrote the pilot upon landing. “This extremely frequent [interference with] critical GPS navigation is a significant threat to aviation safety. This jamming has to end.”

The same pilot was jammed again on his way home.

This article appears in the February 2021 print issue as “Lost in Airspace.”

10 Exciting Engineering Milestones to Look for in 2021

Post Syndicated from Eliza Strickland original https://spectrum.ieee.org/aerospace/space-flight/10-exciting-engineering-milestones-to-look-for-in-2021

  • A Shining Light

    Last year, germicidal ultraviolet light found a place in the arsenal of weapons used to fight the coronavirus, with upper-air fixtures placed near hospital-room ceilings and sterilization boxes used to clean personal protective equipment. But a broader rollout was prevented by the dangers posed by UV-C light, which damages the genetic material of viruses and humans alike. Now, the Tokyo-based lighting company Ushio thinks it has the answer: lamps that produce 222-nanometer wavelengths that still kill microbes but don’t penetrate human eyes or skin. Ushio’s Care222 lamp modules went into mass production at the end of 2020, and in 2021 they’ll be integrated into products from other companies—such as Acuity Brands’ lighting fixtures for offices, classrooms, stores, and other occupied spaces.

  • Quantum Networking

    Early this year, photons will speed between Stony Brook University and Brookhaven National Laboratory, both in New York, in an ambitious demonstration of quantum communication. This next-gen communication concept may eventually offer unprecedented security, as a trick of quantum mechanics will make it obvious if anyone has tapped into a transmission. In the demo, “quantum memory buffers” from the startup Qunnect will be placed at each location, and photons within those buffers will be entangled with each other over a snaking 70-kilometer network.

  • Winds of Change

    Developers of offshore wind power quickly find themselves in deep water off the coast of California; one prospective site near Humboldt Bay ranges from 500 to 1,100 meters deep. These conditions call for a new breed of floating wind turbine that’s tethered to the seafloor with strong cables. Now, that technology has been demonstrated in pilot projects off the coasts of Scotland and Portugal, and wind power companies are eager to gain access to three proposed sites off the California coast. They’re expecting the U.S. Bureau of Ocean Energy Management to begin the process of auctioning leases for at least some of those sites in 2021.

  • Driverless Race Cars

    The Indianapolis Motor Speedway, the world’s most famous auto racetrack, will host an unprecedented event in October: the first high-speed race of self-driving race cars. About 30 university teams from around the world have signed up to compete in the Indy Autonomous Challenge, in which souped-up race cars will reach speeds of up to 320 kilometers per hour (200 miles per hour). To win the US $1 million top prize, a team’s autonomous Dallara speedster must be first to complete 20 laps in 25 minutes or less. The deep-learning systems that control the cars will be tested under conditions they’ve never experienced before; both sensors and navigation tools will have to cope with extreme speed, which leaves no margin for error.

  • Robots Below

    The robots participating in DARPA’s Subterranean Challenge have already been tested on three different courses; they’ve had to navigate underground tunnels, urban environments, and cave networks (although the caves portion was switched to an all-virtual course due to the pandemic). Late this year, SubT teams from across the world will put it all together at the final event, which will combine elements of all three subdomains into a single integrated challenge course. The robots will have to demonstrate their versatility and endurance in an obstacle-filled environment where communication with the world above ground is limited. DARPA expects pandemic conditions to improve enough in 2021 to make a physical competition possible.

  • Mars or Bust

    In February, no fewer than three spacecraft are due to rendezvous with the Red Planet. It’s not a coincidence—the orbits of Earth and Mars have brought the planets relatively close together this year, making the journey between them faster and cheaper. China’s Tianwen-1 mission plans to deliver an orbiter and rover to search for water beneath the surface; the United Arab Emirates’ Hope orbiter is intended to study the Martian climate; and a shell-like capsule will deposit NASA’s Perseverance rover, which aims to seek signs of past life, while also testing out a small helicopter drone named Ingenuity. There’s no guarantee that the spacecraft will reach their destinations safely, but millions of earthlings will be rooting for them.

  • Stopping Deepfakes

    By the end of the year, some Android phones may include a new feature that’s intended to strike a blow against the ever-increasing problem of manipulated photos and videos. The anti-deepfake tech comes from a startup called Truepic, which is collaborating with Qualcomm on chips for smartphones that will enable phone cameras to capture “verified” images. The raw pixel data, the time stamp, and location data will all be sent over secure channels to isolated hardware processors, where they’ll be put together with a cryptographic seal. Photos and videos produced this way will carry proof of their authenticity with them. “Fakes are never going to go away,” says Sherif Hanna, Truepic’s vice president of R&D. “Instead of trying to prove what’s fake, we’re trying to protect what’s real.”

  • Faster Data

    In a world reshaped by the COVID-19 pandemic, in which most office work, conferences, and entertainment have gone virtual, cloud data centers are under more stress than ever before. To ensure that people have all the bandwidth they need whenever they need it, service providers are increasingly using networks of smaller data centers within metropolitan areas instead of the traditional massive data center on a single campus. This approach offers higher resilience and availability for end users, but it requires sending torrents of data between facilities. That need will be met in 2021 with new 400ZR fiber optics, which can send 400 gigabits per second over data center interconnects of between 80 and 100 kilometers. Verizon completed a trial run last September, and experts believe we’ll see widespread deployment toward the end of this year.

  • Your Next TV

    While Samsung won’t confirm it, consumer-electronics analysts say the South Korean tech giant will begin mass production of a new type of TV in late 2021. Samsung ended production of liquid crystal display (LCD) TVs in 2020, and has been investing heavily in organic light-emitting diode (OLED) display panels enhanced with quantum dot (QD) technology. OLED TVs include layers of organic compounds that emit light in response to an electric current, and they currently receive top marks for picture quality. In Samsung’s QD-OLED approach, the TV will use OLEDs to create blue light, and a QD layer will convert some of that blue light into the red and green light needed to make images. This hybrid technology is expected to create displays that are brighter, higher contrast, and longer lasting than today’s best models.

  • Brain Scans Everywhere

    Truly high-quality data about brain activity is hard to come by; today, researchers and doctors typically rely either on big and expensive machines like MRI and CT scanners or on invasive implants. In early 2021, the startup Kernel is launching a wearable device called Kernel Flow that could change the game. The affordable low-power device uses a type of near-infrared spectroscopy to measure changes in blood flow within the brain. Within each headset, 52 lasers fire precise pulses, and reflected light is picked up by 312 detectors. Kernel will distribute its first batch of 50 devices to “select partners” in the first quarter of the year, but company founder Bryan Johnson hopes the portable technology will one day be ubiquitous. At a recent event, he described how consumers could eventually use Kernel Flow in their own homes.

At Last, First Light for the James Webb Space Telescope

Post Syndicated from David Schneider original https://spectrum.ieee.org/aerospace/astrophysics/at-last-first-light-for-the-james-webb-space-telescope


Back in 1990, after significant cost overruns and delays, the Hubble Space Telescope was finally carried into orbit aboard the space shuttle. Astronomers rejoiced, but within a few weeks, elation turned to dread. Hubble wasn’t able to achieve anything like the results anticipated, because, simply put, its 2.4-meter mirror was made in the wrong shape.

For a time, it appeared that the US $5 billion spent on the project had been a colossal waste. Then NASA came up with a plan: Astronauts would compensate for Hubble’s distorted main mirror by adding small secondary mirrors shaped in just the right way to correct the flaw. Three years later, that fix was carried out, and Hubble’s mission to probe some of the faintest objects in the sky was saved.

Fast-forward three decades. Later this year, the James Webb Space Telescope is slated to be launched. Like Hubble, the Webb telescope project has been plagued by cost overruns and delays. At the turn of the millennium, the cost was thought to be $1 billion. But the final bill will likely be close to $10 billion.

Unlike Hubble, the Webb telescope won’t be orbiting Earth. Instead, it will be sent to the Earth-Sun L2 Lagrange point, which is about 1.5 million kilometers away in the opposite direction from the sun. This point is cloaked in semidarkness, with Earth shadowing much of the sun. Such positioning is good for observing, but bad for generating solar power. So the telescope will circle around L2 instead of positioning itself there.
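That 1.5-million-kilometer figure can be checked with a standard back-of-the-envelope formula: for a small body of mass m orbiting a primary of mass M at radius a, the L1 and L2 points lie roughly a distance a(m/3M)^(1/3) from the smaller body. A quick sketch, using round public values for the Earth and sun rather than any mission data:

```python
# Back-of-envelope distance from Earth to the Sun-Earth L2 point.
# Approximation: r ~ a * (m / (3 * M))^(1/3), valid when m << M.
a_km = 149.6e6          # mean Earth-sun distance, in kilometers
mass_ratio = 3.003e-6   # Earth's mass divided by the sun's mass

r_km = a_km * (mass_ratio / 3) ** (1 / 3)
print(f"L2 lies roughly {r_km / 1e6:.2f} million km beyond Earth")
```

The estimate comes out to about 1.5 million kilometers, matching the quoted figure.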

The downside will be that the telescope will be too distant to service, at least with any kind of spacecraft available now. “Webb’s peril is that it will be parked in space at a place that we can’t get back to if something goes wrong,” says Eric Chaisson, an astrophysicist at Harvard who was a senior scientist at Johns Hopkins Space Telescope Science Institute when Hubble was commissioned. “Given its complexity, it’s hard to believe something won’t, though we can all hope, as I do, that all goes right.”

Why then send the Webb telescope so far away?

The answer is that the Webb telescope is intended to gather images of stars and galaxies created soon after the big bang. And to look back that far in time, the telescope must view objects that are very distant. These objects will be extremely faint, sure. More important, the visible light they gave off will be shifted into the infrared.

For this reason, the telescope’s detectors are sensitive to comparatively long infrared wavelengths. And those detectors would be blinded if the body of the telescope itself was giving off a lot of radiation at these wavelengths due to its heat. To avoid that, the telescope will be kept far from the sun at L2 and will be shaded by a tennis-court-size sun shield composed of five layers of aluminum- and silicon-coated DuPont Kapton. This shielding will keep the mirror of the telescope at a chilly 40 to 50 kelvins, considerably colder than Hubble’s mirror, which is kept essentially at room temperature.
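Wien’s displacement law makes the cooling requirement concrete: a blackbody at temperature T glows brightest at a wavelength of roughly 2898 μm·K divided by T. A minimal sketch, treating the mirror as an ideal blackbody (a simplification, not a thermal model of the actual telescope):

```python
# Wien's displacement law: the wavelength at which a blackbody's
# thermal emission peaks, as a function of its temperature.
WIEN_B_UM_K = 2898.0  # Wien's displacement constant, micrometer-kelvins

def peak_wavelength_um(temp_k: float) -> float:
    """Peak emission wavelength, in micrometers, of a blackbody at temp_k kelvins."""
    return WIEN_B_UM_K / temp_k

print(f"Mirror at room temperature (~290 K): peak near {peak_wavelength_um(290):.0f} um")
print(f"Mirror shaded to about 45 K:         peak near {peak_wavelength_um(45):.0f} um")
```

A room-temperature mirror glows brightest near 10 micrometers, squarely inside the infrared band the telescope is built to observe; chilled to about 45 K, its own emission peaks beyond 60 micrometers, largely out of the detectors’ way.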

Clearly, for the Webb telescope to succeed, everything about the mission has to go right. Gaining confidence in that outcome has taken longer than anyone involved in the project had envisioned. A particularly troubling episode occurred in 2018, when a shake test of the sun shield jostled some screws loose. That and other problems, attributed to human error, shook legislators’ confidence in the prime contractor, Northrop Grumman. The government convened an independent review board, which uncovered fundamental issues with how the project was being managed.

In 2018 testimony to the House Committee on Science, Space, and Technology, Wesley Bush, then the CEO of Northrop Grumman, came under fire when the chairman of the committee, Rep. Lamar Smith (R-Texas), asked him whether Northrop Grumman would agree to pay for $800 million of unexpected expenditures beyond the nominal final spending limit.

Naturally, Bush demurred. He also marshalled an argument that has been used to justify large expenditures on space missions since Sputnik: the need to demonstrate technological prowess. “It is especially important that we take on programs like Webb to demonstrate to the world that we can lead,” said Bush.

During the thoroughgoing reevaluation in 2018, launch was postponed to March of 2021. Then the pandemic hit, delaying work up and down the line. In July of 2020, launch was postponed yet again, to 31 October 2021.

Whether Northrop Grumman will really hit that target is anyone’s guess: The company did not respond to requests from IEEE Spectrum for information about how the pandemic is affecting the project timeline. But if this massive, quarter-century-long undertaking finally makes it into space this year, astronomers will no doubt be elated. Let’s just hope that elation over a space telescope doesn’t again turn into dread.

This article appears in the January 2021 print issue as “Where No One Has Seen Before.”

SpaceX, Blue Origin, and Dynetics Compete to Build the Next Moon Lander

Post Syndicated from Jeff Foust original https://spectrum.ieee.org/aerospace/space-flight/spacex-blue-origin-and-dynetics-compete-to-build-the-next-moon-lander


In March 2019, Vice President Mike Pence instructed NASA to do something rarely seen in space projects: move up a schedule. Speaking at a meeting of the National Space Council, Pence noted that NASA’s plans for returning humans to the moon called for a landing in 2028. “That’s just not good enough. We’re better than that,” he said. Instead, he announced that the new goal was to land humans on the moon by 2024.

That decision wasn’t as radical as it might have seemed. The rocket that is the centerpiece of NASA’s lunar exploration plans, the Space Launch System, has been in development for nearly a decade and is scheduled to make its first flight in late 2021. The Orion, a crewed spacecraft that will be launched on that rocket, is even older, dating back to the previous NASA initiative to send humans to the moon, the Constellation program in the mid-2000s.

What’s missing, though, is the lander that astronauts will use to go from the Orion in lunar orbit down to the surface and back. NASA was just starting to consider new ideas for lunar landers when Pence gave his speech. Indeed, proposals for an initial round of studies were due to NASA just the day before he spoke. Those studies have since progressed, and NASA may settle on a design for its upcoming moon lander as soon as next month.

To meet Pence’s 2024 deadline, it was clear that NASA would have to move faster than normal. That meant moving differently.

The Lunar Module, or LM, used for the Apollo program, was developed through a standard government contract with Grumman Corp. But in the half century since Apollo, not only has the technology of spaceflight changed, but so has the business. Commercial programs to ferry cargo and crew to the space station demonstrated there were opportunities for government to partner with industry, with companies covering some of the development costs in exchange for using the vehicles they designed to serve other customers.

That approach, NASA argued, would allow it to support more companies in the early phases of the program and perhaps through full-scale lander development. “We want to have numerous suppliers that are competing against each other on cost, and on innovation, and on safety,” NASA Administrator Jim Bridenstine told the Senate Commerce Committee in September of 2020.

That was the approach NASA decided to follow to develop what it calls the Human Landing System (HLS), part of its upcoming Artemis program of lunar exploration. NASA would award up to four contracts for less than a year of initial work to refine lander designs. Based on the quality of those lander designs as well as the available funding—including the share of the costs companies offered to pay—NASA would select one or more for continued development.

Proposals for the HLS program were due to NASA in November of 2019. On 30 April 2020, NASA announced the winners of three contracts with a combined US $967 million for those initial studies: Blue Origin, Dynetics, and SpaceX. “Let’s make no mistake about it: We are now on our way,” said Douglas Loverro, then a NASA official, at the briefing announcing the contracts. Loverro, who was the NASA associate administrator responsible for human spaceflight, added, “We’ve got all the pieces we need.” Later this year, NASA will choose which of these three to pursue.

The three landers are as different as the companies NASA selected. Blue Origin, in Kent, Wash., has the significant financial resources of its founder, Amazon.com’s Jeff Bezos, but chose to partner with three other major companies on its lander: Draper, Lockheed Martin, and Northrop Grumman. “What we’re trying to do is take the best of what all the partners bring to build a truly integrated lander vehicle,” says John Couluris, program manager for the HLS at Blue Origin.

Its design is the most similar of the three to the original Apollo LM. Blue Origin is leading the development of the descent stage, which brings the lander down to the surface, using a version of an uncrewed lunar lander called Blue Moon that it had previously been working on. Lockheed Martin is building the ascent stage, which will carry astronauts back into lunar orbit. Its crew cabin is based on the one the company designed for the Orion spacecraft. Northrop Grumman will provide a transfer stage, which will move the combined ascent and descent stages from a high lunar orbit to a low one, from which the descent stage takes over for the landing. Draper Laboratory, in Cambridge, Mass., is providing the avionics for the combined lander.

That modular approach, Couluris argues, has advantages over building a larger integrated lander. “The three elements provide a lot of flexibility in launch-vehicle selection,” he says, allowing NASA to use any number of vehicles to send the modules to the moon. “It also allows three independent efforts to go in parallel as Blue Origin leads the overall integrated effort.”

The early work on the lander has included building full-scale models of the ascent and descent modules and installing them in a training facility at NASA’s Johnson Space Center. This gives engineers and astronauts a hands-on way to see how the modules work together and determine the best places to put major components before the design is fixed.

“Understanding things that affect the primary structure are important. For example, are the windows in the right spot?” says Kirk Shireman, a former NASA space-station program manager who joined Lockheed Martin recently as vice president of lunar campaigns. “They impact big pieces of the structure that are long-lead and need to be finalized so we can begin manufacturing.”

Dynetics, a Huntsville, Ala.–based engineering company, is similarly working on its lander with other firms—more than two dozen. These include Sierra Nevada Corp., which is building the Dream Chaser cargo vehicle for the space station; Thales Alenia Space, which built pressurized module structures for the station and other spacecraft; and launch-vehicle company United Launch Alliance (ULA).

Dynetics took a very different approach with its lander, though, creating a single module ringed by thrusters and propellant tanks. That results in a low-slung design that puts the crew cabin just a couple of meters off the ground, making it easy for astronauts to get from the module to the surface and back.

“We felt like it was important to get the crew module as close to the lunar surface as we could to ease the operations of getting in and out,” says Robert Wright, a program manager for space systems at Dynetics.

To make that approach work—and also be carried into space using available launch vehicles—Dynetics proposes to fuel the lander only after it is orbiting the moon. The lander will be launched with empty propellant tanks on a ULA Vulcan Centaur rocket and placed in lunar orbit. Two more Vulcan Centaur launches will follow, each about two to three weeks apart, carrying the liquid oxygen and methane propellants needed to land on the moon and return to orbit.

Refueling in space using cryogenic propellants requires new technology, but Dynetics believes it can demonstrate it in time to support a 2024 landing. “We worked closely with NASA on our concept of operations, and the Orion plans, to ensure that our operational scenario is viable and feasible,” says Kim Doering, vice president of space systems at Dynetics.

The third contender, SpaceX, is offering a lunar lander based on Starship, the reusable launch vehicle it is developing and testing at a site on the Texas coast near Brownsville. Starship, in many respects, looks oversized for the job, resembling the giant rocket that the graphic-novel hero Tintin used in his fictional journey to the moon. Starship’s crew cabin is so high off the lunar surface that SpaceX plans to use an elevator to transport astronauts down to the surface and back.

“Starship works well for the HLS mission,” says Nick Cummings, director of civil-space advanced development at SpaceX. “We did look at alternative architectures, multi-element architectures, but we came back because Starship provides a lot of capability.” That capability includes the ability to carry what he described as “extraordinarily large cargo” to the surface of the moon, for example large rovers.

The Starship used for lunar missions will be different from those flying to and from Earth. The lunar Starship lacks the flaps and heat shield needed for reentering Earth’s atmosphere. It will have several large Raptor engines, but also a smaller set of thrusters used for landing and taking off on the moon because the Raptors are too powerful.

Starship, like the Dynetics lander, will require in-space refueling. The lunar Starship will be launched into Earth orbit, and several Starships will follow with propellant to transfer to it before it heads to the moon.

SpaceX CEO Elon Musk said at a conference in October that he expects to demonstrate Starship-to-Starship refueling in 2022. “As soon as you’ve got orbital refilling, you can send significant payload to the moon,” he said.

All three companies face significant obstacles to overcome in their lander development. The most obvious ones are technical: New rocket engines need to be built and new technologies, like in-space refueling, need to be demonstrated, all on a tight schedule to meet the 2024 deadline.

NASA acknowledged that crunch in a document explaining why it chose those three companies. Blue Origin, for example, has “a very significant amount of development work” that needs to be completed “on what appears to be an aggressive timeline.” Dynetics’s lander requires technologies that “need to be developed at an unprecedented pace.” And SpaceX’s concept “requires numerous, highly complex launch, rendezvous, and fueling operations which all must succeed in quick succession in order to successfully execute on its approach.”

The companies nevertheless remain optimistic about staying on schedule. “It certainly is an aggressive timeline,” acknowledged SpaceX’s Cummings. But, based on the company’s experience with other programs, “we think this is very doable.”

Congress may be harder to convince. Some members remain skeptical that NASA’s approach to returning humans to the moon by 2024, including its use of partnerships with industry, is the right one.

NASA’s Bridenstine, in many public comments, said he appreciates that Congress is willing to provide at least some funding for the lunar lander program. “Accelerating it to 2024 requires a $3.2 billion budget for 2021,” he told senators last September. That funding decision, he added, needs to come by February, when NASA will select the companies that will continue development work for the HLS program. “If we get to February of 2021 without [a $3.2 billion] appropriation, that’s really going to put the brakes on our ability to achieve a moon landing by as early as 2024,” he warned. And the current Senate proposal, now being negotiated with the House, budgets only $1 billion.

In space, as on Earth, the most important fuel is money.

This article appears in the January 2021 print issue as “Three Ways to the Moon.”

Boom Supersonic’s XB-1 Test Aircraft Promises a New Era of Faster-Than-Sound Travel

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/aerospace/aviation/boom-supersonics-xb-1-test-aircraft-promises-a-new-era-of-faster-than-sound-travel

The Concorde, the world’s first supersonic airliner, landed for the last time 17 years ago. It was a beautiful feat of engineering, but it never turned a profit. So why should we believe that Boom Supersonic can do better now?

Blake Scholl, the founder and chief executive, turns the question on its head: How could it not do better? The Concorde was designed in the 1960s using slide rules and wind tunnels, whereas Boom Supersonic’s planned airliner, the Overture, is being modeled in software and built out of ultralight materials, using superefficient jet engines. That should reduce the consumption of fuel per passenger-mile well below the Concorde’s gluttonous rate, low enough to make possible fares no pricier than today’s business class.

“The greatest challenge—it’s not technology,” says Scholl. “It’s not that people don’t want them. And it’s not that the money isn’t out there. It’s the complex execution, and it’s getting the right people together to do it, building a startup in an industry that doesn’t usually have startups.”

In October, Boom rolled out a one-third-scale model of the Overture called the XB-1, the point of which is to prove the design principles, cockpit ergonomics, flight envelope, even the experience of flight itself. The first flight will take place in the “near future,” over the Mojave Desert, says Scholl. There’s just one seat, which means that the experience will be the pilot’s alone. The Overture, with 65 seats, is slated to fly in 2025, and allowing for the time it takes to certify it with regulators, it is planned for commercial flight by 2029.

Test planes are hardly standard in the commercial aviation industry. This one is necessary, though, because the company started with a clean sheet of paper.

“There’s nothing more educational than flying hardware,” Brian Durrence, senior vice president in charge of developing the Overture, tells IEEE Spectrum. “You’re usually using a previous aircraft—if you’re a company that has a stable of aircraft. But it’s probably not a great business plan to jump straight into building your first commercial aircraft brand new.”

The XB-1 is slender and sleek, with swept-back delta wings and three jet engines, two slung under the wings and a third in the tail. These are conventional jet engines, coupled to afterburners—thrust-enhancing, fuel-sucking devices that were used by the Concorde, contributing to its high cost of operation. But the future airliner will use advanced turbofan engines that won’t need afterburners.

Another difference from yesteryear is in the liberal use of virtual views of the aircraft’s surroundings. The Concorde, optimized as it was for speed, was awkward during takeoff and landing, when the wings had to be angled upward to provide sufficient lift at low speeds. To make sure the pilot had a good view of the runway, the plane thus needed a “drooping” nose, which was lowered at these times. But the XB-1 can keep a straight face because it relies instead on cameras and cockpit monitors to make the ground visible.

To speed the design process, the engineers had recourse not only to computer modeling but also to 3D printing, which lets them prototype a part in hours, revise it, and reprint it. Boom uses parts made by Stratasys, an additive-manufacturing company, which uses a new laser system from Velo 3D to fuse titanium powder into objects with intricate internal channels, both to save weight and allow the passage of cooling air.

Perhaps Scholl’s most debatable claim is that there’s a big potential market for Mach-plus travel out there. If speed were really such a selling point, why have commercial airliners actually slowed down a bit since the 1960s? It’s not that today’s planes are pokier—even now, a pilot will routinely make up lost time simply by opening the throttle. No, today’s measured pace is a business decision: Carriers traded speed against fuel consumption because the market demanded low fares.

Scholl insists there are people who have a genuine need for speed. He cites internal market surveys suggesting that most business travelers would buy a supersonic flight at roughly the same price as business class, and he argues that still more customers might pop up once supersonic travel allowed you to, say, fly from New York to London in 3.5 hours, meet with associates, then get back home in time to sleep in your own bed.

Why not go after the market for business jets, then? Because Scholl has Elon Musk–like ambitions: He says that once business travel gets going, the unit cost of manufacturing will fall, further lowering fares and inducing even mere tourists to break the sound barrier. That, he argues, is why he’s leaving the market for supersonic business planes—those with 19 seats or fewer—to the likes of Virgin Galactic and Aerion Supersonic.

Boom’s flights will be between coastal cities, because overwater routes inflict no sonic booms on anyone except the fish. So we’re talking about a niche within a niche. Still, overwater flight is a big and profitable slice of the pie, and it’s precisely on such long routes that speed matters the most.

This article appears in the January 2021 print issue as “Supersonic Travel Returns.”

Nuclear-Powered Rockets Get a Second Look for Travel to Mars

Post Syndicated from Prachi Patel original https://spectrum.ieee.org/aerospace/space-flight/nuclear-powered-rockets-get-a-second-look-for-travel-to-mars

For all the controversy they stir up on Earth, nuclear reactors can produce the energy and propulsion needed to rapidly take large spacecraft to Mars and, if desired, beyond. The idea of nuclear rocket engines dates back to the 1940s. This time around, though, plans for interplanetary missions propelled by nuclear fission and fusion are being backed by new designs that have a much better chance of getting off the ground.

Crucially, the nuclear engines are meant for interplanetary travel only, not for use in the Earth’s atmosphere. Chemical rockets launch the craft out beyond low Earth orbit. Only then does the nuclear propulsion system kick in.

The challenge has been making these nuclear engines safe and lightweight. New fuels and reactor designs appear up to the task, and NASA is now working with industry partners on possible future nuclear-powered crewed space missions. “Nuclear propulsion would be advantageous if you want to go to Mars and back in under two years,” says Jeff Sheehy, chief engineer in NASA’s Space Technology Mission Directorate. To enable that mission capability, he says, “a key technology that needs to be advanced is the fuel.”

Specifically, the fuel needs to endure the superhigh temperatures and volatile conditions inside a nuclear thermal engine. Two companies now say their fuels are sufficiently robust for a safe, compact, high-performance reactor. In fact, one of these companies has already delivered a detailed conceptual design to NASA.

Nuclear thermal propulsion uses energy released from nuclear reactions to heat liquid hydrogen to about 2,430 °C—some eight times the temperature of nuclear-power-plant cores. The propellant expands and jets out of the nozzle at tremendous speed. This can produce roughly twice the thrust per kilogram of propellant of a chemical rocket, allowing nuclear-powered ships to travel longer and faster. Plus, once at the destination, be it Saturn’s moon Titan or Pluto, the nuclear reactor could switch from propulsion system to power source, enabling the craft to send back high-quality data for years.
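The “twice the thrust per mass” advantage follows from the ideal exhaust velocity of a thermal rocket, which scales with the square root of chamber temperature divided by the propellant’s molar mass. A back-of-envelope sketch (the chemical-rocket temperature and both heat-capacity ratios are illustrative textbook assumptions, not figures from the article):

```python
import math

R = 8.314  # universal gas constant, J/(mol·K)
G0 = 9.81  # standard gravity, m/s^2, used to convert velocity to specific impulse

def exhaust_velocity(temp_k, molar_mass_kg, gamma):
    """Ideal fully expanded exhaust velocity of a thermal rocket:
    v_e = sqrt(2 * gamma/(gamma - 1) * R * T / M)."""
    return math.sqrt(2 * gamma / (gamma - 1) * R * temp_k / molar_mass_kg)

# Nuclear thermal: pure hydrogen heated to ~2,430 °C (2,703 K);
# gamma ~1.4 for diatomic H2 (assumed).
v_ntp = exhaust_velocity(2703, 2.016e-3, 1.4)

# Chemical H2/O2: exhaust is mostly steam at roughly 3,500 K;
# gamma ~1.2 (assumed).
v_chem = exhaust_velocity(3500, 18.0e-3, 1.2)

print(f"Nuclear thermal: v_e = {v_ntp:.0f} m/s (Isp = {v_ntp / G0:.0f} s)")
print(f"Chemical H2/O2:  v_e = {v_chem:.0f} m/s (Isp = {v_chem / G0:.0f} s)")
print(f"Ratio: {v_ntp / v_chem:.1f}x")
```

The chemical engine actually runs hotter; what roughly doubles the exhaust velocity is hydrogen’s low molar mass (about 2 g/mol versus 18 g/mol for steam), and thrust per unit of propellant consumed scales directly with exhaust velocity.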

Getting enough thrust out of a nuclear rocket used to require weapons-grade, highly enriched uranium. Low-enriched uranium fuels, used in commercial power plants, would be safer to use, but they can become brittle and fall apart under the blistering temperatures and chemical attacks from the extremely reactive hydrogen.

However, Ultra Safe Nuclear Corp. Technologies (USNC-Tech), based in Seattle, uses a uranium fuel enriched to below 20 percent, which is a higher grade than that of power reactors but “can’t be diverted for nefarious purposes, so it greatly reduces proliferation risks,” says director of engineering Michael Eades. The company’s fuel contains microscopic ceramic-coated uranium fuel particles dispersed in a zirconium carbide matrix. The microcapsules keep radioactive fission by-products inside while letting heat escape.

Lynchburg, Va.–based BWX Technologies is working under a NASA contract to look at designs using a similar ceramic composite fuel—and is also examining an alternative fuel form encased in a metallic matrix. “We’ve been working on our reactor design since 2017,” says Joe Miller, general manager for the company’s advanced technologies group.

The two companies’ designs rely on different kinds of moderators, materials that slow down energetic neutrons produced during fission so they can sustain a chain reaction instead of striking and damaging the reactor structure. BWX intersperses its fuel blocks between hydride moderator elements, while USNC-Tech’s design integrates a beryllium metal moderator into the fuel itself. “Our fuel stays in one piece, survives the hot hydrogen and radiation conditions, and does not eat all the reactor’s neutrons,” Eades says.

Princeton Plasma Physics Laboratory scientists are using an experimental reactor to heat fusion plasmas to one million degrees C—a step on the long journey toward fusion-powered rockets for interplanetary travel.

There is another route to small, safe nuclear-powered rockets, says Samuel Cohen at Princeton Plasma Physics Laboratory: fusion reactors. Mainline fusion uses deuterium and tritium fuels, but Cohen is leading efforts to make a reactor that relies on fusion between deuterium atoms and helium-3 in a high-temperature plasma, which produces very few neutrons. “We don’t like neutrons because they can change structural material like steel to something more like Swiss cheese and can make it radioactive,” he says. The Princeton lab’s concept, called Direct Fusion Drive, also needs much less fuel than conventional fusion, and the device could be one-thousandth as large, Cohen says.

Fusion propulsion could in theory far outperform fission-based propulsion, because fusion reactions release up to four times as much energy, says NASA’s Sheehy. However, the technology isn’t as far along and faces several challenges, including generating and containing the plasma and efficiently converting the energy released into directed jet exhaust. “It could not be ready for Mars missions in the late 2030s,” he says.
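Sheehy’s “up to four times as much energy” comparison is per unit mass of fuel, not per reaction. A quick sketch using standard textbook reaction energies (the numeric values are supplied here for illustration, not taken from the article):

```python
# Energy released per unit mass of fuel, fission vs. fusion,
# expressed in MeV per nucleon of fuel consumed.
reactions = {
    "U-235 fission": (200.0, 235),  # (MeV released, nucleons of fuel)
    "D-T fusion":    (17.6, 5),     # deuterium (2) + tritium (3) nucleons
    "D-He3 fusion":  (18.3, 5),     # deuterium (2) + helium-3 (3) nucleons
}

per_nucleon = {name: mev / nucleons for name, (mev, nucleons) in reactions.items()}

for name, energy in per_nucleon.items():
    print(f"{name}: {energy:.2f} MeV per nucleon")

# Fusion vs. fission, per unit mass of fuel:
ratio = per_nucleon["D-He3 fusion"] / per_nucleon["U-235 fission"]
print(f"Fusion/fission energy-per-mass ratio: {ratio:.1f}")
```

Fission wins handily per reaction event (~200 MeV versus ~18 MeV), but fusion fuel nuclei are so much lighter that fusion releases roughly four times as much energy per kilogram of fuel, which is the figure that matters for a spacecraft carrying its own propellant.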

USNC-Tech, by contrast, has already made small hardware prototypes based on its new fuel. “We’re on track to meet NASA’s goal to have a half-scale demonstration system ready for launch by 2027,” says Eades. The next step would be to build a full-scale Mars flight system, one that could very well drive a 2035 Mars mission.

This article appears in the January 2021 print issue as “Nuclear-Powered Rockets Get a Second Look.”