
Russia, China, the U.S.: Who Will Win the Hypersonic Arms Race?

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/aerospace/aviation/russia-china-the-us-who-will-win-the-hypersonic-arms-race

It’s obvious why the militaries of the world want missiles that can follow erratic paths at low altitude while flying at five times the speed of sound, eluding any chance at detection or interception.

“Think of it as delivering a pizza, except it’s not a pizza,” says Bradley Wheaton, a specialist in hypersonics at the Johns Hopkins University Applied Physics Laboratory (APL), in Maryland. “In the United States, just 15 minutes can cover the East Coast; a really fast missile takes 20 minutes to get to the West Coast. At these speeds, you have a factor of 50 increase in the area covered per unit of time.”
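Wheaton's factor of 50 falls out of simple geometry: the area a missile can reach in a given time grows with the square of its speed. A back-of-the-envelope sketch, assuming the baseline is a subsonic cruise missile at roughly Mach 0.7 (that baseline is an assumption for illustration, not Wheaton's figure):

```python
import math

MACH_1 = 343.0  # approximate speed of sound at sea level, m/s

def reachable_area(mach, minutes):
    """Area (km^2) reachable in `minutes` at a given Mach number."""
    v = mach * MACH_1              # airspeed, m/s
    r = v * minutes * 60 / 1000    # reachable radius, km
    return math.pi * r ** 2

# Areas covered per unit time scale with the square of the speed ratio.
ratio = reachable_area(5.0, 10) / reachable_area(0.7, 10)
print(round(ratio, 1))  # 51.0
```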

So the question isn’t why the great powers are pursuing hypersonic arms, but why they are doing so now. Quick answer: They are once again locked in an arms race.

The wider world first heard of this type of weaponry in March 2018, when Russian president Vladimir Putin gave a speech describing his country’s plans for a nuclear-powered cruise missile that could fly around the world at blinding speed, then snake around hills and dales to a target. His bold assertions have been questioned, particularly the part about nuclear power. Even so, a year later a nuclear accident killed seven people near a testing range off the northern coast of Russia, and U.S. intelligence officials speculated that it involved hypersonic experiments.

The nature of that accident is still shrouded in mystery, but it’s clear there’s been a huge increase in the research effort in hypersonics. Here’s a roundup of what the superpowers of the 21st century are doing to pursue what is, in fact, an old concept.

The hypersonic missiles in use or in testing in China and Russia can apparently carry either conventional warheads, aimed at ships and other small military targets, or nuclear ones, aimed at cities and government centers. These ship killers could deprive the United States of its preeminence at sea, which is more than enough reason for China, for instance, to develop hypersonics. But a nuclear-armed version that leaves the defender too little time to launch a retaliatory strike would do even more to shift the balance of power, because it would dismantle the painstakingly constructed system of deterrence known as mutually assured destruction, or by the jocular acronym MAD.

“The nuclear side is very destabilizing, which is why the Russians are going after it,” says Christopher Combs, a professor of mechanical engineering at the University of Texas at San Antonio. “But on the U.S. side we see no need for that, so we’re going conventional.”

That is indeed the official U.S. policy. But in August, some months after Combs spoke with IEEE Spectrum, an Aviation Week article pointed out that an Air Force agency charged with nuclear weapons requested that companies submit ideas for a “thermal protection system that can support [a] hypersonic glide to ICBM ranges.” Soon after that, the request was hastily taken down, and the U.S. Air Force felt compelled to restate its policy not to pursue nuclear-capable hypersonic weapons.

Today’s forays into hypersonic research have deep roots, reaching back to the late 1950s, in both the United States and the Soviet Union. Although this work continued for decades, in 1994, a few years after the Cold War ended with the dissolution of the Soviet Union, the United States pulled the plug on research into hypersonic flight, including its last and biggest program, the Rockwell X-30. Nicknamed the “Orient Express,” the X-30 was to have been a crewed transport that would top out at 25 times the speed of sound, Mach 25—enough to take off from Washington, D.C., and land in Tokyo 2 hours later. Russia also discontinued research in this area during the 1990s, when its economy was in tatters.

Today’s test vehicles just pick up where the old ones left off, explains Alexander Fedorov, a professor at Moscow Institute of Physics and Technology and an expert on hypersonic flow at the boundary layer, which is right next to the vehicle’s skin. “What’s flying now is just a demonstration of technology—the science is 30 years old,” he says.

Fedorov has lectured in the United States; he even helps U.S. graduate students with their research. He laments how the arms race has stifled international cooperation, adding that he himself has “zero knowledge” about the military project Putin touted two years ago. “But I know that people are working on it,” he adds.

In the new race, Fedorov says, Russia has experience without much money, China has money without much experience, and the United States has both, although it revived its efforts later than did Russia or China and is now playing catch-up. For fiscal 2021, U.S. research agencies have budgeted US $3.2 billion [PDF] for all hypersonic weapons research, up from $2.6 billion in the previous year.

Other programs are under way in India and Australia; even Israel and Iran are in the game, if on the sidelines. But Fedorov suggests that the Chinese are the ones to watch: They used to talk at international meetings, he says, but now they mostly just listen, which is what you’d expect if they had started working on truly new ideas—of which, he reiterates, there are very few on display. All the competing powers have shown vehicles that are “very conservative,” he says.

One good reason for the rarity of radical designs is the enormous expense of the research. Engineers can learn only so much by running tests on the ground, using computational fluid-flow models and hypersonic wind tunnels, which themselves cost a pretty penny (and simulate only some limited aspects of hypersonic flight). Engineers really need to fly their creations, and usually when they do, they use up the test vehicle. That makes design iteration very costly.

It’s no wonder hypersonic prototypes fail so often. In mere supersonic flight, passing Mach 1 is a clear-cut thing: The plane outdistances the sound waves that it imparts to the air to produce a shock wave, which forms the familiar two-beat sonic boom. But as the vehicle exceeds Mach 5, the air just behind the shock wave grows so dense that the shock layer thins, allowing the wave to nestle along the surface of the vehicle. That in-your-face layer poses no aerodynamic problems, and it can even be an advantage when it’s smooth. But it can become turbulent in a heartbeat.

“Predicting when it’s going turbulent is hard,” says Wheaton, of Johns Hopkins APL. “And it’s important because when it does, heating goes up, and it affects how control surfaces can steer. Also, there’s more drag.”

The pioneers of hypersonic flight learned about turbulence the hard way. On one of its many flights, in 1967, the U.S. Air Force’s X-15 experimental hypersonic plane went into a spin, killing the pilot, Michael J. Adams. The right stuff, indeed.

Hypersonic missiles come in two varieties. The first kind, launched into space on the tip of a ballistic missile, punches down into the atmosphere, then uses momentum to maneuver. Such “boost-glide” missiles have no jet engines and thus need no air inlets, so it’s easy to make them symmetrical, typically a tube with a cone-shaped tip. Every part of the skin gets equal exposure to the air, which at these speeds breaks down into a plume of plasma, like the one that puts astronauts in radio silence during reentry.

Boost-glide missiles are now operational. China appears to have deployed the first one, called the Dongfeng-17, a ballistic missile that carries glide vehicles. Some of those gliders are billed as capable of knocking out U.S. Navy supercarriers. For such a mission it need not pack a nuclear or even a conventional warhead, instead relying on its enormous kinetic energy to destroy its target. And there’s nothing that any country can now do to defend against it.

“Those things are going so fast, you’re not going to get it,” General Mark Milley, chairman of the Joint Chiefs of Staff, said in March, in testimony [PDF] before Congress.

You might think that you give up the element of surprise by starting with a ballistic trajectory. But not completely. Once the hypersonic missile comes out of its dive to fly horizontally, it becomes invisible to sparsely spaced radars, particularly the handful based in the Pacific Ocean. And that flat flight path can swerve a lot. That’s not because of any AI-managed magic—the vehicle just follows a randomized, preprogrammed set of turns. But the effect on those playing defense is the same: The pizza arrives before they can find their wallets.

The second kind of hypersonic missile gets the bulk of its impulse from a jet engine that inhales air really fast, whirls it together with fuel, and burns the mixture in the instant that it tarries in the combustion chamber before blowing out the back as exhaust. Because these engines don’t need compressors but simply use the force of forward movement to ram air inside, and because that combustion proceeds supersonically, they are called supersonic combustion ramjets—scramjets, for short.

One advantage the scramjet has over the boost-glide missile is its ability to stay below radar and continue to maneuver over great distances, all the way to its target. And because it never enters outer space, it doesn’t need to ride a rocket booster, although it does need some powerful helper to get it up to the speed at which first a ramjet, and then a scramjet, can work.

Another advantage of the scramjet is that it can, in principle, be applied for civilian purposes, moving people or packages that absolutely, positively have to be there quickly. The Europeans have such a project. So do the Chinese, and Boeing has shown a concept. Everyone talks up this possibility because, frankly, it’s the only peaceable talking point there is for hypersonics. Don’t forget, though, that supersonic commercial flight happened long ago, made no money, and ended—and supersonic flight is way easier.

The scramjet has one big disadvantage: It’s a lot harder technically. Any hypersonic vehicle must fend off the rapidly moving air outside, which can heat the leading edges to as high as 3,000 °C. But that heat and stress are nothing like the hellfire inside a scramjet engine. There, the heat cannot radiate away, it’s hard to keep the flame lit, and the insides can come apart second by second, affecting airflow and stability. Five minutes is a long time in this business.

That’s why scramjets, though conceived in the 1950s, still remain a work in progress. In the early 2000s, NASA’s X-43 used scramjets for about 10 seconds in flight. In 2013, Boeing’s X-51 Waverider flew at hypersonic speed for 210 seconds while under scramjet power.

Tests on the ground have fared better. In May, workers at the Beijing Academy of Sciences ran a scramjet for 10 minutes, according to a report in the South China Morning Post. Two years earlier, the leader of the project, Fan Xuejun, told the same newspaper that a factory was being built to construct a variety of scramjets, some supposedly for civilian application. One engine would use a combined cycle, with a turbojet to get off the ground, a ramjet to accelerate to near-hypersonic speed, a scramjet to blow past Mach 5, and maybe even a rocket to top off the thrust. That’s a lot of moving parts—and an ambition worthy of Elon Musk. But even Musk might hesitate to follow Putin’s proposal to use a nuclear reactor for energy.

The cost of developing a scramjet capability is only one part of the economic challenge. The other is making the engine cheap enough to deploy and use in a routine way. To do that, you need fuel you can rely on. Early researchers worked with a class of highly energetic fuels that would react on contact with air, like triethylaluminum.

“It’s a fantastic scramjet engine fuel, but very toxic, a bit like the hydrazine fuels used in rockets nowadays, and this became an inhibitor,” says David Van Wie, of Johns Hopkins APL, explaining why triethylaluminum was dropped from serious consideration.

Next up was liquid hydrogen, which is also very reactive. But it needs elaborate cooling. Worse, it packs a rather low amount of energy into a given volume, and as a cryogenic fuel it is inconvenient to store and transport. It has been and still is used in experimental vehicles, such as the X-43.

Today’s choice for practical missiles is hydrocarbons, of the same ilk as jet fuel, but fancier. The Chinese scramjet that burned for 10 minutes—like others on the drawing board around the world—burns hydrocarbons. Here the problem lies in breaking down the hydrocarbon’s long molecular chains fast so the shards can bind with oxygen in the split second when the substances meet and mate. And a split second isn’t enough—you have to do it continuously, one split second after another, “like keeping a match lit in a hurricane,” in the oft-quoted words of NASA spokesman Gary Creech, back in 2004.

Scramjet designs try to protect the flame by shaping the inflow geometry to create an eddy, forming a calm zone not unlike the eye of a hurricane. Flameouts are particularly worrisome when the missile starts jinking about, thus disrupting the airflow. “It’s the ‘unstart’ phenomenon, where the shock wave at the air inlets stops the engine, and the vehicle will be lost,” says John D. Schmisseur, a researcher at the University of Tennessee Space Institute, in Tullahoma. And you really only get to meet such gremlins in actual flight, he adds.

There are other problems besides flameout that arise when you’re inhaling a tornado. One expert, who requested anonymity, puts it this way: “If you’re ingesting air, it’s no longer air; it’s a complex mix of ionized atmosphere,” he says. “There’s no water anymore; it’s all hydrogen and oxygen, and the nitrogen is to some fraction elemental, not molecular. So combustion isn’t air and fuel—it’s whatever you’re taking in, whatever junk—which means chemistry at the inlet matters.”

Simulating the chemistry is what makes hypersonic wind-tunnel tests problematic. It’s fairly simple to see how an airfoil responds aerodynamically to Mach 5—just cool the air so that the speed of sound drops, giving a higher Mach number for a given airspeed. But blowing cold air tells you only a small part of the story because it heads off all the chemistry you want to study. True, you can instead run your wind tunnel fast, hot, and dense—at “high enthalpy,” to use the term of art—but it’s hard to keep that maelstrom going for more than a few milliseconds.
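The cold-air trick works because Mach number is airspeed divided by the local speed of sound, and the speed of sound falls with the square root of temperature. A minimal sketch of that relationship, using the standard ideal-gas values for air:

```python
import math

GAMMA = 1.4     # ratio of specific heats for air
R_AIR = 287.05  # specific gas constant for dry air, J/(kg*K)

def speed_of_sound(temp_k):
    """Speed of sound in air at a given temperature (ideal-gas model)."""
    return math.sqrt(GAMMA * R_AIR * temp_k)

def mach(airspeed, temp_k):
    """Mach number for a given airspeed (m/s) and air temperature (K)."""
    return airspeed / speed_of_sound(temp_k)

v = 1000.0  # tunnel airspeed, m/s
print(round(mach(v, 288.15), 2))  # 2.94 at room temperature
print(round(mach(v, 100.0), 2))   # 4.99 when the air is chilled to 100 K
```

The same airflow that reads below Mach 3 in warm air registers as hypersonic once the air is chilled, which is exactly why cold tunnels tell you about aerodynamics but not about high-enthalpy chemistry.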

“Get the airspeed high enough to start up the chemistry and the reactions sap the energy,” says Mark Gragston, an aerospace expert who’s also at the UT Space Institute. Getting access to such monster machines isn’t easy, either. “At Arnold Air Force Base, across the street from me, the Air Force does high-enthalpy wind-tunnel experiments,” he says. “They’re booked up three years in advance.”

Other countries have more of the necessary wind tunnels; even India has about a dozen [PDF]. Right now, the United States is spending loads of money building these machines in an effort to catch up with Russia and China. You could say there is a wind-tunnel gap—one more reason U.S. researchers are keen for test flights.

Another thing about cooling the air: It does wonders for any combustion engine, even the kind that pushes pistons. Reaction Engines, in Abingdon, England, appears to be the first to try to apply this phenomenon in flight, with a special precooling unit. In its less-ambitious scheme, the precooler sits in front of the air inlet of a standard turbojet, adding power and efficiency. In its more-ambitious concept, called SABRE (Synergetic Air Breathing Rocket Engine), the engine operates in combined mode: It takes off as a turbojet assisted by the precooler and accelerates until a ramjet can switch on, adding enough thrust to reach (but not exceed) Mach 5. Then, as the vehicle climbs and the atmosphere thins out, the engine switches to pure rocket mode, finally launching a payload into orbit.

In principle, a precooler could work in a scramjet. But if anyone’s trying that, they’re not talking about it.

Fast forward five years and boost-glide missiles will no doubt be deployed in the service of multiple countries. Jump ahead another 15 or 20 years, and the world’s superpowers will have scramjet missiles.

So what? Won’t these things always play second fiddle to ballistic missiles? And won’t the defense also have its say, by unveiling superfast antimissiles and Buck Rogers–style directed-energy weapons?

Perhaps not. The defense always has the harder job. As President John F. Kennedy noted in an interview way back in 1962, when talking about antiballistic missile defense, what you are trying to do is shoot a bullet with a bullet. And, he added, you have to shoot down not just one but many, including a bodyguard of decoys.

Today there are indeed antimissile defenses that can protect particular targets against ballistic missiles, at least when they’re not being fired in massive salvos. But you can’t defend everything, which is why the great powers still count on deterrence through mutually assured destruction. By that logic, if you can detect cruise missiles soon enough, you can at least make those who launched them wish they hadn’t.

For that to work, we’ll need better eyes in the skies. In the United States, the military wants around $100 million for research on low-orbit space sensors to detect low-flying hypersonic missiles, Aviation Week reported in 2019.

Hardly any of the recent advances in hypersonic flight result from new scientific discoveries; almost all stem from changes in political will. Powers that challenge the international status quo—China and Russia—have found the resources and the will to shape the arms race to their benefit. Powers that benefit from the status quo—the United States, above all—are responding in kind. Politicians fired the starting pistol, and the technologists are gamely leaping forward.

And the point? There is no point. It’s an arms race.

This article appears in the December 2020 print issue as “Going Hypersonic.”

How JPL’s Team CoSTAR Won the DARPA SubT Challenge: Urban Circuit Systems Track

Post Syndicated from Edward Terry original https://spectrum.ieee.org/automaton/aerospace/robotic-exploration/nasa-jpl-team-costar-darpa-subt-urban-circuit-systems-track

In 2017, a team at NASA’s Jet Propulsion Laboratory in Pasadena, Calif., was in the process of prototyping some small autonomous robots capable of exploring caves and subsurface voids on the Moon, Mars, and Titan, Saturn’s largest moon. Our goal was the development of new technologies to help us solve one of humanity’s most significant questions: is there or has there been life beyond Earth?

The more we study the surfaces of planetary bodies in our solar system, the more we are compelled to voyage underground to seek answers to this question. Planetary subsurface voids are not only one of the most likely places to find both signs of life, past and present, but thanks to the shelter they provide, are also one of the main candidates for future human habitation. While we were working on various technologies for cave exploration at JPL, DARPA launched the latest in its series of Grand Challenges, the Subterranean Challenge, or SubT. Compared to earlier events that focused on on-road driving and humanoid robots in pre-defined disaster relief scenarios, the focus of SubT is the exploration of unknown and extreme underground environments. Even though SubT is about exploring such environments on Earth, we can use the competition as an analog to help us learn how to explore unknown environments on other planetary bodies. 

Spotting Mystery Methane Leaks From Space

Post Syndicated from Jason McKeever original https://spectrum.ieee.org/aerospace/satellites/spotting-mystery-methane-leaks-from-space

Something new happened in space in January 2019. For the first time, a previously unknown leak of natural gas was spotted from orbit by a microsatellite, and then, because of that detection, plugged.

The microsatellite, Claire, had been flying since 2016. That day, Claire was monitoring the output of a mud volcano in Central Asia when it spied a plume of methane where none should be. Our team at GHGSat, in Montreal, instructed the spacecraft to pan over and zero in on the origin of the plume, which turned out to be a facility in an oil and gas field in Turkmenistan.

The need to track down methane leaks has never been more important. In the slow-motion calamity that is climate change, methane emissions get less public attention than the carbon dioxide coming from smokestacks and tailpipes. But methane—which mostly comes from fossil-fuel production but also from livestock farming and other sources—has an outsize impact. Molecule for molecule, methane traps 84 times as much heat in the atmosphere as carbon dioxide does, and it accounts for about a quarter of the rise in atmospheric temperatures. Worse, research from earlier this year shows that we might be enormously underestimating the amount released—by as much as 25 to 40 percent.

Satellites have been able to see greenhouse gases like methane and carbon dioxide from space for nearly 20 years, but it took a confluence of need and technological innovation to make such observations practical and accurate enough to do them for profit. Through some clever engineering and a more focused goal, our company has managed to build a 15-kilogram microsatellite and perform feats of detection that previously weren’t possible, even with a US $100 million, 1,000-kg spacecraft. Those scientific behemoths do their job admirably, but they view things on a kilometer scale. Claire can resolve methane emissions down to tens of meters. So a polluter (or anybody else) can determine not just what gas field is involved but which well in that field.

Since launching Claire, our first microsatellite, we’ve improved on both the core technology—a miniaturized version of an instrument known as a wide-angle Fabry-Pérot imaging spectrometer—and the spacecraft itself. Our second methane-seeking satellite, dubbed Iris, launched this past September, and a third is scheduled to go up before the end of the year. When we’re done, there will be nowhere on Earth for methane leaks to hide.

The creation of Claire and its siblings was driven by a business case and a technology challenge. The business part was born in mid-2011, when Quebec (GHGSat’s home province) and California each announced that they would implement a market-based “cap and trade” system. The systems would attribute a value to each ton of carbon emitted by industrial sites. Major emitters would be allotted a certain number of tons of carbon—or its equivalent in methane and other greenhouse gases—that they could release into the atmosphere each year. Those that needed to emit more could then purchase emissions credits from those that needed less. Over time, governments could shrink the total allotment to begin to reduce the drivers of climate change.

Even in 2011, there was a wider, multibillion-dollar market for carbon emissions, which was growing steadily as more jurisdictions imposed taxes or implemented carbon-trading mechanisms. By 2019, these carbon markets covered 22 percent of global emissions and earned governments $45 billion, according to the World Bank’s State and Trends of Carbon Pricing 2020.

Despite those billions, it’s methane, not carbon dioxide, that has become the focus of our systems. One reason is technological—our original instrument was better tuned for methane. But the business reason is the simpler one: Methane has value whether there’s a greenhouse-gas trading system or not.

Markets for greenhouse gases motivate the operators of industrial sites to better measure their emissions so they can control and ultimately reduce them. Existing, mostly ground-based methods using systems like flux chambers, eddy covariance towers, and optical gas imaging were fairly expensive, of limited accuracy, and varied as to their geographic availability. Our company’s bet was that industrial operators would flock to a single, less expensive, more precise solution that could spot greenhouse-gas emissions from individual industrial facilities anywhere in the world.

Once we’d decided on our business plan, the only question was: Could we do it?

One part of the question had already been answered, to a degree, by pioneering space missions such as Europe’s Envisat (which operated from 2002 to 2012) and Japan’s GOSAT (launched in 2009). These satellites measure surface-level trace gases using spectrometers that collect sunlight scattering off the Earth. The spectrometers break down the incoming light by wavelength. Molecules in the light’s path will absorb a certain pattern of wavelengths, leaving dark bands in the spectrum. The greater the concentration of those molecules, the darker the bands. This method can measure methane concentrations from orbit with a precision that’s better than 1 percent of background levels.
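The darkening of those bands follows the Beer-Lambert law: transmitted light falls off exponentially with the product of the gas's absorption cross section and its column density. A toy sketch, with placeholder numbers chosen only to illustrate the effect, not drawn from any real instrument:

```python
import math

def transmittance(sigma, column):
    """Beer-Lambert law: fraction of light surviving a gas column.
    sigma: absorption cross section (cm^2 per molecule)
    column: column density (molecules per cm^2)"""
    return math.exp(-sigma * column)

# Placeholder values for illustration only:
sigma = 1.0e-21      # order-of-magnitude methane cross section in the SWIR
background = 4.0e19  # a nominal background methane column

t0 = transmittance(sigma, background)         # band depth at background level
t1 = transmittance(sigma, background * 1.01)  # with a 1 percent enhancement
print(t0 > t1)  # True: more methane, darker band
```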

While those satellites proved the concept of methane tracking, their technology was far from what we needed. For one thing, the instruments are huge. The spectrometer portion of Envisat, called SCIAMACHY (SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY), contained nearly 200 kg of complex optics; the entire spacecraft carried eight other scientific instruments and weighed 8.2 metric tons. GOSAT, which is dedicated to greenhouse-gas sensing, weighs 1.75 metric tons.

Furthermore, these systems were designed to measure gas concentrations across the whole planet, quickly and repeatedly, in order to inform global climate modeling. Their instruments scan huge swaths of land and then average greenhouse-gas levels over tens or hundreds of square kilometers. And that is far too coarse to pinpoint an industrial site responsible for rogue emissions.

To achieve our goals, we needed to design something that was the first of its kind—an orbiting hyperspectral imager with spatial resolution in the tens of meters. And to make it affordable enough to launch, we had to fit it in a 20-by-20-by-20-centimeter package.

The most critical enabling technology to meet those constraints was our spectrometer—the wide-angle Fabry-Pérot etalon (WAF-P). (An etalon is an interferometer made from two partially reflective plates.) To help you understand what that is, we’ve first got to explain a more common type of spectrometer and how it works in a hyperspectral imaging system.

Hyperspectral imaging detects a wide range of wavelengths, some of which, of course, are beyond the visible. To achieve such detection, you need both a spectrometer and an imager.

The spectrometers in SCIAMACHY are based on diffraction gratings. A diffraction grating disperses the incoming light as a function of its wavelength—just as a prism spreads out the spectrum of white light into a rainbow. In space-based hyperspectral imaging systems, one dimension of the imager is used for spectral dispersion, and the other is used for spatial imaging. By imaging a narrow slit of a scene at the correct orientation, you get a spectrum at each point along that thin strip of land. As the spacecraft travels, sequential strips can be imaged to form a two-dimensional array of points, each of which has a full spectrum associated with it.
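That strip-by-strip assembly can be sketched in a few lines. The dimensions here are arbitrary placeholders, not those of any real instrument:

```python
import numpy as np

# Pushbroom hyperspectral imaging in miniature: each camera frame covers
# one slit (across-track positions x wavelength bands); stacking frames
# as the spacecraft advances builds the along-track axis of the data cube.
ACROSS_TRACK, N_BANDS, N_FRAMES = 64, 32, 100

rng = np.random.default_rng(0)  # stand-in for real slit images
frames = [rng.random((ACROSS_TRACK, N_BANDS)) for _ in range(N_FRAMES)]

cube = np.stack(frames, axis=0)  # shape: (along-track, across-track, band)
print(cube.shape)  # (100, 64, 32)

spectrum = cube[40, 10, :]  # the full spectrum of one ground point
```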

If the incoming light has passed through a gas—say, Earth’s atmosphere—in a region tainted with methane, certain bands in the infrared part of that spectrum should be dimmer than otherwise in a pattern characteristic of that chemical.

Such a spectral-imaging system works well, but making it compact is challenging for several reasons. One challenge is the need to minimize optical aberrations to achieve a sharp image of ground features and emission plumes. However, in remote sensing, the signal strength (and hence signal-to-noise ratio) is driven by the aperture size, and the larger this is, the more difficult it is to minimize aberrations. Adding a dispersive grating to the system leads to additional complexity in the optical system.

A Fabry-Pérot etalon can be much more compact without the need for a complex imaging system, despite certain surmountable drawbacks. It is essentially two partially mirrored pieces of glass held very close together to form a reflective cavity. Imagine a beam of light of a certain wavelength entering the cavity at a slight angle through one of the mirrors. A fraction of that beam would zip across the cavity, squeak straight through the other mirror, and continue on to a lens that focuses it onto a pixel on an imager placed a short distance away. The rest of that beam of light would bounce back to the front mirror and then across to the back mirror. Again, a small fraction would pass through, the rest would continue to bounce between the mirrors, and the process would repeat. All that bouncing around adds distance to the light’s paths toward the pixel. If the light’s angle and its wavelength obey a particular relationship to the distance between the mirrors, all that light will constructively interfere with itself. Where that relation holds, a set of bright concentric rings forms. Different wavelengths and different angles would produce a different set of rings.
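The bright rings form wherever the round trip between the mirrors is a whole number of wavelengths, that is, where 2nd·cos θ = mλ. A minimal sketch of the resulting Airy transmission function, with an illustrative gap and reflectivity rather than GHGSat's actual design parameters:

```python
import math

def etalon_transmission(wavelength, angle, gap, n=1.0, refl=0.9):
    """Airy transmission of an ideal Fabry-Perot etalon.
    wavelength and gap share units; refl is the plate reflectivity."""
    delta = (2 * math.pi / wavelength) * 2 * n * gap * math.cos(angle)
    finesse_coeff = 4 * refl / (1 - refl) ** 2
    return 1.0 / (1.0 + finesse_coeff * math.sin(delta / 2) ** 2)

gap, wl = 100.0, 1.65  # micrometers: ~100 um gap, light near a methane band
# A bright ring sits wherever 2*n*gap*cos(angle) is a whole number of
# wavelengths; pick the interference order nearest normal incidence.
m = round(2 * gap / wl)
ring_angle = math.acos(m * wl / (2 * gap))
print(round(etalon_transmission(wl, ring_angle, gap), 3))  # 1.0, a bright ring
```

At angles away from that condition the transmission collapses, which is what carves the dark and bright concentric rings onto the imager.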

In an imaging system with a Fabry-Pérot etalon like the ones in our satellites, the radius of the ring on the imager is roughly proportional to the ray angle. What this means for our system is that the etalon acts as an angle-dependent filter. So rather than dispersing the light by wavelength, we filter the light to specific wavelengths, depending on the light’s radial position within the scene. Since we’re looking at light transmitted through the atmosphere, we end up with dark rings at specific radii corresponding to molecular absorption lines.

The etalon can be miniaturized more easily than a diffraction-grating spectrometer, because the spectral discrimination arises from interference that happens within a very small gap of tens to hundreds of micrometers; no large path lengths or beam separation is required. Furthermore, since the etalon consists of substrates that are parallel to one another, it doesn’t add significantly to aberrations, so you can use relatively straightforward optical-design techniques to obtain sufficient spatial resolution.

However, there are complications associated with the WAF-P imaging spectrometer. For example, the imager behind the etalon picks up both the image of the scene (where the gas well is) and the interference pattern (the methane spectrum). That is, the spectral rings are embedded in—and corrupted by—the actual image of the patch of Earth the satellite is pointing at. So, from a single camera frame, you can’t distinguish variability in how much light reflects off the surface from changes in the amount of greenhouse gases in the atmosphere. Separating spatial and spectral information, so that we can pinpoint the origin of a methane plume, took some innovation.

The computational process used to extract gas concentrations from spectral measurements is called a retrieval. The first step in getting this to work for the WAF-P was characterizing the instrument properly before launch. That produces a detailed model that can help predict precisely the spectral response of the system for each pixel.

But that’s just the beginning. Separating the etalon’s mixing of spectral and spatial information took some algorithmic magic. We overcame this issue by designing a protocol that captures a sequence of 200 overlapping images as the satellite flies over a site. At our satellite’s orbit, that means maximizing the time we have to acquire images by continuously adjusting the satellite’s orientation. In other words, we have the satellite stare at the site as it passes by, like a rubbernecking driver on a highway passing a car wreck.

The next step in the retrieval procedure is to align the images, basically tracking all the ground locations within the scene through the sequence of images. This gives us a collection of up to 200 readings where a feature, say, a leaking gas well, passes across the complete interference pattern. This effectively is measuring the same spot on Earth at decreasing infrared wavelengths as that spot moves outward from the center of the image. If the methane concentration is anomalously high, this leads to small but predictable changes in signal level at specific positions on the image. Our retrievals software then compares these changes to its internal model of the system’s spectral response to extract methane levels in parts per million.
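The closing comparison step can be sketched in a few lines. This is a hypothetical toy model, not GHGSat's retrieval code: the absorption curve, noise level, and grid-search fit below are all invented, purely to show how a sequence of readings of one spot at shifting wavelengths can pin down surface brightness and methane column at the same time.

```python
import numpy as np

# Hypothetical sketch of the retrieval's final step (invented numbers):
# given ~200 readings of one ground spot, each at a slightly different
# wavelength as the spot sweeps across the interference pattern, fit the
# methane column that best explains the small, predictable dips in signal.

rng = np.random.default_rng(0)

wavelengths_nm = np.linspace(1630, 1675, 200)                     # SWIR band
absorption = 0.02 * np.exp(-((wavelengths_nm - 1650) / 6) ** 2)   # toy cross-section

def modeled_signal(albedo, methane_column):
    """Beer-Lambert toy model: reflected sunlight attenuated by methane."""
    return albedo * np.exp(-methane_column * absorption)

# Simulate the readings for a spot with an elevated methane column.
observed = modeled_signal(0.8, 3.0) + rng.normal(0, 0.002, wavelengths_nm.size)

# Grid search over candidate columns. For each candidate, the best-fit
# surface albedo has a closed form, so spatial (albedo) and spectral
# (methane) information are separated within the fit itself.
best = None
for column in np.linspace(0, 6, 601):
    shape = np.exp(-column * absorption)
    albedo = (observed @ shape) / (shape @ shape)
    err = np.sum((observed - albedo * shape) ** 2)
    if best is None or err < best[0]:
        best = (err, column, albedo)

print(f"retrieved column: {best[1]:.2f}, albedo: {best[2]:.3f}")
```

In the real system, the spectral-response model is far more detailed and calibrated per pixel before launch, but the logic of separating albedo from gas column within a single fit follows the same pattern.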

At this point, the WAF-P’s drawbacks become an advantage. Some other satellites use separate instruments to visualize the ground and sense the methane or CO2 spectra. They then have to realign those two. Our system acquires both at once, so the gas plume automatically aligns with its point of origin down to the level of tens of meters. Then there’s the advantage of high spatial resolution. Other systems, such as Tropomi (TROPOspheric Monitoring Instrument, launched in 2017), must average methane density across a 7-kilometer-wide pixel. The peak concentration of a plume that Claire could spot would be so severely diluted by Tropomi’s resolution that it would seem only 1/200th as strong. So high-spatial-resolution systems like Claire can detect weaker emitters, not just pinpoint their location.

Just handing a customer an image of their methane plume on a particular day is useful, but it’s not a complete picture. For weaker emitters, measurement noise can make it difficult to detect methane point sources from a single observation. But temporal averaging of multiple observations using our analytics tools reduces the noise: Even with a single satellite we can make 25 or more observations of a site per year, cloud cover permitting.
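The statistics behind that averaging are simple: independent noise shrinks roughly as one over the square root of the number of observations. A minimal sketch, with entirely arbitrary signal and noise levels:

```python
import numpy as np

# Why temporal averaging helps: averaging N independent looks at the same
# site reduces the noise by about sqrt(N). Numbers here are arbitrary.

rng = np.random.default_rng(1)
true_signal = 1.0        # weak methane enhancement (arbitrary units)
noise_sigma = 2.0        # per-observation noise that swamps the signal

# A year's worth of revisits (25 looks), then average:
averaged = (true_signal + rng.normal(0, noise_sigma, 25)).mean()

# Empirically, the residual noise after averaging 25 looks is sigma/5 = 0.4:
residual = rng.normal(0, noise_sigma, (1000, 25)).mean(axis=1).std()
print(f"average of 25 looks: {averaged:+.2f}; residual noise: {residual:.2f}")
```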

Using that average, we then produce an estimation of the methane emission rate. The process takes snapshots of methane density measurements of the plume column and calculates how much methane must be leaking per hour to generate that kind of plume. Retrieving the emission rate requires knowledge of local wind conditions, because the excess methane density depends not only on the emission rate but also on how quickly the wind transports the emitted gas out of the area.
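The idea can be illustrated with a toy mass-balance calculation. This is our own simplification, not GHGSat's production algorithm, and every number in it is invented: if the wind carries the plume steadily downwind, the leak rate equals the wind speed times the excess methane mass per unit length of plume, integrated across the plume's width.

```python
# Toy mass-balance estimate of a leak rate (all numbers invented):
# rate = wind speed x excess methane mass per metre of plume length.

wind_speed = 3.0                   # m/s, from a weather model
pixel_size = 25.0                  # m, ground sampling of the column map

# Excess methane column in one cross-wind row of pixels, kg of CH4 per m^2:
excess_column = [0.0, 0.0002, 0.0008, 0.0011, 0.0007, 0.0001, 0.0]

# Mass per metre of plume length (kg/m), summed across the plume width.
mass_per_metre = sum(c * pixel_size for c in excess_column)

rate_kg_per_s = wind_speed * mass_per_metre
rate_kg_per_h = rate_kg_per_s * 3600
print(f"estimated leak rate: {rate_kg_per_h:.0f} kg CH4/hour")
```

Note how directly the answer scales with the assumed wind speed, which is why the wind data matters so much to the final emission estimate.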

We’ve learned a lot in the four years since Claire started its observations. And we’ve managed to put some of those lessons into practice in our next generation of microsatellites, of which Iris is the first. The biggest lesson is to focus on methane and leave carbon dioxide for later.

If methane is all we want to measure, we can adjust the design of the etalon so that it better measures methane’s corner of the infrared spectrum, instead of being broad enough to catch CO2’s as well. This, coupled with better optics that keep out extraneous light, should result in a 10-fold increase in methane sensitivity. So Iris and the satellites to follow will be able to spot smaller leaks than Claire can.

We also discovered that our next satellites would need better radiation shielding. Radiation in orbit is a particular problem for the satellite’s imaging chip. Before launching Claire, we’d done careful calculations of how much shielding it needed, which were then balanced with the increased cost of the shielding’s weight. Nevertheless, Claire’s imager has been losing pixels more quickly than expected. (Our software partially compensates for the loss.) So Iris and the rest of the next generation sport heavier radiation shields.

Another improvement involves data downloads. Claire has made about 6,000 observations in its first four years. The data is sent to Earth by radio as the satellite streaks past a single ground station in northern Canada. We don’t want future satellites to run into limits in the number of observations they make just because they don’t have enough time to download the data before their next appointment with a methane leak. So Iris is packed with more memory than Claire has, and the new microsatellite carries an experimental laser downlink in addition to its regular radio antenna. If all goes to plan, the laser should boost download speeds 1,000-fold, to 1 gigabit per second.
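To see why the laser matters, consider a back-of-envelope comparison. The radio rate and per-observation data volume below are our assumptions for illustration, not GHGSat figures; only the 1-gigabit-per-second laser target comes from the text.

```python
# Back-of-envelope downlink comparison. The 1 Mbit/s radio rate and
# 1-gigabyte observation size are assumptions; the 1 Gbit/s laser figure
# is the stated target.

observation_size_bits = 8e9            # one 1-gigabyte observation (assumed)
radio_bps = 1e6                        # assumed radio link rate
laser_bps = 1e9                        # laser downlink target

radio_minutes = observation_size_bits / radio_bps / 60
laser_seconds = observation_size_bits / laser_bps
print(f"radio: {radio_minutes:.0f} min per observation; laser: {laser_seconds:.0f} s")
```

Under these assumptions, a single observation drops from a couple of hours of radio contact time to a few seconds, which is what removes the download bottleneck on the number of observations.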

In its polar orbit, 500 kilometers above Earth, Claire passes over every part of the planet once every two weeks. With Iris, the frequency of coverage effectively doubles. And the addition in December of Hugo and three more microsatellites due to launch in 2021 will give us the ability to check in on any site on the planet almost daily—depending on cloud cover, of course.

With our microsatellites’ resolution and frequency, we should be able to spot the bigger methane leaks, which make up about 70 percent of emissions. Closing off the other 30 percent will require a closer look. For example, with densely grouped facilities in a shale gas region, it may not be possible to attribute a leak to a specific facility from space. And a sizable leak detectable by satellite might be an indicator of several smaller leaks. So we have developed an aircraft-mounted version of the WAF-P instrument that can scan a site with 1-meter resolution. The first such instrument took its test flights in late 2019 and is now in commercial use monitoring a shale oil and gas site in British Columbia. Within the next year we expect to deploy a second airplane-mounted instrument and expand that service to the rest of North America.

By providing our customers with fine-grained methane surveys, we’re allowing them to take the needed corrective action. Ultimately, these leaks are repaired by crews on the ground, but our approach aims to greatly reduce the need for in-person visits to facilities. And every source of fugitive emissions that is spotted and stopped represents a meaningful step toward mitigating climate change.

This article appears in the November 2020 print issue as “Microsatellites Spot Mystery Methane Leaks.”

About the Author

Jason McKeever, Dylan Jervis, and Mathias Strupler are with GHGSat, a remote-sensing company in Montreal. McKeever is the company’s science and systems lead, Jervis is a systems specialist, and Strupler is an optical systems specialist.

Liquid Lasers Challenge Fiber Lasers As The Basis of Future High-Energy Weapons

Post Syndicated from Jeff Hecht original https://spectrum.ieee.org/tech-talk/aerospace/military/fiber-lasers-face-a-challenger-in-laser-weapons

Despite a lot of progress in recent years, practical laser weapons that can shoot down planes or missiles are still a ways off. But a new liquid laser may be bringing that day closer. 

Much of the effort in recent years has focused on high-power fiber lasers. These lasers use specially doped coils of optical fiber to amplify a laser beam, and they were originally developed for industrial cutting and welding. Initially, fiber lasers were dark horses in the Pentagon’s effort to develop electrically powered solid-state laser weapons, which began two decades ago. By 2013, however, the Navy was testing a 30-kilowatt fiber laser on a ship. Since then, their ability to deliver high-energy beams of excellent optical quality has earned fiber lasers the leading role in the current field trials of laser weapons in the 50- to 100-kilowatt class. But now aerospace giant Boeing has teamed with General Atomics—a defense contractor also known for research in nuclear fusion—to challenge fiber lasers in reaching the 250-kilowatt threshold that some believe will be essential for future generations of laser weapons. Still higher powers would be needed for defense against nuclear missiles.

The challenger’s technology was developed to address crucial issues with high-energy solid-state lasers: size, weight, and power, plus the problem of dissipating waste heat, which can disrupt laser operation and degrade beam quality. General Atomics “had a couple of completely new ideas, including a liquid laser. They were considered completely crazy at the time, but DARPA funded us,” said company vice president Mike Perry in a 2016 interview. Liquid lasers are similar to solid-state lasers, but a cooling liquid flows through channels integrated into the solid-state laser material. A crucial trick was ensuring that the cooling liquid has a refractive index exactly matching that of the solid laser material; a perfect match avoids any refraction or reflection at the boundary between them. The liquid also has to flow smoothly through the channels, because turbulence in the coolant would itself distort the beam.

The system promised to be both compact and comparatively lightweight, just what DARPA wanted in order to fit a 150-kW laser into a fighter jet. The goal was a device that weighed only 750 kilograms, or just 5 kg per kilowatt of output. The project went through multiple stages of development and testing that lasted a decade. In 2015, General Atomics delivered HELLADS, the High Energy Liquid Laser Area Defense System, rated as 150-kW class, to the White Sands Missile Range in New Mexico for live-fire tests against military targets. A press release issued at the time boasted that the laser held “the world’s record for the highest laser output power of any electrically powered laser.” At the time, General Atomics described it as a modular laser weapon weighing in at four kilograms per kilowatt.

Development has continued since then. A spokesperson says General Atomics is now on the seventh generation of its “distributed gain laser heads,” modules that can be combined to generate over 250 kW of laser output from a very compact package. Improvements over the past two years have enhanced beam quality and the ability to emit high-energy beams either continuously or in a series of pulses, giving more flexibility in attacking targets.

Sustaining good beam quality at that power level is important. Mike Griffin, former undersecretary of defense for research and engineering, told Congress that current fiber laser technology could be scaled to 300 kilowatts to protect Air Force tankers. However, that may be pushing the upper limit of how many beams from separate fiber lasers emitting at closely spaced wavelengths can be combined coherently to generate a single high-energy laser beam of high quality.

The agreement calls for General Atomics to supply the laser along with integrated thermal management equipment and a high-density, modular, high-power lithium-ion battery system able to store three megajoules of energy. Boeing will supply a beam director and software for the precision acquisition and pointing technology that it developed and supplied for other experimental laser-weapon testbeds, including the Air Force’s megawatt-class Airborne Laser, the last big chemically powered gas laser, which was scrapped in 2014. “Together, we’re leveraging six decades of directed energy experience and proven, deployed technologies,” said Norm Tew, Boeing Missile and Weapon Systems vice president and general manager, and Huntsville site senior executive, in a statement.

The Long Arm of NASA: The OSIRIS-REx Spacecraft Gets Ready To Grab An Asteroid Sample

Post Syndicated from Ned Potter original https://spectrum.ieee.org/tech-talk/aerospace/robotic-exploration/the-long-arm-of-nasa-the-osirisrex-spacecraft-gets-ready-to-grab-an-asteroid-sample

Just how do you bring home pieces of asteroid? Carefully, that’s how, with grudging respect for the curveballs the asteroid can throw you.

Sixteen years after NASA’s OSIRIS-REx mission was first proposed and two years after the robotic spacecraft went into orbit around asteroid 101955 Bennu, mission team members are now counting down to the moment when it will descend to the surface, grab a sample—and then get out of there before anything can go wrong.

The sampling is set for next Tuesday, Oct. 20. If it works, it will be a first for the United States. (A Japanese probe is currently returning to Earth with samples from asteroid 162173 Ryugu.)

Though the mission plan has so far been executed almost flawlessly, an outsider might be forgiven for thinking there’s something a bit…well, counterintuitive about it. The spacecraft has no landing legs, because it will never actually land. Instead, the OSIRIS-REx spacecraft vaguely resembles an insect with a long snout—a honeybee, perhaps, hovering over a flower to pollinate it. The “snout” is actually an articulated arm with a 30.5 cm round collection chamber at the end. It’s called TAGSAM – short for Touch-And-Go Sample Acquisition Mechanism. You’ve doubtless heard the old expression, “I wouldn’t touch that with a 10-foot pole.” The TAGSAM arm is an 11-foot pole.

The TAGSAM collector will gently bump the surface—and then try to kick up some rock and dust with a blast of nitrogen gas from a small pressurized canister. If all goes well, at least 60 grams of dirt will be caught in the collection chamber in the 5 to 10 seconds before the spacecraft pulls away.

The mission was conceived in 2004 at the Lunar and Planetary Laboratory at the University of Arizona. It was rejected twice by NASA before it won approval in 2011. The spacecraft was launched in 2016, reached Bennu in 2018, and has surveyed it for two years. If Tuesday’s sample pickup is successful (if not, there are two backup nitrogen canisters), the ship will bring its cargo back in a sealed re-entry capsule for a parachute landing in the Utah desert on Sept. 24, 2023.

Think about that: nearly 20 years of work, seven years in space, US$800 million spent, and the moment of truth—actually touching the asteroid—will not even last a minute.

And all for just 60 grams? “That’s like a few sugar packets that you use for your coffee,” says Michael C. Moreau, the deputy project manager at NASA’s Goddard Space Flight Center in Maryland. But for scientists involved in the mission, that’s enough to conduct their experiments and put some aside for future research. Bennu, and other near-Earth asteroids like it, are interesting because they are rich in carbon, probably dating back 4.5 billion years to the formation of the solar system. Might they tell us about the origins of life on Earth? “We are now optimistic that we will collect and return a sample with organic material—a central goal of the OSIRIS-REx mission,” said Dante Lauretta, the OSIRIS-REx principal investigator at the University of Arizona, who has been on the mission team from the start.

Bennu is not a friendly place for a spacecraft, especially a robotic probe operating on its own 334 million km from Earth, far enough away that commands from mission managers take 18 minutes to reach it. The asteroid is a rough, rocky spheroid, about 500 meters in diameter, and it spins fairly rapidly: a “day” on Bennu is about 4.3 hours long. It’s probably not solid. Scientists call it a rubble pile, a cluster of dirt and rock held together by gravity and natural cohesion. And when Moreau says Bennu threw them “a bunch of curveballs,” some of them were literal—pieces of debris being flung out into space from the surface, though not enough to endanger OSIRIS-REx.

An asteroid the size of Bennu has almost no gravity to speak of: an object on the surface would weigh 8 one-millionths of what it would on Earth. That means staying in orbit around it is very delicate business. OSIRIS-REx’s orbital velocity has been on the order of 0.2 km/hr, which means a tortoise could outrun it. More important, it can easily be thrown off course by solar wind, or heating on the sunlit side of the spacecraft, or other minuscule forces. “Wow, a[n extra] millimeter per second three days later changes the position by hundreds of meters,” says Moreau.
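Both figures in that paragraph check out against rough published values for Bennu (a mass of about 7.3 × 10^10 kg and a mean radius of about 245 meters; both approximate):

```python
# Sanity-checking two figures from the passage, using approximate published
# values for Bennu's mass and mean radius.

G = 6.674e-11                      # gravitational constant, m^3 kg^-1 s^-2
mass = 7.33e10                     # kg (approximate)
radius = 245.0                     # m (approximate mean radius)

surface_g = G * mass / radius**2   # Newtonian surface gravity
ratio = surface_g / 9.81
print(f"surface gravity is {ratio:.1e} of Earth's")    # about 8 millionths

# An unmodeled 1 mm/s velocity error, left uncorrected for three days:
drift_m = 0.001 * 3 * 86400
print(f"position error after 3 days: {drift_m:.0f} m")  # hundreds of meters
```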

Which is an error they can’t afford. Scientists thought Bennu would have a fine-grained surface, but were surprised when the spacecraft’s images showed a rugged, craggy place with house-sized boulders. They had hoped to pick a touchdown spot 50 m across. But the best they could find was a depression they nicknamed Nightingale, all of 8.2 m wide, with a jagged rock nearby that they call Mount Doom. OSIRIS-REx has a hazard map on board; if it appears off-target it will automatically abort at an altitude of 5 meters.

So there will be some nail-biting on Tuesday, but after 16 years on the project, Dante Lauretta said they’re as ready as they can be. “It’s transcendental,” he said, “when you reach a moment that you’ve devoted most of your career to.”

China to Launch Space Mining Bot

Post Syndicated from Andrew Jones original https://spectrum.ieee.org/tech-talk/aerospace/satellites/china-to-launch-space-mining-bot

The possibility of space mining has long captured the imagination and even inspired business ventures. Now, a space startup in China is taking its first steps towards testing capabilities to identify and extract off-Earth resources.

Origin Space, a Beijing-based private space resources company, is set to launch its first ‘space mining robot’ in November. NEO-1 is a small (around 30 kilograms) satellite intended to enter a 500-kilometer-altitude sun-synchronous orbit. It will be launched by a Chinese Long March series rocket as a secondary payload.

This small spacecraft will not be doing actual mining; instead, it will be testing technologies. “The goal is to verify and demonstrate multiple functions such as spacecraft orbital maneuver, simulated small celestial body capture, intelligent spacecraft identification and control,” says Yu Tianhong, an Origin Space co-founder.

Origin Space, established in 2017, describes itself as China’s first firm focused on the utilization of space resources. China’s private space sector emerged following a 2014 government decision to open up the industry. Because asteroid mining has often been talked of as potentially a trillion-dollar industry, it is no surprise that a company focused on this area has joined the likes of others developing rockets and small satellites.

Another mission, Yuanwang-1 (‘Look up-1’), nicknamed “Little Hubble,” is slated to launch in 2021. A deal for development of the satellite was reached with DFH Satellite Co., Ltd., a subsidiary of China’s main state-owned space contractor CASC, earlier this year.

The “Little Hubble” satellite will carry an optical telescope designed to observe and monitor Near Earth Asteroids. Origin Space notes that identifying suitable targets is the first step toward space resources utilization.

Beyond this, Origin Space will also be taking aim at the moon with NEO-2, with a target launch date of late 2021 or early 2022.

Yu says the lunar project plan is not completed, but includes an eventual lunar landing. The tentative mission profile envisions an indirect journey to our celestial neighbor. The spacecraft will first be launched into low-Earth orbit and then gradually raise its orbit with onboard propulsion until it reaches a lunar orbit. The spacecraft will—after completing its observation goals—make a hard landing on the lunar surface. 

While Chandrayaan-2, India’s second lunar mission, used a circuitous route to go from geosynchronous transfer orbit out to lunar orbit, a small spacecraft with limited propulsion may take a long time to reach the moon.

The issue of space resources became a hot topic once again after NASA administrator Jim Bridenstine last week announced that the agency will purchase lunar regolith and rock samples from commercial companies once they have collected moon material.

But Brian Weeden, Director of Program Planning for the Secure World Foundation, says that space resources companies still face myriad challenges, including the logistics of extracting resources and the small matter of who (other than NASA) is going to buy them.

“We’ve heard a lot about water on the Moon, but if you talk to any lunar scientist they will tell you we don’t actually know what the chemical composition of that water is and how difficult it will be to extract and refine it into a usable product,” says Weeden.

“The same thing goes for asteroids to an even greater degree. On Earth, we have massive mining operations and factories and smelteries to refine raw materials into usable products. How much of that will you need in space and how do you build it?” Weeden says.

He adds: “Right now the only real customers are the national space agencies that are planning to do things on the Moon. They might have a use for lunar regolith as a building material and water for fuel and life support. But aside from the very small contract we saw from NASA last week, I haven’t seen any major interest from governments in buying those materials commercially or at what price.”

Origin Space is far from the only or first space mining company. Planetary Resources, a U.S.-based firm, was established back in 2009 before suffering funding issues and eventually being acquired by blockchain firm ConsenSys in 2018. Another U.S. venture, Deep Space Industries, was acquired in January 2019 and is apparently pivoting away from asteroid mining towards developing small satellites. Meanwhile Tokyo-based ispace recently raised $28 million for the first of a series of lunar landers.

Asked about learning from the case of companies such as Planetary Resources, Yu stated that the firm was a pioneer in the space resources industry, adding that it is always challenging for the first players in the game. “We think they lack important milestones and revenue. We are working hard to accelerate the progress of milestone projects while generating revenue.”

36Kr, a Chinese technology publishing and data company, reports (Chinese) that Origin Space will launch a pre-A financing round at the end of the year to fund the planned lunar exploration mission.

NASA Study Proposes Airships, Cloud Cities for Venus Exploration

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/aerospace/space-flight/nasa-study-proposes-airships-cloud-cities-for-venus-exploration

Editor’s note: It’s been 35 years since a pair of robotic balloons explored the clouds of Venus. The Soviet Vega 1 and Vega 2 probes didn’t manage to find any Venusians, but that may have been because we didn’t know exactly what to look for. On 14 September 2020, a study published in Nature suggested that traces of phosphine in Venus’ atmosphere could be an indication of a biological process: that is, of microbial alien life. If confirmed, such a finding could completely change the way we think about the universe, which has us taking a serious look at what it would take to get human explorers to Venus in the near future. This article was originally published on 16 December 2014.

It has been accepted for decades that Mars is the next logical place for humans to explore. Mars certainly seems to offer the most Earth-like environment of any other place in the solar system, and it’s closer to Earth than just about anyplace else, except Venus. But exploration of Venus has always been an enormous challenge: Venus’s surface is hellish, with 92 atmospheres of pressure and temperatures of nearly 500 °C.

The surface of Venus isn’t going to work for humans, but what if we ignore the surface and stick to the clouds? Dale Arney and Chris Jones, from the Space Mission Analysis Branch of NASA’s Systems Analysis and Concepts Directorate at Langley Research Center, in Virginia, have been exploring that idea. Perhaps humans could ride through the upper atmosphere of Venus in a solar-powered airship. Arney and Jones propose that it may make sense to go to Venus before we ever send humans to Mars.

To put NASA’s High Altitude Venus Operational Concept (HAVOC) mission in context, it helps to start thinking about exploring the atmosphere of Venus instead of exploring the surface. “The vast majority of people, when they hear the idea of going to Venus and exploring, think of the surface, where it’s hot enough to melt lead and the pressure is the same as if you were almost a mile underneath the ocean,” Jones says. “I think that not many people have gone and looked at the relatively much more hospitable atmosphere and how you might tackle operating there for a while.”

At 50 kilometers above its surface, Venus offers one atmosphere of pressure and only slightly lower gravity than Earth. Mars, in comparison, has a “sea level” atmospheric pressure of less than a hundredth of Earth’s, and gravity just over a third Earth normal. The temperature at 50 km on Venus is around 75 °C, which is a mere 17 degrees hotter than the highest temperature recorded on Earth. It averages -63 °C on Mars, and while neither extreme would be pleasant for an unprotected human, both are manageable.

What’s more important, especially relative to Mars, is the amount of solar power available on Venus and the amount of protection that Venus has from radiation. The amount of radiation an astronaut would be exposed to in Venus’s atmosphere would be “about the same as if you were in Canada,” says Arney. On Mars, unshielded astronauts would be exposed to about 0.67 millisieverts per day, which is 40 times as much as on Earth, and they’d likely need to bury their habitats several meters beneath the surface to minimize exposure. As for solar power, proximity to the sun gets Venus 40 percent more than we get here on Earth, and 240 percent more than we’d see on Mars. Put all of these numbers together and as long as you don’t worry about having something under your feet, Jones points out, the upper atmosphere of Venus is “probably the most Earth-like environment that’s out there.”

It’s also important to note that Venus is often significantly closer to Earth than Mars is. Because of how the orbits of Venus and Earth align over time, a crewed mission to Venus would take a total of 440 days using existing or very near-term propulsion technology: 110 days out, a 30-day stay, and then 300 days back—with the option to abort and begin the trip back to Earth immediately after arrival. That sounds like a long time to spend in space, and it absolutely is. But getting to Mars and back using the same propulsive technology would involve more than 500 days in space at a minimum. A more realistic Mars mission would probably last anywhere from 650 to 900 days (or longer) due to the need to wait for a favorable orbital alignment for the return journey, which means that there’s no option to abort the mission and come home earlier: If anything went wrong, astronauts would have to just wait around on Mars until their return window opened.

HAVOC comprises a series of missions that would begin by sending a robot into the atmosphere of Venus to check things out. That would be followed up by a crewed mission to Venus orbit with a stay of 30 days, and then a mission that includes a 30-day atmospheric stay. Later missions would have a crew of two spend a year in the atmosphere, and eventually there would be a permanent human presence there in a floating cloud city.

The defining feature of these missions is the vehicle that will be doing the atmospheric exploring: a helium-filled, solar-powered airship. The robotic version would be 31 meters long (about half the size of the Goodyear blimp), while the crewed version would be nearly 130 meters long, or twice the size of a Boeing 747. The top of the airship would be covered with more than 1,000 square meters of solar panels, with a gondola slung underneath for instruments and, in the crewed version, a small habitat and the ascent vehicle that the astronauts would use to return to Venus’s orbit, and home.

Getting an airship to Venus is not a trivial task, and getting an airship to Venus with humans inside it is even more difficult. The crewed mission would involve a Venus orbit rendezvous, where the airship itself (folded up inside a spacecraft) would be sent to Venus ahead of time. Humans would follow in a transit vehicle (based on NASA’s Deep Space Habitat), linking up with the airship in Venus orbit.

Since there’s no surface to land on, the “landing” would be extreme, to say the least. “Traditionally, say if you’re going to Mars, you talk about ‘entry, descent, and landing,’ or EDL,” explains Arney. “Obviously, in our case, ‘landing’ would represent a significant failure of the mission, so instead we have ‘entry, descent, and inflation,’ or EDI.” The airship would enter the Venusian atmosphere inside an aeroshell at 7,200 meters per second. Over the next seven minutes, the aeroshell would decelerate to 450 m/s, and it would deploy a parachute to slow itself down further. At this point, things get crazy. The aeroshell would drop away, and the airship would begin to unfurl and inflate itself, while still dropping through the atmosphere at 100 m/s. As the airship got larger, its lift and drag would both increase to the point where the parachute became redundant. The parachute would be jettisoned, the airship would fully inflate, and (if everything had gone as it’s supposed to), it would gently float to a stop at 50 km above Venus’s surface.

Near the equator of Venus (where the atmosphere is most stable), winds move at about 100 meters per second, circling the planet in just 110 hours. Venus itself barely rotates, and one Venusian day takes longer than a Venusian year does. The slow day doesn’t really matter, however, because for all practical purposes the 110-hour wind circumnavigation becomes the length of one day/night cycle. The winds also veer north, so to stay on course, the airship would push south during the day, when solar energy is plentiful, and drift north when it needs to conserve power at night.
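Those numbers are self-consistent, as a quick check shows (Venus’s radius of roughly 6,052 kilometers is the only added assumption):

```python
import math

# Consistency check of the passage's numbers: at ~100 m/s, how long does one
# lap of Venus at 50 km altitude take? (Venus radius ~6,052 km, approximate.)

radius_km = 6052 + 50
circumference_km = 2 * math.pi * radius_km
lap_hours = circumference_km * 1000 / 100 / 3600
print(f"one circumnavigation: {lap_hours:.1f} hours")  # close to the quoted 110
```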

Meanwhile, the humans would be busy doing science from inside a small (21-cubic-meter) habitat, based on NASA’s existing Space Exploration Vehicle concept. There’s not much reason to perform extravehicular activities, so that won’t even be an option, potentially making things much simpler and safer (if a bit less exciting) than a trip to Mars.

The airship has a payload capacity of 70,000 kilograms. Of that, nearly 60,000 kg will be taken up by the ascent vehicle, a winged two-stage rocket slung below the airship. (If this looks familiar, it’s because it’s based on the much smaller Pegasus rocket, which is used to launch satellites into Earth orbit from beneath a carrier aircraft.) When it’s time to head home, the astronauts would get into a tiny capsule on the front of the rocket, drop from the airship, and then blast back into orbit. There, they’ll meet up with their transit vehicle and take it back to Earth orbit. The final stage is to rendezvous in Earth orbit with one final capsule (likely Orion), which the crew will use to make the return to Earth’s surface.

The HAVOC team believes that its concept offers a realistic target for crewed exploration in the near future, pending moderate technological advancements and support from NASA. Little about HAVOC is dependent on technology that isn’t near-term. The primary restriction that a crewed version of HAVOC would face is that in its current incarnation it depends on the massive Block IIB configuration of the Space Launch System, which may not be ready to fly until the late 2020s. Several proof-of-concept studies have already been completed. These include testing Teflon coating that can protect solar cells (and other materials) from the droplets of concentrated sulfuric acid that are found throughout Venus’s atmosphere and verifying that an airship with solar panels can be packed into an aeroshell and successfully inflated out of it, at least at 1/50 scale.

Many of the reasons that we’d want to go to Venus are identical to the reasons that we’d want to go to Mars, or anywhere else in the solar system, beginning with the desire to learn and explore. With the notable exception of the European Space Agency’s Venus Express orbiter, the second planet from the sun has been largely ignored since the 1980s, despite its proximity and potential for scientific discovery. HAVOC, Jones says, “would be characterizing the environment not only for eventual human missions but also to understand the planet and how it’s evolved and the runaway greenhouse effect and everything else that makes Venus so interesting.” If the airships bring small robotic landers with them, HAVOC would complete many if not most of the science objectives that NASA’s own Venus Exploration Analysis Group has been promoting for the past two decades.

“Venus has value as a destination in and of itself for exploration and colonization,” says Jones. “But it’s also complementary to current Mars plans.…There are things that you would need to do for a Mars mission, but we see a little easier path through Venus.” For example, in order to get to Mars, or anywhere else outside of the Earth-moon system, we’ll need experience with long-duration habitats, aerobraking and aerocapture, and carbon dioxide processing, among many other things. Arney continues: “If you did Venus first, you could get a leg up on advancing those technologies and those capabilities ahead of doing a human-scale Mars mission. It’s a chance to do a practice run, if you will, of going to Mars.”

It would take a substantial policy shift at NASA to put a crewed mission to Venus ahead of one to Mars, no matter how much sense it might make to take a serious look at HAVOC. But that in no way invalidates the overall concept for the mission, the importance of a crewed mission to Venus, or the vision of an eventual long-term human presence there in cities in the clouds. “If one does see humanity’s future as expanding beyond just Earth, in all likelihood, Venus is probably no worse than the second planet you might go to behind Mars,” says Arney. “Given that Venus’s upper atmosphere is a fairly hospitable destination, we think it can play a role in humanity’s future in space.”

Hanford Has a Radioactive Capsule Problem

Post Syndicated from Maria Gallucci original https://spectrum.ieee.org/aerospace/military/hanford-has-a-radioactive-capsule-problem

At the vast reservation known as the Hanford Site in south-central Washington state, much of the activity these days concerns its 212 million liters (56 million gallons) of radioactive sludge. From World War II through the Cold War, the site produced plutonium for more than 60,000 nuclear weapons, creating enough toxic by-products to fill 177 giant underground tanks. The U.S. Department of Energy (DOE), which controls Hanford, is pushing to start “vitrifying,” or glassifying, some of that waste within two years. The monumental undertaking is the nation’s—and possibly the world’s—largest environmental cleanup effort. It has been going on for decades and will take decades more to complete.

But the tanks are not the only outsize radioactive hazard at Hanford. The site also houses nearly 2,000 capsules of highly radioactive cesium and strontium. Each of the double-walled, stainless-steel capsules weighs 11 kilograms and is roughly the size of a rolled-up yoga mat. Together, they contain over a third of the total radioactivity at Hanford.

For decades, the capsules have resided in a two-story building called the Waste Encapsulation and Storage Facility (WESF). Inside, the capsules sit beneath 4 meters of cooling water in concrete cells lined with stainless steel. The water surrounding the capsules glows neon blue as the cesium and strontium decay, a phenomenon known as Cherenkov radiation.

Built in 1973, the facility is well beyond its 30-year design life. In 2013, nuclear specialists in neighboring Oregon warned that the concrete walls of the pools had lost structural integrity due to gamma radiation emitted by the capsules. Hanford is located just 56 kilometers (35 miles) from Oregon’s border and sits beside the Columbia River. After leaving the site, the river flows through Oregon farms and fisheries and eventually through Portland, the state’s biggest city.

In 2014, the DOE’s Office of the Inspector General concluded that the WESF poses the “greatest risk” for serious accident of any DOE facility that’s beyond its design life. In the event of a severe earthquake, for instance, the degraded basins would likely collapse, draining the cooling water. In a worst-case scenario, the capsules would then overheat and break, releasing radioactivity that would contaminate the ground and air, render parts of the Hanford Site inaccessible for years, and potentially reach nearby cities.

“If it’s bad enough, it means all cleanup essentially stops,” says Dirk Dunning, an engineer and retired Hanford expert who worked for the Oregon Department of Energy and who helped flag initial concerns about the concrete. “We can’t fix it, we can’t stop it. It just becomes a horrible, intractable problem.”

To avoid such a catastrophe, in 2015 the DOE began taking steps to transfer capsules out of the basins and into dry casks on an outdoor storage pad. The plan is to place six capsules inside a cylindrical metal sleeve; air inside the cylinder is displaced with helium to dissipate heat from the capsules. The sleeves are then fitted inside a series of shielded canisters, like a nuclear nesting doll. The final vessel is a 3.3-meter-tall cylindrical cask made of a special steel alloy and reinforced concrete. A passive cooling system draws cool air into the cask and expels warm air, without the need for fans or pools of water. The cask will sit vertically on the concrete pad. Eventually, there will be 16 to 20 casks. Similar systems are used to store spent nuclear fuel at commercial power plants, including the Columbia Generating Station at Hanford. The agency has until 31 August 2025 to complete the work, according to a legal agreement between the DOE, the state of Washington, and the U.S. Environmental Protection Agency.
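The stated figures allow a rough capacity check. The article doesn't say how many sleeves go into each cask, so the sketch below treats that as the unknown; only the capsule count, the six-per-sleeve loading, and the 16-to-20-cask range come from the text:

```python
import math

capsules = 1936           # cesium + strontium capsules at the WESF
per_sleeve = 6            # capsules loaded into each helium-filled sleeve
casks_planned = (16, 20)  # range quoted for the outdoor storage pad

sleeves = math.ceil(capsules / per_sleeve)
print(f"sleeves needed: {sleeves}")  # 323

# Implied sleeves per cask for the quoted cask counts
for casks in casks_planned:
    print(f"{casks} casks -> about {math.ceil(sleeves / casks)} sleeves per cask")
```

So each cask would have to accept on the order of 17 to 21 sleeves, consistent with the nesting-doll packaging the article describes.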

When the transfer is completed, DOE estimates the new facility will save more than US $6 million per year in operating costs. But it’s intended only as a temporary fix. After 50 years in dry storage—around 2075, in other words—the capsules’ contents could be vitrified as well, or else buried in an unspecified deep geologic repository.

Even that timeline may be too ambitious. At a congressional hearing in March, DOE officials said that treatment of the tank waste was the “highest priority” and sought to defer the capsule-transfer work and other cleanup efforts at Hanford. They also proposed slashing Hanford’s annual budget by $700 million in fiscal year 2021. The DOE Office of Environmental Management’s “strategic vision” for 2020–2030 [PDF] noted only that the agency “will continue to evaluate” the transfer of capsules currently stored at the WESF.

And the COVID-19 pandemic has further complicated the department’s plans. The DOE now says it “will be assessing potential impacts on all projects” resulting from reduced operations due to the pandemic. The department’s FY2021 budget proposal calls for “safely” deferring work on the WESF capsule transfers for one year, while supporting “continued maintenance, monitoring, and assessment activities at WESF,” according to a written response sent to IEEE Spectrum.

Unsurprisingly, community leaders and state policymakers oppose the potential slowdowns and budget cuts. They argue that Hanford’s cleanup—now over three decades in the making—cannot be delayed further. David Reeploeg of the Tri-City Development Council (TRIDEC) says the DOE’s strategic vision and proposed budget cuts add to the “collective frustration” at “this pattern of kicking the can down the road.” TRIDEC advocates for Hanford-related priorities in the adjacent communities of Richland, Kennewick, and Pasco, Wash. Reeploeg adds that congressional support over the years has been key to increasing Hanford cleanup funding beyond the DOE’s request levels.

How did Hanford end up with 1,936 capsules of radioactive waste?

The cesium and strontium inside the capsules were once part of the toxic mix stored in Hanford’s giant underground tanks. The heat given off by these elements as they decayed was causing the high-level radioactive waste to dangerously overheat to the point of boiling. And so from 1967 to 1985, technicians extracted the elements from the tanks and put them in capsules.

Initially, the DOE believed that such materials, especially cesium-137, could be put to useful work, in thermoelectric power supplies, to calibrate industrial instruments, or to extend the shelf life of pork, wheat, and spices (though consumers are generally wary of irradiated foods). The department leased hundreds of capsules to private companies around the United States.

One of those companies was Radiation Sterilizers, which used Hanford’s cesium capsules to sterilize medical supplies at its facilities in Decatur, Ga., and Westerville, Ohio. In 1988, a capsule in Decatur developed a pinhole leak, and 0.02 percent of its contents escaped—a mess that took the DOE four years and $47 million to clean up. Federal investigators concluded that moving the capsules in and out of water more than 7,000 times caused temperature changes that damaged the steel. Radiation Sterilizers had removed temperature-measuring systems in its facility, among other failures cited by the DOE. The company, though, blamed the government for shipping a damaged capsule. Whatever the cause, the DOE recalled all capsules and returned them to the WESF.

The WESF now contains 1,335 capsules of cesium, in the form of cesium chloride. Most of that waste consists of nonradioactive isotopes of cesium; of the radioactive isotopes, cesium-137 dominates, with lesser amounts of cesium-135. Another 601 capsules contain strontium, in the form of strontium fluoride, with the main radioactive isotope being strontium-90.

Cesium-137 and strontium-90 have half-lives of 30 years and 29 years, respectively—relatively short periods compared with the half-lives of other materials in the nation’s nuclear inventory, such as uranium and plutonium. However, the present radioactivity of the capsules “is so great” that it will take more than 800 years for the strontium capsules to decay enough to be classified as low-level waste, according to a 2003 report by the U.S. National Research Council. And while the radioactivity of the cesium-137 will diminish significantly after several hundred years, cesium-135 has a half-life of 2.3 million years, which means that the isotope will eventually become the dominant source of radioactivity in the cesium capsules, the report said.
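The timescales above follow from the exponential decay law, N(t) = N₀ · 0.5^(t/T½). A short sketch using the half-lives from the text (note that the actual low-level-waste classification depends on activity concentrations not given here):

```python
def remaining_fraction(t_years, half_life_years):
    """Fraction of a radionuclide's initial activity left after t_years."""
    return 0.5 ** (t_years / half_life_years)

# Strontium-90, half-life ~29 years: after 800 years, a few parts per billion remain
print(f"Sr-90 after 800 y: {remaining_fraction(800, 29):.1e}")

# Cesium-137, half-life ~30 years: after 300 years, about a thousandth remains
print(f"Cs-137 after 300 y: {remaining_fraction(300, 30):.1e}")

# Cesium-135, half-life 2.3 million years: essentially undiminished,
# which is why it eventually dominates the cesium capsules' radioactivity
print(f"Cs-135 after 800 y: {remaining_fraction(800, 2.3e6):.6f}")
```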

Workers at Hanford continue to monitor the condition of the capsules by periodically shaking the containers using a long metal gripping tool. If they hear a “clunk,” it means the inner stainless-steel pipe is moving freely and is thus considered to be in good condition. Some capsules, though, fail the clunk test, which indicates the inner pipe is damaged, rusty, or swollen, and thus can’t move. About two dozen of the failed capsules have been “overpacked”—that is, sealed in a larger stainless-steel container and held separately.

Moving the capsules from wet storage to dry is only temporary

The DOE has made substantial progress on the capsule-transfer work in recent years. In August 2019, CH2M Hill Plateau Remediation Company, one of the main environmental cleanup contractors at Hanford, completed designs to modify the WESF for removal of the capsules. In the weeks before COVID-19 temporarily shut down the site in late March, crews had started fabricating equipment to load capsules into sleeves, transfer them into casks, and move them outside. A team cleaned and painted part of the WESF to make way for the loading crane. At the nearby Maintenance and Storage Facility, workers were building a mock-up system to allow people to train and test equipment.

During the lockdown, employees working remotely continued with technical and design reviews and nuclear-safety assessments. With Hanford now in a phased reopening, CH2M Hill workers recently broke ground on the site of the future dry cask storage pad and have resumed construction at the mock-up facility. Last October, the DOE awarded Intermech, a construction firm owned by Emcor, a nearly $5.6 million contract to build a reinforced-concrete pad surrounded by two chain-link fences, along with utility infrastructure and a heavy-duty road connecting the WESF to the pad.

However, plans for fiscal year 2021, which starts in October, are less certain. In its budget request to Congress in February, the DOE proposed shrinking Hanford’s annual cleanup budget from $2.5 billion to about $1.8 billion. Officials sought no funding for WESF modification and storage work, eliminating $11 million from the current budget. Meanwhile, the agency sought to boost funding for tank-waste vitrification from $15 million to $50 million. Under its legal agreements, the DOE is required to start glassifying Hanford’s low-activity waste by 2023.

Reeploeg of the Tri-City Development Council says the budget cuts, if approved, would make it harder for the capsule-transfer project to stay on track.

Along with vitrification, he told Spectrum, “we think WESF is a top priority, too. Considering that the potential consequences of an event there are so significant, we want those capsules out of the pool and into dry-cask storage as quickly as possible.”

Reeploeg said the failure of another aging Hanford facility should have been a wake-up call. In 2017, a tunnel that runs into the Plutonium Uranium Extraction Plant partially collapsed, exposing highly radioactive materials. Officials had been aware of the tunnel’s structural problems since the 1970s. Ultimately, no airborne radiation leaks were detected, and no workers were hurt. But in a February 2020 report, the Government Accountability Office said the DOE hadn’t done enough to prevent such an event.

Hanford experts at Washington state’s Department of Ecology said a short-term delay on the WESF mission won’t significantly increase the threat to the environment or workers.

“We don’t believe that there’s an immediate health risk from a slowdown of work,” says Alex Smith, the department’s nuclear waste program manager. So long as conditions are properly maintained in the pool cells, the capsules shouldn’t see any noticeable aging or decay in the near-term, she says, but it still makes sense to transfer the capsules to reduce the risk of a worst-case disaster.

In an email to Spectrum, the DOE noted that routine daily inspections of the WESF pool walls haven’t revealed any visible degradation or spalling—flaking that occurs due to moisture in the concrete.

Still, for Hanford watchdogs, the possibility of any new delays compounds the seemingly endless nature of the environmental cleanup mission. Ever since Hanford shuttered its last nuclear reactor in 1987, efforts to extract, treat, contain, and demolish radioactive waste and buildings have proceeded in fits and starts, marked by a few successes—such as the recent removal of 27 cubic meters of radioactive sludge near the Columbia River—but also budgeting issues, technical hurdles, and the occasional accident.

“There are all these competing [cleanup projects], but the clock is running on all of them,” says Dunning, the Oregon nuclear expert. “And you don’t know when it’s going to run out.”

Japan on Track to Introduce Flying Taxi Services in 2023

Post Syndicated from John Boyd original https://spectrum.ieee.org/cars-that-think/aerospace/aviation/japan-on-track-to-introduce-flying-taxi-services-in-2023

Last year, Spectrum reported on Japan’s public-private initiative to create a new industry around electric vertical takeoff and landing vehicles (eVTOLs) and flying cars. Last Friday, start-up company SkyDrive Inc. demonstrated the progress made since then when it held a press conference to spotlight its prototype vehicle and show reporters a video taken three days earlier of the craft undergoing a piloted test flight in front of staff and investors.

The sleek, single-seat eVTOL, dubbed SD-03 (SkyDrive third generation), resembles a hydroplane on skis and weighs in at 400 kilograms. The body is made of carbon fiber, aluminum, and other materials that have been chosen for their weight, balance, and durability. The craft measures 4 meters in length and width, and is about 2 meters tall. During operation, the nose of the craft is lit with white LED lights; red lights run around the bottom to enable the vehicle to be seen in the sky and to distinguish the direction the craft is flying. 

The SD-03 uses four pairs of electrically driven coaxial rotors, with one pair mounted at each quadrant. These enable a flight time of 5 to 10 minutes at speeds up to 50 kilometers per hour. “The propellers on each pair counter-rotate,” explains Nobuo Kishi, SkyDrive’s chief technology officer. “This cancels out propeller torque.” It also makes for a compact design, “so all the craft needs to land is the space of two parked cars,” he adds.

But when it came to providing more details of the drive system, Kishi declined, saying it’s a trade secret that’s a source of competitive advantage. The same goes for the craft’s energy storage system: Other than disclosing the fact that the flying taxi currently uses a lithium polymer battery, he’s also keeping details about the powertrain confidential.

Underlying this need for secrecy is the technology’s restricted capabilities. “Total energy that can be stored in a battery is a major limiting factor here,” says Steve Wright, Senior Research Fellow in Avionics and Aircraft Systems at the University of the West of England. “Which is why virtually every one of these projects is aiming at the air-taxi market within megacities.”

SkyDrive’s video shows the SD-03 taking off vertically, then maneuvering as it hovers up to 2 meters off the ground inside a netted enclosure. The craft moves about at walking speed for roughly 4 minutes before landing on a designated spot. For monitoring and backup purposes, engineers used an additional computer-assisted control system to ensure the craft’s stability and safety.

Speaking at the press conference, Tomohiro Fukuzawa, SkyDrive’s CEO, estimated there are currently as many as 100 flying car projects underway around the world, “but only a few have succeeded with someone on board,” he said.

He went on to note that Japan lags behind other countries in the aviation industry but excels in manufacturing cars. Given the similarities between cars (especially electric cars) and eVTOLs, he believes Japan can compete with companies in the United States, Europe, and China that are also developing eVTOLs.

SkyDrive’s advances have encouraged new venture capital investors to come on board and nearly triple investment to a total of 5.9 billion yen ($56 million). Original investors include large corporations that saw an opportunity to get in on the ground floor of a promising new industry backed by government. One investor, NEC, is aiming to create more options for its air-traffic management systems, while Japan’s largest oil company, Eneos, is interested in developing electric charging stations for all kinds of electric vehicles.

In May, SkyDrive unveiled a drone for commercial use that is based on the same drive and power systems as the SD-03. Named the Cargo Drone, it’s able to transport payloads of up to 30 kg and can be preprogrammed to fly autonomously or be piloted manually. It will be operated as a service by SkyDrive, starting at a minimum monthly rental charge of 380,000 yen ($3,600) that rises according to the purpose and frequency of use. 

Kishi says the drone is designed to work within a 3 km range in locations that are difficult or time-consuming to get to by road. For instance, Obayashi Corp., one of Japan’s big five construction companies and an investor in SkyDrive, has been testing the Cargo Drone to autonomously deliver materials like sandbags and timber to a remote, hard-to-reach location.

Fukuzawa established SkyDrive in 2018 after leaving Toyota Motor and working with Cartivator, a group of volunteer engineers interested in developing flying cars. SkyDrive now has a staff of fifty.

Also in 2018, the Japanese government formed the Public-Private Conference for Air Mobility made up of private companies, universities, and government ministries. The stated aim was to make flying vehicles a reality by 2023. Tomohiko Kojima of Japan’s Civil Aviation Bureau told Spectrum that since the Conference’s formation, the Ministry of Land, Infrastructure, Transport and Tourism has held a number of meetings with members to discuss matters like airspace for eVTOL use, flight rules, and permitted altitudes. “And last month, the Ministry established a working-level group to discuss certification standards for eVTOLs, a standard for pilots, and operational safety standards,” Kojima added.

Fukuzawa is also targeting 2023 to begin taxi services (single passenger and pilot) in the Osaka Bay area, flying between locations like Kansai and Kobe airports and tourist attractions such as Universal Studios Japan. These flights will take less than ten minutes—a practical nod to the limitations of the battery energy storage system.

“What SkyDrive is proposing is entirely do-able,” says Wright. “Almost all rotor-only eVTOL projects are limited to sub-30-minute endurance, which, with safety reserves, equates to about 10 to 20 minutes of flying.”
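Wright's 10-to-20-minute figure is easy to reproduce with back-of-the-envelope numbers. Every value below is an illustrative assumption, not a SkyDrive specification:

```python
# All figures are illustrative assumptions, not SkyDrive specifications
battery_kwh = 10.0       # assumed usable battery energy on board
hover_power_kw = 40.0    # assumed average power draw in hover/slow flight
reserve_fraction = 0.30  # safety reserve held back from the flight plan

endurance_min = battery_kwh / hover_power_kw * 60
usable_min = endurance_min * (1 - reserve_fraction)

print(f"gross endurance: {endurance_min:.0f} min")
print(f"usable flight time with reserves: {usable_min:.1f} min")
```

With these assumed numbers the craft has 15 minutes of gross endurance and roughly 10 usable minutes, which is why SkyDrive's planned Osaka Bay hops are capped at under ten minutes.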

NASA’s Mars Rover Required a Special Touch for Its Robotic Arms

Post Syndicated from ATI Industrial Automation original https://spectrum.ieee.org/aerospace/robotic-exploration/nasa_mars_rover_required_a_special_touch_for_its_robotic_arms

In July, NASA launched the most sophisticated rover the agency has ever built: Perseverance. Scheduled to land on Mars in February 2021, Perseverance will be able to perform unique research into the history of microbial life on Mars, in large part thanks to its robotic arms. To achieve this robotic capability, NASA called upon innovation-driven contractors to make such an engineering feat a reality.

One of the companies that NASA enlisted to help develop Perseverance was ATI Industrial Automation. NASA asked ATI to adapt the company’s Force/Torque Sensor so that the robotic arm of Perseverance could operate in the environment of space. ATI’s Force/Torque sensors were initially developed to let robots and automation systems sense the forces applied while interacting with their environment, whether in operating rooms or on factory floors.

However, the environment of space presented unique engineering challenges for ATI’s Force/Torque Sensor. The extreme environment and the need for redundancy to ensure that any single failure wouldn’t compromise the sensor function were the key challenges the ATI engineers faced, according to Ian Stern, Force/Torque Sensor Product Manager at ATI.

“ATI’s biggest technical challenge was developing the process and equipment needed to perform the testing at the environmental limits,” said Stern. “The challenges start when you consider the loads that the sensor sees during the launch of the Atlas V rocket from Earth. The large g-forces cause the tooling on the end of the sensor to generate some of the highest loads that the sensor sees over its life.”

Once on Mars, the sensor must be able to accurately and reliably measure forces and torques at temperatures ranging from -110 °C to +70 °C. This presents several challenges because of how acutely temperature influences the accuracy of force-measurement devices. To meet these demands, ATI developed the capability to calibrate the sensors at -110 °C. “This required a lot of specialized equipment for achieving these temperatures while making it safe for our engineers to perform the calibration process,” added Stern.
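ATI's actual compensation process is proprietary, but the general idea of correcting a reading for thermal drift can be sketched with a simple linear model. The coefficient and reference temperature below are invented for illustration; a real sensor would be characterized empirically across its rated range:

```python
def compensate(raw_force_n, temp_c, ref_temp_c=20.0, drift_n_per_c=0.012):
    """Remove a linear thermal drift from a raw force reading (newtons).

    drift_n_per_c is a made-up calibration coefficient; a real sensor
    would be characterized across its rated range (here, -110 to +70 C).
    """
    return raw_force_n - drift_n_per_c * (temp_c - ref_temp_c)

# At the reference temperature the reading passes through unchanged
print(compensate(100.0, 20.0))   # 100.0

# At -110 C the same raw reading is corrected upward
print(compensate(100.0, -110.0))
```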

In addition to the harsh environment, redundancy strategies are critical for a sensor technology on a space mission. While downtime on the factory floor can be costly, a component failure on Mars can render the entire mission worthless since there are no opportunities for repairs.

This need for a completely reliable product meant that ATI engineers had to develop their sensor so that it was capable of detecting failures in its measurements, as well as accurately measuring forces and torques should there be multiple failures on the measurement signals. ATI developed a patented process for achieving this mechanical and electrical redundancy.

All of this effort to engineer a sensor for NASA’s Mars mission may enable a whole new generation of space exploration, but it’s also paying immediate dividends for ATI’s more terrestrial efforts in robotic sensors.

“The development of a sensor for the temperatures on Mars has helped us to develop and refine our process of temperature compensation,” said Stern. “This has benefits on the factory floor in compensating for thermal effects from tooling or the environment.”

As an example of these new temperature compensation strategies, Stern points to a solution developed to address the heat produced by a motor mounted to a tool changer. This heat flux can cause undesirable output in the force/torque data, according to Stern.

“As a result of the Mars Rover project we now have several different processes to apply on our standard industrial sensors to mitigate the effects of temperature change,” said Stern.

The redundancy requirements translated into a prototype of a standalone, safety-rated Force/Torque sensor capable of meeting Performance Level d (PL-d) safety requirements. This type of sensor can actively check its health and provide extremely high-resolution data, allowing a large robot with a 500-kilogram payload handling automotive body parts to safely detect whether a human finger has been pinched.

ATI is also leveraging the work it did for Perseverance to inform some of its ongoing space projects. These include a NASA tech demo targeting a moon rover for 2023, future Mars rovers, and a potential mission to Europa that would use the sensors for drilling into ice.

Stern added: “The fundamental capability that we developed for the Perseverance Rover is scalable to different environments and different payloads for nearly any space application.”


With Ultralight Lithium-Sulfur Batteries, Electric Airplanes Could Finally Take Off

Post Syndicated from Mark Crittenden original https://spectrum.ieee.org/aerospace/aviation/with-ultralight-lithiumsulfur-batteries-electric-airplanes-could-finally-take-off

Electric aircraft are all the rage, with prototypes in development in every size from delivery drones to passenger aircraft. But the technology has yet to take off, and for one reason: lack of a suitable battery.

For a large passenger aircraft to take off, cruise, and land hundreds of kilometers away would take batteries that weigh thousands of kilograms—far too heavy for the plane to be able to get into the air in the first place. Even for relatively small aircraft, such as two-seat trainers, the sheer weight of batteries limits the plane’s payload, curtails its range, and thus constrains where the aircraft can fly. Reducing battery weight would be an advantage not only for aviation, but for other electric vehicles, such as cars, trucks, buses, and boats, all of whose performance is also directly tied to the energy-to-weight ratio of their batteries.

For such applications, today’s battery of choice is lithium ion. It reached maturity years ago, with each new incremental improvement smaller than the last. We need a new chemistry.

Since 2004 my company, Oxis Energy, in Oxfordshire, England, has been working on one of the leading contenders—lithium sulfur. Our battery technology is extremely lightweight: Our most recent models are achieving more than twice the energy density typical of lithium-ion batteries. Lithium sulfur is also capable of providing the required levels of power and durability needed for aviation, and, most important, it is safe enough. After all, a plane can’t handle a sudden fire or some other calamity by simply pulling to the side of the road.

The new technology has been a long time coming, but the wait is now over. The first set of flight trials has already been completed.

Fundamentally, a lithium-sulfur cell is composed of four components:

  • The positive electrode, known as the cathode, absorbs electrons during discharge. It is connected to an aluminum-foil current collector coated with a mixture of carbon and sulfur. Sulfur is the active material that takes part in the electrochemical reactions. But it is an electrical insulator, so carbon, a conductor, delivers electrons to where they are needed. There is also a small amount of binder added to ensure the carbon and sulfur hold together in the cathode.
  • The negative electrode, or anode, releases electrons during discharge. It is connected to pure lithium foil. The lithium, too, acts as a current collector, but it is also an active material, taking part in the electrochemical reaction.
  • A porous separator prevents the two electrodes from touching and causing a short circuit. The separator is bathed in an electrolyte containing lithium salts.
  • An electrolyte facilitates the electrochemical reaction by allowing the movement of ions between the two electrodes.

These components are connected and packaged in foil as a pouch cell. The cells are in turn connected together—both in series and in parallel—and packaged in a 20 ampere-hour, 2.15-volt battery pack. For a large vehicle such as an airplane, scores of packs are connected to create a battery capable of providing tens or hundreds of amp-hours at several hundred volts.
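The series/parallel arithmetic can be sketched using the 2.15 V, 20 Ah pack figures from the text and a hypothetical 400 V, 120 Ah aircraft bus (those two targets are assumptions for illustration, not Oxis specifications):

```python
import math

cell_v = 2.15    # volts per lithium-sulfur pack (from the text)
pack_ah = 20.0   # amp-hours per pack (from the text)

# Hypothetical aircraft bus requirements -- assumptions, not from the article
target_v = 400.0
target_ah = 120.0

series = math.ceil(target_v / cell_v)      # packs in series per string
parallel = math.ceil(target_ah / pack_ah)  # parallel strings

print(f"{series} in series x {parallel} strings = {series * parallel} packs")
print(f"actual string voltage: {series * cell_v:.1f} V")
```

For these assumed targets, the battery needs 187 packs per string and 6 strings, which is where the "scores of packs" in a full aircraft battery comes from.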

Lithium-sulfur batteries are unusual because they go through multiple stages as they discharge, each time forming a different, distinct molecular species of lithium and sulfur. When a cell discharges, lithium ions in the electrolyte migrate to the cathode, where they combine with sulfur and electrons to form a polysulfide, Li2S8. At the anode, meanwhile, lithium atoms give up electrons to form positively charged lithium ions; these freed electrons then move through the external circuit—the load—which takes them back to the cathode. In the electrolyte, the newly produced Li2S8 immediately reacts with more lithium ions and more electrons to form a new polysulfide, Li2S6. The process continues, stepping through further polysulfides, Li2S4 and Li2S2, to eventually become Li2S. At each step more energy is given up and passed to the load until at last the cell is depleted of energy.

Recharging reverses the sequence: An applied current forces electrons to flow in the opposite direction, causing the sulfur electrode, or cathode, to give up electrons, converting Li2S to Li2S2. The polysulfide continues to add sulfur atoms step-by-step until Li2S8 is created in the cathode. And each time electrons are given up, lithium ions are produced that then diffuse through the electrolyte, combining with electrons at the lithium electrode to form lithium metal. When all the Li2S has been converted to Li2S8, the cell is fully charged.

This description is simplified. In reality, the reactions are more complex and numerous, taking place also in the electrolyte and at the anode. In fact, over many charge and discharge cycles, it is these side reactions that cause degradation in a lithium-sulfur cell. Minimizing these, through the selection of the appropriate materials and cell configuration, is the fundamental, underlying challenge that must be met to produce an efficient cell with a long lifetime.
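The endpoint of the discharge chain, S8 + 16 Li → 8 Li2S, is what gives sulfur its frequently cited theoretical capacity of roughly 1,672 mAh per gram (a standard figure, not quoted in the text), derivable from Faraday's constant:

```python
F = 96485.0   # Faraday constant, coulombs per mole of electrons
M_S = 32.06   # molar mass of sulfur, g/mol

# Full discharge S8 + 16 Li -> 8 Li2S transfers 2 electrons per sulfur atom
electrons_per_s = 2
charge_per_gram = electrons_per_s * F / M_S  # coulombs per gram of sulfur
capacity_mah_g = charge_per_gram / 3.6       # 1 mAh = 3.6 C

print(f"theoretical sulfur capacity: {capacity_mah_g:.0f} mAh/g")  # 1672
```

Practical cells deliver only a fraction of this figure, precisely because of the side reactions and incomplete utilization the text goes on to describe.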

One great challenge for both lithium-ion and lithium-sulfur technologies has been the tendency for repeated charging and discharging cycles to degrade the anode. In the case of lithium ion, ions arriving at that electrode normally fit themselves into interstices in the electrode material, a process called intercalation. But sometimes ions plate the surface, forming a nucleus on which further plating can accumulate. Over many cycles a filament, or dendrite, may grow until it reaches the opposing electrode and short-circuits the cell, causing a surge of energy, in the form of heat, that irreparably damages the cell. If one cell breaks down like this, it can trigger a neighboring cell to do the same, beginning a domino effect known as a thermal runaway reaction—in common parlance, a fire.

With lithium-sulfur cells, degradation of the lithium-metal anode is also a problem. However, this occurs via a very different mechanism, one that does not involve the formation of dendrites. In lithium-sulfur cells, uneven current densities on the anode surface cause lithium to be plated and stripped unevenly as the battery is charged and discharged. Over time, this uneven plating and stripping causes mosslike deposits on the anode that react with the sulfide and polysulfides in the electrolyte. These mosslike deposits become electrically disconnected from the bulk anode, leaving less of the anode surface available for chemical reaction. Eventually, as this degradation progresses, the anode fails to operate, preventing the cell from accepting charge.

Developing solutions to this degradation problem is crucial to producing a cell that can perform at a high level over many charge-discharge cycles. A promising strategy we’ve been pursuing at Oxis involves coating the lithium-metal anode with thin layers of ceramic materials to prevent degradation. Such ceramic materials need to have high ionic conductivity and be electrically insulating, as well as mechanically and chemically robust. The ceramic layers allow lithium ions to pass through unimpeded and be incorporated into the bulk lithium metal beneath.

We are doing this work on the protection layer for the anode in partnership with Pulsedeon and Leitat, and we’re optimistic that it will dramatically increase the number of times a cell can be discharged and charged. And it’s not our only partnership. We’re also working with Arkema to improve the cathode in order to increase the power and energy density of the battery.

Indeed, the key advantage of lithium-ion batteries over their predecessors—and of lithium sulfur over lithium ion—is the great amount of energy the cells can pack into a small amount of mass. The lead-acid starter battery that cranks the internal combustion engine in a car can store about 50 watt-hours per kilogram. Typical lithium-ion designs can hold from 100 to 265 Wh/kg, depending on the other performance characteristics for which they have been optimized, such as peak power or long life. Oxis recently developed a prototype lithium-sulfur pouch cell that proved capable of 470 Wh/kg, and we expect to reach 500 Wh/kg within a year. And because the technology is still new and has room for improvement, it’s not unreasonable to anticipate 600 Wh/kg by 2025.
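A quick back-of-the-envelope calculation shows what these figures mean for aircraft. The 200-kWh mission energy below is an arbitrary illustrative number, not a figure for any particular aircraft.

```python
# Battery mass needed to carry a given amount of energy, at the specific
# energies quoted in the text. The 200-kWh mission energy is hypothetical.
def battery_mass_kg(energy_wh: float, specific_energy_wh_per_kg: float) -> float:
    return energy_wh / specific_energy_wh_per_kg

mission_energy_wh = 200_000  # 200 kWh, an illustrative assumption
for name, wh_per_kg in [("lead-acid", 50), ("lithium-ion (best)", 265),
                        ("Li-S prototype", 470), ("Li-S target", 600)]:
    print(f"{name:>18}: {battery_mass_kg(mission_energy_wh, wh_per_kg):6.0f} kg")
```

The spread is stark: the same mission energy that needs a 4-tonne lead-acid pack fits in roughly a third of a tonne at 600 Wh/kg.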

When cell manufacturers quote energy-density figures, they usually specify the energy that’s available when the cell is being discharged at constant, low power rates. In some applications such low rates are fine, but for the many envisioned electric aircraft that will take off vertically, the energy must be delivered at higher power rates. Delivering that higher power comes at the cost of lower total energy-storage capacity.

Furthermore, the level of energy density achievable in a single cell might be considerably greater than what’s possible in a battery consisting of many such cells. The energy density doesn’t translate directly from the cell to the battery because cells require packaging—the case, the battery management system, and the connections, and perhaps cooling systems. The weight must be kept in check, and for this reason our company is using advanced composite materials to develop light, strong, flameproof enclosures.

If the packaging is done right, the energy density of the battery can be held to 80 percent of that of the cells: A cell rated at 450 Wh/kg can be packaged at more than 360 Wh/kg in the final battery. We expect to do better by integrating the battery into the aircraft, for instance, by making the wing space do double duty as the battery housing. We expect that doing so will get the figure up to 90 percent.
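The cell-to-pack arithmetic above is simple enough to write down directly; the 0.80 and 0.90 packaging efficiencies are the figures from the text.

```python
# Pack-level specific energy = cell specific energy x packaging efficiency.
def pack_wh_per_kg(cell_wh_per_kg: float, packaging_efficiency: float) -> float:
    return cell_wh_per_kg * packaging_efficiency

print(pack_wh_per_kg(450, 0.80))  # ~360 Wh/kg in a conventional enclosure
print(pack_wh_per_kg(450, 0.90))  # ~405 Wh/kg with the airframe as housing
```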

To optimize battery performance without compromising safety, we rely, first and foremost, on a battery management system (BMS), which is a combination of software and hardware that controls and protects the battery. It also includes algorithms for measuring the energy remaining in a battery and others for minimizing the energy wasted during charging.

Like lithium-ion cells, lithium-sulfur cells vary slightly from one another. These differences, as well as differences in the cells’ position in the battery pack, may cause some cells to consistently run hotter than others. Over time, those high temperatures slowly degrade performance, so it is important to minimize the differences from cell to cell. This is usually achieved using a simple balancing scheme, in which a resistor can be connected in parallel with each cell, all controlled by software in the BMS.
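A toy version of that balancing logic might look like the sketch below. The `Cell` type and the 10-millivolt threshold are invented for illustration; real thresholds, sensing, and control loops will differ.

```python
from dataclasses import dataclass

@dataclass
class Cell:
    voltage: float          # terminal voltage, in volts
    bleed_on: bool = False  # True -> balancing resistor switched in

def update_balancing(cells: list[Cell], threshold_v: float = 0.01) -> None:
    """Bleed any cell sitting more than threshold_v above the lowest cell."""
    v_min = min(c.voltage for c in cells)
    for c in cells:
        c.bleed_on = (c.voltage - v_min) > threshold_v

cells = [Cell(2.10), Cell(2.15), Cell(2.105)]
update_balancing(cells)
print([c.bleed_on for c in cells])  # [False, True, False]
```

Only the highest cell is bled down toward the pack minimum; the two within the threshold band are left alone.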

Even when charging and discharging rates are kept within safe limits, any battery may still generate excessive heat. So, typically, a dedicated thermal-management system is necessary. An electric car can use liquid cooling, but in aviation, air cooling is much preferred because it adds less weight. Of course, the battery can be placed at a point where air is naturally moving across the surface of the airplane—perhaps the wing. If necessary, air can be shunted to the battery through ducts. At Oxis, we’re using computational modeling to optimize such cooling. For instance, when we introduced this technique in a project for a small fixed-wing aircraft, it allowed us to design an effective thermal-management system, without which the battery would reach its temperature limits before it was fully discharged.

As noted above, a battery pack is typically arranged with the cells both in parallel and in series. However, there’s more to the arrangement of cells. Of course, the battery is a mission-critical component of an e-plane, so you’ll want redundancy, for enhanced safety. You could, for instance, design the battery in two equal parts, so that if one half fails it can be disconnected, leaving the aircraft with at least enough energy to manage a controlled descent and landing.

Another software component within the BMS is the state-of-charge algorithm. Imagine having to drive a car whose fuel gauge had a measurement error equivalent to 25 percent of the tank’s capacity. You’d never let the indicator drop to 25 percent, just to make sure that the car wouldn’t sputter to a halt. Your practical range would be only three-quarters of the car’s actual range. To avoid such waste, Oxis has put a great emphasis on the development of state-of-charge algorithms.

In a lithium-ion battery you can estimate the charge by simply measuring the voltage, which falls as the energy level does. But it’s not so simple for a lithium-sulfur battery. Recall that in the lithium-sulfur battery, different polysulfides figure in the electrochemical process at different times during charge and discharge. The upshot is that voltage is not a good proxy for the state of charge and, to make things even more complicated, the voltage curve is asymmetrical for charge and for discharge. So the algorithms needed to keep track of the state of charge are much more sophisticated. We developed ours with Cranfield University, in England, using statistical techniques, among them the Kalman filter, as well as neural networks. We can estimate state of charge to an accuracy of a few percent, and we are working to do better still.
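To give a flavor of the approach—without pretending to reproduce Oxis’s actual algorithm—here is a one-state Kalman filter that predicts charge by coulomb counting and corrects it with a voltage reading. The linear voltage model and all the constants are deliberate toy assumptions; as just explained, a real lithium-sulfur voltage curve is not this well behaved, which is exactly why the production algorithms are more sophisticated.

```python
CAPACITY_AH = 20.0             # assumed cell capacity
OCV_MIN, OCV_MAX = 2.0, 2.45   # assumed endpoints of a toy linear voltage model

def ocv(soc: float) -> float:
    """Toy open-circuit voltage: linear in state of charge (NOT true for Li-S)."""
    return OCV_MIN + (OCV_MAX - OCV_MIN) * soc

def kalman_step(soc, p, current_a, dt_s, v_meas, q=1e-6, r=1e-3):
    # Predict: coulomb counting (discharge current taken as positive).
    soc_pred = soc - current_a * dt_s / (CAPACITY_AH * 3600)
    p_pred = p + q                         # process noise inflates uncertainty
    # Update: correct the prediction with the measured voltage.
    h = OCV_MAX - OCV_MIN                  # d(ocv)/d(soc) for the linear model
    k = p_pred * h / (h * h * p_pred + r)  # Kalman gain
    soc_new = soc_pred + k * (v_meas - ocv(soc_pred))
    p_new = (1 - k * h) * p_pred
    return soc_new, p_new

# Starting from a poor guess of 50 percent, repeated voltage readings
# consistent with 80 percent charge pull the estimate toward the truth.
soc, p = 0.5, 0.01
for _ in range(40):
    soc, p = kalman_step(soc, p, current_a=0.0, dt_s=1.0, v_meas=ocv(0.8))
print(round(soc, 2))  # ≈ 0.8
```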

All these design choices involve trade-offs, which are different for different airplanes. We vary how we manage these trade-offs in order to tailor our battery designs for three distinct types of aircraft.

  • High-altitude pseudo satellites (HAPS) are aircraft that fly at around 15,000 to 20,000 meters. The hope is to be able to fly for months at a time; the current record is 26 days, set in 2018 by the Airbus Zephyr S. By day, these aircraft use solar panels to power the motors and charge the batteries; by night, they fly on battery power. Because the 24-hour charge-and-discharge period demands only a little power, you can design a light battery and thus allow for a large payload. The lightness also makes it easier for such an aircraft to fly far from the equator, where the night lasts longer.
  • Electric vertical take-off and landing (eVTOL) aircraft are being developed as flying taxis. Lilium, in Germany, and Uber Elevate, among others, already have such projects under way. Again, weight is critical, but here the batteries need not only be light but must also be powerful. Oxis has therefore developed two versions of its cell chemistry. The high-energy version is optimized in many aspects of the cell design to minimize weight, but it is limited to relatively low power; it is best suited to HAPS applications. The high-power version weighs more, although still significantly less than a lithium-ion battery of comparable performance; it is well suited for such applications as eVTOL.
  • Light fixed-wing aircraft: The increasing demand for pilots is coming up against the high cost of training them; an all-electric trainer aircraft would dramatically reduce operating costs. A key factor is longer flight duration, which a lighter battery enables. Bye Aerospace, in Colorado, is one company leading the way in such aircraft. Other companies—such as EasyJet, partnered with Wright Electric—are planning all-electric commercial passenger jets for short-haul, 2-hour flights.

Three factors will determine whether lithium-sulfur batteries ultimately succeed or fail. First is the successful integration of the batteries into multiple aircraft types, to prove the principle. Second is the continued refinement of the cell chemistry. Third is the continued reduction in the unit cost. A plus here is that sulfur is about as cheap as materials get, so there’s reason to hope that with volume manufacturing, the unit cost will fall below that of the lithium-ion design, as would be required for commercial success.

Oxis has already produced tens of thousands of cells and is now scaling up two new projects. It is establishing a manufacturing plant for the production of both the electrolyte and the cathode active material in Port Talbot, Wales. Later, the mass production of lithium-sulfur cells will begin on a site that belongs to Mercedes-Benz Brazil, in Minas Gerais, Brazil.

This state-of-the-art plant should be commissioned and operating by 2023. If the economies of scale prove out, and if the demand for electric aircraft rises as we expect, then lithium-sulfur batteries could begin to supplant lithium-ion batteries in this field. And what works in the air ought to work on the ground, as well.

This article appears in the August 2020 print issue as “Ultralight Batteries for Electric Airplanes.”

About the Author

Mark Crittenden is head of battery development and integration at Oxis Energy, in Oxfordshire, U.K.

Amazon’s Project Kuiper is More Than the Company’s Response to SpaceX

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/aerospace/satellites/amazons-project-kuiper-is-more-than-the-companys-response-to-spacex

Amazon cleared an important hurdle when the U.S. Federal Communications Commission (FCC) announced on 30 July that the company was authorized to deploy and operate its Kuiper satellite constellation. The authorization came with the caveat that Amazon would still have to demonstrate that Kuiper would not interfere with previously authorized satellite projects, such as SpaceX’s Starlink.

Even with the FCC’s caveat, it’s tempting to imagine that the idea of putting a mega-constellation of thousands of satellites in low-Earth orbit to provide uninterrupted broadband access anywhere on Earth will become a battle between Jeff Bezos’ Kuiper and Elon Musk’s Starlink. After all, even in space, how much room can there be for two mega-constellations, let alone additional efforts like that of the recently beleaguered OneWeb? But some experts suggest that Amazon’s real play will come from its ability to vertically integrate Kuiper into the rest of the Amazon ecosystem—an ability SpaceX cannot match with Starlink.

“With Amazon, it’s a whole different ballgame,” says Zac Manchester, an assistant professor of aeronautics and astronautics at Stanford University. “The thing that makes Amazon different from SpaceX and OneWeb is they have so much other stuff going for them.” If Kuiper succeeds, Amazon can not only offer global satellite broadband access—it can include that access as part of its Amazon Web Services (AWS), which already offers resources for cloud computing, machine learning, data analytics, and more.

First, some quick background on what Amazon plans with Kuiper itself. The FCC approved the launch of 3,236 satellites. Not all of those thousands of satellites have to be launched immediately, however. Amazon is now obligated to launch at least half of the total by 2026 to retain the operating license the FCC has granted the company.

Amazon has said it will invest US $10 billion to build out the constellation. The satellites themselves will circle the Earth in what’s referred to as “low Earth orbit,” or LEO, which is any orbital height below 2000 kilometers. The satellites will operate in the Ka band (26.5 to 40 gigahertz).

A common talking point for companies building satellite broadband systems is that the constellations will be able to provide ubiquitous broadband access. In reality, except for users in remote or rural locations, terrestrial fiber or cellular networks almost always win out. In other words, no one in a city or suburb should be clamoring for satellite broadband.

“If they think they’re competing against terrestrial providers, they’re deluded,” says Tim Farrar, a satellite communications consultant, adding that satellite broadband is for last-resort customers who don’t have any other choice for connectivity.

However, these last-resort customers also include industries that can weather the cost of expensive satellite broadband, such as defense, oil and gas, and aviation. There’s far more money to be made in catering to those industries than in building several thousand satellites just to connect individual rural broadband subscribers.

But what these far-flung industries also increasingly have in common, alongside industries like Earth-imaging and weather-monitoring that also depend on satellite connectivity, is data. Specifically, the need to move, store, and crunch large quantities of data. And that’s something Amazon already offers.

“You could see Project Kuiper being a middleman for getting data into AWS,” says Manchester. “SpaceX owns the space segment, they can get data from point A to point B through space. Amazon can get your data through the network and into their cloud and out to end users.” There are plenty of tech start-ups and other companies that already do machine learning and other data-intensive operations in AWS and could make use of Kuiper to move their data. (Amazon declined to comment on the record about its future plans for Kuiper for this story.)

Amazon has also built AWS ground stations that connect satellites directly with the rest of the company’s web service infrastructure. Building and launching satellites is certainly expensive, but the ground stations to connect those satellites are also a not-insignificant cost. Because Amazon already offers access to these ground stations on a per-minute basis, Manchester thinks it’s not unreasonable for the company to expand that offering to Kuiper’s connectivity.

There’s also Blue Origin to consider. The rocket company owned by Bezos does not currently have an operational heavy-lift rocket that could bring Kuiper satellites to LEO, but that could change: The company has at least one such rocket—the New Glenn—in development. Farrar notes that Amazon could spend the next few years in satellite development before it needs to begin launching satellites in earnest, by which point Blue Origin could have a heavy-lift option available.

Farrar says that with an investment of $10 billion, Amazon will need to bring in millions of subscribers to consider the project a financial success. But Amazon can also play a longer game than, say, SpaceX. Whereas the latter is going to be dependent entirely on subscriptions to generate revenue for Starlink, Amazon’s wider business platform means Kuiper is not dependent solely on its own ability to attract users. Plus, Amazon has the resources to make a long-term investment in Kuiper before turning a profit, in a way Starlink cannot.

“They own all these things the other guys don’t,” says Manchester. “In a lot of ways, Amazon has a grander vision. They’re not trying to be a telco.”

China Launches Beidou, Its Own Version of GPS

Post Syndicated from Andrew Jones original https://spectrum.ieee.org/tech-talk/aerospace/satellites/final-piece-of-chinas-beidou-navigation-satellite-system-comes-online

The final satellite needed to complete China’s own navigation and positioning satellite system has passed final on-orbit tests. The completed independent system provides military and commercial value while also facilitating new technologies and services.

The final Beidou satellite was launched on a Long March 3B rocket from the Xichang Satellite Launch Center, in a hilly region of Sichuan province, at 01:43 UTC on Tuesday, 23 June. The satellite was sent into a geosynchronous transfer orbit before entering an orbital slot approximately 35,786 kilometers in altitude, which keeps it at a fixed point above the Earth.
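That 35,786-kilometer figure is not arbitrary: it is the altitude at which an orbit’s period matches one sidereal day, and it can be checked with a few lines of Python.

```python
# Deriving the geostationary slot altitude from Kepler's third law:
# T = 2*pi*sqrt(a^3/mu)  ->  a = (mu*(T/2pi)^2)^(1/3)
import math

MU_EARTH = 3.986004418e14   # m^3/s^2, Earth's standard gravitational parameter
SIDEREAL_DAY_S = 86_164.1   # one sidereal day, in seconds
EARTH_RADIUS_KM = 6_378.1   # equatorial radius

def geostationary_altitude_km() -> float:
    a_m = (MU_EARTH * (SIDEREAL_DAY_S / (2 * math.pi)) ** 2) ** (1 / 3)
    return a_m / 1000 - EARTH_RADIUS_KM

print(round(geostationary_altitude_km()))  # 35786
```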

Like GPS, the main, initial motivation for Beidou was military. The People’s Liberation Army did not want to be dependent on GPS for accurate positioning data of military units and weapons guidance, as the U.S. Air Force could switch off open GPS signals in the event of conflict. 

As with GPS, Beidou also provides and facilitates a range of civilian and commercial services and activities, with an output value of $48.5 billion in 2019. 

Twenty-four satellites in medium Earth orbits (at around 21,500 kilometers above the Earth) provide positioning, navigation and timing (PNT) services. The satellites use rubidium and hydrogen atomic clocks for highly accurate timing that allows precise measurement of speed and location.
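The reason navigation satellites carry atomic clocks is that ranging is really timing: a receiver locates itself by measuring signal travel times, so any clock error converts directly into position error at the speed of light. A one-line calculation makes the stakes clear.

```python
# Clock error maps to ranging error at the speed of light.
C = 299_792_458.0  # speed of light, m/s

def range_error_m(clock_error_s: float) -> float:
    """Ranging error produced by a given clock error."""
    return C * clock_error_s

print(range_error_m(1e-9))  # a 1-nanosecond clock error is already ~0.3 m
print(range_error_m(1e-6))  # a 1-microsecond error is ~300 m
```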

Additionally, thanks to a number of satellites in geosynchronous orbits, Beidou provides a short messaging service through which 120-character messages can be sent to other Beidou receivers. Beidou also aids international search and rescue services: Vessels at sea will be able to seek help from nearby ships in an emergency, even where there is no cellphone signal.

The Beidou satellite network is also testing inter-satellite links, removing reliance on ground stations for communications across the system.

Beidou joins the United States’ GPS and Russia’s GLONASS in providing global PNT services, with Europe’s Galileo soon to follow. These are all compatible and interoperable, meaning users can draw services from all of these to improve accuracy.

“The BeiDou-3 constellation transmits a civil signal that was designed to be interoperable with civil signals broadcast by Galileo, GPS III, and a future version of GLONASS. This means that civil users around the world will eventually be getting the same signal from more than 100 satellites across all these different constellations, greatly increasing availability, accuracy, and resilience,” says Brian Weeden, Director of Program Planning for the Secure World Foundation.

“This common signal is the result of international negotiations that have been going on since the mid-2000s within the International Committee of GNSS (ICG).”

The rollout of Beidou has taken two decades. The first Beidou satellites were launched in 2000, providing coverage to China. Second generation Beidou-2 satellites provided coverage for the Asia-Pacific region starting in 2012. Deployment of Beidou-3 satellites began in 2015, with Tuesday’s launch being the 30th such satellite. 

But this is far from the end of the line. China wants to establish a ‘ubiquitous, integrated and intelligent and comprehensive’ national PNT system, with Beidou as its core, by 2035, according to a white paper.

Chinese aerospace firms are also planning satellite constellations in low Earth orbit to augment the Beidou signal, improving accuracy while facilitating high-speed data transmission. Geely, an automotive giant, is now also planning its own constellation to improve accuracy for autonomous driving.

Although the space segment is complete, China still has work to do on the ground to make full use of Beidou, according to Weeden.

“It’s not just enough to launch the satellites; you also have to roll out the ground terminals and get them integrated into everything you want to make use of the system. Doing so is often much harder and takes much longer than putting up the satellites. 

“So, for the Chinese military to make use of the military signals offered by BeiDou-3, they need to install compatible receivers into every plane, tank, ship, bomb, and backpack. That will take a lot of time and effort,” Weeden states.

With the rollout of Beidou satellites complete, inhabitants downrange of Xichang will be spared any further disruption and possible harm. Long March 3B launches of Beidou satellites frequently see spent rocket stages fall near or on inhabited areas. Eighteen such launches have been carried out since 2018.

The areas calculated to be under threat from falling boosters were evacuated ahead of time for safety. Warnings about residual toxic hypergolic propellant were also issued. But close calls and damage to property were all too common.

Sergey Brin’s Revolutionary $19 Airship

Post Syndicated from Mark Harris original https://spectrum.ieee.org/tech-talk/aerospace/aviation/sergey-brins-revolutionary-20-airship

In March last year, Google co-founder Sergey Brin finally saw a return on the millions he has invested in a quest to build the world’s largest and greenest airship for humanitarian missions. After six years of development, his secretive airship company, LTA Research and Exploration, quietly made its first sale: an 18-meter-long, 12-engined, all-electric aircraft called Airship 3.0. The price? According to an FAA filing obtained by IEEE Spectrum, it was just $18.70.

This was not an effort by Brin to cash out his airship investment, but a key part of its development process. The FAA records show that the buyer of Airship 3.0 was Nicolas Garafolo, an associate professor in the Mechanical Engineering department at the University of Akron in Ohio.

When not working at the university, Garafolo leads LTA’s Akron research team, which includes undergraduates, graduate students and a number of alumni from UA’s College of Engineering. The nominal purchase price is probably a nod to UA’s founding year: 1870.

Airship 3.0 is actually LTA Akron’s second prototype. The first, registered in September 2018, was also a 12-engined electric airship, but only 15 meters long, or a little longer than a typical American school bus. It underwent flight tests in Akron earlier this year. Akron was where the US Navy built gargantuan airships in the 1930s, and is still home to the largest active airship hangar in the world—which, according to emails obtained under a public records request, LTA is also interested in leasing.

But Brin doesn’t want to recreate the glory days of the past, he wants to surpass them. Patents, official records and job listings suggest that LTA’s new airship will be safer, smarter and much more environmentally sustainable than the wallowing and dangerous airships of yore.

The biggest question in designing an airship is how to make it float. Hydrogen is cheap, plentiful and the lightest gas in the universe, but also extremely flammable and difficult to contain. Helium, the next lightest gas, is safely inert, but expensive and increasingly scarce. Virtually all new airships since the Hindenburg disaster have opted for helium as a lifting gas.

LTA’s airship, uniquely, will have both gases on board. Helium will be used to provide lift, while hydrogen will be used to power its electric engines. The lithium-ion batteries used in today’s electric cars are too heavy for the airships LTA intends to use to deliver humanitarian aid to remote disaster zones. Instead, a hydrogen fuel cell will provide reliable power and could enable long-range missions.

A patent application published last year shows that LTA is also rethinking how to manufacture very large airships. Traditionally, airships are kept stationary while their rigid frames, used to hold and shape the gas envelope, are being built. This requires workers to climb to great heights, adding risks and delays. 

LTA’s patent covers a “rollercoaster” structure that allows a partially completed airship to be rotated around its central axis during construction, so that workers can stay safely on the ground. The patent application also describes a method for 3D printing airship components out of strong, lightweight carbon fiber.

LTA’s website says that it is working to create a family of aircraft with no operational carbon footprint to “substantially reduce the total global carbon footprint of aviation.” That would require both generating hydrogen using renewable electricity, and producing a variety of aircraft to satisfy passenger as well as cargo demand.

Paperwork filed by LTA suggests that its first full-size airship, called Pathfinder 1, could already be almost ready to take to the air from LTA’s headquarters at Moffett Field in Silicon Valley. FAA records show that the Pathfinder is powered by 12 electric motors and able to carry 14 people.  That would make it about the same size as the only passenger airship operating today, the Zeppelin NT, which conducts sightseeing tours in Germany and Switzerland.

In fact, LTA’s first airship could even be based on the  Zeppelin NT, modified to use electric propulsion. LTA has received numerous imports from Zeppelin over the last few years, including fins, rudders, and equipment for a passenger gondola.

Unlike experimental fixed-wing or VTOL aircraft, which can be quietly tested and flown from remote airfields, there will be no keeping the enormous Pathfinder 1 secret when it finally leaves its hangar at Moffett Field. In January, LTA flew a small unmarked airship, usually operated for aerial advertising, from there. Even the sight of that airship, probably for a test of LTA’s flight systems, got observers chattering. The unveiling of Pathfinder 1 will show once and for all that Sergey Brin’s dreams of an airship renaissance are anything but hot air.

AI Seeks ET: Machine Learning Powers Hunt for Life in the Solar System

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/tech-talk/aerospace/robotic-exploration/ai-seeks-et-machine-learning-life-solar-system

Can artificial intelligence help the search for life elsewhere in the solar system? NASA thinks the answer may be “yes”—and not just on Mars either.

A pilot AI system is now being tested for use on the ExoMars mission that is currently slated to launch in the summer or fall of 2022. The machine-learning algorithms being developed will help science teams decide how to test Martian soil samples to return only the most meaningful data.

For ExoMars, the AI system will be used only back on Earth, to analyze data gathered by the ExoMars rover. But if the system proves to be as useful as now suspected, a NASA mission to Saturn’s moon Titan (now scheduled for a 2026 launch) could automate the scientific sleuthing process in the field. This mission will rely on the Dragonfly octocopter drone to fly from surface location to surface location through Titan’s dense atmosphere and drill for signs of life there.

The hunt for microbial life in another world’s soil, either as fossilized remnants or as present-day samples, is very challenging, says Eric Lyness, software lead of the NASA Goddard Planetary Environments Lab in Greenbelt, Md. There is of course no precedent to draw upon, because no one has yet succeeded in astrobiology’s holy grail quest.

But that doesn’t mean AI can’t provide substantial assistance. Lyness explained that for the past few years he’d been puzzling over how to automate portions of an exploratory mission’s geochemical investigation, wherever in the solar system the scientific craft may be.

Last year he decided to try machine learning. “So we got some interns,” he said. “People right out of college or in college, who have been studying machine learning. … And they did some amazing stuff. It turned into much more than we expected.” Lyness and his collaborators presented their scientific analysis algorithm at a geochemistry conference last month.

ExoMars’s rover—named Rosalind Franklin, after one of the co-discoverers of DNA’s structure—will be the first that can drill down to 2-meter depths, beyond where solar UV light might penetrate and kill any life forms. In other words, ExoMars will be the first Martian craft with the ability to reach soil depths where living soil bacteria could possibly be found.

“We could potentially find forms of life, microbes or other things like that,” Lyness said. However, he quickly added, very little conclusive evidence today exists to suggest that there’s present-day (microbial) life on Mars. (NASA’s Curiosity rover has sent back some inexplicable observations of both methane and molecular oxygen in the Martian atmosphere that could conceivably be a sign of microbial life forms, though non-biological processes could explain these anomalies too.)

Less controversially, the Rosalind Franklin rover’s drill could also turn up fossilized evidence of life in the Martian soil from earlier epochs when Mars was more hospitable.

NASA’s contribution to the joint Russian/European Space Agency ExoMars project is an instrument called a mass spectrometer that will be used to analyze soil samples from the drill cores. Here, Lyness said, is where AI could really provide a helping hand.

The spectrometer, which studies the mass distribution of ions in a sample of material, works by blasting the drilled soil sample with a laser and then mapping out the atomic masses of the various molecules and molecular fragments that the laser has liberated. The problem is that any given mass spectrum could originate from any number of source compounds, minerals and components, which makes analyzing a mass spectrum a gigantic puzzle.

Lyness said his group is studying the mineral montmorillonite, a commonplace component of the Martian soil, to see the many ways it might reveal itself in a mass spectrum. Then his team sneaks in an organic compound with the montmorillonite sample to see how that changes the mass spectrometer output.

“It could take a long time to really break down a spectrum and understand why you’re seeing peaks at certain [masses] in the spectrum,” he said. “So anything you can do to point scientists into a direction that says, ‘Don’t worry, I know it’s not this kind of thing or that kind of thing,’ they can more quickly identify what’s in there.”
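One simple form that kind of assistance could take is peak screening: ruling out candidate compounds whose characteristic peaks don’t appear in the measured spectrum. The sketch below illustrates the idea only; the compound names, peak lists, and tolerance are invented placeholders, not real mass-spectral library data or NASA’s actual algorithm.

```python
# Screen a library of candidate compounds against a measured mass spectrum,
# keeping only candidates whose expected peaks all match some measured peak.
def plausible_candidates(measured_peaks, library, tolerance=0.5):
    """Keep compounds whose characteristic peaks all match a measured peak."""
    def matches(expected):
        return any(abs(expected - m) <= tolerance for m in measured_peaks)
    return [name for name, peaks in library.items()
            if all(matches(p) for p in peaks)]

library = {                      # hypothetical characteristic m/z peaks
    "compound A": [18.0, 44.0],
    "compound B": [18.0, 77.0],
}
spectrum = [18.1, 43.8, 60.2]    # hypothetical measured peaks
print(plausible_candidates(spectrum, library))  # ['compound A']
```

Compound B is eliminated because nothing near 77 appears in the measurement, which is exactly the “don’t worry, I know it’s not that kind of thing” narrowing Lyness describes.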

Lyness said the ExoMars mission will provide a fertile training ground for his team’s as-yet-unnamed AI algorithm. (He said he’s open to suggestions—though, please, no spoof Boaty McBoatface submissions need apply.)

Because the Dragonfly drone and possibly a future astrobiology mission to Jupiter’s moon Europa would be operating in much more hostile environments with much less opportunity for data transmission back and forth to Earth, automating a craft’s astrobiological exploration would be practically a requirement.

All of which points to a future in the mid-2030s in which a nuclear-powered octocopter on a moon of Saturn flies from location to location to drill for evidence of life on this tantalizingly bio-possible world. And machine learning will help power the science.

“We should be researching how to make the science instruments smarter,” Lyness said. “If you can make it smarter at the source, especially for planetary exploration, it has huge payoffs.”

U.S. Eases Restrictions on Private Remote-Sensing Satellites

Post Syndicated from David Schneider original https://spectrum.ieee.org/tech-talk/aerospace/satellites/eased-restrictions-on-commercial-remote-sensing-satellites

Later this month, satellite-based remote sensing in the United States will get a big boost: not from a better rocket, but from the U.S. Commerce Department, which will be relaxing the rules that govern how companies provide such services.

For many years, the Commerce Department has been tightly regulating those satellite-imaging companies, because of worries about geopolitical adversaries buying images for nefarious purposes and compromising U.S. national security. But the newly announced rules, set to go into effect on July 20, represent a significant easing of restrictions.

Previously, obtaining permission to operate a remote-sensing satellite was a gamble: the criteria by which a company's plans were judged were vague, as was the process, an interagency review requiring input from the U.S. Department of Defense as well as the State Department. But in May of 2018, the Trump administration's Space Policy Directive-2 made it apparent that the regulatory winds were changing. In an effort to promote economic growth, the Commerce Department was directed to rescind or revise regulations established under the Land Remote Sensing Policy Act of 1992, a piece of legislation that compelled remote-sensing satellite companies to obtain licenses and required that their operations not compromise national security.

Following that directive, in May of 2019 the Commerce Department issued a Notice of Proposed Rulemaking in an attempt to streamline what many in the satellite remote-sensing industry saw as a cumbersome and restrictive process.

But the proposed rules didn’t please industry players. To the surprise of many of them, though, the final rules announced last May were significantly less strict. For example, they allow satellite remote-sensing companies to sell images of a particular type and resolution if substantially similar images are already commercially available in other countries. The new rules also drop earlier restrictions on nighttime imaging, radar imaging, and short-wave infrared imaging.

On June 25th, Commerce Secretary Wilbur Ross explained at a virtual meeting of the National Oceanic and Atmospheric Administration’s Advisory Committee on Commercial Remote Sensing why the final rules differ so much from what was proposed in 2019:

Last year at this meeting, you told us that our first draft of the rule would be detrimental to the U.S. industry and that it could threaten a decade’s worth of progress. You provided us with assessments of technology, foreign competition, and the impact of new remote sensing applications. We listened. We made the case with our government colleagues that the U.S. industry must innovate and introduce new products as quickly as possible. We argued that it was no longer possible to control new applications in the intensifying global competition for dominance.

In other words, the cat was already out of the bag: there’s no sense prohibiting U.S. companies from offering satellite-imaging services already available from foreign companies.

One area where the new rules remain relatively strict, though, concerns the taking of pictures of other objects in orbit. Companies that want to offer satellite inspection or maintenance services would need rules that allow what regulators call “non-Earth imaging.” But there are national security implications here, because pictures obtained in this way could blow the cover of U.S. spy satellites masquerading as space debris.

While the extent to which spy satellites cloak themselves in the guise of space debris isn’t known, it seems clear that this would be an ideal tactic for avoiding detection. That strategy won’t work, though, if images taken by commercial satellites reveal a radar-reflecting object to be a cubesat instead of a mangled mass of metal.

Because of that concern, the current rules demand that companies limit the detailed imaging of other objects in space to ones for which they have obtained permission from the satellite owner and from the Secretary of Commerce at least 5 days in advance of obtaining images. But that stipulation raises a key question: Whom should a satellite-imaging company contact if it wants to take pictures of a piece of space debris? Maybe imaging space debris would require only the permission of the Secretary of Commerce. But then, would the Secretary ever give such a request a green light? After all, if permission were typically granted, instances when it wasn't would become suspicious.

More likely, imaging space debris—or spy satellites trying to pass as junk—is going to remain off the table for the time being. So even though the new rules are a welcome development to most commercial satellite companies, some will remain disappointed, including those companies that make up the Consortium for the Execution of Rendezvous and Servicing Operations (CONFERS), which had recommended that “the U.S. government should declare the space domain as a public space and the ability to conduct [non-Earth imaging] as the equivalent of taking photos of public activities on a public street.”

No Propeller? No Problem. This Blimp Flies on Buoyancy Alone

Post Syndicated from Andrew Rae original https://spectrum.ieee.org/aerospace/aviation/no-propeller-no-problem-this-blimp-flies-on-buoyancy-alone

On a cold March night last year in Portsmouth, England, an entirely new type of aircraft flew for the first time, along a dimly lit 120-meter corridor in a cavernous building once used to build minesweepers for the Royal Navy.

This is the Phoenix, an uncrewed blimp that has no engines but propels itself forward by varying its buoyancy and its orientation. The prototype measures 15 meters in length, 10.5 meters in wingspan, and when fully loaded weighs 150 kilograms (330 pounds). It flew over the full length of the building, each flight requiring it to undulate up and down about five times.

Flying in this strange way has advantages. For one, it demands very little energy, allowing the craft to be used for long-duration missions. Also, it dispenses with whirring rotors and compressor blades and violent exhaust streams—all potentially dangerous to people or objects on the ground and even in the air. Finally, it’s cool: an airship that moves like a sea creature.

This propulsion concept has been around since 1864, when a patent for the technique, as applied to an airship, was granted to one Solomon Andrews, of New Jersey (U.S. Patent 43,449). Andrews called the ship the Aereon, and he proposed that it use hydrogen for lift, to make the ship ascend. The ship could then vent some of the hydrogen to reduce its buoyancy, allowing it to descend. A return to lighter-than-air buoyancy would then be achieved by discarding ballast carried aloft in a gondola suspended beneath the airship.

The pilot would control the ship’s attitude by walking along the length of the gondola. Walking to the front moved the center of gravity ahead of the center of buoyancy, making the nose of the airship pitch down; walking to the back would make the nose pitch up.

Andrews suggested that these two methods could be used in conjunction to propel the airship along a sinusoidal flight path. Raising the nose during ascent and lowering it during descent causes the combination of aerodynamic force with either buoyancy (when lighter than air) or weight (when heavier than air) to have a vector component along the flight path. That component provides the thrust everywhere except at the top and bottom of the flight path, where momentum alone carries the craft through. Our flight tests were conducted at walking pace, so the aerodynamic forces were very small; during the climbs and descents, however, there is always a component of either buoyancy or weight along the flight path.
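
The geometry can be checked with a short calculation. The net buoyant force and the flight-path angle below are assumed example values, not measurements from the Aereon or the Phoenix:

```python
import math

# Sketch of Andrews's propulsion geometry: a craft that is lighter than air
# by a net vertical force F_net and climbing at flight-path angle gamma
# feels a component F_net * sin(gamma) of that force along its flight path.

def along_path_thrust(net_vertical_force_n, flight_path_angle_deg):
    """Component of the net vertical force (N) along the flight path."""
    gamma = math.radians(flight_path_angle_deg)
    return net_vertical_force_n * math.sin(gamma)

# Example: 20 N of net buoyancy while climbing at 25 degrees
print(round(along_path_thrust(20.0, 25.0), 2))  # 8.45 N of "thrust"

# At the crest of the sinusoid the path is momentarily level, so the
# along-path component vanishes and momentum carries the craft through:
print(along_path_thrust(20.0, 0.0))  # 0.0
```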

The method Andrews describes in his patent meant that a flight had to end when the airship ran out of either hydrogen or ballast. Later, he built a second airship, which used cables to compress the gas or let it expand again, so that the airship could go up and down without having to jettison ballast. His approach was sound: The key to turning this idea into a useful aircraft is the ability to vary the buoyancy in a sustainable manner.

A variation on this mode of propulsion has been demonstrated successfully underwater, in remotely operated vehicles. Many of these “gliders” vary the volume of water that they displace by using compressed air to expand and contract flexible bladders. Such gliders have been used as long-distance survey vehicles that surface periodically to upload the data they’ve collected. Because water is nearly 1,000 times as dense as air, these robot submarines needn’t change the volume of the bladders very much to attain the necessary changes in buoyancy.

Aeronautical versions of this variable-buoyancy concept have been tried—the Physical Science Laboratory at New Mexico State University ran a demonstration project called Aerobody in the early 2000s—but all anyone could do was demonstrate that this odd form of propulsion works. Before now, nobody ever took advantage of the commercial possibilities that it offered for ultralong endurance applications.

The Phoenix project grew out of a small demonstration system developed by Athene Works, a British company that specializes in innovation and that’s funded by the U.K. Ministry of Defense. That system was successful enough to interest Innovate UK, a government agency dedicated to testing new ideas, and the Aerospace Technology Institute, a government-funded body that promotes transformative technology in air transport. These two organizations put up half the £3.5 million budget for the Phoenix. The rest was supplied by four private companies, five universities, and three governmental organizations devoted to high-value manufacturing.

My colleagues and I had less than four years to develop many of the constituent technologies, most of which were bespoke solutions, and then build and test the new craft. A great number of organizations participated in the project, with the Centre for Process Innovation managing the overall collaboration. I served as the lead engineer.

For up-and-down motion, the aircraft takes in and compresses air into an internal “lung,” making itself heavier than air; then it releases that compressed air to again become lighter than air. Think of the aircraft as a creature that inhales and exhales as it propels itself forward.
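
The arithmetic behind the lung is simple: inhaling ambient air adds mass without changing the volume the envelope displaces, so net lift falls by exactly the mass of the air taken in. The lift swing and sea-level air density below are illustrative assumptions, not Phoenix specifications.

```python
# Back-of-the-envelope buoyancy sketch with assumed numbers.

AIR_DENSITY = 1.225  # kg per cubic meter, near sea level

def air_volume_to_inhale(lift_change_kg):
    """Volume of ambient air (m^3) to draw in to cut net lift by lift_change_kg."""
    return lift_change_kg / AIR_DENSITY

# To swing from 2 kg lighter than air to 2 kg heavier than air,
# the lung must take in 4 kg of air:
print(round(air_volume_to_inhale(4.0), 2))  # 3.27 m^3
```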

The 15-meter-long fuselage, filled with helium to achieve buoyancy, has a teardrop shape, representing a compromise between a sphere (which would be the ideal shape for maximizing the volume of gas you can enclose with a given amount of material) and a long, thin needle (which would minimize drag). At the relatively low speeds such a craft can aspire to, it is enough that the teardrop be streamlined enough to avoid flow separation, the shedding of eddies that occurs on a sphere when the boundary layer of air next to the surface pulls away from it. With our teardrop, the only drag comes from the friction of the air as it flows smoothly over the surface.

The skin is made of Vectran, a fiber that's strong enough to withstand the internal pressure and woven tightly enough that, together with a thermoplastic polyurethane coating, it can seal the helium in. The point was to keep the skin strong enough to maintain the right shape, even when the airship's internal bladder was inflating.

Whether ascending or descending, the aircraft must control its attitude. It therefore has wings with ailerons at the tips to control the aircraft’s roll. At the back is a cross-shaped structure with a pair of horizontal stabilizers incorporating elevators to control how the airship pitches up or down, and a similar pair of vertical stabilizers with rudders to control how it yaws left or right. These flight surfaces have much in common with the wood, fabric, and wire parts in the pioneering airplanes of the early 20th century.

Two carbon-fiber spars span the wings, giving them strength. Airfoil-shaped ribs are distributed along the spars, each made up of foam sandwiched between carbon fiber. A thin skin wraps around this skeleton to give the wing its shape. We designed the horizontal and vertical tail sections to be identical to one another and to the outer panels of the wings. Thus, we were able to limit the types of parts, making the craft easier to construct and repair.

An onboard power system supplies the electricity needed to operate the pumps and valves used to inflate and deflate the inner bladder. It also energizes the various actuators needed to adjust the flight-control surfaces and keeps the craft’s autonomous flight-control system functioning. A rechargeable lithium-ion battery with a capacity of 3 kilowatt-hours meets those requirements in darkness. During daylight hours, arrays of flexible solar cells (most of them on the upper surfaces of the wings, the rest on the upper surface of the horizontal tail) recharge that battery. We confirmed through ground tests outdoors in the sun that these solar cells could simultaneously power all of the aircraft’s systems and recharge the battery in a reasonable amount of time, proving that the Phoenix could be entirely self-sufficient for energy.
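
A quick feasibility check on that overnight energy budget: only the 3-kilowatt-hour battery capacity comes from the article; the night length is an assumed example.

```python
# How much average power can a 3 kWh battery supply through a night
# of a given length? (Battery capacity from the article; night length assumed.)

BATTERY_WH = 3000.0  # watt-hours

def max_average_load_w(night_hours):
    return BATTERY_WH / night_hours

print(max_average_load_w(12.0))  # 250.0 W sustainable through a 12-hour night
```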

We had envisaged also using a hydrogen fuel cell, but because of fire-safety requirements it wasn’t quite ready for the indoor flight trials. We do plan to add this second power source later, for redundancy. Also, if we were to use hydrogen as the lift gas, the fuel cell could be used to replenish any hydrogen lost through the airship’s skin.

So how well did the thing fly? For our tests, we programmed the autonomous flight-control system to follow a sinusoidal flight path by operating the valves and compressors connected to the internal bladder. In this respect, the flight-control system has more in common with a submarine’s buoyancy controls than an airplane’s flight controls.

We had to set strict altitude limits to avoid contact with the roof and floor of the building during our indoor test. In normal operation, the aircraft will be free to determine for itself the amplitude of its up-and-down motion and the length of each undulation to achieve the necessary velocity. Doing that will require some complex calculations and precisely executed commands—a far cry from the meandering backward and forward in a wicker gondola that Andrews did.

Although our experiments to date are merely testing a previously unproven concept, the Phoenix can now serve as the prototype for a commercially valuable aircraft. The next step is getting the Phoenix certified as airworthy. For that, it must pass flight trials outdoors. When we planned the project, this certification had a series of weight thresholds, with 150 kg being the upper limit for approval through the U.K. Civil Aviation Authority under a “permit to fly.” Had it been heavier, approval by the European Union Aviation Safety Agency would have been needed, and trying to obtain that was beyond our budget of both time and money. After the United Kingdom fully exits from the European Union, certification will be different.

Commercial applications for such an aircraft are not hard to imagine. A good example is as a high-altitude pseudosatellite, a craft that can be positioned at will to convey wireless signals to remote places. Existing aircraft designed to perform this role all need very big arrays of solar cells and large batteries, which add to both the weight and cost of the aircraft. Because the Phoenix needs only small arrays of solar cells on the wings and horizontal tail, it can be built for a tenth the cost of the solar e-planes that have been designed for this purpose. It is a cheap, almost disposable, alternative with a much higher ratio of payload to mass than that of alternative aircraft. And our designs for a much larger version of the Phoenix show that it should be feasible to lift a payload of 100 kg to an altitude of 20 kilometers.

We are now beginning to develop such a successor to the Phoenix. Perhaps you will see it one day, a dot in the sky, hanging motionless or languidly porpoising to a new position high above your head.

This article appears in the July 2020 print issue as “This Blimp Flies on Buoyancy Alone.”

About the Author

Andrew Rae is a professor of engineering at the University of the Highlands and Islands, in Inverness, Scotland.

Tiny Satellites Could Distribute Quantum Keys

Post Syndicated from Neil Savage original https://spectrum.ieee.org/tech-talk/aerospace/satellites/tiny-satellites-could-distribute-quantum-keys

Unbreakable quantum keys that use the laws of physics to protect their secrets could be transmitted from orbiting devices a person could lift with one hand, according to experiments conducted from the International Space Station.

Researchers launched a tiny, experimental satellite loaded with optics that could emit entangled pairs of photons. Entangled photons share quantum mechanical properties in such a way that measuring the state of one member of the pair—such as its polarization—instantly tells you the state of its partner. Such entanglement could be the basis for quantum key distribution, in which shared cryptographic keys are used to encrypt and decrypt messages. If an eavesdropper were to intercept one of the photons and measure it, that would change the state of both, and the users would know the key had been compromised.
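
The tamper-detection principle can be illustrated with a toy intercept-resend simulation. This is a BB84-style prepare-and-measure sketch rather than a model of the entangled-photon experiment described here, and every name and number in it is illustrative: an eavesdropper who measures and re-sends photons in randomly chosen bases corrupts roughly a quarter of the bits on which sender and receiver happened to use the same basis.

```python
import random

# Toy intercept-resend simulation. Bases and bits are 0/1; a photon measured
# in the wrong basis yields a random bit.

def sifted_error_rate(n_photons, eavesdrop, rng):
    errors = kept = 0
    for _ in range(n_photons):
        bit = rng.randrange(2)            # sender's bit
        basis_a = rng.randrange(2)        # sender's basis
        if eavesdrop:
            basis_e = rng.randrange(2)    # eavesdropper's random basis
            bit_out = bit if basis_e == basis_a else rng.randrange(2)
            basis_sent = basis_e          # photon re-sent in Eve's basis
        else:
            bit_out, basis_sent = bit, basis_a
        basis_b = rng.randrange(2)        # receiver's basis
        bit_b = bit_out if basis_b == basis_sent else rng.randrange(2)
        if basis_b == basis_a:            # sifting: keep only matching bases
            kept += 1
            errors += (bit_b != bit)
    return errors / kept

rng = random.Random(1)
print(sifted_error_rate(50_000, eavesdrop=False, rng=rng))  # 0.0
print(sifted_error_rate(50_000, eavesdrop=True, rng=rng))   # close to 0.25
```

An observed error rate near 25 percent in the sifted bits is the statistical fingerprint that tells the users their key has been tampered with.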

NASA’s Next Mars Rover Will Carry a Tiny Helicopter

Post Syndicated from Ned Potter original https://spectrum.ieee.org/aerospace/robotic-exploration/nasas-next-mars-rover-will-carry-a-tiny-helicopter

If ever there was life on Mars, NASA’s Perseverance rover should be able to find signs of it. The rover, scheduled to launch from Kennedy Space Center, in Florida, in late July or early August, is designed to drill through rocks in an ancient lake bed and examine them for biosignatures, extract oxygen from the atmosphere, and collect soil samples that might someday be returned to Earth.

But to succeed at a Mars mission, you always need a little ingenuity; that’s literally what Perseverance is carrying. Bolted to the rover’s undercarriage is a small autonomous helicopter called Ingenuity. If all goes as planned, it will become the first aircraft to make a powered flight on another planet.

Flying a drone on Mars sounds simple, but designing a workable machine has proved remarkably difficult. Ingenuity's worst enemy is the planet's atmosphere, which is less than 1 percent as dense as Earth's and, at the landing site, can drop to –100 °C at night.

“Imagine a breeze on Earth,” says Theodore Tzanetos, flight test conductor for the project at NASA’s Jet Propulsion Laboratory, in Pasadena, Calif. “Now imagine having 1 percent of that to bite into or grab onto for lift and control.” No earthly helicopter has ever flown in air that thin.

Perseverance and Ingenuity are set to land in a crater called Jezero on 18 February 2021 and then head off to explore. About 60 Martian days later, the rover should lower the drone to the ground, move about 100 meters away, and watch it take off.

While the car-size Perseverance has a mass of 1,025 kilograms, the drone is just 1.8 kg with a fuselage the size of a box of tissues. Ingenuity’s twin carbon-fiber rotors sit on top of one another and spin in opposite directions at about 2,400 rpm, five times as fast as most helicopter rotors on Earth. If they went any slower, the vehicle couldn’t get off the ground. Much faster and the outer edges of the rotors would approach supersonic speed, possibly causing shock waves and turbulence that would make the drone all but impossible to stabilize.
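
For a rough sanity check of why the rotors can't spin much faster, take Ingenuity's rotor diameter to be about 1.2 meters and the speed of sound in Mars's cold carbon dioxide atmosphere to be roughly 240 meters per second; both are assumed values, not figures from this article.

```python
import math

# Rotor-tip Mach number at a given spin rate (assumed rotor diameter and
# speed of sound; see the lead-in above).

def tip_mach(rpm, rotor_diameter_m, speed_of_sound_ms):
    omega = rpm * 2 * math.pi / 60          # angular speed in rad/s
    tip_speed = omega * rotor_diameter_m / 2
    return tip_speed / speed_of_sound_ms

print(round(tip_mach(2400, 1.2, 240.0), 2))  # 0.63: fast, but still subsonic
```

Under these assumptions the blade tips are already moving at well over half the local speed of sound at 2,400 rpm, which is why spinning much faster risks the shock waves the engineers want to avoid.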

Ingenuity is intended as a technology demonstration. Mission managers say they hope to make up to five flights over a 30-day period. No flight is planned to last more than 90 seconds, reach altitudes of more than 10 meters, or span more than 300 meters from takeoff to landing.

“It may be a bit less maneuverable than a drone on Earth,” says Josh Ravich, the project’s mechanical engineering lead at JPL, “but it has to survive the rocket launch from Earth, the flight from Earth to Mars, entry, descent, and landing on the Martian surface, and the cold nights there.”

That’s why engineers struggled through years of design work, trying to meet competing needs for power, durability, maneuverability, and weight. Most of the drone’s power, supplied by a small solar panel above the rotors and stored in lithium-ion batteries, will be spent not on flying but on keeping the radio and guidance systems warm overnight. They considered insulating the electronics with aerogel, a super-lightweight foam, but decided even that would add too much weight. Modeling showed that the Martian atmosphere, which is mainly carbon dioxide, would supply some thermal buffering.

The team calculated that the best time of day for the first flight will be late in the Martian morning. By then, the light is strong enough to charge the batteries for brief hops. But if they wait longer, the sun’s warmth would also cause air to rise, thinning it at the surface and making it even more difficult to generate lift.

To see if the drone would fly at all, they put a test model in a three-story chamber filled with a simulated Martian atmosphere. A wire rig pulled up on it to simulate Mars’s 0.38-g gravity. It flew, but, says Ravich, the real test will be on Mars.

If Ingenuity succeeds, future missions could use drones as scouts to help rovers—and perhaps astronauts—explore hard-to-reach cliff sides and volcanoes. “We’ve only seen Mars from the surface or from orbit,” says Ravich. “In a 90-second flight, we can see hundreds of meters ahead.” 

This article appears in the July 2020 print issue as “A Mars Helicopter Preps for Launch.”

Zombie Satellites Return From the Graveyard

Post Syndicated from Nola Taylor Redd original https://spectrum.ieee.org/tech-talk/aerospace/satellites/zombie-satellites-return-from-the-graveyard

New technology may help to bring dead satellites back to life. Earlier this year, the Mission Extension Vehicle (MEV), a spacecraft jointly managed by NASA and Northrop Grumman, made history when it resurrected a decrepit satellite from the satellite graveyard. Reviving the spacecraft is a key step in extending the lifetime of orbiting objects; a second mission is set to extend the lifetime of another satellite later this summer.

Most satellites in geosynchronous orbit (GEO) have a design life of 15 years and are launched with enough fuel to cover that time frame. At the end of their lifetimes, the satellites are required to enter a graveyard orbit, as mandated by the Mitigation Guidelines the Inter-Agency Space Debris Coordination Committee (IADC) issued in 2002. Graveyard orbits comprise paths at least 300 kilometers above the geosynchronous region, giving the zombie spacecraft room as the gravity of the sun and moon incrementally reshapes their orbits.
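
For scale, the floor of the graveyard region can be derived from first principles, using standard values for Earth's gravitational parameter, equatorial radius, and sidereal day:

```python
import math

# Derive the geosynchronous altitude from Kepler's third law, then add the
# IADC's 300 km clearance to find the bottom of the graveyard region.

MU_EARTH = 3.986004418e14   # m^3/s^2, Earth's gravitational parameter
R_EARTH = 6_378_137.0       # m, Earth's equatorial radius
SIDEREAL_DAY = 86_164.1     # s, one rotation of Earth relative to the stars

geo_radius_m = (MU_EARTH * (SIDEREAL_DAY / (2 * math.pi)) ** 2) ** (1 / 3)
geo_altitude_km = (geo_radius_m - R_EARTH) / 1000
graveyard_floor_km = geo_altitude_km + 300

print(round(geo_altitude_km))     # 35786 km above the equator
print(round(graveyard_floor_km))  # 36086 km, the graveyard's lower edge
```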

Even when they’re out of fuel, most satellites are still fully capable of functioning. “The technical degradation—besides fuel—of the satellite subsystems beyond 15 years is very marginal,” says Joe Anderson, vice president of business development and operations at SpaceLogistics, a subsidiary of Northrop Grumman. Anerson says he’s aware of satellites providing valuable services for nearly 30 years.

Defunct satellites are tracked primarily by the United States Air Force, which follows the objects themselves rather than their radio signals. Smaller bits of debris are difficult for the Air Force to track, but satellites are usually large enough (though new technology is shrinking them). The IADC, for its part, comprises just over a dozen space agencies and is the “gold standard” in space recommendations, says Jonathan McDowell, an astrophysicist at the Harvard-Smithsonian Center for Astrophysics.

According to McDowell, who is also the author of Jonathan's Space Report, a weekly newsletter cataloguing the comings and goings of spacecraft, there are 915 objects within 200 km of GEO, so they still have plenty of room to avoid one another. The looming issue is the 365 defunct spacecraft that—due to malfunction, lack of planning, or laziness—didn't follow the IADC's guidelines. By comparison, the graveyard region contains only 283 spacecraft. Dead satellites not parked in the agreed-upon spot could lead to collisions (and therefore more debris), which could damage active spacecraft.

“The level of compliance is a little disappointing,” McDowell says. “The junk satellite environment close to GEO is more than one would want.”

In February, MEV-1 successfully brought an Intelsat satellite back from the graveyard into geostationary orbit, where it now serves over 30 customers. Launched in 2001, the Intelsat craft eventually ran out of fuel and retired to the satellite graveyard. Without fuel, it could no longer adjust its orbit, though its other systems remained functional.

The MEV-1 was designed to interface with single-use satellites like the Intelsat craft. By docking with the satellite's liquid apogee engine, a common feature that helps most geostationary satellites finalize their orbits at the start of their lifetimes, MEV-1 captured the satellite and began to lower its orbit, putting the Intelsat craft back into play at the start of April. MEV-1 will remain connected to it for the next five years, then return it to the graveyard. MEV-1 will then proceed to its next customer.

MEV-1 is only the beginning. Anderson says that a second MEV will be launched later this summer. It will reach geostationary orbit in January 2021 and extend the lifetime of a second Intelsat satellite for an additional five years. Northrop Grumman plans to have only two MEVs in space, but Anderson expects them to be utilized throughout their 15-year-plus lifetimes. The company is also developing Mission Extension Pods, smaller propulsion-augmentation devices installed on a client's satellite to provide orbital boosts, extending missions by up to six years. The first pods should launch in 2023, Anderson says.

While Northrop Grumman touts refueling and refurbishing missions like the MEV's as a boon to the company's bottom line, McDowell sees them as a great way to tackle the growing problem of space debris and the accumulation of defunct human-made objects in space. Spacecraft like the MEV could potentially relocate nonfunctioning satellites from GEO to the graveyard, though there are likely to be regulatory and legal issues over who has the right to haul away someone else's trash. McDowell has long advocated for a launch tax on companies using space; those funds would sustain an “international space garbage trucking agency” responsible for cleaning up the messes from collisions and enforcing the removal of out-of-service satellites.

“The era of the space garbage truck is coming,” McDowell says.