China aims to become only the second country to land and operate a spacecraft on the surface of Mars (NASA was first, with a pair of Viking landers in 1976, if you don’t count the former Soviet Union’s 1971 Mars 3 mission). With just a few months before launch, China is still keeping key mission details quiet. But we can discern a few points about where and how it will attempt a landing on the Red Planet from recent presentations and interviews.
Galactic Energy, a low-key private Chinese rocket firm, celebrated its second birthday in February. That’s early days for a launch company, and yet the company is set to make its first attempt to reach orbit this June.
The rocket is named Ceres-1, after the largest body in the asteroid belt, and will launch from China’s Jiuquan Satellite Launch Center in the Gobi Desert. With three solid fuel stages and a liquid propellant fourth stage, it will be able to lift 350 kilograms of payload to an altitude of 200 kilometers in low Earth orbit.
The firm’s ability to move this quickly is due to a mix of factors—strong corporate leadership, an experienced team, and policy support from the Chinese state.
If astronauts reach the moon as planned under NASA’s Project Artemis, they’ll have work to do. A major objective will be to mine deposits of ice in craters near the lunar south pole—useful not only as water but also because it can be broken down into hydrogen and oxygen. But they’ll need guidance to navigate precisely to the spots where robotic spacecraft have located the ice on the lunar map. They’ll also need to rendezvous with equipment sent on ahead of them, such as landing ships, lunar rovers, drilling equipment, and supply vehicles. There can be no guessing. They will need to know exactly where they are in real time, whether they’re in lunar orbit or on the moon’s very alien surface.
Which got some scientists thinking. Here on Earth, our lives have been transformed by the Global Positioning System, fleets of satellites operated by the United States and other countries that are used in myriad ways to help people navigate. Down here, GPS is capable of pinpointing locations with accuracy measured in centimeters. Could it help astronauts on lunar voyages?
“We are trying to get it working, especially with the big flood of missions in the next few years,” said Cheung. “We have to have the infrastructure to do the positioning of those vehicles.”
Cheung and Lee plotted the orbits of navigation satellites from the United States’ Global Positioning System and two of its counterparts, Europe’s Galileo and Russia’s GLONASS system—81 satellites in all. Most of them have directional antennas transmitting toward Earth’s surface, but their signals also radiate into space. Those signals, say the researchers, are strong enough to be read by spacecraft with fairly compact receivers near the moon. Cheung, Lee, and their team calculated that a spacecraft in lunar orbit would be able to “see” between five and 13 satellites’ signals at any given time—enough to accurately determine its position in space to within 200 to 300 meters. In computer simulations, they were able to implement various methods for improving the accuracy substantially from there.
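The position fix itself works the same way it does on Earth: each visible satellite yields a pseudorange (the true range plus a shared receiver-clock error), and four or more of them pin down the receiver’s position and clock bias. The sketch below is an illustrative textbook least-squares solver, not the JPL team’s software, and its satellite geometry and numbers are made up:

```python
import numpy as np

def solve_position(sat_pos, pseudoranges, iters=20):
    """Gauss-Newton least-squares fix for receiver position and clock
    bias from satellite pseudoranges. Illustrative only: a textbook
    simplification that ignores atmospheric, relativistic, and
    ephemeris errors."""
    x = np.zeros(4)  # [x, y, z, clock bias in meters]
    for _ in range(iters):
        diff = x[:3] - sat_pos                    # (n, 3)
        ranges = np.linalg.norm(diff, axis=1)
        residual = pseudoranges - (ranges + x[3])
        J = np.hstack([diff / ranges[:, None],    # unit line-of-sight vectors
                       np.ones((len(sat_pos), 1))])
        x += np.linalg.lstsq(J, residual, rcond=None)[0]
    return x

# Toy geometry (made up): five satellites at GPS-like radii, a receiver
# on Earth's surface, and 150 m of clock bias folded into the pseudoranges.
dirs = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1], [1, -1, 1]], float)
sats = 2.66e7 * dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
truth = np.array([6.371e6, 0.0, 0.0])
bias = 150.0
rho = np.linalg.norm(sats - truth, axis=1) + bias

est = solve_position(sats, rho)
assert np.allclose(est[:3], truth, atol=1e-3) and abs(est[3] - bias) < 1e-3
```

Near the moon the same math applies, but the satellites all cluster in a small patch of sky around Earth, which is part of why the researchers’ simulated accuracy is hundreds of meters rather than centimeters.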
Helping astronauts navigate after landing on the moon’s surface would be more complicated, mainly because in polar regions, the Earth would be low on the horizon. Signals could easily be blocked by hills or crater rims.
But the JPL team and colleagues at the Goddard Space Flight Center in Maryland anticipated that. To help astronauts, the team suggested using a transmitter located much closer to them as a reference point. Perhaps, the scientists wrote, they could use two satellites in lunar orbit—a new relay satellite in high lunar orbit to act as a locator beacon, combined with NASA’s Lunar Reconnaissance Orbiter, which has been surveying the moon since 2009.
This mini-network need not be terribly expensive by space-program standards. The relay satellite could be very small, take design cues from existing satellite designs, and ride piggyback on a rocket launching other payloads toward the moon ahead of astronauts.
The plans for Artemis have been in flux, slowed by debates over funding and mission architecture. NASA managers have been banking on a planned lunar-orbiting base station, known as the Gateway, to make future missions more practical. But if they are going to put astronauts on the surface of the moon by 2024, as the White House has ordered, they say the Gateway may have to wait until later in the decade. Still, scientists say early plans for lunar navigation will be useful, no matter how lunar landings play out.
“This can be done,” said Cheung. “It’s just a matter of money.”
A group of Chinese researchers has developed a compact, sabre-like antenna for unmanned aerial vehicles (UAVs) that can switch between two radiation patterns for better communication coverage. They describe their work in a study published 26 February in IEEE Transactions on Antennas and Propagation.
For UAVs cruising at high speeds, it’s desirable to have small, aerodynamic antennas that limit drag but can still yield sufficient bandwidth and coverage. Zhijun Zhang, a researcher at Tsinghua University, notes that sabre-shaped antennas are beneficial in the sense that they are very aerodynamic—but there is a major limitation that comes with this design.
“Conventional sabre-like antennas generate a donut-shape radiation pattern, which provides an omnidirectional coverage and is ideal for air-to-ground communication. However, a donut-shape pattern has a null at its zenith,” Zhang explains.
While this donut-shaped radiation pattern may be sufficient to help the UAV exchange signals with ground communication systems, the “blind spot” of coverage directly above the UAV is problematic when trying to establish communication with satellites (aka “hemisphere coverage”). Therefore Zhang and his team created a novel sabre-like antenna design that can provide a signal directly above the antenna as well.
To accomplish this, the researchers incorporated two metal radiators into the design. The first is a monopole, which is perpendicular to the ground with an omnidirectional pattern. The second is a dipole, which is parallel to the ground with a broadside pattern, creating a signal that fills the blind spot of conventional antennas. “The two radiators not only generate two working modes and desired radiation patterns, but also provide a bonus capacitor loading effect, which shrinks the antenna size,” says Zhang. “The antenna can switch between two modes on the fly, and thus provides top hemisphere coverage.”
Simulations and tests suggest that the design can achieve roughly 20 percent bandwidth, which surprised even the researchers behind the design. Zhang says this performance is possible because both radiators are used in both modes.
“As far as we know, it’s the first effort to realize such a compact aircraft antenna with upper hemispherical coverage and acceptable gain for onboard satellite communication. Next, we intend to design a simpler aircraft antenna with only one mode,” says Zhang, noting that this may involve sacrificing some bandwidth.
Dr. Arthur Kreitenberg and his son Elliot got some strange looks when they began the design work for the GermFalcon, a new machine that uses ultraviolet light to wipe out coronavirus and other germs inside an airplane. The father-son founders of Dimer UVC took tape measures with them on flights to unobtrusively record the distances that would form the key design constraints for their system.
“We definitely got lots of looks from passengers and lots of inquiries from flight attendants,” Dr. Kreitenberg recalls. “You can imagine that would cause some attention: taking out a tape measure midflight and measuring armrests. The truth is that when we explained to the flight attendants what we were doing and what we were designing, they [were] really excited about it.”
At 35 meters, the wingspan of the new BAE Systems aircraft equals that of a Boeing 737, yet the plane weighs in at just 150 kilograms, including a 15 kg payload. The unmanned plane, dubbed the PHASA-35 (Persistent High-Altitude Solar Aircraft), made its maiden voyage on 10 February at the Royal Australian Air Force Woomera Test Range in South Australia.
“It flew for just under an hour—enough time to successfully test its aerodynamics, autopilot system, and maneuverability,” says Phil Varty, business development leader of emerging products at BAE Systems. “We’d previously tested other sub-systems such as the flight control system in smaller models of the plane in the U.K. and Australia, so we’d taken much of the risk out of the craft before the test flight.”
The prototype aircraft uses gallium arsenide–based triple-junction solar cell panels manufactured by MicroLink Devices in Niles, Ill. MicroLink claims an energy conversion efficiency of 31 percent for these specialist panels.
“For test purposes, the solar panels—which are as thin as paper—covered just part of the wingspan to generate 4 kilowatts of power,” says Varty. “For the production version, we’ll use all that space to produce 12 kilowatts.”
The energy is used to drive two brushless, direct-drive electric motors modified for the aircraft, and to charge a lithium-ion battery system comprising more than 400 cells that delivers the energy needed to fly the plane at night. The batteries are supplied by Amprius Technologies in Fremont, Calif.
Varty says that unlike the solar panels, which have been chosen for their high efficiency, the batteries—similar to the kind powering smartphones—are not massively efficient. Instead, they are a proven, reliable technology that can easily be replaced when a more efficient version becomes available.
“Although the test flight took place in Australia’s summer, the aircraft is designed for flying during the worst time of the year—the winter solstice,” says Varty. “That’s why it has the potential to stay up for a whole year in the stratosphere, around 65,000 feet [20,000 meters], where there’s little wind and no clouds or turbulence.”
He describes the unmanned control flight system as a relatively simple design similar to that used in drones and model airplanes. A controller on the ground, known as the pilot, guides the plane, though the aircraft can also run on autopilot using a preprogrammed route. Other technicians may also be involved to control specialist applications such as cameras, surveillance, or communications equipment.
The aircraft was originally developed by Prismatic Ltd., a designer and manufacturer of unmanned aerial vehicles in Southwest England that was acquired last year by BAE.
Further test flights are planned for this year and engineers from Prismatic and BAE are using the results of the trial to improve the various subsystems.
“There were no surprises found during the test flight so perhaps the biggest challenge now is educating the market that what we are offering is different,” says Varty. “Different from satellites or drones.”
Its special feature is that it can sit in the stratosphere over a particular point, flying into the wind or tracing maneuvers such as a figure eight, while cameras or surveillance equipment mounted on gimbals provide constant monitoring. By comparison, Varty says, even the best military drones can stay airborne for a maximum of only three days; satellites are limited by the fact that they must maintain a speed of 7 kilometers per second or more in order to stay in orbit.
“Satellites provide merely a date-time snapshot of what is going on below,” Varty points out. “Whereas we can monitor a spot for as long as necessary to draw conclusions about what is likely to happen next: where forest fires are going to spread, for instance, or where disaster relief is most needed.”
Compared with satellites, its “lower altitude means it should realize higher resolution monitoring, low power communications, dedicated local area services, and so on,” says Matsunaga. But performance, he adds, “will be strongly dependent on the equipment constraints such as mass, volume, and power budget” because the aircraft’s lightweight design limits its payload.
As for possible drawbacks, Matsunaga suspects long-duration operations—which assume no maintenance and no failures—may be more difficult to achieve and more expensive to deal with than BAE expects. “The flights must always be controlled, and the available time for [issue-free] flights will be short. Communications with the plane’s tracking control and operation systems may also be a concern over long flights. And there’s the possibility that external disturbances may affect sensor resolution.”
BAE plans to offer several different services to customers based on the aircraft’s ability to constantly monitor a particular point or to provide it with communications via onboard transceiver equipment. “A customer could use it to monitor how crops are growing or how minute-by-minute, hour-by-hour changes in the weather affect areas of interest on the ground,” says Varty. “Now we’re working out the pricing for such services. Direct sales of the aircraft are also possible.”
Commercialization will soon follow completion of the flight trials, which could mean the launch of services as early as next year.
Having witnessed how private space companies, new launch vehicles, and miniature satellites profoundly changed space activities in the United States, China decided it needed to act. The government opened its space industry to private capital in 2014.
Hundreds of commercial companies, many with close ties to traditional space and defense enterprises, have now sprung up. They’re developing new rockets, building remote sensing and communications satellites, and aiming to fill gaps in ground station and space data services.
One of the first private space companies in China was Spacety, a small satellite maker with offices in Beijing and Changsha, in central China. Its founders were in part inspired by the activities of SpaceX and Planet. They left their jobs at institutes under the Chinese Academy of Sciences (CAS), a state-owned entity with a range of space-related activities, to establish Spacety in January 2016.
Last April, a research team that I’m part of unveiled a picture that most astronomers never dreamed they would see: one of a massive black hole at the center of a distant galaxy. Many were shocked that we had pulled off this feat. To accomplish it, our team had to build a virtual telescope the size of the globe and pioneer new techniques in radio astronomy.
Our group—made up of more than 200 scientists and engineers in 20 countries—combined eight of the world’s most sensitive radio telescopes, a network of synchronized atomic clocks, two custom-built supercomputers, and several new algorithms in computational imaging. After more than 10 years of work, this collective effort, known as the Event Horizon Telescope (EHT) project, was finally able to illuminate one of the greatest mysteries of nature.
Within weeks of our announcement (and the publication of six journal articles), an estimated billion-plus people had seen the picture of the light-fringed shadow cast by M87*, a black hole at the center of Messier 87, a galaxy in the Virgo constellation. It is likely among the most labor-intensive scientific images yet created. In recognition of the global teamwork required to combine efforts in black-hole theory and modeling, electronic instrumentation and calibration, and image reconstruction and analysis, the 2020 Breakthrough Prize in Fundamental Physics was awarded to the entire collaboration.
Now we are adding more giant dish antennas to sharpen our view of such objects. Thanks to an infusion of new funding from the U.S. National Science Foundation and others, we have set an ambitious goal: to make a movie of the swirling gravitational monster that forms the black heart of our own Milky Way galaxy.
The existence of black holes was a dubious prediction of the general theory of relativity, which Einstein developed a little over a century ago. Astronomers debated for decades afterward whether massive stars would create black holes when they collapse under their own weight at the end of their lives. Even more mysterious were supermassive black holes, hypothesized to lurk in the hearts of galaxies. Astronomers observed extraordinarily bright, compact objects and powerful galactic-scale jets beaming from the centers of many galaxies—including Messier 87—as well as stars and gas clouds orbiting hidden central objects. Supermassive black holes could account for such observations, and no other explanation seemed very plausible.
By the turn of the 21st century, astronomers had come to believe that black holes must be common. Most black holes are probably far too small and distant for us ever to observe. These “ordinary” black holes form when a star 10 times as massive as the sun or heavier collapses into something so dense that it bends the very fabric of space-time enough to form a spherical trap. Any matter, light, or radiation crossing the edge of this trap, known as the event horizon, disappears forever.
But the gargantuan black holes that we now think inhabit the centers of most galaxies—like the 4-million-solar-mass Sagittarius A* (Sgr A* for short) cloaked behind a veil of dust in the Milky Way and the 6.5-billion-solar-mass M87*—are different beasts altogether. Fed by matter and energy spiraling in from their host galaxies over eons, these are the most massive objects known, and they create around them the most extreme conditions found anywhere in the universe.
So says the math of general relativity. And the Nobel Prize–winning detection in 2015 of gravitational waves—essentially, a chirp of rippling space-time created by the whirling merger of two ordinary black holes more than a billion light-years away—offered yet another piece of evidence that black holes are real. It took an incredible feat of technology to “hear” the vibrations of that cosmic crash: two laser interferometers, each 4 kilometers on a side and able to detect a change in the lengths of their arms of less than 0.01 percent of the width of a proton. But hearing is not seeing.
Encouraged by the success of initial pilot studies in the late 1990s and early 2000s, an international team of astronomers proposed a bold plan to make an image of a supermassive black hole. They believed that advances in electronics now made it possible to see these bizarre objects for the first time and to open a new window on the study of general relativity. Teams from six continents came together to form the Event Horizon Telescope project.
Creating such a picture would require another giant leap in astronomical interferometry. We would have to virtually link observatories across the planet to function together as one giant virtual telescope. It would have the resolving power of a dish almost as wide as Earth itself. With such an instrument, it was hoped, we could finally see a black hole.
While M87* itself is black, it is backlit by a blinding glow of radiation emanating from the material swirling around it. Friction and magnetic forces heat that matter to hundreds of billions of degrees. The result is an incandescent plasma that emits light and radio waves. The massive object bends those rays, and some of them head our way.
A shadow of the black hole, just bigger than the object and its event horizon, is imprinted on that radiation pattern. Measuring the size and shape of this eerie shadow could tell us a lot about M87*, settling arguments about its mass and whether it is a black hole or something more exotic still. Unfortunately, the cores of galaxies are obscured by giant, diffuse clouds of gas and dust, which leave us no hope of seeing M87* with visible light or at the very long wavelengths typically used in radio astronomy.
Those clouds become more transparent, however, at short radio wavelengths of around 1 millimeter; we chose to observe at 1.3 mm. Such radio waves, at the extremely high frequency of 230 gigahertz, also pass through air largely unimpeded, although atmospheric water vapor does attenuate and delay them somewhat in the last few miles of their 55-million-year journey from the periphery of M87* to our radio telescopes on Earth.
The brightness and relative proximity of M87* worked in our favor, but on the other side were some formidable technical challenges, starting with that wobbly signal delay caused by water vapor. Then there was the size problem. Although M87* is ultramassive, it is comparatively small—the shadow this black hole casts is only around the size of our solar system. Resolving it from Earth is like trying to read a newspaper from 5,000 km away. The resolving power of a lens or dish is set by the ratio of the wavelength observed to the instrument’s diameter. To resolve M87* using 1.3-mm radio waves, we would need a radio dish 13,000 km across.
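The dish-size figure follows directly from the diffraction relation the paragraph describes, θ ≈ λ/D. A quick back-of-the-envelope check in Python, using the article’s numbers:

```python
import math

ARCSEC_PER_RAD = 180 / math.pi * 3600  # ≈ 206,265 arcseconds per radian

def dish_diameter_m(wavelength_m, resolution_uas):
    """Diameter needed for a diffraction-limited angular resolution of
    `resolution_uas` microarcseconds at the given wavelength, using the
    rough criterion theta ≈ lambda / D."""
    theta_rad = resolution_uas * 1e-6 / ARCSEC_PER_RAD
    return wavelength_m / theta_rad

# Resolving ~20 microarcseconds at 1.3 mm takes an Earth-size dish:
d = dish_diameter_m(1.3e-3, 20)
print(f"{d / 1000:.0f} km")  # on the order of 13,000 km, roughly Earth's diameter
```

The same formula explains the numbers in reverse: an Earth-spanning network observing at 1.3 mm delivers about 20 microarcseconds of resolution, which is what the collaboration ultimately reported.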
Although that might seem like a showstopper, interferometry offers a way around that problem—but only if you can collect pieces of the same radio wave front as it arrives at different times at all our telescopes in far-flung locations, stretching from Hawaii and the Americas to Europe. To do that, we have to digitally time-stamp our measurements and then combine them with enough precision to extract the relevant signals and convert them into an image. This technique is called very-long-baseline interferometry (VLBI), and the members of the Event Horizon Telescope project believed it could be used to image a black hole.
The details proved truly devilish, however. By the time the radio signals from M87* bounce into a receiver connected to one of our 6- to 50-meter dish antennas on Earth, the signal power has dropped to roughly 10⁻¹⁶ W (0.1 femtowatt). That’s about a billionth of the strength of the signals a satellite TV dish typically picks up. So the very low signal-to-noise ratio posed one major problem, exacerbated by the fact that our largest single-dish telescope, the 50-meter Large Millimeter Telescope, in Mexico, was still being completed and wasn’t yet fully operational when we used it in 2017.
The 1.3-mm wavelength, considerably shorter than the norm in VLBI, also pushed the limits of our technology. The receivers we used converted the 230-GHz signals down to a more manageable frequency of around 2 GHz. But to get as much information as we could about M87*, we recorded both right- and left-hand circular polarizations at two frequencies centered around 230 GHz. As a result, our instrumentation had to sample and store four separate data streams pouring in at the prodigious rate of 32 gigabits per second at each of the participating telescopes.
Interferometry works only if you can precisely align the peaks in the signal recorded at each pair of telescopes, so the short wavelength also required us to install hydrogen-maser atomic clocks at each site that could sample the signal with subpicosecond accuracy. We used GPS signals to time-stamp the observations.
On four nights in April 2017, everything had to come together. Seven giant telescopes (some of them multidish arrays) pointed at the same minuscule point in the sky. Seven maser clocks locked into sync. A half ton of helium-filled, 6- to 8-terabyte hard drives started spinning. I, along with a few dozen other bleary-eyed scientists, sat at our screens in mountaintop observatories hoping that clouds would not roll in. Because of the way interferometry works, we would immediately lose 40 percent of our data if cloud cover or technical issues forced even one of the telescopes to drop out.
But the heavens smiled on us. By the end of the week, we were preparing 5 petabytes of raw data for shipment to MIT Haystack Observatory and the Max Planck Institute for Radio Astronomy, in Germany. There, researchers, using specially designed supercomputers to correlate the signals, aligned data segments in time. Then, to counter the phase-distorting influence of turbulent atmosphere above each telescope, we used purpose-built adaptive algorithms to perform even finer alignment, matching signals to within a trillionth of a second.
Now we faced another giant challenge: distilling all those quadrillions of bytes of data down to kilobytes of actual information that would go into an image we could show the world.
Nowhere in those petabytes of data were numbers we could simply plot as a picture. The “lens” of our telescope was a tremendous amount of software, which drew heavily on open-source packages and now is available online so that anyone can replicate or improve on our results.
Radio interferometry is relatively straightforward when you have many telescopes close together, aimed at a bright source, and observing at long wavelengths. The rotation of Earth during the night causes the baselines connecting pairs of the telescopes to sweep through a range of angles and effective lengths, filling in the space of possible measurements. After the data is collected, you line up the signals, extract a two-dimensional spatial-frequency pattern from the variations in amplitude and phase among them, and then do an inverse Fourier transform to convert the 2D frequency pattern into a 2D picture.
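In the idealized, fully sampled case, that recipe really is just a pair of Fourier transforms. The toy model below (my own sketch, not the EHT pipeline) shows both the exact recovery with full frequency coverage and the smeared “dirty image” you get when most spatial frequencies are missing, as they are with a handful of VLBI baselines:

```python
import numpy as np

# Toy sky: a small bright ring on a 64x64 pixel grid.
n = 64
yy, xx = np.mgrid[:n, :n] - n // 2
sky = ((xx**2 + yy**2 >= 8**2) & (xx**2 + yy**2 <= 11**2)).astype(float)

# An ideal, fully sampled interferometer measures the sky's 2D Fourier
# transform (the "visibilities")...
visibilities = np.fft.fft2(sky)

# ...and the inverse transform recovers the image exactly.
recovered = np.fft.ifft2(visibilities).real
assert np.allclose(recovered, sky)

# A sparse array samples only scattered spatial frequencies; zeroing
# most of the Fourier plane mimics that. The resulting "dirty image"
# is badly corrupted, which is why sparse-array imaging needs
# sophisticated reconstruction algorithms.
rng = np.random.default_rng(0)
mask = np.zeros((n, n), dtype=bool)
mask[rng.integers(0, n, 200), rng.integers(0, n, 200)] = True
dirty = np.fft.ifft2(visibilities * mask).real
```

Earth’s rotation helps by sweeping each baseline through the Fourier plane over a night, filling in more of the mask, but for the EHT the sampling remained very sparse.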
VLBI is a lot harder, especially when observing with just a handful of dishes at a short wavelength, as we were for M87*. Perfect calibration of the system was impossible, though we used an eighth telescope at the South Pole to help with that. Most problematic were differences in weather, altitude, and humidity at each telescope. The atmospheric noise scrambles the phase of the incoming signals.
The problem we faced in observing with just seven telescopes and scrambled phases is a bit like trying to make out the tune of a duet played on a piano on which most of the keys are broken and the two players start out of sync with each other. That’s hard—but not impossible. If you know what songs typically sound like, you can often still work out the tune.
It also helps that the noise scrambles the signal in an organized way that allows us to exploit a terrific trick called closure quantities. By multiplying correlated data from each pair in a trio of telescopes in the right order, we are able to cancel out a big chunk of the noise, though at the cost of adding some complexity to the problem.
The longest and shortest baselines in our telescope network set the limits of our resolution and field of view, and they were limited indeed. In effect, we could reconstruct a picture 160 microarcseconds wide (equivalent to 44 billionths of a degree on the sky) with roughly 20 microarcseconds of resolution. Literally an infinite number of images could fit such a data pattern. Somehow we would have to pick—and decide how confident to be in our choice.
To avoid fooling ourselves, we created lots of images from the M87* data, in lots of different ways, and developed a rigorous process—well beyond what is typically done in radio astronomy—to determine whether our reconstructions were reliable. Every step of this process, from initial correlation to final interpretation, was tested in multiple ways, including by using multiple software pipelines.
Before we ever started collecting data, we created a computer simulation of our telescope network and all the various sources of error that would affect it. Then we fed into this simulation a variety of synthetic images—some derived from astrophysical models of black holes and others we had completely made up, including one loosely based on Frosty the Snowman.
Next, we asked various groups to reconstruct images from the synthetic observations generated by the simulation. So we turned images into observations and then let others turn those back into images. The groups all produced pictures that were fuzzy and a bit off, but in the ballpark. The similarities and differences among those pictures taught us which aspects of the image were reliable and which spurious.
In June 2018, researchers on the imaging team split into four squads to work in complete isolation on the real data we had collected in 2017. For seven weeks, each squad worked incommunicado to make the best picture it could of M87*.
Two squads primarily used an updated version of an iterative procedure, known as the CLEAN algorithm, which was developed in the 1970s and has since been the standard tool for VLBI. Radio astronomers trust it, but in cases like ours, where the data is very sparse, the image-generation process often requires a lot of manual intervention.
Drawing on my experience with image reconstruction in other fields, my collaborators and I developed a different approach for the other two squads. It is a kind of forward modeling that starts with an image—say, a fuzzy blob—and uses the observational data to modify this starting guess until it finds an image that looks like an astronomical picture and has a high probability of producing the measurements we observed.
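A bare-bones numerical caricature of this forward-modeling idea (my own toy, not the collaboration’s code, with a plain least-squares data term standing in for the real likelihood and regularizers): start from a flat guess, compare its Fourier samples with the “measured” ones, and nudge the pixels downhill, clipping to non-negative values as a crude image prior:

```python
import numpy as np

# Ground-truth toy sky: a ring on a 32x32 grid.
n = 32
yy, xx = np.mgrid[:n, :n] - n // 2
truth = ((xx**2 + yy**2 >= 5**2) & (xx**2 + yy**2 <= 8**2)).astype(float)

# "Observe" only a sparse, random set of spatial frequencies,
# mimicking the handful of EHT baselines.
rng = np.random.default_rng(0)
mask = np.zeros((n, n), dtype=bool)
mask[rng.integers(0, n, 150), rng.integers(0, n, 150)] = True
data = np.fft.fft2(truth)[mask]

def misfit(im):
    """How badly the image's Fourier samples disagree with the data."""
    return np.linalg.norm(np.fft.fft2(im)[mask] - data)

img = np.full((n, n), truth.mean())  # the fuzzy starting guess
start = misfit(img)
for _ in range(300):
    resid = np.zeros((n, n), dtype=complex)
    resid[mask] = np.fft.fft2(img)[mask] - data
    # The gradient of the squared misfit is the adjoint FFT of the
    # residual; step downhill, then clip to non-negative pixel values.
    img = np.clip(img - np.fft.ifft2(resid).real, 0, None)

assert misfit(img) < 0.5 * start  # the fit to the data improves
```

The real pipelines add carefully chosen regularizers (smoothness, sparsity, total flux) so that, among the infinitely many images consistent with the sparse data, the fit favors ones that look like plausible astronomical pictures.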
I’d seen this technique work well in other contexts, and we had tested it countless times with synthetic EHT data. Still, I was stunned when I fed the M87* data into our software and, within minutes, an image appeared: a fuzzy, asymmetrical ring, brighter on the bottom. I couldn’t help worrying, though, that the other groups might come up with something quite different.
On 24 July 2018, a group of about 40 EHT members reconvened in a conference room in Cambridge, Mass. We each put up our best images—and everyone immediately started clapping and laughing. All four were rings of about the same diameter, asymmetrical, and brighter on the bottom.
We knew then that we were going to succeed, but we still had to demonstrate that we hadn’t all just injected a human bias favoring a ring into our software. So we ran the data through three separate imaging pipelines and performed image reconstruction with a wide range of prototype images of the kind that we worried might fool us. In one of the imaging pipelines, we ran hundreds of thousands of simulations to systematically select roughly 1,500 of the best settings.
At the end, we took a high-scoring image from each of the three pipelines, blurred them to the same resolution, and took the average. The resulting picture made the front page of newspapers around the world.
A decade or two from now, astronomers will no doubt look back at this first snapshot of a black hole and consider it a milestone, but they’ll also smile at how indistinct and uncertain—and unmoving—it is compared with what they will probably be able to do. Although we are confident in the asymmetry of the ring and its size—roughly 40 microarcseconds in diameter—the fine structure in that image should be taken with a grain of salt.
But we did see a ring and the shadow of the black hole! That in itself is astonishing.
Measurements of that shadow add a lot of weight to the argument that M87* has a mass equal to 6.5 billion suns, consistent with the estimate astronomers had set by measuring the speed of stars circling the black hole. (In contrast, estimates made from the complicated effects of M87* on nearby gas were much lower: around 3.5 billion solar masses.) The size of the ring is also large enough to rule out speculation that M87* is not a supermassive black hole but rather a wormhole or a naked singularity—even stranger objects that appear to be consistent with general relativity but have never been observed.
Perhaps equally important, our initial success gives us good reason to believe that with further improvements to both the telescope network and the software, we will be able to image Sgr A* at the center of the Milky Way. Our nearest supermassive black hole is only a few hundred times as bright as the sun, and it is less than a thousandth as massive as M87*. But because it is 2,000 times closer than M87*, it would appear a little larger to us than M87*.
The biggest challenge in imaging Sgr A* is the speed at which it evolves. Blobs of plasma orbit M87* every couple of days, whereas those around Sgr A* complete an orbit every few minutes. So our goal is not to snap a still image of Sgr A* but to make a crude movie of it spinning like a billion-degree dervish at the center of the galaxy. This could be the next milestone in our quest to further constrain Einstein’s theory of gravity—or point to physics beyond it.
And there could be practical spinoffs. The methods we are developing to make movies of Sgr A* are strikingly similar to those needed to make a medical MRI of a child squirming in a scanner or to image subterranean movements during an earthquake.
For future observations, we expect to use 11 or more telescopes—including a 12-meter dish in Greenland, an array of a dozen 15-meter dishes in the French Alps, and one of Caltech’s radio dishes in Owens Valley, Calif.—to increase the number of baselines. We have also doubled the data-sampling rate from 32 Gb/s to 64 Gb/s by expanding the range of radio frequencies we record, which will strengthen our signals and eventually allow us to connect smaller dishes to the network. Together, these upgrades will boost the amount of data we collect by an order of magnitude, to about 100 petabytes a session.
And if all continues to go well, we hope that in the years or decades ahead the reach of our computational telescope will grow beyond the bounds of Earth itself, to include space-based radio telescopes in orbit. Adding fast-moving spacecraft into our existing VLBI network would be a tremendous challenge, of course. But for me, that is part of the appeal.
This article appears in the February 2020 print issue as “Portrait of a Black Hole.”
About the Author
Katherine (Katie) Bouman is an assistant professor in the departments of electrical engineering and of computing and mathematical sciences at Caltech.
For decades, the astronomical cost of launching a satellite meant that only government agencies and large corporations ever undertook such a herculean task. But over the last two decades or so, newer, commercial rocket designs that accommodate multiple payloads have reduced launch costs dramatically—from about US $54,000 per kilogram in 2000 to about $2,720 in 2018. That trend in turn has fostered a boom in the private satellite industry. Since 2012, the number of small satellites—roughly speaking, those under 50 kilograms—being launched into low Earth orbit (LEO) has increased 30 percent every year.
One huge problem with this proliferation of small satellites is communicating with the ground. Satellites in low Earth orbit circle the planet about once every 90 minutes, and so they usually have only about a 10-minute window during which to communicate with any given ground station. If the satellite can’t communicate with that ground station—because it’s on the other side of the planet, for example—any valuable data the satellite needs to send won’t arrive on Earth in a timely way.
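The 90-minute figure follows directly from orbital mechanics. A quick sketch using Kepler's third law for a circular orbit (the 550-km altitude is an assumed, typical smallsat value, not one from the article):

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH  = 6.371e6          # mean Earth radius, m

def orbital_period_minutes(altitude_km):
    """Period of a circular orbit at the given altitude, in minutes."""
    a = R_EARTH + altitude_km * 1e3           # orbital radius
    return 2 * math.pi * math.sqrt(a ** 3 / MU_EARTH) / 60

# An assumed 550-km orbit comes out to just over 95 minutes;
# lower LEO altitudes give periods closer to 90 minutes.
period = orbital_period_minutes(550)
```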
At present, NASA’s Tracking and Data Relay Satellite System (TDRSS) is the only network that can help route signals from satellites to the correct ground stations. However, TDRSS is rarely accessible to companies, prohibitively expensive to use, and over 25 years old. It’s simply unable to handle the traffic created by all the new satellites. Getting data back to Earth from a satellite is often one of the bottlenecks that limits an observation system’s capabilities.
With three other engineers, I started Kepler Communications in 2015 to break this bottleneck. Our goal is to create a commercial replacement for TDRSS by building a constellation of many tiny satellites in LEO. The satellites will form the backbone of a space-based mesh network, sending data back and forth between Earth and space in real time. Each of our satellites, roughly the size of a loaf of bread, will operate much like an Internet router—except in space. Our first satellite, nicknamed KIPP after the companion robot from the 2014 sci-fi epic Interstellar, launched in January 2018.
When fully deployed by 2022, Kepler’s network will include 140 satellites spread equally among seven orbital planes. In essence, we’re building an Internet service provider high above Earth’s surface, to allow other satellites to stay in contact with one another and with ground stations, even if two satellites, or a satellite and a ground station, are on opposite sides of the planet. Our customers will include companies operating satellites or using satellite communications to transfer data, as well as government agencies like the Canadian Department of National Defence, the European Space Agency, and NASA. None of this would be possible without the ongoing developments in building tiny satellites.
Kepler’s satellites are what the aerospace community calls CubeSats. In the early 2000s, CubeSats were developed to try to reduce the cost of satellites by simplifying and standardizing their design and manufacture. At the time, each new satellite was a one-off, custom-built spacecraft, created by teams of highly specialized engineers using bespoke materials and fabrication methods. In contrast, a CubeSat is made up of a standardized multiple of 10- by 10- by 10-centimeter units. The fixed units allow manufacturers to develop essential CubeSat parts like batteries, solar panels, and computers as commercial, off-the-shelf components.
Thanks to CubeSats, space startups like Kepler can design, build, and launch a satellite, from napkin sketch to orbital insertion, in as little as 12 months. For comparison, a traditional satellite program can take three to seven years to complete.
The rise of CubeSats and the falling costs of launch have led to a surge in commercial satellite services. Companies around the world are building constellations of simultaneously operating spacecraft, with some planned constellations numbering in the hundreds. Companies like Planet are focused on delivering images of Earth, while others, like Spire Global, aim to monitor the weather.
So how are all those satellites getting all the data they collect back to customers on the ground? The short answer is, they’re not. A single Earth-imaging CubeSat, for example, can collect something like 2 gigabytes in one orbit, or 26 GB per day. Realistically, the CubeSat will be able to send back only a fraction of that data during its short window above a particular ground station. And that’s the case for all the companies now operating satellites to collect data on agriculture, climate, natural disasters, natural-resource management, and other topics. There is simply too much data for the communications infrastructure to handle efficiently.
To hand off its data, an Earth-observation satellite sends its imagery and other measurements by contacting its ground station when one is in sight. Such satellites are almost always in low Earth orbits to improve their image resolution, but, as I mentioned, that means they orbit the planet roughly once every 90 minutes. On average, the satellite has a line of sight—and therefore a line of communication—with a specific ground station for about 10 minutes. In that 10-minute window, the satellite must transmit all the data it has collected so that the ground station can then relay it through terrestrial networks to its final destination, such as a data center.
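The arithmetic behind the bottleneck is stark. A rough budget, assuming an illustrative 50-megabit-per-second downlink (the article gives no actual radio data rate):

```python
# Back-of-the-envelope downlink budget, using the article's numbers plus
# an assumed data rate. The 50 Mb/s figure is illustrative only.
collected_per_day_GB = 26          # imagery gathered per day (from the article)
window_s             = 10 * 60     # one usable ground-station pass, seconds
rate_Mbps            = 50          # assumed downlink rate

delivered_GB = rate_Mbps * 1e6 * window_s / 8 / 1e9
print(round(delivered_GB, 2), "GB per pass")   # prints: 3.75 GB per pass

# With one pass a day, only ~14 percent of the collected data gets home.
fraction = delivered_GB / collected_per_day_GB
```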
The result is that satellite operators often collect far more information than they can ever hope to send back to Earth, so they are constantly throwing away valuable data, or retrieving it only after a delay of hours or even days.
One recent solution is to operate ground stations as a service in order to increase the total number of ground stations available for use by any one company. Historically, when a company or government agency launched a satellite, it would also be responsible for developing its own ground stations—a very costly proposition. Imagine how expensive and complicated it would be if all cellphone users also had to purchase their own towers and operate their own network just to make a call. A cheaper alternative is for companies to build ground stations that anyone—for a price—can use to connect with their satellites, like Amazon’s AWS Ground Stations.
But there’s still a catch. To ensure that a LEO satellite can continuously communicate with a ground station, you basically need ground stations all over the globe. For continuous coverage, you would need several thousand ground stations—one every few hundred kilometers, though more closely spaced ground stations would ensure more reliable connections. That can be difficult at best in remote areas on land. It’s even more difficult to maintain a connection over oceans, where islands to build on are few and far between, and those islands rarely, if ever, have robust fiber connections to the Internet.
That’s why Kepler plans to move more of the communications infrastructure into orbit. Rather than creating a globe-spanning network of ground stations, we think it makes more sense to build a constellation of CubeSat routers, which can keep satellites connected to ground stations regardless of where the satellite or the ground station is.
At the heart of each 5-kilogram Kepler satellite is a software-defined radio (SDR) and a proprietary antenna. SDRs have been around since the 1990s. At the most fundamental level, they replace analog radio components like modulators (which impress digital ones and zeros onto an analog carrier wave) and filters (which strip out unwanted parts of the signal) with software. In Kepler’s SDR, these elements are implemented using software running on a field-programmable gate array, or FPGA. The result is a radio that’s cheaper to develop and easier to configure. The use of SDRs has in turn allowed us to shrink our spacecraft to CubeSat scale. It’s also one of the reasons our satellites cost one-hundredth as much as a traditional communication satellite.
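As a toy illustration of what a "filter in software" means, here is a digital low-pass filter, written in plain Python rather than for an FPGA, removing an unwanted tone the way a bulky analog filter would in hardware. The signal frequencies and filter design are invented for the example:

```python
import numpy as np

fs = 1000.0                               # sample rate, Hz
t = np.arange(0, 1, 1 / fs)
wanted   = np.sin(2 * np.pi * 5 * t)      # 5 Hz signal we want to keep
unwanted = np.sin(2 * np.pi * 200 * t)    # 200 Hz interference to reject
signal = wanted + unwanted

# 101-tap windowed-sinc low-pass filter with ~50 Hz cutoff:
# a few lines of code in place of machined metal waveguide filters.
n = np.arange(-50, 51)
h = np.sinc(2 * 50 / fs * n) * np.hamming(len(n))
h /= h.sum()                              # unity gain at DC

filtered = np.convolve(signal, h, mode="same")
```

After filtering, the 200-Hz tone is attenuated by tens of decibels while the 5-Hz signal passes essentially untouched; changing the cutoff is a one-line code edit rather than a hardware swap, which is the whole appeal of SDR.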
To understand how the Kepler constellation will work, it helps to know how conventional satellite connections are made: with the “bent pipe” method. Imagine the pipe as two straight lengths of pipe joined together at an angle; the satellite sits where the two lengths meet so it has a continuous line of sight with both ends of the connection, whether they’re two ground stations on different continents or a ground station and another spacecraft. The satellite essentially acts as a relay, receiving the signal at one end of the connection and transmitting it in a different direction to the other end.
In Kepler’s network, when each satellite passes over a ground station it will receive data that has been routed to that ground station from elsewhere in terrestrial networks. The satellite will store the data and then transmit it later when the destination ground station becomes visible. Kepler’s network will include five ground-station sites spread across five continents to connect all of our satellites. Unfortunately, this method doesn’t allow for real-time communications. But those will become possible as our satellite constellation grows.
Here’s how: Future iterations of our network will add the ability to send data between our satellites to create a real-time connection between two ground stations located anywhere on the planet, as well as between a ground station and an orbiting satellite. We’re also planning to include new features such as transcoding—essentially a way to translate the data into different formats—and queueing the data according to what needs to be delivered most urgently.
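Priority-based queueing of that kind can be sketched with a simple min-heap; the urgency levels and payload names below are invented for illustration, not Kepler's actual scheme:

```python
import heapq

# Queue of (urgency, payload) pairs; lower number = more urgent.
queue = []
heapq.heappush(queue, (2, "routine telemetry"))
heapq.heappush(queue, (0, "emergency beacon"))
heapq.heappush(queue, (1, "customer imagery"))

# Data leaves the satellite most-urgent first, regardless of arrival order.
order = [heapq.heappop(queue)[1] for _ in range(len(queue))]
print(order)
# prints: ['emergency beacon', 'customer imagery', 'routine telemetry']
```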
We can make these big changes to how our satellites communicate relatively quickly, thanks to SDR. New code, for example, can be uploaded to an orbiting satellite like KIPP for testing. If the code passes muster, it can be deployed to the rest of the constellation without having to replace any of the hardware. Much like CubeSat standardization, SDR shortens development cycles and lets us prototype more ideas.
Some Assembly Required: A Kepler Communications engineer works to hand-assemble KIPP, the company’s first satellite in orbit. Photo: Kepler Communications
Software-defined radio components replace many analog parts and make it possible to build satellites that are the size of a loaf of bread. Photo: Kepler Communications
A mostly completed KIPP waits for the finishing touches. Photo: Kepler Communications
Kepler has also built ground stations so that its satellites in orbit can communicate with terrestrial networks. Photo: Kepler Communications
The SatOps team keeps an eye on Kepler’s orbiting satellites, which will grow in importance as the constellation becomes larger. Photo: Kepler Communications
Kepler is currently in the process of deploying our constellation. KIPP has been operating successfully for over two years and is supporting the communication needs of our ground users. The MOSAiC expedition, for example, is a yearlong effort to measure the Arctic climate from an icebreaker ship near the North Pole. It’s the largest polar expedition in history. Since the start of the mission, KIPP’s high-bandwidth communication payload has been regularly transferring gigabytes of data from MOSAiC’s vessel to the project’s headquarters in Bremerhaven, Germany.
In December 2018, our second satellite, CASE (named after another robot companion from Interstellar), joined KIPP in orbit. Even with just two satellites in operation, we’re able to provide some level of service to our customers, mostly by taking up data from one ground station and delivering it to another, in the method I previously described. That has allowed us to avoid the fate of some other satellite-constellation companies, which went bankrupt in the process of trying to deploy a complete network prior to delivering service.
While we’ve been successful so far, establishing a constellation of 140 satellites is not without challenges. When two fast-moving objects—such as satellites—try to talk to each other, their communications are affected by a Doppler shift. This phenomenon causes the frequencies of radio waves transmitted between the two objects to change as their relative positions change. Specifically, the frequency is compressed as the objects approach each other and stretched as they grow more distant. It’s the same phenomenon that causes an ambulance siren to change in pitch as it speeds past you.
With our satellites traveling at over 7 kilometers per second relative to the ground or potentially communicating with another satellite moving in the opposite direction at the same speed, we end up with a very compressed or stretched signal. To deal with this issue, we’ve created a proprietary network architecture in which adjacent satellites will communicate with each other only when they’re traveling in the same direction. We’ve also installed software on KIPP and CASE to manage Doppler shift by tracking the change in frequency caused by the satellites’ relative motion. At this point, we believe we’re able to compensate for any Doppler shifts, and we expect to improve upon that capability in future iterations of our network and software.
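The size of the problem is easy to estimate from the first-order Doppler formula, shift = f · v/c. A sketch with an assumed 12-GHz carrier and a 7.5 km/s orbital speed (Kepler's actual frequencies are not given in the article):

```python
# First-order Doppler shift for two satellites closing head-on.
c         = 2.998e8      # speed of light, m/s
f_carrier = 12e9         # assumed Ku-band carrier, Hz (illustrative)
v_rel     = 2 * 7.5e3    # worst-case closing speed, m/s (two satellites
                         # each moving ~7.5 km/s in opposite directions)

shift_hz = f_carrier * v_rel / c
print(round(shift_hz / 1e3), "kHz")   # prints: 600 kHz
```

A receiver that isn't tracking a shift of that size simply loses the signal, which is why the frequency must be predicted and compensated continuously as the geometry changes.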
As the number of satellites in the constellation increases, we must also ensure that data is routed efficiently. We don’t want to beam data among, say, 30 satellites when just 3 or 4 will do the job. To solve this problem, our satellites will run an algorithm in orbit that uses something called a two-line element set to determine the position of each satellite. A two-line element set encodes a satellite’s orbital parameters; propagating those elements forward gives the satellite’s position at any moment, much as GPS coordinates pin down a location on Earth. Knowing every satellite’s position, we can run an optimization algorithm to figure out the route with the shortest transit time.
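One standard way to find the route with the shortest transit time is Dijkstra's algorithm. This minimal sketch runs it over a made-up graph of link delays; the node names and delay values are invented, not Kepler's actual topology:

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's shortest path. graph: {node: [(neighbor, delay_ms), ...]}.
    Returns (total_delay, path)."""
    frontier = [(0, start, [start])]
    visited = set()
    while frontier:
        delay, node, path = heapq.heappop(frontier)
        if node == goal:
            return delay, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, d in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(frontier, (delay + d, nxt, path + [nxt]))
    return float("inf"), []

# Hypothetical network: two ground stations bridged by three satellites.
links = {
    "ground-A": [("sat-1", 5)],
    "sat-1":    [("sat-2", 9), ("sat-3", 4)],
    "sat-2":    [("ground-B", 5)],
    "sat-3":    [("sat-2", 3), ("ground-B", 12)],
}
delay, path = shortest_route(links, "ground-A", "ground-B")
# Picks the 17-ms route through sat-3 and sat-2 over the direct 19-ms hop.
```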
Of course, all these challenges will be moot if we can’t actually build the 140 satellites and place them in orbit. One thing we discovered early on is that supply chains for producing hundreds of spacecraft—even small ones—don’t yet exist, even if the components are standardized. Ultimately, we’ve had to do most of the production of our satellites in-house. Our manufacturing facility in downtown Toronto can produce up to 10 satellites per month by automating what were previously manual processes, such as testing circuit boards to ensure they meet our requirements.
As I’ve said before, Kepler’s constellation will be possible because of the drastic size and cost reductions in satellite components in recent years. But there is one area where efficiency has limited miniaturization: solar panels. Our CubeSats’ ability to generate power is still constrained by the surface area on which we can mount solar panels.
We’re also seeing limitations in antenna size, as antennas reach theoretical limits in efficiency. That means a certain amount of each satellite’s surface area must be reserved for the antennas. Such limitations will make it hard to further shrink our satellites. The upside is that it’s forcing us to be creative in finding new computational methods and software, and even develop foldable components inspired by origami.
By the end of 2020, we plan to have at least 10 more satellites operating in orbit—enough to run early tests on our in-space router network. If everything stays on schedule, by 2021 we will have 50 satellites in operation, and by 2022 all 140 satellites will be available to both users on Earth and other satellites in space.
Space is the new commercial frontier. With the increased level of access that the entrepreneurial space race has brought, upstart groups are imagining new opportunities that a spacecraft in orbit, and its data, can provide. By creating an Internet for space, Kepler’s network will give those opportunities a route to success.
This article appears in the February 2020 print issue as “Routers in Space.”
The European Space Agency (ESA) received a sizable budget boost in late 2019 and committed to joining NASA’s Artemis program, expanding Earth observation, returning a sample from Mars, and developing new rockets. Meanwhile, less glamorous projects will seek to safeguard and maintain the use of critical infrastructure in space and on Earth.
Space Debris Removal
ESA’s ClearSpace-1 mission, having just received funding in November, is designed to address the growing danger of space debris, which threatens the use of low Earth orbit. Thirty-four thousand pieces of space junk larger than 10 centimeters (cm) are now in orbit around Earth, along with 900,000 pieces larger than 1 cm. They stem from hundreds of space missions launched since Sputnik-1 heralded the beginning of the Space Age in 1957. Traveling at the equivalent of Mach 25, even the tiniest piece of debris can threaten, for example, the International Space Station and its inhabitants, and create more debris when it collides.
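A quick kinetic-energy calculation shows why even tiny fragments are dangerous; the 1-gram fragment mass here is an assumed example:

```python
# Kinetic energy of a small debris fragment at orbital closing speed.
m = 1e-3      # assumed fragment mass: 1 gram, in kg
v = 7.7e3     # typical LEO relative speed, m/s

ke_joules = 0.5 * m * v ** 2
# About 29.6 kJ: roughly the muzzle energy of a burst of rifle fire,
# carried by something the size of a pebble.
```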
The ClearSpace-1 Active Debris Removal (ADR) mission will be carried out by a commercial consortium led by Swiss startup ClearSpace. Planned for launch in 2025, the mission will target a spent upper stage from an ESA Vega rocket orbiting at 720 kilometers above the Earth. Atmospheric drag is very low at this altitude, meaning objects remain in orbit for decades before reentry.
There, ClearSpace-1 will rendezvous with its target, which will be traveling at close to 8 kilometers per second. After making its approach, the spacecraft will employ ‘tentacles’ to reach beyond and around the object.
“It’s like tentacles that embrace the object because you can capture the object before you touch it. Dynamics in space are very interesting because if you touch the object on one side, it will immediately drift away,” says Holger Krag of ESA’s Space Safety department and head of the Space Debris Office in Darmstadt, Germany.
During the first mission, once ClearSpace-1 secures its target, the satellite will use its own propulsion to reenter Earth’s atmosphere, burning up in the process and destroying the piece it embraced. In future missions, ClearSpace hopes to build spacecraft that can remove multiple pieces of debris before the satellite burns up with all the debris onboard.
Collisions involving such objects create more debris and increase the odds of future impacts. This cascade effect is known as the Kessler syndrome, after the NASA scientist who first described it. The 2009 collision of the active U.S. commercial Iridium 33 and defunct Russian military Kosmos-2251 satellites created a cloud of thousands of pieces of debris.
With SpaceX, OneWeb, and other firms planning so-called megaconstellations of hundreds or even thousands of satellites, getting ahead of the situation is crucial to prevent low Earth orbit from becoming a graveyard.
Eventually, ClearSpace-1 is intended to demonstrate a cost-efficient, repeatable approach to debris removal that can be offered to customers at a low price, says Krag. ESA backing for the project comes with the aim of helping to establish a new market for debris removal and in-orbit servicing. Northern Sky Research projects that revenues from such services could reach US $4.5 billion by 2028.
ESA is also looking to protect Earth from potential catastrophe with a mission to provide early warning of solar activity. The Carrington event, as the largest solar storm on record is known, was powerful enough to send aurora activity to as low as 20 degrees latitude and interfered with telegraph operators in North America. That was in 1859, with little vulnerable electrical infrastructure in place. A similar event today would disrupt GPS and communications satellites, cause power outages, affect oil drilling (which uses magnetic fields to navigate), and generally cause turmoil.
The L5 ‘Lagrange’ mission will head to the Sun-Earth Lagrange point 5, one of a number of stable positions created by the gravitational interaction of the two bodies. From there, it will monitor the Sun for major events and warn of coronal mass ejections (CMEs), including estimates of their speed and direction.
These measurements would be used to provide space weather alerts and help mitigate catastrophic damage to both orbital and terrestrial electronics. Krag, speaking at a European Space Week meeting last month, says these alerts could reduce potential harm and loss of life if used to postpone surgeries, divert flights over and near the poles, and stop trains during the peak of predicted activity from moderate-to-large solar storms.
“Estimates over the next 15 years are that damages with no pre-warning can be in the order of billions to the sensitive infrastructure we have,” Krag says. Developments like autonomous driving, which relies on wireless communications, would be another concern, as would crewed space missions, especially those traveling beyond low Earth orbit, such as NASA’s Artemis program to return astronauts to the moon.
Despite an overall budget boost, ESA’s request for 600 million euros from its member states for ‘space safety’ missions was not fully met. The L5 mission was not funded in its entirety so the team will concentrate first on developing the spacecraft’s instruments over the next three-year budget cycle, and hope for more funding in the future. Instruments currently under assessment include a coronagraph to help predict CME arrival times, a wide-angle, visible-light imaging system, a magnetograph to scan spectral absorption lines, and an X-ray flux monitor to quantify flare energy.
“The launch environment of tomorrow will more closely resemble that of airline operations—with frequent launches from a myriad of locations worldwide,” said Todd Master, DARPA’s program manager for the competition at the time. The U.S. military relies on space-based systems for much of its navigation and surveillance needs, and wants a way to quickly replace damaged or destroyed satellites in the future. At the moment, it takes at least three years to build, test, and launch spacecraft.
A typical geostationary satellite is as big as a van, as heavy as a hippopotamus, and as long-lived as a horse. These parameters shorten the list of companies and countries that can afford to build and operate the things.
Even so, such communications satellites have been critically important because their signals reach places that would otherwise be tricky or impossible to connect. For example, they can cover rural areas that are too expensive to hook up with optical fiber, and they provide Wi-Fi services to airliners and wireless communications to ocean-going ships.
The problem with these giant satellites is the cost of designing, building, and launching them. To become competitive, some companies are therefore scaling down: They’re building geostationary “smallsats.”
Two companies, Astranis and GapSat, plan to launch GEO smallsats in 2020 and 2021, respectively. Both companies acknowledge that there will always be a place for those hulking, minibus-size satellites. But they say there are plenty of business opportunities for a smaller, lighter, more flexible version—opportunities that the newly popular low Earth orbit (LEO) satellite constellations aren’t well suited for.
Market forces have made GEO smallsats desirable; technological advances have made them feasible. The most important innovations are in software-defined radio and in rocket launching and propulsion.
One reason geostationary satellites have to be so large is the sheer bulk of their analog radio components. “The biggest things are the filters,” says John Gedmark, the CEO and cofounder of Astranis, referring to the electronic component that keeps out unwanted radio frequencies. “This is hardware that’s bolted onto waveguides, and it’s very large and made of metal.”
Waveguides are essentially hollow metal tubes that carry signals between the transmitters and receivers and the actual antennas. For C-band frequencies, for instance, waveguides can be 3.5 centimeters wide. While that may not seem so big, analog satellites need a waveguide for every frequency they send and receive over, and the number of waveguides can add up quickly. Combined, the waveguides, filters, and other signal equipment make up a massive, expensive volume of hardware that, until recently, had no work-around.
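The link between waveguide width and operating frequency comes from the cutoff of the guide's lowest propagating (TE10) mode, f_c = c/(2a), where a is the guide's broad inner dimension. A quick check against the 3.5-cm figure:

```python
# Cutoff frequency of a rectangular waveguide's dominant TE10 mode.
c = 2.998e8    # speed of light, m/s

def cutoff_ghz(width_m):
    """Lowest frequency (GHz) that propagates in a guide of this width."""
    return c / (2 * width_m) / 1e9

# A 3.5-cm-wide guide cuts off near 4.3 GHz, suiting the ~6-GHz C-band
# uplink; lower frequencies require a proportionally wider (bulkier) guide.
print(round(cutoff_ghz(0.035), 1))   # prints: 4.3
```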
Software-defined radio provides a much more compact digital alternative. Although SDR as a concept has been around for decades, in recent years improved processors have made it increasingly practical to replace analog parts with much smaller digital equivalents.
Another technological improvement with an even longer pedigree is electric propulsion, which generates thrust by accelerating ionized propellant through an electric field. This method generates less thrust than chemical systems, which means it takes longer to move a satellite into a new orbit. However, electric propulsion gets far more mileage out of a given quantity of propellant, and that saves a lot of space and weight.
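The propellant savings follow from the Tsiolkovsky rocket equation. A sketch with representative specific impulses, roughly 300 s for a chemical thruster and 1,500 s for an ion or Hall-effect thruster; these values and the delta-v budget are assumptions for illustration, not figures from the article:

```python
import math

g0 = 9.81   # standard gravity, m/s^2

def propellant_fraction(delta_v, isp_s):
    """Fraction of total spacecraft mass that must be propellant,
    from the Tsiolkovsky rocket equation."""
    return 1 - math.exp(-delta_v / (isp_s * g0))

dv = 1500.0   # m/s: assumed orbit-raising plus station-keeping budget

chem = propellant_fraction(dv, 300)    # chemical: ~40% of the satellite
elec = propellant_fraction(dv, 1500)   # electric: ~10% of the satellite
```

For the same maneuvering budget, the electric thruster needs roughly a quarter of the propellant mass, which is volume and launch weight the satellite designer gets back.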
It’s worth mentioning that a smallsat may not be as minuscule as its name suggests. Just how small it can be depends on whom you’re talking to. For example, GapSat’s satellite, GapSat-2, weighs 20 kilograms and takes up about as much space as a mailbox, according to David Gilmore, the company’s chief operating officer. At one end, the smallest smallsats—sometimes called microsats or nanosats—are similar in size to CubeSats. Those briefcase-size things are being developed by SpaceX, OneWeb, and other companies for use in massive LEO constellations.
The miniaturization of geostationary satellites owes as much to market forces as to technology. Back in the 1970s, demand for broad spectrum access (for example, to broadcast television) favored large satellites bearing tons of transponders that could broadcast across many frequencies. But most of the fruits of the wireless spectrum have been harvested. Business opportunities now center on the scraps of spectrum remaining.
“There’s a lot of snippets of spectrum left over that the big guys have left behind,” says Rob Schwarz, the chief technology officer of space infrastructure at Maxar Technologies in Palo Alto, Calif. “That doesn’t fit into that big-satellite business model.”
Instead, companies like Astranis and GapSat are building business models around narrower spectrum access, transmitting over a smaller range of frequencies or covering a more localized geographic area.
Meanwhile, private rocket firms are cutting the cost of putting things up in orbit. SpaceX and Blue Origin have both developed reusable rockets that reduce costs per launch. And it’s easy enough to design rockets specifically to carry a lot of small packages rather than one huge one, which means the cost of a single launch can be spread over many more satellites.
That’s not to say that van-size GEO satellites are going extinct. “So there’s a trend to building bigger, more complicated, more sophisticated, much more expensive, and much higher-bandwidth satellites that make all the satellites ever built in the history of the world pale in comparison,” says Gregg Daffner, the chief executive officer of GapSat. These are the satellites that will replace today’s GEO communications satellites, the ones that can give wide-spectrum coverage to an entire continent.
But a second trend, the one GapSat is betting on, is that the new GEO smallsats won’t directly compete against those hemisphere-covering behemoths.
“For a customer that has the spectrum and the market demand, bigger is still better,” says Maxar’s Schwarz. “The challenge is whether or not their business case can consume a terabit per second. It’s sort of like going to Costco—you get the cheapest possible package of ground beef, but if you’re alone, you’ve got to wonder, am I going to eat 25 pounds of ground beef?” Companies like Astranis and GapSat are looking for the consumers that need just a bit of spectrum for only a very narrow application.
Astranis, for example, is targeting service providers that have no need for a terabit per second. “The smaller size means we can find customers who are interested in most, if not all, of the satellite’s capacity right off the bat,” says CEO Gedmark. Bigger satellites, for comparison, can take years to sell all their available spectrum. Astranis instead intends to launch each satellite already knowing who the specific single customer is. Astranis plans to launch its first satellite on a SpaceX Falcon 9 rocket in the last quarter of 2020. The satellite, which is for Alaska-based service provider Pacific Dataport, will have 7.5 gigabits per second of capacity in the Ka-band (26.5 to 40 gigahertz).
According to Gedmark, preventing overheating was one of the biggest technical challenges Astranis faced, because of the great power the company is packing into a small volume. There’s no air up there to carry away excess heat, so the satellite is entirely reliant on thermal radiation.
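Radiator sizing in vacuum follows the Stefan-Boltzmann law. A rough sketch with assumed numbers (300 W dissipated, a 300 K radiator, emissivity 0.85), ignoring heat absorbed from the Sun and Earth:

```python
# Radiator area needed to reject waste heat purely by thermal radiation.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def radiator_area_m2(power_w, temp_k, emissivity=0.85):
    """Area needed to radiate power_w at temperature temp_k into deep space."""
    return power_w / (emissivity * SIGMA * temp_k ** 4)

# Assumed 300 W of dissipation at a 300 K radiator temperature.
area = radiator_area_m2(300, 300)
print(round(area, 2), "m^2")   # prints: 0.77 m^2
```

Three-quarters of a square meter is a lot of surface for a small satellite, which is why packing high power into a compact volume makes thermal design one of the hardest problems.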
Although the Astranis satellite in many ways functions like a larger communications satellite, just for customers that need less capacity, GapSat—as its name implies—is looking to plug holes in the coverage of those larger satellites. Often, satellite service providers find that for a few months they need to supply a bit more bandwidth than a satellite in orbit can currently handle. That typically means they’ll need to borrow bandwidth from another satellite, which can involve tricky negotiations. Instead, GapSat believes, GEO smallsats can meet these temporary demands.
Historically, GapSat has been a bandwidth broker, connecting customers that needed temporary coverage with satellite operators that were in a position to offer it. Now the company plans to launch GapSat-2 to provide its own temporary service. The satellite would sit in place for just a few months before moving to another orbital slot for the next customer.
However, GapSat’s plan created a bit of a design conundrum. On the one hand, GapSat-2 needed to be small, to keep costs manageable and to be able to quickly shift into a new orbital slot. On the other, the satellite also had to work in whatever frequencies each customer required. Daffner calls the specifics of the company’s solution its “secret sauce,” but the upshot is that GapSat has developed wideband transponders to offer coverage in the Ku-band (12 to 18 GHz), Ka-band (26.5 to 40 GHz), and V/Q-band (33 to 75 GHz), depending on what each customer needs.
Don’t expect there to be much of a clash between the smallsat companies and the companies deploying LEO constellations. Gedmark says there’s little to no overlap between the two emerging forms of space-based communications.
He notes that because LEO constellations are closer to Earth, they have an advantage for customers that require low latency. LEO constellations may be a better choice for real-time voice communications, for example. If you’ve ever been on a phone call with a delay, you know how jarring it can be to have to wait for the other person to respond to what you said. However, by their nature, LEO constellations are all-or-nothing affairs because you need most if not all of the constellation in space to reliably provide coverage, whereas a small GEO satellite can start operating all by itself.
Astranis and GapSat will test that proposition soon after they launch in late 2020 and early 2021, respectively. They’ll be joined by Ovzon and Saturn Satellite Networks, which are also building GEO smallsats for 2021 launches.
There’s one final area in which the Astranis and GapSat satellites will differ from the larger communications satellites: their life span. GapSat-2 will have to be raised into a graveyard orbit after roughly 6 years, compared with 20 to 25 years for today’s huge GEO satellites. Astranis is also intentionally shooting for shorter life spans than those common for GEO satellites.
“And that is a good thing!” Gedmark says. “You just get that much faster of an injection of new technology up there, rather than having these incredibly long, 25-year technology cycles. That’s just not the world we live in anymore.”
This article appears in the January 2020 print issue as “Geostationary Satellites for Small Jobs.”
The article has been updated from its print version to reflect the most recent information on GapSat’s plans.
In 2020, Earth and Mars will align. Every 26 months, a launch window for a low-energy “Hohmann” transfer orbit opens between the two. This July, no fewer than four missions are hoping to begin the nine-month journey.
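That 26-month cadence falls out of simple orbital arithmetic: the synodic period of Earth and Mars, the time between successive alignments, follows from the two planets' orbital periods. A minimal sketch:

```python
# Time between Earth-Mars launch windows (synodic period):
# 1/T_syn = 1/T_earth - 1/T_mars, using the planets' orbital periods.
T_EARTH = 365.25   # days
T_MARS = 687.0     # days

t_syn_days = 1.0 / (1.0 / T_EARTH - 1.0 / T_MARS)
print(f"{t_syn_days:.0f} days (~{t_syn_days / 30.44:.1f} months)")
# about 780 days, i.e. roughly 26 months
```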
A joint mission between the European Space Agency (ESA) and Roscosmos, the Russian space agency, ExoMars 2020 is dedicated to the search for life, including anything that might be alive today. A Russian-built lander is intended to carry a European rover to the Martian surface. But the mission is technically challenging, and some experts doubt that Roscosmos is up to the task. “They’re relying on a Russian-built lander, which is worrying,” says Emily Lakdawalla, a senior editor at the Planetary Society. “Russia has not launched a successful planetary mission since 1984.” ESA has also acknowledged that there’s some doubt about whether the mission will launch at all, as the parachutes that will be used during landing have run into problems during high-altitude testing.
Assuming the spacecraft does alight on the Martian soil, the lander, called Kazachok, will monitor the climate and probe for subsurface water at the landing location. Meanwhile the rover, named Rosalind Franklin, will go further afield. Rosalind Franklin is equipped with a drill that can penetrate up to 2 meters beneath the ground, where organic molecules are more likely to be preserved than on the harsh surface. An onboard laboratory will analyze samples and in particular look at the molecules’ “handedness.” Complex molecules typically come in “right-handed” and “left-handed” versions. Nonbiological chemical reactions involve both in equal measure, but biology as we know it strongly prefers either right- or left-handed molecules in any given metabolic reaction.
NASA’s mission will explore the Martian surface using substantially the same rover design as that of the plutonium-powered Curiosity, which has been trundling around the Gale crater since 2012. The big difference is in the payload. Mars 2020 has a suite of scientific instruments based on the premise that Mars was much warmer and wetter billions of years ago, and that life may have originated then. Mars 2020 will look for evidence of ancient habitable environments and for chemical signatures of any microbes that lived in them.
Mars 2020 is also a test bed, with two major prototype systems. The first is the Mars Oxygen In-Situ Resource Utilization Experiment [PDF], or MOXIE. About the size of a car battery, MOXIE converts the carbon dioxide atmosphere of Mars into oxygen using electrolysis. Demonstrating this technology, albeit on a small scale, is a critical step toward human exploration of Mars.
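Because electrolysis moves a fixed number of electrons per molecule split, Faraday's law ties MOXIE-style oxygen production directly to electrical current: each O2 molecule liberated from CO2 requires four electrons. The target rate below is an illustrative assumption on roughly MOXIE's scale, not a published specification.

```python
# Faraday's-law estimate of the current an electrolyzer needs to
# produce oxygen at a given rate. The ~10 g/hour target is an
# illustrative assumption of roughly MOXIE's scale.
F = 96485.0          # Faraday constant, C/mol
M_O2 = 32.0          # molar mass of O2, g/mol
N_ELECTRONS = 4      # electrons transferred per O2 molecule

def current_for_o2(grams_per_hour):
    """Electrolysis current (A) to produce O2 at grams_per_hour."""
    mol_per_s = grams_per_hour / M_O2 / 3600.0
    return mol_per_s * N_ELECTRONS * F

print(f"{current_for_o2(10.0):.1f} A")  # on the order of 33 A
```

Tens of amperes for a few grams of oxygen per hour shows why scaling the technique up for human crews, who need kilograms per day, is the real challenge.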
The other prototype is the Mars Helicopter, a solar-powered, 1.8-kilogram autonomous twin-rotor drone. To cope with the thin Martian atmosphere, the rotors will spin 10 times as fast as those of a helicopter on Earth. If it can successfully take off, it will be the first aircraft to fly on another world. Currently five flights are planned for distances of up to a few hundred meters. “It makes the [rover] engineers twitch a little bit to help to accommodate that kind of stuff, but it is awfully fun,” says Lakdawalla.
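The rotor speed-up can be motivated with a crude scaling argument: for fixed rotor geometry, thrust goes roughly as air density times rotation rate squared, so matching Earth-level thrust in Martian air requires spinning faster by the square root of the density ratio. The densities below are illustrative round numbers.

```python
# Rotor thrust scales roughly as T ~ rho * omega^2 for a fixed
# rotor, so equal thrust in thinner air needs a spin-up factor of
# sqrt(rho_earth / rho_mars). Densities are illustrative values.
RHO_EARTH = 1.225   # kg/m^3, sea level
RHO_MARS = 0.015    # kg/m^3, approximate Martian surface value

speedup = (RHO_EARTH / RHO_MARS) ** 0.5
print(f"~{speedup:.0f}x faster")
```

This toy model lands near the article's factor of 10; in practice Mars's weaker gravity (0.38 g) relaxes the thrust requirement, while the real vehicle also relies on oversized, very light blades.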
In 2011, the first attempt by the China National Space Administration (CNSA) to send a probe to Mars failed when the Russian rocket carrying it couldn’t get out of Earth orbit. So the Chinese are going with their own Long March rocket for their next try. The HX-1 mission will search for signs of life with an orbiter and a small rover. The rover has been fitted with a ground-penetrating radar that should have a much deeper range—up to 100 meters—than similar radars on ExoMars and Mars 2020, which can reach only a few meters down.
Although technical details about the mission are sparse, the CNSA has momentum behind it. “It’s really astonishing what they have accomplished with their lunar missions,” says Lakdawalla. She’s impressed that the Chinese engineers succeeded in depositing a lander and rover on the moon’s surface in 2013 “on their first-ever effort to land on another world.” Such success bodes well for Mars, she says: “They’ve clearly demonstrated technical capability and interplanetary navigation.”
Despite the UAE’s inexperience, its chances of entering the Mars club aren’t bad, explains Lakdawalla. “It is just an orbiter, so I think success on the first try is a lot easier, given how routine it’s become to put things in orbit,” she says. Lakdawalla notes that the UAE has excellent international partnerships, and says the new space agency isn’t focused on doing all of the technology development itself: “The UAE is about hiring the best consultants and getting help to leapfrog their own technology.”
This article appears in the January 2020 print issue as “Life on Mars?”
If you drive along the main northern road through South Australia with a good set of binoculars, you may soon be able to catch a glimpse of a strange, windowless jet, one that is about to embark on its maiden flight. It’s a prototype of the next big thing in aerial combat: a self-piloted warplane designed to work together with human-piloted aircraft.
The Royal Australian Air Force (RAAF) and Boeing Australia are building this fighterlike plane for possible operational use in the mid-2020s. Trials are set to start this year, and although the RAAF won’t confirm the exact location, the quiet electromagnetic environment, size, and remoteness of the Woomera Prohibited Area make it a likely candidate. Named for ancient Aboriginal spear throwers, Woomera spans an area bigger than North Korea, making it the largest weapons-testing range on the planet.
The autonomous plane, formally called the Airpower Teaming System but often known as “Loyal Wingman,” is 11.7 meters (38 feet) long and clean cut, with sharp angles offset by soft curves. The look is quietly aggressive.
Three prototypes will be built under a project first revealed by Boeing and the RAAF in February 2019. Those prototypes are not meant to meet predetermined specifications but rather to help aviators and engineers work out the future of air combat. This may be the first experiment to truly portend the end of the era of crewed warplanes.
“We want to explore the viability of an autonomous system and understand the challenges we’ll face,” says RAAF Air Commodore Darren Goldie.
Australia has chipped in US $27 million (AU $40 million), but the bulk of the cost is borne by Boeing, and the company will retain ownership of the three prototypes. Boeing says the project is the largest investment in uncrewed aircraft it’s ever made outside the United States, although a spokesperson would not give an exact figure.
The RAAF already operates a variety of advanced aircraft, such as Lockheed Martin F-35 jets, but these $100 million fighters are increasingly seen as too expensive to send into contested airspace. You don’t swat a fly with a gold mallet. The strategic purpose of the Wingman project is to explore whether comparatively cheap and expendable autonomous fighters could bulk up Australia’s air power. Sheer strength in numbers may prove handy in deterring other regional players, notably China, which are expanding their own fleets.
“Quantity has a quality of its own,” Goldie says.
The goal of the project is to put cost before capability, creating enough “combat mass” to overload enemy calculations. During operations, Loyal Wingman aircraft will act as extensions of the piloted aircraft they accompany. They could collect intelligence, jam enemy electronic systems, and possibly drop bombs or shoot down other planes.
“They could have a number of uses,” Goldie says. “An example might be a manned aircraft giving it a command to go out in advance to trigger enemy air defense systems—similar to that achieved by [U.S.-military] Miniature Air-Launched Decoys.”
The aircraft are also designed to operate as a swarm. Many of these autonomous fighters with cheap individual sensors, for example, could fly in a “distributed antenna” geometry, collectively creating a greater electromagnetic aperture than you could get with a single expensive sensor. Such a distributed antenna could also help the system resist jamming.
“This is a really big concept, because you’re giving the pilots in manned aircraft a bigger picture,” Boeing Australia director Shane Arnott says. These requirements create two opposing goals: On one hand, the Wingman must be stealthy, fast, and maneuverable, with some level of autonomy. On the other, it must be cheap enough to be expendable.
The development of Wingman began with numerical simulations, as Boeing Australia and the RAAF applied computational fluid dynamics to calculate the aerodynamic properties of the plane. Physical prototypes were then built for testing in wind tunnels, designing electrical wiring, and the other stages of systems engineering. Measurements from sensors attached to a prototype were used to create and refine a “digital twin,” which Arnott describes as one of the most comprehensive Boeing has ever made. “That will become important as we upgrade the system, integrate new sensors, and come up with different approaches to help us with the certification phase,” Arnott says.
The physical result is a clean-sheet design with a custom exterior and a lot of off-the-shelf components inside. The composite exterior is designed to reflect radar as weakly as possible. Sharply angled surfaces, called chines, run from the nose to the air intakes on either side of the lower fuselage; chines then run further back from those intakes to the wings and to twin tail fins, which are slightly canted from the vertical.
This design avoids angles that might reflect radar signals straight back to the source, like a ball bouncing off the inside corner of a box. Instead, the design deflects them erratically. Payloads are hidden in the belly. Of course, if the goal is to trigger enemy air defense systems, such a plane could easily turn nonstealthy.
The design benefits from the absence of a pilot. There is no cockpit to break the line, nor a human who must be protected from the brain-draining forces of acceleration.
“The ability to remove the human means you’re fundamentally allowing a change in the design of the aircraft, particularly the pronounced forward part of the fuselage,” Goldie says. “Lowering the profile can lower the radar cross section and allow a widened flight envelope.”
The trade-off is cost. To keep it down, the Wingman uses what Boeing calls a “very light commercial jet engine” to achieve a range of about 3,700 km (2,300 miles), roughly the distance between Seville and Moscow. The internal sensors are derived from those miniaturized for commercial applications.
Additional savings have come from Boeing’s prior investments in automating its supply chains. The composite exterior is made using robotic manufacturing techniques first developed for commercial planes at Boeing’s aerostructures fabrication site in Melbourne, the company’s largest factory outside the United States.
The approach has yielded an aircraft that is cheaper, faster, and more agile than today’s drones. The most significant difference, however, is that the Wingman can make its own decisions. “Unmanned aircraft that are flown from the ground are just manned from a different part of the system. This is a different concept,” Goldie says. “There’s nobody physically telling the system to iteratively go up, left, right, or down. The aircraft could be told to fly to a position and do a particular role. Inherent in its design is an ability to achieve that reliably.”
Setting the exact parameters of the Loyal Wingman’s autonomy—which decisions will be made by the machine and which by a human—is the main challenge. If too much money is invested in perfecting the software, the Wingman could become too expensive; too little, however, may leave it incapable of carrying out the required operations.
The software itself has been developed using the digital twin, a simulation that has been digitally “flown” thousands of times. Boeing is also using 15 test-bed aircraft to “refine autonomous control algorithms, data fusion, object-detection systems, and collision-avoidance behaviors,” the company says on its website. These include five higher-performance test jets.
“We understand radar cross sections and g-force stress on an aircraft. We need to know more about the characteristics of the autonomy that underpins that, what it can achieve and how reliable it can be,” Goldie says.
“Say you have an autonomous aircraft flying in a fighter formation, and it suddenly starts jamming frequencies the other aircraft are using or [are] reliant upon,” he continues. “We can design the aircraft to not do those things, but how do we do that and keep costs down? That’s a challenge.”
Arnott also emphasizes the exploratory nature of the Loyal Wingman program. “Just as we’ve figured out what is ‘good enough’ for the airframe, we’re figuring out what level of autonomy is also ‘good enough,’ ” Arnott says. “That’s a big part of what this program is doing.”
The need to balance capability and cost also affects how the designers can protect the aircraft against enemy countermeasures. The Wingman’s stealth and maneuverability will make it harder to hit with antiaircraft missiles that rely on impact to destroy their targets, so the most plausible countermeasures are cybertechniques that hack the aircraft’s communications, perhaps to tell it to fly home, or electromagnetic methods that fry the airplane’s internal electronics.
Stealth protection can go only so far. And investing heavily in each aircraft’s defenses would raise costs. “How much do you build in resilience, or just accept this aircraft is not meant to be survivable?” Goldie says.
This year’s test flights should help engineers weigh trade-offs between resilience and cost. Those flights will also answer specific questions: Can the Wingman run low on fuel and decide to come home? Or can it decide to sacrifice itself to save a human pilot? And at the heart of it all is the fundamental question facing militaries the world over: Should air power be cheap and expendable or costly and capable?
Other countries have taken different approaches. The United Kingdom’s Royal Air Force has selected Boeing and several other contractors to produce design ideas for the Lightweight Affordable Novel Combat Aircraft program, with test flights planned in 2022. Boeing has also expressed interest in the U.S. Air Force’s similar Skyborg program, which uses the XQ-58 Valkyrie, a fighterlike drone made by Kratos, of San Diego.
China is also in the game. It has displayed the GJ-11 unmanned stealth combat aircraft and the GJ-2 reconnaissance and strike aircraft; the level of autonomy in these aircraft is not clear. China has also developed the LJ-1, a drone akin to the Loyal Wingman, which may also function as a cruise missile.
Military aerospace projects often have specific requirements that contractors must fulfill. The Loyal Wingman is instead trying to decide what the requirements themselves should be. “We are creating a market,” Arnott says.
The Australian project, in other words, is agnostic as to what role autonomous aircraft should play. It could result in an aircraft that is cheaper than the weapons that will shoot it down, meaning each lost Wingman is actually a net win. It could also result in an aircraft that can almost match a crewed fighter jet’s capabilities at half the cost.
This article appears in the January 2020 print issue as “A Robot Is My Wingman.”
When Amazon made public its plans to deliver packages by drone six years ago, many skeptics scoffed—including some at this magazine. It just didn’t seem safe or practical to have tiny buzzing robotic aircraft crisscrossing the sky with Amazon orders. Today, views on the prospect of getting stuff swiftly whisked to you this way have shifted, in part because some packages are already being delivered by drone, including examples in Europe, Australia, and Africa, sometimes with life-saving consequences. In 2020, we should see such operations multiply, even in the strictly regulated skies over the United States.
There are several reasons to believe that package delivery by drone may soon be coming to a city near you. The most obvious one is that technical barriers standing in the way are crumbling.
The chief challenge, of course, is the worry that an autonomous package-delivery drone might collide with an aircraft carrying people. In 2020, however, it’s going to be easier to ensure that won’t happen, because as of 1 January, airplanes and helicopters are required to broadcast their positions by radio using what is known as automatic dependent surveillance–broadcast out (ADS-B Out) equipment carried on board. (There are exceptions to that requirement, such as for gliders and balloons, or for aircraft operating only in uncontrolled airspace.) This makes it relatively straightforward for the operator of a properly equipped drone to determine whether a conventional airplane or helicopter is close enough to be of concern.
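A drone operator's traffic check against received ADS-B positions reduces to a distance-and-altitude test. The sketch below is a hypothetical illustration, not any vendor's implementation; the function names and the 5 km / 300 m thresholds are assumptions.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two lat/lon points (degrees)."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def is_traffic_concern(drone_pos, adsb_pos, radius_km=5.0, alt_gap_m=300.0):
    """Flag an ADS-B contact within radius_km laterally and alt_gap_m
    vertically of the drone. Thresholds here are illustrative only."""
    lat1, lon1, alt1 = drone_pos
    lat2, lon2, alt2 = adsb_pos
    return (haversine_km(lat1, lon1, lat2, lon2) <= radius_km
            and abs(alt1 - alt2) <= alt_gap_m)

# A helicopter about 2 km away at a similar altitude triggers an alert:
print(is_traffic_concern((35.78, -78.64, 100.0), (35.80, -78.64, 250.0)))
```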
Indeed, DJI, the world’s leading drone maker, has promised that from here on out it will equip any drone it sells weighing over 250 grams (9 ounces) with the ability to receive ADS-B signals and to inform the operator that a conventional airplane or helicopter is flying nearby. DJI calls this feature AirSense. “It works very well,” says Brendan Schulman, vice president for policy and legal affairs at DJI—noting, though, that it works only “in one direction.” That is, pilots don’t get the benefit of ADS-B signals from drones.
Drones will not carry ADS-B Out equipment, Schulman explains, because the vast number of small drones would overwhelm air-traffic controllers with mostly useless information about their whereabouts. But it will eventually be possible for pilots and others to determine whether there are any drones close enough to worry about; the key is a system for the remote identification of drones that the U.S. Federal Aviation Administration is now working to establish. The FAA took the first formal step in that direction yesterday, when the agency published a Notice of Proposed Rulemaking on remote ID for drones.
Before the new regulations go into effect, the FAA will have to receive and react to public comments on its proposed rules for drone ID. That will take many months. But some form of electronic license plates for drones is definitely coming, and we’ll likely see that happening even before the FAA mandates it. This identification system will pave the way for package delivery and other beyond-line-of-sight operations that fly over people. (Indeed, the FAA has stated that it does not intend to establish rules for drone flights over people until remote ID is in place.)
One of the few U.S. sites where drones are making commercial deliveries already is Wake County, N.C. Since March of last year, drones have been ferrying medical samples at WakeMed’s sprawling hospital campus on the east side of Raleigh. Last September, UPS Flight Forward, the subsidiary of United Parcel Service that is carrying out these drone flights, obtained formal certification from the FAA as an air carrier. The following month, Wing, a division of Alphabet, Google’s parent company, launched the first residential drone-based delivery service to begin commercial operations in the United States, ferrying small packages from downtown Christiansburg, Va., to nearby neighborhoods. These projects in North Carolina and Virginia, two of a handful being carried out under the FAA’s UAS Integration Pilot Program, show that the idea of using drones to deliver packages is slowly but surely maturing.
“We’ve been operating this service five days a week, on the hour,” says Stuart Ginn, a former airline pilot who is now a head-and-neck surgeon at WakeMed. He was instrumental in bringing drone delivery to this hospital system in partnership with UPS and California-based Matternet.
Right now the drone flying at WakeMed doesn’t travel beyond the operators’ line of sight. But Ginn says that he and others behind the project should soon get FAA clearance to fly packages to the hospital by drone from a clinic located some 16 kilometers away. “I’d be surprised and disappointed if that doesn’t happen in 2020,” says Ginn. The ability to connect nearby medical facilities by drone, notes Ginn, will get used “in ways we don’t anticipate.”
This article appears in the January 2020 print issue as “The Delivery Drones Are Coming.”
After nearly 50 years of lamenting that America had abandoned the moon, NASA is in a rush to go back within five, and it has asked aerospace companies to design the lunar landers that will carry astronauts there. The project is called Artemis, and the agency is now reviewing proposals to build what it calls the Human Landing System, or HLS. In January, it says, it will probably select finalists.
NASA had said a landing was possible by 2028. Then the White House told the agency to do it by 2024.
NextAero, a startup based in Melbourne, Australia, wants to capitalize on this energy with its 3D-printed aerospike engines. Unlike conventional rocket engines, which expand exhaust inside a bell-shaped nozzle, aerospike engines expand it along the outside of a protruding cone, hence the spike.