Alphabet’s enthusiasm for balloons deflated earlier this year, when it announced that its high-altitude Internet company, Loon, could not become commercially viable.
But while the stratosphere might not be a great place to put a cellphone tower, it could be the sweet spot for cameras, argue a host of high-tech startups.
The market for Earth-observation services from satellites is expected to top US $4 billion by 2025, as orbiting cameras, radars, and other devices monitor crops, assess infrastructure, and detect greenhouse gas emissions. Low-altitude observations from drones represent a sizable market in their own right.
Neither platform is perfect. Satellites can cover huge swaths of the planet but remain expensive to develop, launch, and operate. Their cameras are also hundreds of kilometers from the things they are trying to see, and often moving at tens of thousands of kilometers per hour.
Drones, on the other hand, can take supersharp images, but only over a relatively small area. They also need careful human piloting to coexist with planes and helicopters.
Balloons in the stratosphere, 20 kilometers above Earth (and 10 km above most jets), split the difference. They are high enough not to bother other aircraft and yet low enough to observe broad areas in plenty of detail. For a fraction of the price of a satellite, an operator can launch a balloon that lasts for weeks (even months), carrying large, capable sensors.
Unsurprisingly, perhaps, the U.S. military has funded stratospheric balloon tests across six Midwest states to “provide a persistent surveillance system to locate and deter narcotic trafficking and homeland security threats.”
But the Pentagon is far from the only organization flying high. An IEEE Spectrum analysis of applications filed with the U.S. Federal Communications Commission reveals at least six companies conducting observation experiments in the stratosphere. Some are testing the communications, navigation, and flight infrastructure required for such balloons. Others are running trials for commercial, government, and military customers.
The illustration above depicts experimental test permits granted by the FCC from January 2020 to June 2021, together covering much of the continental United States. Some tests were for only a matter of hours; others spanned days or more.
A 3-D printed electric thruster could one day make advanced miniature satellites significantly easier and more affordable to build, a new study finds.
Conventional rockets use chemical reactions to generate propulsion. In contrast, electric thrusters produce thrust by using electric fields to accelerate electrically charged propellants away from a spacecraft.
The main weakness of electric propulsion is that it generates much less thrust than chemical rockets, making it too weak to launch a spacecraft from Earth’s surface. On the other hand, electric thrusters are extremely efficient at generating thrust, given the small amount of propellant they carry. This makes them very useful where every bit of weight matters, as in the case of a satellite that’s already in orbit.
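That trade-off follows directly from the Tsiolkovsky rocket equation. The sketch below compares the propellant mass fraction a chemical engine and an electric thruster would need for the same orbit-raising burn; the specific-impulse values and the 500 m/s maneuver are illustrative round numbers, not figures from the study.

```python
import math

def propellant_fraction(delta_v, isp):
    """Propellant mass as a fraction of initial spacecraft mass, from the
    Tsiolkovsky rocket equation: delta_v = Isp * g0 * ln(m0 / mf)."""
    g0 = 9.80665  # standard gravity, m/s^2
    return 1.0 - math.exp(-delta_v / (isp * g0))

# A representative 500 m/s orbit-raising maneuver (illustrative figure).
dv = 500.0
chemical = propellant_fraction(dv, isp=300)    # typical chemical propulsion
electric = propellant_fraction(dv, isp=2000)   # typical ion/Hall thruster

print(f"chemical: {chemical:.1%} of spacecraft mass spent as propellant")
print(f"electric: {electric:.1%}")
```

The electric thruster's much higher exhaust velocity lets it accomplish the same maneuver with roughly one-sixth the propellant, which is exactly why it wins once the satellite is already in orbit.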
The COVID-19 pandemic has disrupted global satellite production in an unprecedented way.
Many of the industry's manufacturing processes came to a halt when staff lockdowns and social-distancing measures had to be employed. Recovery is slowly underway, but the disruption is far from over.
The industry is now looking for new ways to make satellite production more resilient to the effects of the pandemic. In the test and measurement domain especially, new technologies and solutions offer manufacturers the opportunity to remain productive and operational while respecting social-distancing measures.
Much of the equipment used to test satellite electronics can be operated remotely, and test procedures can be created, automated, and controlled by the same engineers from their homes.
This webinar provides an overview of related test and measurement solutions from Rohde & Schwarz and explains how engineers can control test equipment remotely to keep production running while respecting social distancing.
In this webinar you will learn:
- The interfaces/standards used to command test equipment remotely
- How to maintain production while using social-distanced testing
- Solutions from Rohde & Schwarz for cloud-based testing and cybersecurity
Sascha Laumann, Product Manager, Rohde & Schwarz
Sascha Laumann is a product owner for digital products at Rohde & Schwarz. His main activities are the definition, development, and marketing of test and measurement products that address the future challenges of an ever-changing market. Sascha is an alumnus of the Technical University of Munich, where he majored in electrical engineering with a focus on efficient data acquisition, transfer, and processing. His professional background includes developing solutions for aerospace testing applications.
Dr. Rajan Bedi, CEO & Founder, Spacechips
Dr. Rajan Bedi is the CEO and founder of Spacechips, a UK SME disrupting the global space industry with its award-winning on-board processing and transponder products, space-electronics design consultancy, technical marketing, training, and business-intelligence services. Dr. Bedi has previously taught at Oxford University, was featured in Who’s Who in the World, is a winner of a Royal Society fellowship, and is a highly sought-after keynote speaker.
The content of this webcast is provided by the sponsor and may contain a commercial message.
Something new happened in space in January 2019. For the first time, a previously unknown leak of natural gas was spotted from orbit by a microsatellite, and then, because of that detection, plugged.
The microsatellite, Claire, had been flying since 2016. That day, Claire was monitoring the output of a mud volcano in Central Asia when it spied a plume of methane where none should be. Our team at GHGSat, in Montreal, instructed the spacecraft to pan over and zero in on the origin of the plume, which turned out to be a facility in an oil and gas field in Turkmenistan.
The need to track down methane leaks has never been more important. In the slow-motion calamity that is climate change, methane emissions get less public attention than the carbon dioxide coming from smokestacks and tailpipes. But methane—which mostly comes from fossil-fuel production but also from livestock farming and other sources—has an outsize impact. Molecule for molecule, methane traps 84 times as much heat in the atmosphere as carbon dioxide does, and it accounts for about a quarter of the rise in atmospheric temperatures. Worse, research from earlier this year shows that we might be enormously underestimating the amount released—by as much as 25 to 40 percent.
Satellites have been able to see greenhouse gases like methane and carbon dioxide from space for nearly 20 years, but it took a confluence of need and technological innovation to make such observations practical and accurate enough to do them for profit. Through some clever engineering and a more focused goal, our company has managed to build a 15-kilogram microsatellite and perform feats of detection that previously weren’t possible, even with a US $100 million, 1,000-kg spacecraft. Those scientific behemoths do their job admirably, but they view things on a kilometer scale. Claire can resolve methane emissions down to tens of meters. So a polluter (or anybody else) can determine not just what gas field is involved but which well in that field.
Since launching Claire, our first microsatellite, we’ve improved on both the core technology—a miniaturized version of an instrument known as a wide-angle Fabry-Pérot imaging spectrometer—and the spacecraft itself. Our second methane-seeking satellite, dubbed Iris, launched this past September, and a third is scheduled to go up before the end of the year. When we’re done, there will be nowhere on Earth for methane leaks to hide.
The creation of Claire and its siblings was driven by a business case and a technology challenge. The business part was born in mid-2011, when Quebec (GHGSat’s home province) and California each announced that they would implement a market-based “cap and trade” system. The systems would attribute a value to each ton of carbon emitted by industrial sites. Major emitters would be allotted a certain number of tons of carbon—or its equivalent in methane and other greenhouse gases—that they could release into the atmosphere each year. Those that needed to emit more could then purchase emissions credits from those that needed less. Over time, governments could shrink the total allotment to begin to reduce the drivers of climate change.
Even in 2011, there was a wider, multibillion-dollar market for carbon emissions, which was growing steadily as more jurisdictions imposed taxes or implemented carbon-trading mechanisms. By 2019, these carbon markets covered 22 percent of global emissions and earned governments $45 billion, according to the World Bank’s State and Trends of Carbon Pricing 2020.
Despite those billions, it’s methane, not carbon dioxide, that has become the focus of our systems. One reason is technological—our original instrument was better tuned for methane. But the business reason is the simpler one: Methane has value whether there’s a greenhouse-gas trading system or not.
Markets for greenhouse gases motivate the operators of industrial sites to better measure their emissions so they can control and ultimately reduce them. Existing, mostly ground-based methods using systems like flux chambers, eddy covariance towers, and optical gas imaging were fairly expensive, of limited accuracy, and varied as to their geographic availability. Our company’s bet was that industrial operators would flock to a single, less expensive, more precise solution that could spot greenhouse-gas emissions from individual industrial facilities anywhere in the world.
Once we’d decided on our business plan, the only question was: Could we do it?
One part of the question had already been answered, to a degree, by pioneering space missions such as Europe’s Envisat (which operated from 2002 to 2012) and Japan’s GOSat (launched in 2009). These satellites measure surface-level trace gases using spectrometers that collect sunlight scattering off the earth. The spectrometers break down the incoming light by wavelength. Molecules in the light’s path will absorb a certain pattern of wavelengths, leaving dark bands in the spectrum. The greater the concentration of those molecules, the darker the bands. This method can measure methane concentrations from orbit with a precision that’s better than 1 percent of background levels.
While those satellites proved the concept of methane tracking, their technology was far from what we needed. For one thing, the instruments are huge. The spectrometer portion of Envisat, called SCIAMACHY (SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY), contained nearly 200 kg of complex optics; the entire spacecraft carried eight other scientific instruments and weighed 8.2 metric tons. GOSat, which is dedicated to greenhouse-gas sensing, weighs 1.75 metric tons.
Furthermore, these systems were designed to measure gas concentrations across the whole planet, quickly and repeatedly, in order to inform global climate modeling. Their instruments scan huge swaths of land and then average greenhouse-gas levels over tens or hundreds of square kilometers. And that is far too coarse to pinpoint an industrial site responsible for rogue emissions.
To achieve our goals, we needed to design something that was the first of its kind—an orbiting hyperspectral imager with spatial resolution in the tens of meters. And to make it affordable enough to launch, we had to fit it in a 20-by-20-by-20-centimeter package.
The most critical enabling technology to meet those constraints was our spectrometer—the wide-angle Fabry-Pérot etalon (WAF-P). (An etalon is an interferometer made from two partially reflective plates.) To help you understand what that is, we’ve first got to explain a more common type of spectrometer and how it works in a hyperspectral imaging system.
Hyperspectral imaging detects a wide range of wavelengths, some of which, of course, are beyond the visible. To achieve such detection, you need both a spectrometer and an imager.
The spectrometers in SCIAMACHY are based on diffraction gratings. A diffraction grating disperses the incoming light as a function of its wavelength—just as a prism spreads out the spectrum of white light into a rainbow. In space-based hyperspectral imaging systems, one dimension of the imager is used for spectral dispersion, and the other is used for spatial imaging. By imaging a narrow slit of a scene at the correct orientation, you get a spectrum at each point along that thin strip of land. As the spacecraft travels, sequential strips can be imaged to form a two-dimensional array of points, each of which has a full spectrum associated with it.
If the incoming light has passed through a gas—say, Earth’s atmosphere—in a region tainted with methane, certain bands in the infrared part of that spectrum should be dimmer than otherwise in a pattern characteristic of that chemical.
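That dimming is the Beer-Lambert law at work: transmitted intensity falls off exponentially with the amount of absorbing gas in the light's path. The toy model below illustrates the signature; the line positions, widths, and column values are made up for illustration and are not real methane spectroscopy data.

```python
import numpy as np

# Toy Beer-Lambert model: transmission T = exp(-sigma * N) drops at the
# wavelengths where methane absorbs. All numbers here are illustrative.
wavelengths = np.linspace(1.60, 1.70, 500)      # micrometers, shortwave IR
line_centers = [1.63, 1.65, 1.67]               # hypothetical absorption lines
sigma = sum(np.exp(-((wavelengths - c) / 0.003) ** 2) for c in line_centers)

background_column = 1.0   # normal atmospheric methane (arbitrary units)
plume_column = 1.5        # 50 percent enhancement over a leak

t_background = np.exp(-0.5 * sigma * background_column)
t_plume = np.exp(-0.5 * sigma * plume_column)

# The plume spectrum is darker exactly at the absorption lines -- the
# signature a retrieval algorithm looks for.
print(f"max dip: background {1 - t_background.min():.2f}, "
      f"plume {1 - t_plume.min():.2f}")
```

The deeper the dips relative to the normal background, the more excess methane sits along the line of sight.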
Such a spectral-imaging system works well, but making it compact is challenging for several reasons. One challenge is the need to minimize optical aberrations to achieve a sharp image of ground features and emission plumes. However, in remote sensing, the signal strength (and hence signal-to-noise ratio) is driven by the aperture size, and the larger this is, the more difficult it is to minimize aberrations. Adding a dispersive grating to the system leads to additional complexity in the optical system.
A Fabry-Pérot etalon can be much more compact without the need for a complex imaging system, despite certain surmountable drawbacks. It is essentially two partially mirrored pieces of glass held very close together to form a reflective cavity. Imagine a beam of light of a certain wavelength entering the cavity at a slight angle through one of the mirrors. A fraction of that beam would zip across the cavity, squeak straight through the other mirror, and continue on to a lens that focuses it onto a pixel on an imager placed a short distance away. The rest of that beam of light would bounce back to the front mirror and then across to the back mirror. Again, a small fraction would pass through, the rest would continue to bounce between the mirrors, and the process would repeat. All that bouncing around adds distance to the light’s paths toward the pixel. If the light’s angle and its wavelength obey a particular relationship to the distance between the mirrors, all that light will constructively interfere with itself. Where that relation holds, a set of bright concentric rings forms. Different wavelengths and different angles would produce a different set of rings.
In an imaging system with a Fabry-Pérot etalon like the ones in our satellites, the radius of the ring on the imager is roughly proportional to the ray angle. What this means for our system is that the etalon acts as an angle-dependent filter. So rather than dispersing the light by wavelength, we filter the light to specific wavelengths, depending on the light’s radial position within the scene. Since we’re looking at light transmitted through the atmosphere, we end up with dark rings at specific radii corresponding to molecular absorption lines.
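The ring geometry can be sketched numerically with the idealized Airy transmission function of an etalon. In the snippet below, the 100-µm gap, 85 percent mirror reflectivity, 50-mm lens, and 1.65-µm wavelength are assumed round numbers for illustration, not GHGSat's actual design.

```python
import numpy as np

gap = 100e-6          # mirror spacing d, meters (tens to hundreds of um)
reflectivity = 0.85   # mirror reflectance R (assumed)
focal = 0.05          # focal length of the lens behind the etalon (assumed)

def transmission(wavelength, theta):
    """Idealized Airy function: T = 1 / (1 + F * sin^2(delta / 2)),
    with round-trip phase delta = 4*pi*d*cos(theta)/lambda."""
    coeff = 4 * reflectivity / (1 - reflectivity) ** 2
    delta = 4 * np.pi * gap * np.cos(theta) / wavelength
    return 1.0 / (1.0 + coeff * np.sin(delta / 2) ** 2)

theta = np.linspace(0.0, 0.15, 40000)   # ray angles, radians
t = transmission(1.65e-6, theta)

# Transmission peaks where 2*d*cos(theta) = m*lambda for integer m; the lens
# maps each peak angle to a bright ring of radius r = f * theta on the imager.
interior = t[1:-1]
peaks = (interior > t[:-2]) & (interior > t[2:])
ring_radii_mm = focal * theta[1:-1][peaks] * 1e3
print("bright-ring radii (mm):", np.round(ring_radii_mm, 2))
```

With these parameters, two interference orders fall inside the field of view, producing two concentric bright rings; a different wavelength shifts the rings to different radii, which is what makes the etalon an angle-dependent filter.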
The etalon can be miniaturized more easily than a diffraction-grating spectrometer, because the spectral discrimination arises from interference that happens within a very small gap of tens to hundreds of micrometers; no large path lengths or beam separation is required. Furthermore, since the etalon consists of substrates that are parallel to one another, it doesn’t add significantly to aberrations, so you can use relatively straightforward optical-design techniques to obtain sufficient spatial resolution.
However, there are complications associated with the WAF-P imaging spectrometer. For example, the imager behind the etalon picks up both the image of the scene (where the gas well is) and the interference pattern (the methane spectrum). That is, the spectral rings are embedded in—and corrupted by—the actual image of the patch of Earth the satellite is pointing at. So, from a single camera frame, you can’t distinguish variability in how much light reflects off the surface from changes in the amount of greenhouse gases in the atmosphere. Separating spatial and spectral information, so that we can pinpoint the origin of a methane plume, took some innovation.
The computational process used to extract gas concentrations from spectral measurements is called a retrieval. The first step in getting this to work for the WAF-P was characterizing the instrument properly before launch. That produces a detailed model that can help predict precisely the spectral response of the system for each pixel.
But that’s just the beginning. Separating the etalon’s mixing of spectral and spatial information took some algorithmic magic. We overcame this issue by designing a protocol that captures a sequence of 200 overlapping images as the satellite flies over a site. At our satellite’s orbit, that means maximizing the time we have to acquire images by continuously adjusting the satellite’s orientation. In other words, we have the satellite stare at the site as it passes by, like a rubbernecking driver on a highway passing a car wreck.
The next step in the retrieval procedure is to align the images, basically tracking all the ground locations within the scene through the sequence of images. This gives us a collection of up to 200 readings where a feature, say, a leaking gas well, passes across the complete interference pattern. This effectively is measuring the same spot on Earth at decreasing infrared wavelengths as that spot moves outward from the center of the image. If the methane concentration is anomalously high, this leads to small but predictable changes in signal level at specific positions on the image. Our retrievals software then compares these changes to its internal model of the system’s spectral response to extract methane levels in parts per million.
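The final comparison amounts to a linear fit. Here is a toy version: roughly 200 aligned readings of one ground spot, each taken at a different radial position (and therefore a different effective wavelength), are fit for the methane excess that best explains their dips. The per-position absorption weights below stand in for the calibrated instrument model and are randomly generated, not real calibration data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_readings = 200
# Hypothetical instrument model: fractional signal dip per ppm of excess
# methane, varying with each reading's position on the interference pattern.
absorption_weight = rng.uniform(0.0, 0.02, n_readings)

true_excess_ppm = 1.5
noise = rng.normal(0.0, 0.002, n_readings)   # per-reading measurement noise
observed = 1.0 - absorption_weight * true_excess_ppm + noise

# Least-squares fit of the excess concentration to the observed dips.
dips = 1.0 - observed
excess_est = np.sum(dips * absorption_weight) / np.sum(absorption_weight ** 2)
print(f"retrieved excess methane: {excess_est:.2f} ppm (true {true_excess_ppm})")
```

Even though each individual dip is barely above the noise, the 200 readings together pin down the enhancement, which is the point of staring at the site through the whole pass.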
At this point, the WAF-P’s drawbacks become an advantage. Some other satellites use separate instruments to visualize the ground and sense the methane or CO2 spectra. They then have to realign those two. Our system acquires both at once, so the gas plume automatically aligns with its point of origin down to the level of tens of meters. Then there’s the advantage of high spatial resolution. Other systems, such as Tropomi (TROPOspheric Monitoring Instrument, launched in 2017), must average methane density across a 7-kilometer-wide pixel. The peak concentration of a plume that Claire could spot would be so severely diluted by Tropomi’s resolution that it would seem only 1/200th as strong. So high-spatial-resolution systems like Claire can detect weaker emitters, not just pinpoint their location.
Just handing a customer an image of their methane plume on a particular day is useful, but it’s not a complete picture. For weaker emitters, measurement noise can make it difficult to detect methane point sources from a single observation. But temporal averaging of multiple observations using our analytics tools reduces the noise: Even with a single satellite we can make 25 or more observations of a site per year, cloud cover permitting.
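The benefit of stacking follows the usual square-root law: averaging N independent observations shrinks the noise by a factor of √N, so 25 passes cut the noise floor about five-fold. The numbers in this sketch are illustrative, not real instrument figures.

```python
import numpy as np

rng = np.random.default_rng(42)
true_signal = 0.2        # weak plume enhancement, arbitrary units (assumed)
noise_sigma = 1.0        # single-observation noise, five times the signal

# 25 cloud-free passes over the same site in a year, then stacked.
obs = true_signal + rng.normal(0.0, noise_sigma, size=25)

single_snr = true_signal / noise_sigma
stacked_snr = true_signal / (noise_sigma / np.sqrt(len(obs)))
print(f"single-pass SNR {single_snr:.2f} -> 25-pass SNR {stacked_snr:.2f}")
print(f"stacked estimate: {obs.mean():.2f} (true {true_signal})")
```

An emitter five times too weak to see in any single frame becomes a solid detection after a year of revisits.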
Using that average, we then produce an estimation of the methane emission rate. The process takes snapshots of methane density measurements of the plume column and calculates how much methane must be leaking per hour to generate that kind of plume. Retrieving the emission rate requires knowledge of local wind conditions, because the excess methane density depends not only on the emission rate but also on how quickly the wind transports the emitted gas out of the area.
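At its core this is a mass-balance estimate: the excess methane the wind sweeps through a cross-section of the plume each second must equal the leak rate. A minimal sketch, with all inputs as hypothetical round numbers rather than a real retrieval:

```python
# Mass-balance sketch of the emission-rate estimate:
#   rate = excess column density x plume width x wind speed.
excess_column_kg_m2 = 2e-5   # excess methane column over background (assumed)
plume_width_m = 200.0        # plume extent transverse to the wind (assumed)
wind_speed_m_s = 5.0         # local wind, which the retrieval must know

rate_kg_s = excess_column_kg_m2 * plume_width_m * wind_speed_m_s
rate_kg_h = rate_kg_s * 3600
print(f"estimated leak rate: {rate_kg_h:.0f} kg of methane per hour")
```

This is also why the wind term matters: doubling the assumed wind speed doubles the inferred leak rate for the same observed plume.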
We’ve learned a lot in the four years since Claire started its observations. And we’ve managed to put some of those lessons into practice in our next generation of microsatellites, of which Iris is the first. The biggest lesson is to focus on methane and leave carbon dioxide for later.
If methane is all we want to measure, we can adjust the design of the etalon so that it better measures methane’s corner of the infrared spectrum, instead of being broad enough to catch CO2’s as well. This, coupled with better optics that keep out extraneous light, should result in a 10-fold increase in methane sensitivity. So Iris and the satellites to follow will be able to spot smaller leaks than Claire can.
We also discovered that our next satellites would need better radiation shielding. Radiation in orbit is a particular problem for the satellite’s imaging chip. Before launching Claire, we’d done careful calculations of how much shielding it needed, which were then balanced with the increased cost of the shielding’s weight. Nevertheless, Claire’s imager has been losing pixels more quickly than expected. (Our software partially compensates for the loss.) So Iris and the rest of the next generation sport heavier radiation shields.
Another improvement involves data downloads. Claire has made about 6,000 observations in its first four years. The data is sent to Earth by radio as the satellite streaks past a single ground station in northern Canada. We don’t want future satellites to run into limits in the number of observations they make just because they don’t have enough time to download the data before their next appointment with a methane leak. So Iris is packed with more memory than Claire has, and the new microsatellite carries an experimental laser downlink in addition to its regular radio antenna. If all goes to plan, the laser should boost download speeds 1,000-fold, to 1 gigabit per second.
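The arithmetic behind that upgrade is simple. The 8-gigabit backlog and the roughly 1-Mbit/s radio rate below are assumptions chosen to match the article's 1,000-fold ratio; the 1 Gbit/s laser figure comes from the article.

```python
# Time to dump an assumed backlog of observations during ground-station passes.
backlog_gbit = 8.0       # queued observation data, gigabits (assumed)
radio_mbit_s = 1.0       # radio link rate implied by the 1,000-fold ratio
laser_gbit_s = 1.0       # laser downlink rate, from the article

radio_seconds = backlog_gbit * 1000 / radio_mbit_s   # contact time over radio
laser_seconds = backlog_gbit / laser_gbit_s          # contact time over laser
print(f"radio: {radio_seconds / 60:.0f} min of contact, laser: {laser_seconds:.0f} s")
```

Hours of scarce ground-station contact shrink to seconds, which is what keeps the observation schedule from being download-limited.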
In its polar orbit, 500 kilometers above Earth, Claire passes over every part of the planet once every two weeks. With Iris, the frequency of coverage effectively doubles. And the addition in December of Hugo and three more microsatellites due to launch in 2021 will give us the ability to check in on any site on the planet almost daily—depending on cloud cover, of course.
With our microsatellites’ resolution and frequency, we should be able to spot the bigger methane leaks, which make up about 70 percent of emissions. Closing off the other 30 percent will require a closer look. For example, with densely grouped facilities in a shale gas region, it may not be possible to attribute a leak to a specific facility from space. And a sizable leak detectable by satellite might be an indicator of several smaller leaks. So we have developed an aircraft-mounted version of the WAF-P instrument that can scan a site with 1-meter resolution. The first such instrument took its test flights in late 2019 and is now in commercial use monitoring a shale oil and gas site in British Columbia. Within the next year we expect to deploy a second airplane-mounted instrument and expand that service to the rest of North America.
By providing our customers with fine-grained methane surveys, we’re allowing them to take the needed corrective action. Ultimately, these leaks are repaired by crews on the ground, but our approach aims to greatly reduce the need for in-person visits to facilities. And every source of fugitive emissions that is spotted and stopped represents a meaningful step toward mitigating climate change.
This article appears in the November 2020 print issue as “Microsatellites Spot Mystery Methane Leaks.”
The possibility of space mining has long captured the imagination and even inspired business ventures. Now, a space startup in China is taking its first steps towards testing capabilities to identify and extract off-Earth resources.
Origin Space, a Beijing-based private space resources company, is set to launch its first ‘space mining robot’ in November. NEO-1 is a small (around 30 kilograms) satellite intended to enter a 500-kilometer-altitude sun-synchronous orbit. It will be launched by a Chinese Long March series rocket as a secondary payload.
This small spacecraft will not be doing actual mining; instead, it will be testing technologies. “The goal is to verify and demonstrate multiple functions such as spacecraft orbital maneuver, simulated small celestial body capture, intelligent spacecraft identification and control,” says Yu Tianhong, an Origin Space co-founder.
Origin Space, established in 2017, describes itself as China’s first firm focused on the utilization of space resources. China’s private space sector emerged following a 2014 government decision to open up the industry. Because asteroid mining has often been talked of as potentially a trillion-dollar industry, it is no surprise that a company focused on this area has joined the likes of others developing rockets and small satellites.
Another mission, Yuanwang-1 (‘Look up-1’), nicknamed “Little Hubble,” is slated to launch in 2021. A deal for development of the satellite was reached earlier this year with DFH Satellite Co., Ltd., a subsidiary of CASC, China’s main state-owned space contractor.
The “Little Hubble” satellite will carry an optical telescope designed to observe and monitor Near Earth Asteroids. Origin Space notes that identifying suitable targets is the first step toward space resources utilization.
Beyond this, Origin Space will also be taking aim at the moon with NEO-2, with a target launch date of late 2021 or early 2022.
Yu says the lunar project plan is not completed, but includes an eventual lunar landing. The tentative mission profile envisions an indirect journey to our celestial neighbor. The spacecraft will first be launched into low-Earth orbit and then gradually raise its orbit with onboard propulsion until it reaches a lunar orbit. The spacecraft will—after completing its observation goals—make a hard landing on the lunar surface.
Chandrayaan-2, India’s second lunar mission, used a similarly circuitous route to go from geosynchronous transfer orbit out to lunar orbit; a small spacecraft with limited propulsion may take even longer to reach the moon.
The issue of space resources became a hot topic once again after NASA administrator Jim Bridenstine last week announced that the agency will purchase lunar regolith and rock samples from commercial companies once they have collected moon material.
But Brian Weeden, Director of Program Planning for the Secure World Foundation, says that space resources companies still face myriad challenges, including the logistics of extracting resources and the small matter of who (other than NASA) is going to buy them.
“We’ve heard a lot about water on the Moon, but if you talk to any lunar scientist they will tell you we don’t actually know what the chemical composition of that water is and how difficult it will be to extract and refine it into a usable product,” says Weeden.
“The same thing goes for asteroids to an even greater degree. On Earth, we have massive mining operations and factories and smelteries to refine raw materials into usable products. How much of that will you need in space and how do you build it?” Weeden says.
He adds: “Right now the only real customers are the national space agencies that are planning to do things on the Moon. They might have a use for lunar regolith as a building material and water for fuel and life support. But aside from the very small contract we saw from NASA last week, I haven’t seen any major interest from governments in buying those materials commercially or at what price.”
Origin Space is far from the only or first space mining company. Planetary Resources, a U.S.-based firm, was established back in 2009 before suffering funding issues and eventually being acquired by blockchain firm ConsenSys in 2018. Another U.S. venture, Deep Space Industries, was acquired in January 2019 and is apparently pivoting away from asteroid mining towards developing small satellites. Meanwhile Tokyo-based ispace recently raised $28 million for the first of a series of lunar landers.
Asked about learning from the case of companies such as Planetary Resources, Yu stated that the firm was a pioneer in the space resources industry, adding that it is always challenging for the first players in the game. “We think they lack important milestones and revenue. We are working hard to accelerate the progress of milestone projects while generating revenue.”
36Kr, a Chinese technology publishing and data company, reports (in Chinese) that Origin Space will launch a pre-A financing round at the end of the year to fund the planned lunar exploration mission.
Even with the FCC’s caveat, it’s tempting to imagine that the idea of putting a mega-constellation of thousands of satellites in low-Earth orbit to provide uninterrupted broadband access anywhere on Earth will become a battle between Jeff Bezos’ Kuiper and Elon Musk’s Starlink. After all, even in space, how much room can there be for two mega-constellations, let alone additional efforts like that of the recently-beleaguered OneWeb? But some experts suggest that Amazon’s real play will come from its ability to vertically integrate Kuiper into the rest of the Amazon ecosystem—an ability SpaceX cannot match with Starlink.
“With Amazon, it’s a whole different ballgame,” says Zac Manchester, an assistant professor of aeronautics and astronautics at Stanford University. “The thing that makes Amazon different from SpaceX and OneWeb is they have so much other stuff going for them.” If Kuiper succeeds, Amazon can not only offer global satellite broadband access—it can include that access as part of its Amazon Web Services (AWS), which already offers resources for cloud computing, machine learning, data analytics, and more.
First, some quick background on what Amazon plans with Kuiper itself. The FCC approved the launch of 3,236 satellites. Not all of those thousands of satellites have to be launched immediately, however. Amazon is now obligated to launch at least half of the total by 2026 to retain the operating license the FCC has granted the company.
Amazon has said it will invest US $10 billion to build out the constellation. The satellites themselves will circle the Earth in what’s referred to as “low Earth orbit,” or LEO, which is any orbital height below 2000 kilometers. The satellites will operate in the Ka band (26.5 to 40 gigahertz).
A common talking point for companies building satellite broadband systems is that the constellations will be able to provide ubiquitous broadband access. In reality, except for users in remote or rural locations, terrestrial fiber or cellular networks almost always win out. In other words, no one in a city or suburb should be clamoring for satellite broadband.
“If they think they’re competing against terrestrial providers, they’re deluded,” says Tim Farrar, a satellite communications consultant, adding that satellite broadband is for last-resort customers who don’t have any other choice for connectivity.
However, these last-resort customers also include industries that can weather the cost of expensive satellite broadband, such as defense, oil and gas, and aviation. There’s far more money to be made in catering to those industries than in building several thousand satellites just to connect individual rural broadband subscribers.
But what these far-flung industries also increasingly have in common, alongside industries like Earth-imaging and weather-monitoring that also depend on satellite connectivity, is data. Specifically, the need to move, store, and crunch large quantities of data. And that’s something Amazon already offers.
“You could see Project Kuiper being a middleman for getting data into AWS,” says Manchester. “SpaceX owns the space segment, they can get data from point A to point B through space. Amazon can get your data through the network and into their cloud and out to end users.” There are plenty of tech start-ups and other companies that already do machine learning and other data-intensive operations in AWS and could make use of Kuiper to move their data. (Amazon declined to comment on the record about their future plans for Kuiper for this story).
Amazon has also built AWS ground stations that connect satellites directly with the rest of the company’s web service infrastructure. Building and launching satellites is certainly expensive, but the ground stations to connect those satellites are also a not-insignificant cost. Because Amazon already offers access to these ground stations on a per-minute basis, Manchester thinks it’s not unreasonable for the company to expand that offering to Kuiper’s connectivity.
There’s also Blue Origin to consider. While the rocket company owned by Bezos currently has no heavy-lift rocket that could bring Kuiper satellites to LEO, that could change. The company has at least one such rocket—the New Glenn—in development. Farrar notes that Amazon could spend the next few years on satellite development before it needs to begin launching satellites in earnest, by which point Blue Origin could have a heavy-lift option available.
Farrar says that with an investment of $10 billion, Amazon will need to bring in millions of subscribers to consider the project a financial success. But Amazon can also play a longer game than, say, SpaceX. Whereas SpaceX will depend entirely on subscriptions to generate revenue from Starlink, Amazon’s wider business platform means Kuiper is not dependent solely on its own ability to attract users. Plus, Amazon has the resources to make a long-term investment in Kuiper before turning a profit, in a way Starlink cannot.
“They own all these things the other guys don’t,” says Manchester. “In a lot of ways, Amazon has a grander vision. They’re not trying to be a telco.”
The final satellite needed to complete China’s own navigation and positioning satellite system has passed final on-orbit tests. The completed independent system provides military and commercial value while also facilitating new technologies and services.
The Beidou-3 satellite was launched on a Long March 3B rocket from the Xichang Satellite Launch Center, in a hilly region of Sichuan province, at 01:43 UTC on Tuesday, 23 June. The satellite was sent into a geosynchronous transfer orbit before settling into an orbital slot at approximately 35,786 kilometers in altitude, which keeps it at a fixed point above the Earth.
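That altitude isn’t arbitrary: it falls straight out of Kepler’s third law for an orbit whose period matches one sidereal day, so the satellite keeps pace with Earth’s rotation. A quick back-of-the-envelope check:

```python
import math

MU = 3.986004418e14        # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_378_137.0      # Earth's equatorial radius, m
T_SIDEREAL = 86_164.0905   # one sidereal day, s

# Kepler's third law: a^3 = mu * T^2 / (4 * pi^2)
a = (MU * T_SIDEREAL**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (a - R_EARTH) / 1000
print(f"Geosynchronous altitude: {altitude_km:,.0f} km")  # ~35,786 km
```

Any satellite parked at that radius, regardless of its mass, circles the Earth exactly once per sidereal day.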
Like GPS, the main, initial motivation for Beidou was military. The People’s Liberation Army did not want to be dependent on GPS for accurate positioning data of military units and weapons guidance, as the U.S. Air Force could switch off open GPS signals in the event of conflict.
As with GPS, Beidou also provides and facilitates a range of civilian and commercial services and activities, with an output value of $48.5 billion in 2019.
Twenty-four satellites in medium Earth orbits (at around 21,500 kilometers above the Earth) provide positioning, navigation, and timing (PNT) services. The satellites use rubidium and hydrogen atomic clocks for highly accurate timing, which allows precise measurement of speed and location.
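The positioning itself comes from pseudoranges: each satellite broadcasts its position and a precise timestamp, and a receiver that hears at least four satellites can solve for its three position coordinates plus its own clock error. The sketch below illustrates the standard Gauss-Newton solve common to all GNSS receivers; the satellite positions and clock bias are made-up numbers, not real Beidou ephemerides.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

# Hypothetical satellite positions (Earth-centered coordinates, meters)
sats = np.array([
    [15_600e3,  7_540e3, 20_140e3],
    [18_760e3,  2_750e3, 18_610e3],
    [17_610e3, 14_630e3, 13_480e3],
    [19_170e3,  6_100e3, 18_390e3],
])
true_pos = np.array([6_371e3, 0.0, 0.0])   # receiver on the surface
true_bias = 1e-4                            # receiver clock error, s

# Pseudorange = geometric range + c * (receiver clock bias)
rho = np.linalg.norm(sats - true_pos, axis=1) + C * true_bias

# Gauss-Newton iteration on the state (x, y, z, clock bias)
x = np.zeros(4)
for _ in range(10):
    ranges = np.linalg.norm(sats - x[:3], axis=1)
    residual = rho - (ranges + C * x[3])
    # Jacobian rows: unit vector from satellite toward receiver, then c
    H = np.hstack([(x[:3] - sats) / ranges[:, None], C * np.ones((4, 1))])
    x += np.linalg.lstsq(H, residual, rcond=None)[0]

print(np.round(x[:3]))   # recovers the receiver position
print(x[3])              # recovers the receiver clock bias
```

Because the clock bias is solved alongside position, the receiver needs no atomic clock of its own; the precision timing lives on the satellites.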
Additionally, thanks to a number of satellites in geosynchronous orbits, Beidou provides a short messaging service through which 120-character messages can be sent to other Beidou receivers. Beidou also aids international search and rescue services. Vessels at sea will be able to seek help from nearby ships in an emergency, even with no cellphone signal.
The Beidou satellite network is also testing inter-satellite links, removing reliance on ground stations for communications across the system.
Beidou joins the United States’ GPS and Russia’s GLONASS in providing global PNT services, with Europe’s Galileo soon to follow. The systems are compatible and interoperable, meaning users can combine signals from all of them to improve accuracy.
“The BeiDou-3 constellation transmits a civil signal that was designed to be interoperable with civil signals broadcast by Galileo, GPS III, and a future version of GLONASS. This means that civil users around the world will eventually be getting the same signal from more than 100 satellites across all these different constellations, greatly increasing availability, accuracy, and resilience,” says Brian Weeden, Director of Program Planning for Secure World Foundation.
“This common signal is the result of international negotiations that have been going on since the mid-2000s within the International Committee of GNSS (ICG).”
The rollout of Beidou has taken two decades. The first Beidou satellites were launched in 2000, providing coverage to China. Second generation Beidou-2 satellites provided coverage for the Asia-Pacific region starting in 2012. Deployment of Beidou-3 satellites began in 2015, with Tuesday’s launch being the 30th such satellite.
But this is far from the end of the line. China wants to establish a “ubiquitous, integrated, and intelligent” comprehensive national PNT system, with Beidou as its core, by 2035, according to a white paper.
Chinese aerospace firms are also planning satellite constellations in low Earth orbit to augment the Beidou signal, improving accuracy while facilitating high-speed data transmission. Geely, an automotive giant, is now also planning its own constellation to improve accuracy for autonomous driving.
Although the space segment is complete, China still has work to do on the ground to make full use of Beidou, according to Weeden.
“It’s not just enough to launch the satellites; you also have to roll out the ground terminals and get them integrated into everything you want to make use of the system. Doing so is often much harder and takes much longer than putting up the satellites.
“So, for the Chinese military to make use of the military signals offered by BeiDou-3, they need to install compatible receivers into every plane, tank, ship, bomb, and backpack. That will take a lot of time and effort,” Weeden states.
With the rollout of Beidou satellites complete, inhabitants downrange of Xichang will be spared any further disruption and possible harm. Long March 3B launches of Beidou satellites frequently see spent rocket stages fall near or on inhabited areas. Eighteen such launches have been carried out since 2018.
The areas calculated to be under threat from falling boosters were evacuated ahead of time for safety. Warnings about residual toxic hypergolic propellant were also issued. But close calls and damage to property were all too common.
Later this month, satellite-based remote-sensing in the United States will be getting a big boost. Not from a better rocket, but from the U.S. Commerce Department, which will be relaxing the rules that govern how companies provide such services.
For many years, the Commerce Department has been tightly regulating those satellite-imaging companies, because of worries about geopolitical adversaries buying images for nefarious purposes and compromising U.S. national security. But the newly announced rules, set to go into effect on July 20, represent a significant easing of restrictions.
Previously, obtaining permission to operate a remote-sensing satellite has been a gamble—the criteria by which a company’s plans were judged were vague, as was the process, an inter-agency review requiring input from the U.S. Department of Defense as well as the State Department. But in May of 2018, the Trump administration’s Space Policy Directive-2 made it apparent that the regulatory winds were changing. In an effort to promote economic growth, the Commerce Department was commanded to rescind or revise regulations established in the Land Remote Sensing Policy Act of 1992, a piece of legislation that compelled remote-sensing satellite companies to obtain licenses and required that their operations not compromise national security.
Following that directive, in May of 2019 the Commerce Department issued a Notice of Proposed Rulemaking in an attempt to streamline what many in the satellite remote-sensing industry saw as a cumbersome and restrictive process.
But the proposed rules didn’t please industry players. To the surprise of many of them, though, the final rules announced this past May were significantly less strict. For example, they allow satellite remote-sensing companies to sell images of a particular type and resolution if substantially similar images are already commercially available in other countries. The new rules also drop earlier restrictions on nighttime imaging, radar imaging, and short-wave infrared imaging.
On June 25th, Commerce Secretary Wilbur Ross explained at a virtual meeting of the National Oceanic and Atmospheric Administration’s Advisory Committee on Commercial Remote Sensing why the final rules differ so much from what was proposed in 2019:
Last year at this meeting, you told us that our first draft of the rule would be detrimental to the U.S. industry and that it could threaten a decade’s worth of progress. You provided us with assessments of technology, foreign competition, and the impact of new remote sensing applications. We listened. We made the case with our government colleagues that the U.S. industry must innovate and introduce new products as quickly as possible. We argued that it was no longer possible to control new applications in the intensifying global competition for dominance.
In other words, the cat was already out of the bag: there’s no sense prohibiting U.S. companies from offering satellite-imaging services already available from foreign companies.
One area where the new rules remain relatively strict, though, concerns the taking of pictures of other objects in orbit. Companies that want to offer satellite inspection or maintenance services would need rules that allow what regulators call “non-Earth imaging.” But there are national security implications here, because pictures obtained in this way could blow the cover of U.S. spy satellites masquerading as space debris.
While the extent to which spy satellites cloak themselves in the guise of space debris isn’t known, it seems clear that this would be an ideal tactic for avoiding detection. That strategy won’t work, though, if images taken by commercial satellites reveal a radar-reflecting object to be a cubesat instead of a mangled mass of metal.
Because of that concern, the current rules demand that companies limit the detailed imaging of other objects in space to those for which they have obtained permission from the satellite’s owner and from the Secretary of Commerce at least 5 days in advance. But that stipulation raises a key question: Whom should a satellite-imaging company contact if it wants to take pictures of a piece of space debris? Maybe imaging space debris would require only the permission of the Secretary of Commerce. But then, would the Secretary ever give such a request a green light? After all, if permission were typically granted, instances when it wasn’t would become suspicious.
More likely, imaging space debris—or spy satellites trying to pass as junk—is going to remain off the table for the time being. So even though the new rules are a welcome development to most commercial satellite companies, some will remain disappointed, including those companies that make up the Consortium for the Execution of Rendezvous and Servicing Operations (CONFERS), which had recommended that “the U.S. government should declare the space domain as a public space and the ability to conduct [non-Earth imaging] as the equivalent of taking photos of public activities on a public street.”
Unbreakable quantum keys that use the laws of physics to protect their secrets could be transmitted from orbiting devices a person could lift with one hand, according to experiments conducted from the International Space Station.
Researchers launched a tiny, experimental satellite loaded with optics that could emit entangled pairs of photons. Entangled photons share quantum mechanical properties in such a way that measuring the state of one member of the pair—such as its polarization—instantly tells you the state of its partner. Such entanglement could be the basis for quantum key distribution, in which cryptographic keys are used to encrypt and decrypt messages. If an eavesdropper were to intercept one of the photons and measure it, that would change the state of both, and the users would know the key had been compromised.
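That eavesdropper-detection logic can be illustrated with a toy simulation. The sketch below is a simplified BBM92-style model, not the experiment’s actual protocol: Alice and Bob keep only the rounds where their randomly chosen polarization bases match, and an intercept-resend attacker pushes the error rate on those kept bits from zero to roughly 25 percent, revealing the intrusion.

```python
import random

def run_qkd(n_pairs, eavesdrop=False, rng=random):
    """Toy entanglement-based key exchange.

    Alice and Bob each measure one photon of an entangled pair in a
    randomly chosen basis ('R' rectilinear or 'D' diagonal).  Rounds
    where their bases differ are discarded (sifting).  An intercept-
    resend eavesdropper on Bob's photon corrupts ~25% of the sifted key.
    """
    sifted_a, sifted_b = [], []
    for _ in range(n_pairs):
        a_basis = rng.choice("RD")
        b_basis = rng.choice("RD")
        a_bit = rng.randint(0, 1)              # Alice's outcome
        if eavesdrop:
            e_basis = rng.choice("RD")
            # Eve's result matches Alice only if she guessed Alice's basis
            e_bit = a_bit if e_basis == a_basis else rng.randint(0, 1)
            # Bob measures the photon Eve resent in her basis
            b_bit = e_bit if b_basis == e_basis else rng.randint(0, 1)
        else:
            # Entanglement: same basis -> same outcome, else uncorrelated
            b_bit = a_bit if b_basis == a_basis else rng.randint(0, 1)
        if a_basis == b_basis:                 # keep matching-basis rounds
            sifted_a.append(a_bit)
            sifted_b.append(b_bit)
    errors = sum(a != b for a, b in zip(sifted_a, sifted_b))
    return errors / len(sifted_a)              # quantum bit error rate

random.seed(7)
print(f"QBER, quiet channel: {run_qkd(20_000):.3f}")        # ~0.0
print(f"QBER, intercepted:   {run_qkd(20_000, True):.3f}")  # ~0.25
```

In a real system, Alice and Bob publicly compare a random sample of their sifted bits; any error rate well above the channel’s noise floor means the key is discarded.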
New technology may help bring dead satellites back to life. Earlier this year, the Mission Extension Vehicle (MEV), a spacecraft built and operated by Northrop Grumman, made history when it resurrected a decrepit satellite from the satellite graveyard. Reviving the spacecraft is a key step toward extending the lifetime of orbiting objects; a second mission is set to extend the lifetime of another satellite later this summer.
Most satellites in geosynchronous orbit (GEO) have a design life of 15 years and are launched with enough fuel to cover that time frame. At the end of their lives, the craft are expected to move into a graveyard orbit, per the 2002 Space Debris Mitigation Guidelines issued by the Inter-Agency Space Debris Coordination Committee (IADC). Graveyard orbits lie at least 300 kilometers above the geosynchronous region, giving the zombie spacecraft room as their orbits are incrementally ground down by the gravity of the sun and moon.
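Climbing those 300 kilometers is cheap, which is part of why the guidelines are considered practical: a two-burn Hohmann transfer from GEO to the graveyard costs only about 11 meters per second of delta-v, a small fraction of a satellite’s lifetime fuel budget. A quick estimate:

```python
import math

MU = 3.986004418e14            # Earth's gravitational parameter, m^3/s^2
r_geo = 42_164e3               # GEO orbital radius, m
r_grave = r_geo + 300e3        # IADC guideline: at least 300 km higher

def hohmann_dv(r1, r2, mu=MU):
    """Total delta-v for a two-burn Hohmann transfer between circular orbits."""
    dv1 = math.sqrt(mu / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)
    dv2 = math.sqrt(mu / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2)))
    return dv1 + dv2

print(f"{hohmann_dv(r_geo, r_grave):.1f} m/s")  # roughly 11 m/s
```

The catch is that a satellite must execute the maneuver while it still has that last bit of propellant, which is exactly what the non-compliant spacecraft discussed below failed to do.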
Even when they’re out of fuel, most satellites are still fully capable of functioning. “The technical degradation—besides fuel—of the satellite subsystems beyond 15 years is very marginal,” says Joe Anderson, vice president of business development and operations at SpaceLogistics, a subsidiary of Northrop Grumman. Anderson says he’s aware of satellites providing valuable services for nearly 30 years.
Defunct satellites are tracked primarily by the United States Air Force, which follows them by their radar signatures rather than their radio signals. Smaller bits of debris are difficult for the Air Force to follow, but satellites are usually large enough to track (though new technology is bringing them down in size). The IADC, for its part, comprises just over a dozen space agencies and is the “gold standard” in space recommendations, says Jonathan McDowell, an astrophysicist at the Harvard-Smithsonian Center for Astrophysics.
According to McDowell, who is also author of Jonathan’s Space Report, a weekly newsletter cataloguing the comings and goings of spacecraft, there are 915 objects within 200 km of the GEO, so they still have plenty of room to avoid one another. The looming issue is the 365 defunct spacecraft that—due to malfunction, lack of planning, or laziness—didn’t follow the IADC’s guidelines. In contrast, the graveyard region contains only 283 spacecraft. Dead satellites not parked in the agreed upon spot could lead to collisions (and therefore more debris) which could damage active spacecraft.
“The level of compliance is a little disappointing,” McDowell says. “The junk satellite environment close to GEO is more than one would want.”
In February, MEV-1 successfully brought an Intelsat satellite back from the graveyard into geostationary orbit, where it now serves over 30 customers. Launched in 2001, the Intelsat craft eventually ran out of fuel and retired to the satellite graveyard. Without fuel, it could no longer adjust its orbit, though its other systems remained functional.
The MEV-1 was designed to interface with single-use satellites like the Intelsat. By docking with the satellite’s liquid apogee engine, a common feature that helps most geostationary satellites finalize their orbits at the start of their lifetime, MEV-1 captured the satellite and began to lower its orbit, putting the Intelsat back into play at the start of April. MEV-1 will remain connected to the Intelsat for the next five years, then return it to the graveyard and proceed to its next customer.
MEV-1 is only the beginning. Anderson says that a second MEV will be launched later this summer. It will reach geostationary orbit in January 2021 and extend the lifetime of a second Intelsat satellite by an additional five years. Northrop Grumman plans to have only two MEVs in space, but Anderson expects them to be utilized throughout their 15-year-plus lifetimes. The company is also developing Mission Extension Pods, smaller propulsion-augmentation devices installed on a client’s satellite to provide orbital boosts, extending missions by up to six years. The first pods should launch in 2023, Anderson says.
While Northrop Grumman touts refueling and refurbishing missions like MEV as a boost to the company’s bottom line, McDowell sees them as a great way to address the growing problem of space debris and the accumulation of defunct human-made objects in space. Spacecraft like the MEV could potentially relocate nonfunctioning satellites from GEO to the graveyard, though there are likely to be regulatory and legal issues over who has the right to haul away someone else’s trash. McDowell has long advocated a launch tax on companies using space; those funds would sustain an “international space garbage trucking agency” responsible for cleaning up the messes from collisions and enforcing the removal of out-of-work satellites.
“The era of the space garbage truck is coming,” McDowell says.
Claire, a microsatellite, was monitoring a mud volcano in Central Asia when a mysterious plume appeared in its peripheral view. The 15-kilogram spacecraft had spotted a massive leak of methane—a powerful climate pollutant—erupting from an oil and gas facility in western Turkmenistan. The sighting in January 2019 eventually spurred the operator to fix its equipment, plugging one of the world’s largest reported methane leaks to date.
Canadian startup GHGSat launched Claire four years ago to begin tracking greenhouse gas emissions. Now the company is ready to send its second satellite into orbit. On 20 June, the next-generation Iris satellite is expected to hitch a ride on Arianespace’s Vega 16 rocket from a site in French Guiana. The launch follows back-to-back delays due to a rocket failure last year and the COVID-19 outbreak.
GHGSat is part of a larger global effort by startups, energy companies, and environmental groups to develop new technologies for spotting and quantifying methane emissions.
Although the phrase “greenhouse gas emissions” is almost synonymous with carbon dioxide, it refers to a collection of gases, including methane. Methane traps significantly more heat in the atmosphere than carbon dioxide, and it’s responsible for about one-fourth of total atmospheric warming to date. While mud volcanoes, bogs, and permafrost are natural methane emitters, a rising share is linked to human activities, including cattle operations, landfills, and the production, storage, and transportation of natural gas. In February, a scientific study found that human-caused methane emissions might be 25 to 40 percent higher than previously estimated.
Iris’s launch also comes as the Trump administration works to ease regulations on U.S. fossil fuel companies. The U.S. Environmental Protection Agency in May sought to expedite a rollback of federal methane rules on oil and gas sites. The move could lead to an extra 5 million tons of methane emissions every year, according to the Environmental Defense Fund.
Stéphane Germain, president of Montreal-based GHGSat, said the much-improved Iris satellite will enhance the startup’s ability to document methane in North America and beyond.
“We’re expecting 10 times the performance relative to Claire, in terms of detection,” he said ahead of the planned launch date.
The older satellite is designed to spot light-absorption patterns for both carbon dioxide and methane. But, as Germain explained, the broader spectral detection range requires some compromise on the precision and quality of measurements. Iris’s spectrometer, by contrast, is optimized solely for methane plumes, which allows it to spot smaller emission sources in fewer measurements.
Claire also suffers from stray light: about 25 percent of the light reaching its detector comes from outside its field of view. It also experiences “ghosting,” or internal light reflections within the camera and lens that lead to spots or mirror images. And space radiation has damaged the microsat’s detector more than its developers initially expected.
With Iris, GHGSat has tweaked the optical equipment and added radiation shielding to minimize such issues on the new satellite, Germain said.
Other technology upgrades include a calibration feature that corrects for any dead or defective pixels that might mar the observational data. Iris will test an experimental computing system with 10 times the memory and four times the processing power of Claire’s. The new satellite will also test an optical communications downlink, allowing it to bypass shared radio frequencies. The laser-based, 1-gigabit-per-second downlink promises to be more than a thousand times faster than current radio transmission.
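To put that downlink claim in perspective, here’s a rough comparison of how long a hypothetical 2-gigabyte observation batch would take over the 1-gigabit-per-second laser link versus a radio link a thousand times slower; the payload size is illustrative, not GHGSat’s actual data volume.

```python
# Back-of-the-envelope downlink times for one observation batch
payload_bits = 2 * 8e9   # a hypothetical 2-gigabyte batch, in bits

for name, rate_bps in [("radio (assumed 1 Mb/s)", 1e6),
                       ("laser (1 Gb/s)", 1e9)]:
    seconds = payload_bits / rate_bps
    print(f"{name}: {seconds / 60:.1f} minutes")
```

Over a radio link the batch ties up hours of ground-station passes; over the laser link it fits comfortably within a single pass.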
GHGSat is one of several ventures aiming to monitor methane from orbit. Silicon Valley startup Bluefield Technologies plans to launch a backpack-sized microsatellite in 2020, following a high-altitude balloon test of its methane sensors at nearly 31,000 meters. MethaneSAT, an independent subsidiary of the Environmental Defense Fund, expects to complete its satellite by 2022.
The satellites could become a “big game changer” for methane-monitoring, said Arvind Ravikumar, an assistant professor of energy engineering at the Harrisburg University of Science and Technology in Pennsylvania.
“The advantage of something like satellites is that it can be done remotely,” he said. “You don’t need to go and ask permission from an operator — you can just ask a satellite to point to a site and see what its emissions are. We’re not relying on the industry to report what their emissions are.”
Such transparency “puts a lot of public pressure on companies that are not managing their methane emissions well,” he added.
Ravikumar recently participated in two research initiatives to test methane-monitoring equipment on trucks, drones, and airplanes. The Mobile Monitoring Challenge, led by Stanford University’s Natural Gas Initiative and the Environmental Defense Fund, studied 10 technologies at controlled test sites in Colorado and California. The Alberta Methane Field Challenge, an industry-backed effort, studied similar equipment at active oil-and-gas production sites in Alberta, Canada.
Both studies suggest that a combination of technologies is needed to effectively identify leaks from wellheads, pipelines, tanks, and other equipment. A plane can quickly spot methane plumes during a flyover, but more precise equipment, such as a handheld optical-gas-imaging camera, might be necessary to further clarify the data.
GHGSat’s technology could play a similarly complementary role with government-led research missions, Germain said.
Climate-monitoring satellites run by space agencies tend to have “very coarse resolutions, because they’re designed to monitor the whole planet all the time to inform climate change models. Whereas ours are designed to monitor individual facilities,” he said. The larger satellites can spot large leaks faster, while Iris or Claire could help pinpoint the exact point source.
After Iris, GHGSat plans to launch a third satellite in December, and it’s working to add eight more spacecraft — the first in a “constellation” of pollution-monitoring satellites. “The goal ultimately is to track every single source of carbon dioxide and methane in the world, routinely,” Germain said.
A space-based, virtually unhackable quantum Internet may be one step closer to reality due to satellite experiments that linked ground stations more than 1,000 kilometers apart, a new study finds.
Quantum physics makes possible a strange effect known as entanglement. Essentially, two or more particles, such as photons, can become linked, or “entangled,” so that measuring one instantly determines the state of the other, no matter how far apart they are.
Entanglement is an essential factor in the operations of quantum computers, the networks that would connect them, and the most sophisticated kinds of quantum cryptography, a theoretically unhackable means of securing information exchange.
The U.S. Global Positioning System fleet of satellites provides critical data for navigation apps, banks, power grids, and other commercial and government infrastructure. But for the past decade, it has operated without a safety net, with no backup system in place. Now, two U.S. federal agencies want to change that, and they could select one or more alternatives by September.
Next month, the U.S. Department of Transportation (DOT) is due to deliver the results of a recent demonstration of potential GPS backup technologies to the National Executive Committee for Space-Based Positioning, Navigation, and Timing (PNT). The committee, which is cochaired by deputy secretaries of the U.S. Departments of Transportation and Defense, is expected to use the findings to announce next steps sometime in August. Those steps may include selecting one or more technologies and issuing a request for proposals for companies to develop them.
Eleven finalists participated in the two-week, mid-March demo, in which they showed how their respective PNT systems would perform if GPS went down because of jamming, spoofing, or other problems. The companies, which tested both space- and ground-based systems and include venture-backed startups and industry old-timers, were awarded a total of approximately US $2.5 million to prepare for the demos.
Tests were split between NASA’s Langley Research Center in Hampton, Va., and a 155-acre test range operated by the DOT and the John A. Volpe National Transportation Systems Center at Joint Base Cape Cod in Buzzards Bay, Mass. The test portion was finished by the time states began issuing orders to shelter in place because of COVID-19; however, a 20 March VIP day that would have concluded the demo at Joint Base Cape Cod was canceled because of the outbreak. The DOT did not respond to a request for comment, but it has not indicated that the timeline for its decision would be affected by ongoing efforts to stem the pandemic.
A GPS fail-safe has been a long time coming. A previous backup was built on the Loran-C radio navigation system that had been in use in some form since World War II, but it was determined to be obsolete and was dismantled in 2010. Four years later, lawmakers and federal agencies began investigating a new alternative. Although Congress passed laws in 2017 and 2018 authorizing tests of backup options, red tape and lack of funding delayed activity until last year, when a newly appointed DOT assistant secretary for research and technology fast-tracked funding for a test.
Companies that participated in the demo had to show systems that could provide either timing or positioning data or both, and operate independently of GPS or broadcast signals from any other Global Navigation Satellite System (GNSS). On NASA’s technology-readiness-level system, which measures the maturity of a particular technology on a scale of 1 to 9, demo systems had to operate at TRL 6 or higher.
Here are some of the companies that participated:
UrsaNav, of North Billerica, Mass., is one of several in the demo developing enhanced long-range navigation, or e-Loran, which according to researchers has better receiver design and transmission than the older, analog-based Loran-C technology it replaces. In both schemes, ground stations emit low-frequency radio waves that receivers can use to triangulate positioning. The new version features additional pulses that can transmit auxiliary data. “Government studies and academia say it’s the best option,” said UrsaNav’s cofounder and CEO Chuck Schue, who’s been in the industry since the 1970s and has set up such systems around the world.
If tapped as a GPS backup provider, Hellen Systems, of Middleburg, Va., intends to act as a systems integrator. The company would create an e-Loran system from existing technology from Continental Electronics Corp., which makes solid-state transmitters, and Microsemi Corp., which produces advanced timing and frequency products and receiver and reference systems, among others. “It’s plug-and-play, the products are commercially available, and it’s inexpensive for users to adopt,” said Trowbridge “Bridge” Littleton, Hellen’s cofounder and copresident.
Echo Ridge’s GPS alternative is made up of a wireless augmented positioning system that uses signals from the existing Globalstar network of 24 low Earth orbit satellites combined with its own proprietary software and end-user device. “We receive the signals they are using for communications without modification and make measurements on them to determine position and navigation,” said Joe Kennedy, president of the Sterling, Va., company.
The venture-backed Dutch company OPNT is the only finalist to offer a timing-only backup, drawing on multiple national timing sources rather than a satellite network. The service distributes time over existing fiber networks using the White Rabbit protocol developed by the European Organization for Nuclear Research (CERN). OPNT has launched beta versions of the service in the United States and the Netherlands. “We could deploy as fast as our investors give us money or customers sign up,” said CEO Monty Johnson.
Other finalists include NextNav, PhasorLab, Satelles, Serco, Seven Solutions Sociedad Limitada, Skyhook Holdings, and TRX Systems.
This article appears in the May 2020 print issue as “Wanted: A Fallback for GPS.”
If astronauts reach the moon as planned under NASA’s Project Artemis, they’ll have work to do. A major objective will be to mine deposits of ice in craters near the lunar south pole—useful not only for drinking water but also because the ice can be broken down into hydrogen and oxygen. But they’ll need guidance to navigate precisely to the spots where robotic spacecraft have pinpointed the ice on the lunar map. They’ll also need to rendezvous with equipment sent ahead of them, such as landing ships, lunar rovers, drilling equipment, and supply vehicles. There can be no guessing. They will need to know exactly where they are in real time, whether they’re in lunar orbit or on the moon’s very alien surface.
Which got some scientists thinking. Here on Earth, our lives have been transformed by the Global Positioning System, fleets of satellites operated by the United States and other countries that are used in myriad ways to help people navigate. Down here, GPS is capable of pinpointing locations with accuracy measured in centimeters. Could it help astronauts on lunar voyages?
“We are trying to get it working, especially with the big flood of missions in the next few years,” said Cheung. “We have to have the infrastructure to do the positioning of those vehicles.”
Cheung and Lee plotted the orbits of navigation satellites from the U.S. Global Positioning System and two of its counterparts, Europe’s Galileo and Russia’s GLONASS system—81 satellites in all. Most of them have directional antennas transmitting toward Earth’s surface, but their signals also radiate into space. Those signals, say the researchers, are strong enough to be read by spacecraft with fairly compact receivers near the moon. Cheung, Lee, and their team calculated that a spacecraft in lunar orbit would be able to “see” between five and 13 satellites’ signals at any given time—enough to accurately determine its position in space to within 200 to 300 meters. In computer simulations, they were able to implement various methods for improving the accuracy substantially from there.
Helping astronauts navigate after landing on the moon’s surface would be more complicated, mainly because in polar regions, the Earth would be low on the horizon. Signals could easily be blocked by hills or crater rims.
But the JPL team and colleagues at the Goddard Space Flight Center in Maryland anticipated that. To help astronauts, the team suggested using a transmitter located much closer to them as a reference point. Perhaps, the scientists wrote, they could use two satellites in lunar orbit—a new relay satellite in high lunar orbit to act as a locator beacon, combined with NASA’s Lunar Reconnaissance Orbiter, which has been surveying the moon since 2009.
This mini-network need not be terribly expensive by space-program standards. The relay satellite could be very small, borrow from existing satellite designs, and ride piggyback on a rocket launching other payloads toward the moon ahead of astronauts.
The plans for Artemis have been in flux, slowed by debates over funding and mission architecture. NASA managers have been banking on a planned lunar-orbiting base station, known as the Gateway, to make future missions more practical. But if they are going to put astronauts on the surface of the moon by 2024, as the White House has ordered, they say the Gateway may have to wait until later in the decade. Still, scientists say early plans for lunar navigation will be useful, no matter how lunar landings play out.
“This can be done,” said Cheung. “It’s just a matter of money.”
After witnessing how private space companies, new launch vehicles, and miniature satellites profoundly changed space activities in the United States, China decided it needed to act. The government opened its space industry to private capital in 2014.
Hundreds of commercial companies, many with close ties to traditional space and defense enterprises, have now sprung up. They’re developing new rockets, building remote sensing and communications satellites, and aiming to fill gaps in ground station and space data services.
One of the first private space companies in China was Spacety, a small satellite maker with offices in Beijing and Changsha, in central China. Its founders were in part inspired by the activities of SpaceX and Planet. They left their jobs at institutes under the Chinese Academy of Sciences (CAS), a state-owned research organization with a broad portfolio of space-related activities, to establish Spacety in January 2016.
For decades, the astronomical cost of launching a satellite meant that only government agencies and large corporations ever undertook such a herculean task. But over the last two decades or so, newer, commercial rocket designs that accommodate multiple payloads have reduced launch costs dramatically—from about US $54,000 per kilogram in 2000 to about $2,720 in 2018. That trend in turn has fostered a boom in the private satellite industry. Since 2012, the number of small satellites—roughly speaking, those under 50 kilograms—being launched into low Earth orbit (LEO) has increased 30 percent every year.
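For a sense of scale, the launch-cost figures quoted above imply a steady compound decline of roughly 15 percent per year:

```python
# Implied average annual decline in launch cost per kilogram,
# using the figures quoted above ($54,000/kg in 2000, $2,720/kg in 2018).
cost_2000, cost_2018, years = 54_000.0, 2_720.0, 18
annual_factor = (cost_2018 / cost_2000) ** (1 / years)
decline = 1 - annual_factor
print(f"average decline: {decline:.1%} per year")  # roughly 15% per year
```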
One huge problem with this proliferation of small satellites is communicating with the ground. Satellites in low Earth orbit circle the planet about once every 90 minutes, and so they usually have only about a 10-minute window during which to communicate with any given ground station. If the satellite can’t communicate with that ground station—because it’s on the other side of the planet, for example—any valuable data the satellite needs to send won’t arrive on Earth in a timely way.
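That 90-minute figure falls straight out of Kepler's third law, T = 2π√(a³/μ). A quick check, using an illustrative 550-kilometer altitude:

```python
import math

# Orbital period of a LEO satellite from Kepler's third law.
MU = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0    # mean Earth radius, m
altitude = 550_000.0     # illustrative LEO altitude, m
a = R_EARTH + altitude   # semimajor axis of a circular orbit
T = 2 * math.pi * math.sqrt(a**3 / MU)
print(f"period: {T / 60:.1f} minutes")  # roughly 95 minutes
```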
At present, NASA’s Tracking and Data Relay Satellite System (TDRSS) is the only network that can help route signals from satellites to the correct ground stations. However, TDRSS is rarely accessible to companies, prohibitively expensive to use, and over 25 years old. It’s simply unable to handle the traffic created by all the new satellites. Getting data back to Earth from a satellite is oftentimes one of the bottlenecks that limits an observation system’s capabilities.
With three other engineers, I started Kepler Communications in 2015 to break this bottleneck. Our goal is to create a commercial replacement for TDRSS by building a constellation of many tiny satellites in LEO. The satellites will form the backbone of a space-based mesh network, sending data back and forth between Earth and space in real time. Each of our satellites, roughly the size of a loaf of bread, will operate much like an Internet router—except in space. Our first satellite, nicknamed KIPP after the companion robot from the 2014 sci-fi epic Interstellar, launched in January 2018.
When fully deployed by 2022, Kepler’s network will include 140 satellites spread equally among seven orbital planes. In essence, we’re building an Internet service provider high above Earth’s surface, to allow other satellites to stay in contact with one another and with ground stations, even if two satellites, or a satellite and a ground station, are on opposite sides of the planet. Our customers will include companies operating satellites or using satellite communications to transfer data, as well as government agencies like the Canadian Department of National Defense, the European Space Agency, and NASA. None of this would be possible without the ongoing developments in building tiny satellites.
Kepler’s satellites are what the aerospace community calls CubeSats. In the early 2000s, CubeSats were developed to try to reduce the cost of satellites by simplifying and standardizing their design and manufacture. At the time, each new satellite was a one-off, custom-built spacecraft, created by teams of highly specialized engineers using bespoke materials and fabrication methods. In contrast, a CubeSat is made up of a standardized multiple of 10- by 10- by 10-centimeter units. The fixed units allow manufacturers to develop essential CubeSat parts like batteries, solar panels, and computers as commercial, off-the-shelf components.
Thanks to CubeSats, space startups like Kepler can design, build, and launch a satellite, from napkin sketch to orbital insertion, in as little as 12 months. For comparison, a traditional satellite program can take three to seven years to complete.
The rise of CubeSats and the falling costs of launch have led to a surge in commercial satellite services. Companies around the world are building constellations of simultaneously operating spacecraft, with some planned constellations numbering in the hundreds. Companies like Planet are focused on delivering images of Earth, while others, like Spire Global, aim to monitor the weather.
So how are all those satellites getting all the data they collect back to customers on the ground? The short answer is, they’re not. A single Earth-imaging CubeSat, for example, can collect something like 2 gigabytes in one orbit, or 26 GB per day. Realistically, the CubeSat will be able to send back only a fraction of that data during its short window above a particular ground station. And that’s the case for all the companies now operating satellites to collect data on agriculture, climate, natural disasters, natural-resource management, and other topics. There is simply too much data for the communications infrastructure to handle efficiently.
To hand off its data, an Earth-observation satellite sends its imagery and other measurements by contacting its ground station when one is in sight. Such satellites are almost always in low Earth orbits to improve their image resolution, but, as I mentioned, that means they orbit the planet roughly once every 90 minutes. On average, the satellite has a line of sight—and therefore a line of communication—with a specific ground station for about 10 minutes. In that 10-minute window, the satellite must transmit all the data it has collected so that the ground station can then relay it through terrestrial networks to its final destination, such as a data center.
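A back-of-envelope budget shows why that window is such a bottleneck. Using the figures above (26 GB collected per day, one usable 10-minute pass), the downlink would need to sustain roughly 350 megabits per second just to keep up:

```python
# Required sustained downlink rate to empty a day's imagery
# in a single 10-minute ground-station pass.
data_per_day_bits = 26e9 * 8       # 26 GB/day, in bits
pass_seconds = 10 * 60             # one 10-minute window
required_rate = data_per_day_bits / pass_seconds
print(f"required rate: {required_rate / 1e6:.0f} Mb/s")  # ~347 Mb/s
```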
The result is that satellite operators often collect far more information than they can ever hope to send back to Earth, so they are constantly throwing away valuable data, or retrieving it only after a delay of hours or even days.
One recent solution is to operate ground stations as a service in order to increase the total number of ground stations available for use by any one company. Historically, when a company or government agency launched a satellite, it would also be responsible for developing its own ground stations—a very costly proposition. Imagine how expensive and complicated it would be if all cellphone users also had to purchase their own towers and operate their own network just to make a call. A cheaper alternative is for companies to build ground stations that anyone—for a price—can use to connect with their satellites, like Amazon’s AWS Ground Stations.
But there’s still a catch. To ensure that a LEO satellite can continuously communicate with a ground station, you basically need ground stations all over the globe. For continuous coverage, you would need several thousand ground stations—one every few hundred kilometers, though more closely spaced ground stations would ensure more reliable connections. That can be difficult at best in remote areas on land. It’s even more difficult to maintain a connection over oceans, where islands to build on are few and far between, and those islands rarely, if ever, have robust fiber connections to the Internet.
That’s why Kepler plans to move more of the communications infrastructure into orbit. Rather than creating a globe-spanning network of ground stations, we think it makes more sense to build a constellation of CubeSat routers, which can keep satellites connected to ground stations regardless of where the satellite or the ground station is.
At the heart of each 5-kilogram Kepler satellite is a software-defined radio (SDR) and a proprietary antenna. SDRs have been around since the 1990s. At the most fundamental level, they replace analog radio components like modulators (which encode digital ones and zeros onto an analog carrier) and filters (which limit what part of the signal gets through) with software. In Kepler's SDR, these elements are implemented using software running on a field-programmable gate array, or FPGA. The result is a radio that's cheaper to develop and easier to configure. The use of SDRs has in turn allowed us to shrink our spacecraft to CubeSat scale. It's also one of the reasons our satellites cost one-hundredth as much as a traditional communication satellite.
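The filtering half of that idea can be illustrated with a toy digital filter. The code below is a generic finite-impulse-response (FIR) filter with moving-average taps, purely to show how software replaces an analog component; it is not Kepler's proprietary design:

```python
def fir_filter(samples, taps):
    """Direct-form FIR filter: each output is a weighted sum
    of the current and previous input samples."""
    out = []
    for n in range(len(samples)):
        acc = 0.0
        for k, t in enumerate(taps):
            if n - k >= 0:
                acc += t * samples[n - k]
        out.append(acc)
    return out

taps = [0.25] * 4                                   # 4-tap moving average
dc = fir_filter([1.0] * 16, taps)                   # low frequencies pass
nyquist = fir_filter([(-1.0) ** n for n in range(16)], taps)  # rejected
print(dc[8], nyquist[8])  # 1.0 and 0.0 after the filter settles
```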
To understand how the Kepler constellation will work, it helps to know how conventional satellite connections are made: with the “bent pipe” method. Imagine the pipe as two straight lengths of pipe joined together at an angle; the satellite sits where the two lengths meet so it has a continuous line of sight with both ends of the connection, whether they’re two ground stations on different continents or a ground station and another spacecraft. The satellite essentially acts as a relay, receiving the signal at one end of the connection and transmitting it in a different direction to the other end.
In Kepler’s network, when each satellite passes over a ground station it will receive data that has been routed to that ground station from elsewhere in terrestrial networks. The satellite will store the data and then transmit it later when the destination ground station becomes visible. Kepler’s network will include five ground-station sites spread across five continents to connect all of our satellites. Unfortunately, this method doesn’t allow for real-time communications. But those will become possible as our satellite constellation grows.
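In code terms, each satellite in this first-generation network behaves like a per-destination mailbox. The sketch below is illustrative (the class and station names are invented); the real-time inter-satellite links discussed next remove the "store" step:

```python
from collections import defaultdict

class StoreAndForwardSat:
    """Toy model of store-and-forward delivery: buffer data keyed by
    destination ground station, release it when that station is in view."""
    def __init__(self):
        self.buffer = defaultdict(list)   # destination -> queued payloads

    def uplink(self, destination, payload):
        """Receive data while passing over the source ground station."""
        self.buffer[destination].append(payload)

    def pass_over(self, station):
        """Flush everything queued for the station now in view."""
        return self.buffer.pop(station, [])

sat = StoreAndForwardSat()
sat.uplink("bremerhaven", b"sensor-log-1")
sat.uplink("toronto", b"sensor-log-2")
print(sat.pass_over("toronto"))   # [b'sensor-log-2']
```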
Here’s how: Future iterations of our network will add the ability to send data between our satellites to create a real-time connection between two ground stations located anywhere on the planet, as well as between a ground station and an orbiting satellite. We’re also planning to include new features such as transcoding—essentially a way to translate the data into different formats—and queueing the data according to what needs to be delivered most urgently.
We can make these big changes to how our satellites communicate relatively quickly, thanks to SDR. New code, for example, can be uploaded to an orbiting satellite like KIPP for testing. If the code passes muster, it can be deployed to the rest of the constellation without having to replace any of the hardware. Much like CubeSat standardization, SDR shortens development cycles and lets us prototype more ideas.
Some Assembly Required: A Kepler Communications engineer works to hand-assemble KIPP, the company’s first satellite in orbit. Photo: Kepler Communications
Software-defined radio components replace many analog parts and make it possible to build satellites that are the size of a loaf of bread. Photo: Kepler Communications
A mostly completed KIPP waits for the finishing touches. Photo: Kepler Communications
Kepler has also built ground stations so that its satellites in orbit can communicate with terrestrial networks. Photo: Kepler Communications
The SatOps team keeps an eye on Kepler’s orbiting satellites, which will grow in importance as the constellation becomes larger. Photo: Kepler Communications
Kepler is currently in the process of deploying our constellation. KIPP has been operating successfully for over two years and is supporting the communication needs of our ground users. The MOSAiC expedition, for example, is a yearlong effort to measure the Arctic climate from an icebreaker ship near the North Pole. It’s the largest polar expedition in history. Since the start of the mission, KIPP’s high-bandwidth communication payload has been regularly transferring gigabytes of data from MOSAiC’s vessel to the project’s headquarters in Bremerhaven, Germany.
In December 2018, our second satellite, CASE (named after another robot companion from Interstellar), joined KIPP in orbit. Even with just two satellites in operation, we’re able to provide some level of service to our customers, mostly by taking up data from one ground station and delivering it to another, in the method I previously described. That has allowed us to avoid the fate of some other satellite-constellation companies, which went bankrupt in the process of trying to deploy a complete network prior to delivering service.
While we’ve been successful so far, establishing a constellation of 140 satellites is not without challenges. When two fast-moving objects—such as satellites—try to talk to each other, their communications are affected by a Doppler shift. This phenomenon causes the frequencies of radio waves transmitted between the two objects to change as their relative positions change. Specifically, the frequency is compressed as the objects approach each other and stretched as they grow more distant. It’s the same phenomenon that causes an ambulance siren to change in pitch as it speeds past you.
With our satellites traveling at over 7 kilometers per second relative to the ground or potentially communicating with another satellite moving in the opposite direction at the same speed, we end up with a very compressed or stretched signal. To deal with this issue, we’ve created a proprietary network architecture in which adjacent satellites will communicate with each other only when they’re traveling in the same direction. We’ve also installed software on KIPP and CASE to manage Doppler shift by tracking the change in frequency caused by its relative motion. At this point, we believe we’re able to compensate for any Doppler shifts, and we expect to improve upon that capability in future iterations of our network and software.
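The size of the shift is easy to estimate with the first-order formula Δf = f·v/c. Using an illustrative 12-gigahertz Ku-band carrier (not Kepler's exact frequency) and the ground-relative speed mentioned above:

```python
# First-order Doppler shift: df = f * v / c.
C = 299_792_458.0        # speed of light, m/s
carrier_hz = 12e9        # illustrative Ku-band carrier
v = 7_500.0              # LEO speed relative to the ground, m/s
shift = carrier_hz * v / C
print(f"shift: ~{shift / 1e3:.0f} kHz")  # hundreds of kilohertz
```

For two satellites closing on each other at twice that speed, the shift doubles, which is part of why same-direction links are easier to manage.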
As the number of satellites in the constellation increases, we must also ensure that data is routed efficiently. We don’t want to beam data among, say, 30 satellites when just 3 or 4 will do the job. To solve this problem, our satellites will run an algorithm in orbit that uses something called a two-line element set to determine the position of each satellite. A two-line element set operates in a way that’s similar to how GPS identifies locations on Earth. Knowing every satellite’s position, we can run an optimization algorithm to figure out the route with the shortest transit time.
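Once every satellite's position (and hence every link's delay) is known, route selection reduces to a standard shortest-path search. The sketch below runs Dijkstra's algorithm over a toy link graph; the node names and millisecond costs are invented, and Kepler's actual optimization is more involved:

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra over {node: [(neighbor, cost), ...]}; returns the
    lowest-total-cost route from src to dst as a list of nodes."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

links = {                                 # hypothetical delays, ms
    "gs-toronto": [("sat1", 3.0)],
    "sat1": [("sat2", 9.0), ("sat3", 4.0)],
    "sat3": [("sat2", 4.0)],
    "sat2": [("gs-bremerhaven", 3.0)],
}
print(shortest_path(links, "gs-toronto", "gs-bremerhaven"))
```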
Of course, all these challenges will be moot if we can’t actually build the 140 satellites and place them in orbit. One thing we discovered early on is that supply chains for producing hundreds of spacecraft—even small ones—don’t yet exist, even if the components are standardized. Ultimately, we’ve had to do most of the production of our satellites in-house. Our manufacturing facility in downtown Toronto can produce up to 10 satellites per month by automating what were previously manual processes, such as testing circuit boards to ensure they meet our requirements.
As I’ve said before, Kepler’s constellation will be possible because of the drastic size and cost reductions in satellite components in recent years. But there is one area where efficiency has limited miniaturization: solar panels. Our CubeSats’ ability to generate power is still constrained by the surface area on which we can mount solar panels.
We’re also seeing limitations in antenna size, as antennas reach theoretical limits in efficiency. That means a certain amount of each satellite’s surface area must be reserved for the antennas. Such limitations will make it hard to further shrink our satellites. The upside is that it’s forcing us to be creative in finding new computational methods and software, and even develop foldable components inspired by origami.
By the end of 2020, we plan to have at least 10 more satellites operating in orbit—enough to run early tests on our in-space router network. If everything stays on schedule, by 2021 we will have 50 satellites in operation, and by 2022 all 140 satellites will be available to both users on Earth and other satellites in space.
Space is the new commercial frontier. With the increased level of access that the entrepreneurial space race has brought, upstart groups are imagining new opportunities that a spacecraft in orbit, and its data, can provide. By creating an Internet for space, Kepler’s network will give those opportunities a route to success.
This article appears in the February 2020 print issue as “Routers in Space.”
The European Space Agency (ESA) received a sizable budget boost in late 2019 and committed to joining NASA’s Artemis program, expanding Earth observation, returning a sample from Mars, and developing new rockets. Meanwhile, less glamorous projects will seek to safeguard and maintain the use of critical infrastructure in space and on Earth.
Space Debris Removal
ESA’s ClearSpace-1 mission, having just received funding in November, is designed to address the growing danger of space debris, which threatens the use of low Earth orbit. Thirty-four thousand pieces of space junk larger than 10 centimeters (cm) are now in orbit around Earth, along with 900,000 pieces larger than 1 cm. They stem from hundreds of space missions launched since Sputnik-1 heralded the beginning of the Space Age in 1957. Traveling at the equivalent of Mach 25, even the tiniest piece of debris can threaten, for example, the International Space Station and its inhabitants, and create more debris when it collides.
The ClearSpace-1 Active Debris Removal (ADR) mission will be carried out by a commercial consortium led by Swiss startup ClearSpace. Planned for launch in 2025, the mission will target a spent upper stage from an ESA Vega rocket orbiting at 720 kilometers above the Earth. Atmospheric drag is very low at this altitude, meaning objects remain in orbit for decades before reentry.
There, ClearSpace-1 will rendezvous with a target, which will be traveling at close to 8 kilometers per second. After making its approach, the spacecraft will employ ‘tentacles’ to reach beyond and around the object.
“It’s like tentacles that embrace the object because you can capture the object before you touch it. Dynamics in space are very interesting because if you touch the object on one side, it will immediately drift away,” says Holger Krag of ESA’s Space Safety department and head of the Space Debris Office in Darmstadt, Germany.
During the first mission, once ClearSpace-1 secures its target, the satellite will use its own propulsion to reenter Earth’s atmosphere, burning up in the process and destroying the piece it embraced. In future missions, ClearSpace hopes to build spacecraft that can remove multiple pieces of debris before the satellite burns up with all the debris onboard.
Collisions involving such objects create more debris and increase the odds of future impacts. This cascade effect is known as the Kessler syndrome, after the NASA scientist who first described it. The 2009 collision of the active U.S. commercial Iridium 33 and defunct Russian military Kosmos-2251 satellites created a cloud of thousands of pieces of debris.
With SpaceX, OneWeb, and other firms planning so-called megaconstellations of hundreds or even thousands of satellites, getting ahead of the situation is crucial to prevent low Earth orbit from becoming a graveyard.
Eventually, ClearSpace-1 is intended to demonstrate a cost-efficient, repeatable approach to debris removal that can be offered to customers at a low price, says Krag. ESA backing for the project comes with the aim of helping to establish a new market for debris removal and in-orbit servicing. Northern Sky Research projects that revenues from such services could reach US $4.5 billion by 2028.
ESA is also looking to protect Earth from potential catastrophe with a mission to provide early warning of solar activity. The Carrington Event, as the largest solar storm on record is known, was powerful enough to push auroras to latitudes as low as 20 degrees and to disrupt telegraph operations in North America. That was in 1859, with little vulnerable electrical infrastructure in place. A similar event today would disrupt GPS and communications satellites, cause power outages, affect oil drilling (which uses magnetic fields to navigate), and generally cause turmoil.
The L5 ‘Lagrange’ mission will head to the Sun-Earth Lagrange point 5, one of a number of stable positions created by gravitational forces of the two large bodies. From there, it will monitor the Sun for major events and warn of coronal mass ejections (CMEs) including estimates of their speed and direction.
These measurements would be used to provide space weather alerts and help mitigate catastrophic damage to both orbital and terrestrial electronics. Krag, in an interview at a European Space Week meeting last month, said that such alerts could reduce potential harm and loss of life if used to postpone surgeries, divert flights over and near the poles, and stop trains during the peak of predicted activity from moderate-to-large solar storms.
“Estimates over the next 15 years are that damages with no pre-warning can be in the order of billions to the sensitive infrastructure we have,” Krag states. Developments like autonomous driving, which rely on wireless communications, would be another concern, as would crewed space missions, especially those traveling beyond low Earth orbit, such as NASA’s Artemis program to return astronauts to the moon.
Despite an overall budget boost, ESA’s request for 600 million euros from its member states for ‘space safety’ missions was not fully met. The L5 mission was not funded in its entirety so the team will concentrate first on developing the spacecraft’s instruments over the next three-year budget cycle, and hope for more funding in the future. Instruments currently under assessment include a coronagraph to help predict CME arrival times, a wide-angle, visible-light imaging system, a magnetograph to scan spectral absorption lines, and an X-ray flux monitor to quantify flare energy.
“The launch environment of tomorrow will more closely resemble that of airline operations—with frequent launches from a myriad of locations worldwide,” said Todd Master, DARPA’s program manager for the competition at the time. The U.S. military relies on space-based systems for much of its navigation and surveillance needs, and wants a way to quickly replace damaged or destroyed satellites in the future. At the moment, it takes at least three years to build, test, and launch spacecraft.
A typical geostationary satellite is as big as a van, as heavy as a hippopotamus, and as long-lived as a horse. These parameters shorten the list of companies and countries that can afford to build and operate the things.
Even so, such communications satellites have been critically important because their signals reach places that would otherwise be tricky or impossible to connect. For example, they can cover rural areas that are too expensive to hook up with optical fiber, and they provide Wi-Fi services to airliners and wireless communications to ocean-going ships.
The problem with these giant satellites is the cost of designing, building, and launching them. To become competitive, some companies are therefore scaling down: They’re building geostationary “smallsats.”
Two companies, Astranis and GapSat, plan to launch GEO smallsats in 2020 and 2021, respectively. Both companies acknowledge that there will always be a place for those hulking, minibus-size satellites. But they say there are plenty of business opportunities for a smaller, lighter, more flexible version—opportunities that the newly popular low Earth orbit (LEO) satellite constellations aren’t well suited for.
Market forces have made GEO smallsats desirable; technological advances have made them feasible. The most important innovations are in software-defined radio and in rocket launching and propulsion.
One reason geostationary satellites have to be so large is the sheer bulk of their analog radio components. “The biggest things are the filters,” says John Gedmark, the CEO and cofounder of Astranis, referring to the electronic component that keeps out unwanted radio frequencies. “This is hardware that’s bolted onto waveguides, and it’s very large and made of metal.”
Waveguides are essentially hollow metal tubes that carry signals between the transmitters and receivers and the actual antennas. For C-band frequencies, for instance, waveguides can be 3.5 centimeters wide. While that may not seem so big, analog satellites need a waveguide for every frequency they send and receive over, and the number of waveguides can add up quickly. Combined, the waveguides, filters, and other signal equipment form a massive, expensive volume of hardware that, until recently, had no work-around.
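The link between frequency and waveguide size is direct: the dominant TE10 mode of a rectangular waveguide propagates only above a cutoff frequency set by the broad-wall width, fc = c/2a. For the 3.5-centimeter width mentioned above:

```python
# TE10 cutoff frequency of a rectangular waveguide: fc = c / (2a).
C = 299_792_458.0    # speed of light, m/s
a = 0.035            # broad-wall width, m (the 3.5 cm quoted above)
fc = C / (2 * a)
print(f"TE10 cutoff: {fc / 1e9:.2f} GHz")  # just below C-band uplink bands
```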
Software-defined radio provides a much more compact digital alternative. Although SDR as a concept has been around for decades, in recent years improved processors have made it increasingly practical to replace analog parts with much smaller digital equivalents.
Another technological improvement with an even longer pedigree is electric propulsion, which generates thrust by accelerating ionized propellant through an electric field. This method generates less thrust than chemical systems, which means it takes longer to move a satellite into a new orbit. However, electric propulsion gets far more mileage out of a given quantity of propellant, and that saves a lot of space and weight.
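The rocket equation makes the propellant savings concrete. With illustrative numbers (a 1,500-meter-per-second budget, specific impulses of roughly 300 seconds for a chemical thruster versus 1,500 seconds for an ion thruster), electric propulsion cuts the propellant fraction by roughly a factor of four:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def propellant_fraction(dv, isp):
    """Tsiolkovsky rocket equation, rearranged for the fraction of
    initial mass that must be propellant: 1 - exp(-dv / (Isp * g0))."""
    return 1 - math.exp(-dv / (isp * G0))

dv = 1_500.0  # m/s, illustrative total for orbit raising + station keeping
chem = propellant_fraction(dv, 300.0)    # typical chemical thruster
elec = propellant_fraction(dv, 1_500.0)  # typical ion thruster
print(f"chemical: {chem:.0%} of launch mass, electric: {elec:.0%}")
```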
It’s worth mentioning that a smallsat may not be as minuscule as its name suggests. Just how small it can be depends on whom you’re talking to. For example, GapSat’s satellite, GapSat-2, weighs 20 kilograms and takes up about as much space as a mailbox, according to David Gilmore, the company’s chief operating officer. At one end, the smallest smallsats—sometimes called microsats or nanosats—are similar in size to CubeSats. Those briefcase-size things are being developed by SpaceX, OneWeb, and other companies for use in massive LEO constellations.
The miniaturization of geostationary satellites owes as much to market forces as to technology. Back in the 1970s, demand for broad spectrum access (for example, to broadcast television) favored large satellites bearing tons of transponders that could broadcast across many frequencies. But most of the fruits of the wireless spectrum have been harvested. Business opportunities now center on the scraps of spectrum remaining.
“There’s a lot of snippets of spectrum left over that the big guys have left behind,” says Rob Schwarz, the chief technology officer of space infrastructure at Maxar Technologies in Palo Alto, Calif. “That doesn’t fit into that big-satellite business model.”
Instead, companies like Astranis and GapSat are building business models around narrower spectrum access, transmitting over a smaller range of frequencies or covering a more localized geographic area.
Meanwhile, private rocket firms are cutting the cost of putting things up in orbit. SpaceX and Blue Origin have both developed reusable rockets that reduce costs per launch. And it’s easy enough to design rockets specifically to carry a lot of small packages rather than one huge one, which means the cost of a single launch can be spread over many more satellites.
That’s not to say that van-size GEO satellites are going extinct. “So there’s a trend to building bigger, more complicated, more sophisticated, much more expensive, and much higher-bandwidth satellites that make all the satellites ever built in the history of the world pale in comparison,” says Gregg Daffner, the chief executive officer of GapSat. These are the satellites that will replace today’s GEO communications satellites, the ones that can give wide-spectrum coverage to an entire continent.
But a second trend, the one GapSat is betting on, is that the new GEO smallsats won’t directly compete against those hemisphere-covering behemoths.
“For a customer that has the spectrum and the market demand, bigger is still better,” says Maxar’s Schwarz. “The challenge is whether or not their business case can consume a terabit per second. It’s sort of like going to Costco—you get the cheapest possible package of ground beef, but if you’re alone, you’ve got to wonder, am I going to eat 25 pounds of ground beef?” Companies like Astranis and GapSat are looking for the consumers that need just a bit of spectrum for only a very narrow application.
Astranis, for example, is targeting service providers that have no need for a terabit per second. “The smaller size means we can find customers who are interested in most, if not all, of the satellite’s capacity right off the bat,” says CEO Gedmark. Bigger satellites, for comparison, can take years to sell all their available spectrum: Astranis instead intends to launch each satellite already knowing what the specific single customer is. Astranis plans to launch its first satellite on a SpaceX Falcon 9 rocket in the last quarter of 2020. The satellite, which is for Alaska-based service provider Pacific Dataport, will have 7.5 gigabits per second of capacity in the Ka-band (26.5 to 40 gigahertz).
According to Gedmark, preventing overheating was one of the biggest technical challenges Astranis faced, because of the great power the company is packing into a small volume. There’s no air up there to carry away excess heat, so the satellite is entirely reliant on thermal radiation.
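Radiator sizing follows from the Stefan-Boltzmann law, P = εσAT⁴. The numbers below (300 watts of waste heat, emissivity 0.9, a 300-kelvin radiator) are illustrative, not Astranis figures:

```python
# Radiating surface needed to reject waste heat in vacuum.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/m^2/K^4
power = 300.0            # waste heat to reject, W (assumed)
eps = 0.9                # radiator emissivity (assumed)
temp = 300.0             # radiator temperature, K (assumed)
area = power / (eps * SIGMA * temp**4)
print(f"radiator area needed: {area:.2f} m^2")  # under one square meter
```

That fraction of a square meter is a large ask on a smallsat whose total surface is also claimed by solar panels and antennas, which is why heat rejection becomes a dominant design constraint.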
Although the Astranis satellite in many ways functions like a larger communications satellite, just for customers that need less capacity, GapSat—as its name implies—is looking to plug holes in the coverage of those larger satellites. Often, satellite service providers find that for a few months they need to supply a bit more bandwidth than a satellite in orbit can currently handle. That typically means they’ll need to borrow bandwidth from another satellite, which can involve tricky negotiations. Instead, GapSat believes, GEO smallsats can meet these temporary demands.
Historically, GapSat has been a bandwidth broker, connecting customers that needed temporary coverage with satellite operators that were in a position to offer it. Now the company plans to launch GapSat-2 to provide its own temporary service. The satellite would sit in place for just a few months before moving to another orbital slot for the next customer.
However, GapSat’s plan created a bit of a design conundrum. On the one hand, GapSat-2 needed to be small, to keep costs manageable and to be able to quickly shift into a new orbital slot. On the other, the satellite also had to work in whatever frequencies each customer required. Daffner calls the specifics of the company’s solution its “secret sauce,” but the upshot is that GapSat has developed wideband transponders to offer coverage in the Ku-band (12 to 18 GHz), Ka-band (26.5 to 40 GHz), and V/Q-band (33 to 75 GHz), depending on what each customer needs.
Don’t expect there to be much of a clash between the smallsat companies and the companies deploying LEO constellations. Gedmark says there’s little to no overlap between the two emerging forms of space-based communications.
He notes that because LEO constellations are closer to Earth, they have an advantage for customers that require low latency. LEO constellations may be a better choice for real-time voice communications, for example. If you’ve ever been on a phone call with a delay, you know how jarring it can be to have to wait for the other person to respond to what you said. However, by their nature, LEO constellations are all-or-nothing affairs because you need most if not all of the constellation in space to reliably provide coverage, whereas a small GEO satellite can start operating all by itself.
Astranis and GapSat will test that proposition soon after they launch in late 2020 and early 2021, respectively. They’ll be joined by Ovzon and Saturn Satellite Networks, which are also building GEO smallsats for 2021 launches.
There’s one final area in which the Astranis and GapSat satellites will differ from the larger communications satellites: their life span. GapSat-2 will have to be raised into a graveyard orbit after roughly 6 years, compared with 20 to 25 years for today’s huge GEO satellites. Astranis is also intentionally shooting for shorter life spans than those common for GEO satellites.
“And that is a good thing!” Gedmark says. “You just get that much faster of an injection of new technology up there, rather than having these incredibly long, 25-year technology cycles. That’s just not the world we live in anymore.”
This article appears in the January 2020 print issue as “Geostationary Satellites for Small Jobs.”
The article has been updated from its print version to reflect the most recent information on GapSat’s plans.
For the first time, we have a complete, representative number for the overall orbital collision risk of a satellite mega-constellation.
Last month, Amazon provided the U.S. Federal Communications Commission (FCC) with data for its planned fleet of 3,236 Kuiper System broadband Internet satellites.
If one in 10 satellites fails on orbit and loses its ability to dodge other spacecraft or space junk, Amazon’s figures [PDF] show that there is a 12 percent chance that one of those failed satellites will suffer a collision with a piece of space debris measuring 10 centimeters or larger. If one in 20 satellites fails—the same proportion as failed in rival SpaceX’s first tranche of Starlink satellites—there is a 6 percent chance of a collision.
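Those two fleet-level figures are mutually consistent, which can be checked with basic probability. The sketch below assumes (a simplification) that every failed satellite carries the same independent collision risk p, so the fleet-level risk over n failed satellites is 1 − (1 − p)^n.

```python
# Sanity-check of Amazon's fleet-level figures, assuming (a simplification)
# each failed satellite carries the same independent collision risk p.
# Fleet risk over n failed satellites is then 1 - (1 - p)**n.
FLEET = 3236  # planned Kuiper satellites

def per_satellite_risk(fleet_risk, n_failed):
    """Invert 1 - (1 - p)**n = fleet_risk for the per-satellite risk p."""
    return 1.0 - (1.0 - fleet_risk) ** (1.0 / n_failed)

# From the 1-in-10 failure scenario: ~323 failed satellites, 12% fleet risk.
p = per_satellite_risk(0.12, FLEET // 10)
print(f"per-failed-satellite risk: {p:.2e}")  # roughly 4e-4

# Plug that back into the 1-in-20 scenario (~161 failed satellites):
risk_20 = 1.0 - (1.0 - p) ** (FLEET // 20)
print(f"implied fleet risk at 1-in-20 failures: {risk_20:.1%}")  # ~6%
```

The implied per-failed-satellite risk of a few parts in ten thousand reproduces the 6 percent figure for the lower failure rate, suggesting Amazon's numbers scale roughly linearly with the count of dead satellites.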
More than a third of all the orbital debris being tracked today came from just two collisions that occurred about a decade ago. Researchers are concerned that more explosions or breakups could accelerate the Kessler syndrome—a runaway chain reaction of orbital collisions that could render low Earth orbit (LEO) hostile to almost any spacecraft.