Relativity Space has signed a lease with NASA and plans to test its first 3D-printed rocket in a flight next year
In a leased NASA spaceflight facility in southern Mississippi, a new factory that uses robots and 3D printers to manufacture rockets will soon open. Relativity, an Inglewood, Calif.-based aerospace startup company, announced this week that it has signed a nine-year lease with NASA’s Stennis Space Center in Hancock County, Miss.
Relativity’s new 220,000-square-foot facility at Stennis complements the company’s existing 18,000-square-foot California R&D lab and factory, where it has operated since July 2016.
The company’s mission, says Brandon Pearce, vice president of avionics and integrated software, is to simplify the process of designing and assembling rockets by 3D printing as much of the rocket as possible. Relativity’s first rocket, a satellite-launching vehicle called Terran 1, has far fewer parts than conventional rockets, according to the company.
Pearce says 3D printing a rocket can greatly reduce its mass compared with a conventionally manufactured rocket. And every gram of structure costs fuel to lift into space. “The more you can pull out of your structure, the more payload you can get to orbit,” he says.
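As a rough illustration of Pearce’s point, the Tsiolkovsky rocket equation quantifies how structural mass trades against propellant. The delta-v and exhaust-velocity figures below are generic textbook assumptions, not Relativity’s numbers:

```python
import math

def propellant_needed(dry_mass_kg, delta_v_ms, exhaust_velocity_ms):
    """Tsiolkovsky rocket equation: propellant mass required to give
    a vehicle of the given dry mass the target delta-v."""
    # m0/mf = exp(delta_v / v_e); propellant = m0 - mf
    mass_ratio = math.exp(delta_v_ms / exhaust_velocity_ms)
    return dry_mass_kg * (mass_ratio - 1)

# Illustrative numbers: ~9.4 km/s of delta-v to reach low Earth orbit,
# ~3.0 km/s effective exhaust velocity for a kerosene-class engine.
# Every 100 kg of structure removed saves roughly 2.2 tonnes of propellant:
propellant_saved = propellant_needed(100, 9400, 3000)
```

Because the mass ratio is exponential in delta-v, each gram of structure removed saves many grams of propellant, which is why the payoff of lighter printed structures compounds.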
Amazon Web Services has promised immediate service but FCC filings suggest the company has yet to obtain the long-term licenses necessary to operate
When Amazon Web Services (AWS) announced the general availability of its satellite Ground Station service last month, it claimed that operators switching to its facilities could control satellites and “download, process, store, analyze, and act upon satellite data quicker with substantial cost savings.” Amazon announced eight satellite customers and partners, saying that Ground Station would be available immediately in the United States, with service in other countries rolling out in the next 12 months.
However, a review of U.S. Federal Communications Commission (FCC) filings suggests that the full Ground Station experience is currently only available to a single company, with two more startups enjoying partial service. And while Amazon has earned a reputation for moving aggressively into new markets, the technology giant also made several regulatory missteps that suggest it’s still finding its way in the world of satellite operations.
Post syndicated from David Schneider. Original: https://spectrum.ieee.org/automaton/aerospace/aviation/dji-promises-to-add-airsense-to-its-new-drones
The company plans to include ADS-B receivers in next year’s drones weighing more than 250 grams
There’s been a lot of talk over the years about endowing drones with the ability to “sense and avoid” nearby aircraft—adding radar or optical sensors that would mimic what a pilot does when she peers out the window in all directions to be sure there are no other aircraft nearby on a collision course. But such sense-and-avoid capability is technically challenging to implement, and would probably never be practical on small consumer drones.
Thankfully, the threat of mid-air collision can be reduced using a strategy that is much easier to implement: Have aircraft track their positions with GPS and then broadcast that information to one another, so that everybody knows where everybody else is flying. Drones could at least receive those broadcasts and alert their operators or automatically adjust their flying to steer well clear of planes and helicopters.
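The position-broadcast strategy can be sketched with a toy proximity check. The function names and the 5-kilometer alert radius below are illustrative assumptions; real ADS-B messages also carry altitude, velocity, and identity, and real conflict detection projects trajectories rather than comparing single position fixes:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def should_alert(drone_fix, aircraft_fix, radius_km=5.0):
    """Alert the drone operator if a broadcasting aircraft is inside the radius."""
    return haversine_km(*drone_fix, *aircraft_fix) < radius_km
```

Even this crude check captures the core advantage of the broadcast approach: the drone only needs a cheap receiver and a little arithmetic, not radar or computer vision.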
Indeed, DJI, the world’s leading drone manufacturer, has already outfitted some of its drones with just this ability, and today it announced that by 2020 it would include that feature, which it calls “AirSense,” in all new drones it releases that weigh more than 250 grams.
Post syndicated from Mark Harris. Original: https://spectrum.ieee.org/tech-talk/aerospace/satellites/spacex-preps-selfdriving-satellites-for-launch
Elon Musk says SpaceX’s Starlink satellites will autonomously avoid hazards in orbit. Experts are not so sure
The 60 Starlink satellites that SpaceX is preparing to launch tomorrow, after two delays last week, are the first in a planned mega-constellation of nearly 12,000 satellites that Elon Musk hopes will bring affordable broadband Internet to everyone in the world.
However, once launched, they will occupy low Earth orbits (LEO) that are becoming increasingly crowded with other satellites and space debris.
Last Wednesday, SpaceX revealed that its Starlink satellites will be the first to autonomously avoid collisions with objects in orbit. Elon Musk told reporters, “They will use their thrusters to maneuver automatically around anything that [the U.S. military] is tracking.”
According to multiple experts, however, a single source of orbital data is not good enough to automate critical decisions about safety.
Post syndicated from Maria Gallucci. Original: https://spectrum.ieee.org/energywise/aerospace/military/a-qa-with-us-nuclear-weapons-expert
Vice Admiral Dave Kriete explains how the United States maintains its stockpile, and why the nation is developing new low-yield nuclear weapons
Earlier this year, at a sprawling complex in the Texas Panhandle, a new type of nuclear weapon began rolling off the production line and into the United States arsenal. The ballistic missile warheads are low-yield and relatively small, and they reflect a growing push by U.S. President Donald Trump’s administration to modernize the nation’s nuclear weapons program after decades of stagnation.
Vice Admiral Dave Kriete has played a key role in developing U.S. nuclear weapons policies. Since June 2018, he has served as deputy commander of U.S. Strategic Command, the military unit responsible for detecting and deterring nuclear, space, and cyber attacks against the United States and its allies. Kriete also helped craft the 2010 and 2018 Nuclear Posture Reviews—the Pentagon’s guiding document for U.S. nuclear policy, strategy, and capabilities.
Kriete, who is based in Omaha, Nebraska, spoke with Spectrum during a recent visit to New York City. He discussed plans for nuclear weapons modernization, and the challenges to achieving them. This conversation has been edited and condensed for clarity.
Air Race E organizer looks to field inaugural race by late 2020
Racing airplanes for sport has been a popular pastime around the world since the 1930s, but its essential technology hasn’t changed much. Although computer design and engineering have greatly enhanced race planes’ thrust, economy, and maneuverability, air racers still rely on petroleum-fueled propeller engines the way they did decades ago.
One entrepreneur is looking to fundamentally change the equation as soon as next year. He’s attempting to make air racing leapfrog past hybrid EVs and biofuels and go straight to all-electric propulsion.
“So, what we’re doing is taking the Formula One Air Racing rules,” says Jeff Zaltman, CEO of Dubai-headquartered Air Race Events, “and just changing the parts relevant to the propulsion system [so they run on electricity].” Zaltman adds that “We’re trying to change as little as possible as a starting point so the sport can transfer and migrate very easily.”
Air Race Events is currently scouting out a location to host the first ever event for Air Race E, the moniker given to the new, all-electric air racing division. (Expect an announcement, Zaltman says, by the end of the year or the beginning of 2020.)
Post syndicated from Tracy Staedter. Original: https://spectrum.ieee.org/tech-talk/aerospace/space-flight/3d-printers-could-build-future-homes-on-mars
AI SpaceFactory bests Penn State in a one-of-a-kind competition to see whose innovative building techniques could someday allow humans to live on Mars or the moon
In a cavernous arena outside of Peoria, Illinois, two industrial robots worked against the clock last weekend to finish their tasks. Each had been converted into a towering 3-D printer and programmed to build one-third-scale models of extraterrestrial habitats. For 30 hours over three days, generators chugged and hydraulics hissed as robotic arms moved in patterns, stacking long beads of thick “ink” into layers. Gradually, familiar forms began to emerge from the facility’s dirt floor: a gray, igloo-like dwelling and a tall, maroon egg.
Humanity’s future on Mars was taking shape.
The machines belonged to two teams, one from Penn State and the other from a New York-based design agency called AI SpaceFactory, that were competing in the final phase of NASA’s 3D-Printed Habitat Challenge. Each team had to develop an autonomous printer that operated with as little human intervention as possible, used materials or recyclables found on Mars or the moon, and passed the scrutiny of judges as well as rigorous structural testing.
The stakes were high. The winner would take home US $500,000. Lessons the teams learned would inform not only how humans might one day survive on Mars, but also how they might live more sustainably on Earth.
“It’s taking high risks that could potentially bring high paybacks,” said Monsi Roman, program director for NASA’s Centennial Challenges (of which the 3-D Habitats competition is one).
Technological enhancements to the Nobel Prize-winning detectors include ultra-efficient mirrors and “squeezed” laser light
At 4:18 a.m. Eastern time on 25 April, according to preliminary observations, a gravitational wave that had been traveling through deep space for many millions of years passed through the Earth. Like a patient spider sensitive to every jiggle in its web, a laser gravitational wave detector in the United States detected this subtle passing ripple in spacetime. Computer models of the event concluded the tiny wobbles were consistent with two neutron stars that co-orbited and then collided 500 million light-years away.
Next came scientific proof that when it rains it pours. The very next day at 11:22 am ET, the Laser Interferometer Gravitational-Wave Observatory (LIGO) picked up another gravitational wave signal. This time, computer models pointed to a potential first-ever observation of a black hole drawing in a neutron star and swallowing it whole. This second spacetime ripple, preliminary models suggest, crossed some 1.2 billion light years of intergalactic space before it arrived at Earth.
In both cases, LIGO could thank a recent series of enhancements to its detectors for its ability to sense such groundbreaking events crossing its threshold.
LIGO’s laser facilities, in Louisiana and Washington State, are separated by 3,002 kilometers in a straight line (3,030 km over Earth’s surface). Each LIGO facility splits a laser beam in two, sending the twinned streams of light down two perpendicular arms 4 km long. The light in the interferometer arms bounces back and forth between carefully calibrated mirrors and optics that then recombine the rays, producing a delicate interference pattern.
The pattern is so distinct that even the tiniest warps in spacetime that occur along the light rays’ travel paths—the very warps of spacetime that a passing gravitational wave would produce—will produce a noticeable change. One problem: The interferometer is also extremely sensitive to thermal noise in the mirrors and optics, electronics noise in the equipment, and even seismic noise from nearby vehicle traffic and earthquakes around the globe.
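To get a feel for the sensitivity involved, consider a back-of-the-envelope calculation (not LIGO’s actual analysis pipeline): a gravitational wave of strain h stretches an arm of length L by h × L. The strain value below is a typical order of magnitude for a detectable event:

```python
# A gravitational wave of strain h changes an arm of length L by dL = h * L.
arm_length_m = 4_000                   # each LIGO arm is 4 km long
strain = 1e-21                         # typical strain of a detectable event
arm_change_m = strain * arm_length_m   # 4e-18 m

# For scale, compare with the diameter of a proton (~1.7e-15 m):
proton_diameter_m = 1.7e-15
fraction_of_proton = arm_change_m / proton_diameter_m  # well under 1/100
```

A length change hundreds of times smaller than a proton is why thermal, electronic, and seismic noise dominate the engineering of the instrument.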
Noise was so significant an issue that, from 2006 to 2014, LIGO researchers observed no gravitational waves. However, on September 14, 2015, LIGO detected its first black hole collision—which netted three of LIGO’s chief investigators the 2017 Physics Nobel Prize.
Over the ensuing 394 days of operations between September 2015 and August 2017, LIGO observed 11 gravitational wave events. That averages out to one detection every 35 days.
Then, after the latest round of enhancements to its instruments, LIGO’s current run of observations began at the start of this month. In April alone, it has observed five likely gravitational wave events: three colliding black holes and now, most recently, a pair of colliding neutron stars and a possible black hole swallowing a neutron star.
This once-per-week frequency may indeed represent the new normal for LIGO. (Readers can bookmark this page to follow LIGO’s up-to-the-minute progress.)
Most promisingly, both of last week’s LIGO chirps involve at least one neutron star. Because neutron stars, unlike black holes, don’t swallow the light their collisions emit, such an impact offers up the promise of Earth being bathed in detectable gravitational and electromagnetic radiation. (Such dual-pronged observations constitute what’s called “multi-messenger astronomy.”)
“Neutron stars also emit light, so a lot of telescopes around the world chimed in to look for that and locate it in the sky in all different wavelengths of light,” says Sheila Dwyer, staff scientist at LIGO in Richland, Wash. “One of the big goals and motivations for LIGO was to make that possible—to see something with both gravitational waves and light.”
The first such multi-messenger observation made by LIGO began in August 2017 with a gravitational wave detection. Soon thereafter came a stunning 84 scientific papers, examining the electromagnetic radiation from the collision across the spectrum from gamma rays to radio waves. The science spawned by this event, known as GW170817, led to a precise measurement of the speed of gravitational waves (the speed of light, as Einstein predicted), a solution to the mystery of gamma-ray bursts, and an overnight updating of models of the cosmic source of heavy elements on the periodic table. (Studies of the collision’s gravitational and electromagnetic radiation concluded that a large fraction of the universe’s elements heavier than iron originate from neutron star collisions just like GW170817.)
When the S190425z and S190426c signals came in, telescopes around the world pointed to the regions of the sky that the gravitational wave observations suggested. As of press time, however, no companion source in the sky has yet been found for either.
Yet LIGO’s increased sensitivity, and the greater number of observations it promises, increases the likelihood that another multi-messenger watershed event like GW170817 is imminent.
Dwyer says LIGO’s latest incarnation uses high-efficiency mirrors that reflect light back with low mechanical or thermal energy transfer from the light ray to the mirror. This is especially significant because, on average, the laser light bounces back and forth along the interferometer arms 1000 times before recombining and forming the detector’s interference pattern.
“Right now we have a very low-absorption coating,” she says. “A very small absorption of that [laser light] can heat up the optics in a way that causes a distortion.”
If the LIGO team can design even lower-loss mirror coatings (which of course could have spinoff applications in photonics, communications and optics) they can increase the power of the laser light traveling through the interferometer arms from the current 200 kilowatts to a projected 3 megawatts.
And according to Daniel Sigg, a LIGO lead scientist in Richland, Wash., another enhancement involves “squeezing” the laser light so that the breadth of its amplitude is sharper than Heisenberg’s Uncertainty Principle would normally allow.
“We can’t measure both the phase and the amplitude or intensity of photons [with high precision] simultaneously,” Sigg says. “But that gives you a loophole. Because we’re only counting photons, we don’t really care about their phase and frequency.”
LIGO’s lasers therefore use “squeezed light” beams that accept higher noise in one domain (amplitude) in order to narrow the uncertainty in the other (phase or frequency). Between these two photon observables, Heisenberg is kept happy.
And that keeps LIGO’s ear tuned to more and more of the most energetic collisions in the universe—and allows it to turn up new science and potential spinoff technologies each time a black hole or neutron star goes bump in the night.
Post syndicated from Andrew Jones. Original: https://spectrum.ieee.org/aerospace/space-flight/private-space-launch-firms-in-china-race-to-orbit
Four companies set the pace with scheduled launches over the next two years
In the early years of rocketry at Caltech, there was no figure more influential than the Chinese cyberneticist Qian Xuesen. Then, in 1955, the United States repatriated him to China, suspecting him of being a spy.
Qian returned to China to become the father of the country’s space-launch vehicle and ballistic-missile programs and contributed greatly to the “Two Bombs, One Satellite” nuclear weapons and space project. And his efforts were not wasted—on 9 March of this year, the People’s Republic of China launched its 300th Long March rocket, which put China’s 506th spacecraft into orbit.
To do more exploration at a lower cost, the Chinese government has initiated policies aimed at establishing a private space industry like the one that exists in the United States, where companies such as SpaceX, Blue Origin, and Rocket Lab are bringing low-cost launch services to the space sector.
In 2014, China’s State Council issued a proposal called Document 60 that would open the nation’s launch and small satellite sectors to private capital. The government followed this announcement with helpful policies, including a national civil-military integration strategy to transfer crucial, complex, and sensitive technologies from state-owned space sector giants to startups approved by authorities.
Today, more than 10 private launch companies in China are working on launch vehicles or their components, and four are now prepared to make their first attempts to reach orbit.
Two Beijing-based companies, OneSpace and iSpace, are close to putting small satellites into orbit with their own rockets. The first OneSpace OS-M1 rocket failed around one minute after launch from Jiuquan Satellite Launch Center in the Gobi Desert on 27 March and, at press time, the iSpace Hyperbola-1 was expected to follow up with its own attempt at Jiuquan as early as April. Both launch vehicles are relatively small and use a premixed solid combination of fuel and oxidizer, which is cheap, reliable, and simple to make but less efficient than liquid fuel.
LandSpace Technology Corp. made the first private Chinese orbital launch attempt in October using a solid-propellant rocket. After successful burns and separations of the first and second stages, a problem with the rocket’s third stage saw the Zhuque-1 rocket and its small satellite payload fall from an apogee of 337 kilometers into the Indian Ocean. It reached a top speed of 6.3 kilometers per second, just shy of the 7.9 km/s required to achieve orbit.
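The 7.9 km/s figure follows from basic orbital mechanics. As an illustrative check (the constants are standard values, not from the article), the circular orbital velocity is v = sqrt(GM/r):

```python
import math

# Circular orbital velocity v = sqrt(GM / r), to see why Zhuque-1's
# 6.3 km/s top speed fell short of orbit.
GM_EARTH = 3.986e14   # m^3/s^2, Earth's standard gravitational parameter
R_EARTH = 6.371e6     # m, mean Earth radius

def circular_orbit_speed_kms(altitude_m):
    return math.sqrt(GM_EARTH / (R_EARTH + altitude_m)) / 1000

# At Earth's surface this gives the ~7.9 km/s quoted in the text;
# even at Zhuque-1's 337 km apogee, a circular orbit needs ~7.7 km/s.
surface_speed = circular_orbit_speed_kms(0)
apogee_speed = circular_orbit_speed_kms(337_000)
```

Reaching apogee altitude without orbital speed means the payload follows a ballistic arc back down, which is exactly what happened to Zhuque-1.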
The company has moved on to develop a much larger and more capable two-stage launch vehicle powered by liquid methane and liquid oxygen. It hopes to carry out the maiden flight of the Zhuque-2 in 2020 and plans to eventually make the rocket reusable, though doing so will reduce lift capability.
Meanwhile, LinkSpace Aerospace Technology Group, founded in 2014, has set its sights on building an orbital launch vehicle capable of vertical takeoff and landing, as demonstrated by SpaceX’s Falcon 9. The company wants to have a maiden flight of the liquid-propellant launcher NewLine-1 in 2021, after testing its NewLine Baby suborbital rocket throughout this year.
Lan Tianyi, founder of Ultimate Blue Nebula Co., a space consultancy in Beijing, says China’s launch companies each have different goals and capabilities. Some firms are focusing on developing launchers powered by solid fuel, while others opt for liquid propellants, which may allow the rockets to be reused. Some are also exploring creative options to provide space tourism services. “The whole launch-vehicle ecosystem is getting more and more complete,” he notes.
While Chinese firms race to reach orbit and score commercial contracts to launch constellations of remote-sensing and communications satellites, these companies will also help China drive down launch costs, and make more missions possible with fewer resources.
“If the entire world is moving in the direction of lower-cost, reusable, commercially driven launch systems, anyone who does not keep up with this development may find themselves out of the game,” says John Horack, a professor of mechanical and aerospace engineering at Ohio State University.
That these companies have come so far so quickly is an indicator of the level of state support for aerospace in China, and a sign that this mature industry is full of expertise. But the question of whether or not private launch firms are truly ready for takeoff can be answered only on the launchpad.
This article appears in the May 2019 print issue as “Private Rockets Ready for Liftoff in China.”
Post syndicated from Andrew Driesman, Jack Ercol, Edward Gaddy, and Andrew Gerger. Original: https://spectrum.ieee.org/aerospace/robotic-exploration/how-the-parker-solar-probe-survives-close-encounters-with-the-sun
An elaborate cooling system is designed to protect the space probe through sizzling flybys
Over the past six decades, 12 people have walked on the moon, spacecraft have visited every planet from Mercury to Neptune, and four rovers have racked up more than 60 kilometers traveling on the surface of Mars. And yet, despite the billions of dollars spent on the world’s civilian space programs, never has a probe journeyed very close to the sun. The nearest approach, by the Helios B probe in 1976, came no closer than 43 million km.
Why is that? There’s been no lack of interest in the sun—quite the opposite. Of all extraterrestrial bodies, the sun has the largest influence on us: It controls the radiation doses that astronauts experience and also affects the electronics in the myriad satellites on which we increasingly rely. Solar storms can even disrupt electric power grids, as famously happened in 1989, when one such storm blacked out the entire province of Quebec and caused ripple effects on electric grids in the United States.
So there’s no shortage of practical reasons to study the sun. And it still holds many deep scientific mysteries. Unlike other bodies in the solar system, the sun has an upper atmosphere that is more than two orders of magnitude hotter than its surface. What causes that phenomenon, known as coronal heating? And what mechanisms create the solar wind, accelerating parts of the sun’s atmosphere to velocities ranging from 300 to 700 kilometers per second? These questions, among others, have baffled scientists for decades.
Indeed, the case for studying the sun has never been hard to make. The limiting factor since the dawn of the space age has been the ability of a probe to endure the hellishly hostile near-sun environment. But this obstacle has recently given way to some new technologies, which we and others at the Johns Hopkins University Applied Physics Laboratory have built into a spacecraft called the Parker Solar Probe. That probe (named after astrophysicist Eugene Parker) was launched last August and has just recently completed its second close pass by the sun.
At the probe’s closest approach, about 6 million km from the sun’s surface, solar irradiance is almost 500 times as much as it is at Earth’s orbital distance—a whopping 650 kilowatts per square meter. Along with that immense intensity come extreme ultraviolet rays, which degrade materials rapidly. But ultraviolet radiation is not the only challenge. Sun-grazing comets are torn apart by the sun’s intense gravity, leaving the near-sun environment filled with dust that speeds along at upwards of 300 km/s, an order of magnitude faster than such particles travel in the vicinity of Earth.
For the Parker Solar Probe to complete its mission, the spacecraft will need to survive this barrage of dust and intense radiation for at least seven years while making scientific measurements autonomously. The technologies required to accomplish that feat simply did not exist until the late 1990s, when high-temperature carbon-composite foams became available. At that point it became possible to fabricate a heat shield that was lightweight and sufficiently stiff, one that could provide the needed shade for the rest of the spacecraft. Before that, the refractory materials needed to construct an adequate heat shield would have been too heavy to fly.
It took some years for the Parker probe team to adapt these materials for use on spacecraft. And the design of a mission to the sun went through a decade of revisions before NASA settled on the current strategy. The probe won’t get quite as close to the sun as some scientists had wanted, but this compromise simplified the construction of the spacecraft and allows it to study the sun during multiple close passes, rather than just one or two.
Although the heat shield is absolutely critical to the mission’s success, the technology behind it has been well covered already, so here we will instead describe the development of the probe’s equally important solar array and the system used to keep it cool. This cooling system is found on no other spacecraft, meaning that to design and build it our Johns Hopkins engineering team had to, well, boldly go where no one had gone before.
Every spacecraft needs a source of electrical power. Some probes sent to the far reaches of the solar system use radioisotope thermoelectric generators to produce electricity. The Apollo command module used fuel cells. But more typically, electrical power comes from photovoltaic panels.
Because the Parker Solar Probe would be operating close to the sun, you might think the engineers designing the spacecraft’s power-generating solar panels had it easy. Not so. Our team had to figure out how to manage as much as 13 watts of heating in the panels for every watt of electrical power generated. This is almost 500 times as much heat as a satellite circling Earth experiences, so the standard method of cooling panels passively doesn’t apply.
The probe’s solar array consists of two wings. Each is about two-thirds of a meter wide and a little more than a meter long. The wings house solar cells that were designed to work well at high irradiance levels. But sufficient electrical output was not the issue—the challenge was keeping them cool.
In large part, that’s done by controlling the geometry of the probe’s solar panels. As the spacecraft nears the sun, the wings with these panels are rotated back toward the body of the spacecraft so that the sun’s rays impinge on them at a shallow angle. This decreases the amount of sunlight falling on the panels and puts more of the wings into the shadow or partial shadow of the 11-centimeter-thick heat shield mounted on the sun-facing side of the spacecraft.
To make this strategy work better, we designed each wing to have two distinct sections. Most of the area of a wing, the primary section, serves as the source of power most of the time. But when the spacecraft swoops close to the sun, the wings are angled so that their primary sections are entirely in the shadow of the heat shield. Only the small secondary sections at the tip of each wing are exposed to the sun’s rays.
Early on, we decided to mount the secondary sections so that they would be a few degrees closer to facing the sun than the primary sections are. Our objective was to ensure that the angle of the sunlight striking the surface was never less than 10 degrees. If this angle were to become any more acute, small variations in that angle would cause large changes in power output, which could be difficult to control.
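The 10-degree floor makes sense once you note that, at grazing incidence, collected power scales with the sine of the sun angle. This toy calculation (my own illustration, not the team’s model) shows how fast the relative power swing grows as the angle shrinks:

```python
import math

# Collected power at grazing incidence scales as sin(angle), so a small
# pointing error causes a larger *relative* power change at shallower angles.
def relative_power_change(angle_deg, delta_deg=1.0):
    """Fractional power drop from a pointing error of delta_deg."""
    p0 = math.sin(math.radians(angle_deg))
    p1 = math.sin(math.radians(angle_deg - delta_deg))
    return (p0 - p1) / p0

at_ten_deg = relative_power_change(10.0)   # ~10% per degree: manageable
at_three_deg = relative_power_change(3.0)  # ~33% per degree: hard to control
```

Below about 10 degrees, a one-degree wobble swings the array output by a third or more, which is the control problem the designers were avoiding.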
We modeled performance very carefully as we were designing this solar array. That modeling was especially tricky for this mission because we couldn’t just treat the sun as a distant point source of radiation: We had to take into account that when the probe approached the sun, the partial shading from the heat shield would cause the solar cells to receive only rays coming from the outer fringes of the solar disk, what astronomers call the limb.
The limb is redder and dimmer than the sun as a whole. That’s because the sunlight coming from the limb has to pass through more of the sun’s lower atmosphere, so the only light that escapes comes from higher up, where temperatures are lower. To be able to predict the electrical output of the solar panels, we had to calculate how the color of the light falling on them would change with the probe’s varying distances from the sun.
Our biggest worry while we were designing the probe’s photovoltaic array was not that it wouldn’t work correctly after launch, but rather that it would degrade over time faster than expected. That’s what had happened to some previous space probes with trajectories that brought them close to the sun.
For instance, the solar array of the MESSENGER mission to Mercury (which was also designed, built, and operated by the Johns Hopkins Applied Physics Lab) was predicted to degrade by 10 percent over the course of its 11-year mission. In fact, its output diminished by 50 percent, for reasons nobody could quite understand.
MESSENGER was nevertheless successful because it stayed relatively close to the sun, where sunlight was intense enough for the craft to work properly even with an array that wasn’t performing as desired. This wouldn’t be the case, though, for the Parker Solar Probe. Its highly elliptical orbit periodically takes it outside the orbit of Venus. At those distances from the sun, it couldn’t function if its solar panels started providing much less energy than expected.
When the Parker probe was first conceived, we decided to design its solar array so it would still be able to do the job even if it suffered as much as MESSENGER’s did. But as the probe’s design matured, we and others at the Applied Physics Lab decided to take a different tack. We resolved to figure out what had caused the anomalous degradation of the solar panels on earlier spacecraft so that we could avoid whatever the problem was.
Initially, we suspected that two issues might be at play. One possibility could be outgassing from the adhesives used in the construction of these panels. Such outgassing would, we reasoned, deposit a film of adhesive material on the top of the cells, which would then turn brown after exposure to ultraviolet light. All solar arrays outgas to some extent, but the effect is worsened by high heat and radiation.
Another possibility was that the transparent adhesive used to attach the glass covers to the solar cells had turned brown, again because of exposure to ultraviolet light. Through extensive testing, we discovered that this process indeed accounted for most of the degradation.
We soon figured out that we could reduce this darkening by driving out the more volatile components in the cover-glass adhesive. Doing so involved heating the array under vacuum while exposing it to intense ultraviolet light, provided by light-emitting diodes. Some degradation of the solar panels would still occur, just as it does for other sorts of electronic components when they are “burned in,” but this would be a small price to pay for avoiding dangerous amounts of darkening later during the mission.
Of course, we needed to be sure our strategy would actually work. To test the theory, we had to place the arrays in an environment that resembled what they would experience close to the sun. And such conditions are not so easy to come by.
To meet that objective, we used 80 sets of eight mirrors. Each set could be individually rotated to reflect sunlight on a fixed target, even as the sun moves. Such devices, called heliostats, are sometimes used to provide daylight in places that would otherwise remain in the shadows. Large numbers of them are employed at solar-thermal energy plants to focus sunlight on a central tower. We set these heliostats up at a facility in New Mexico that was built specifically for this testing, where the solar panels could be held under vacuum while they were exposed to concentrated sunlight.
Such a use of heliostats, to concentrate sunlight on a spacecraft’s solar array, was clearly an odd one. But it worked marvelously. By the time the probe was assembled, its photovoltaic panels had experienced conditions and exposure durations never before used to test a full-size solar array destined for space.
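Why so much concentrated sunlight is needed follows directly from the inverse-square law. As a rough sketch of the target intensity for such a test, the snippet below scales the 1 AU solar constant down to the probe's eventual perihelion distance; the 0.046 AU figure is a widely reported value for the mission's final orbits, assumed here for illustration rather than taken from this article.

```python
# Inverse-square scaling of solar irradiance: a back-of-the-envelope
# look at what "conditions close to the sun" means for a test rig.
# The 0.046 AU perihelion distance is an assumed illustrative figure.

SOLAR_CONSTANT = 1361.0  # W/m^2 at 1 AU


def irradiance(distance_au: float) -> float:
    """Solar irradiance at a given distance, by the inverse-square law."""
    return SOLAR_CONSTANT / distance_au ** 2


# Intensity at perihelion, expressed in multiples of the 1 AU value
suns_at_perihelion = irradiance(0.046) / SOLAR_CONSTANT
```

Under these assumptions, the array must ultimately withstand illumination several hundred times as intense as sunlight at Earth, which is why ordinary outdoor exposure was nowhere near sufficient for qualification testing.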
With all the precautions we took and all the testing we carried out, we were fully confident that the panels wouldn’t suffer from anomalous darkening. But we still needed to be sure that the cells in them would remain sufficiently cool.
A typical spacecraft solar array manages its temperature passively: The absorbed light heats the solar cells, which are usually mounted on one side of a graphite-epoxy composite panel with a honeycomb core. Heat conducts through the panel and radiates out into space from both the front and back sides. A typical solar panel in space operates at a temperature between –70 and 100 °C—low enough not to require any special materials or coatings.
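That passive balance can be sketched with a simple energy-balance equation: absorbed sunlight on one face equals thermal radiation from both faces. The absorptance and emissivity values below are illustrative assumptions, not figures from the article, but the resulting temperature lands comfortably inside the –70 to 100 °C range cited above.

```python
# Rough equilibrium temperature of a passively cooled solar panel at
# 1 AU, radiating from both faces. ALPHA and EPS are assumed values.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
S_1AU = 1361.0     # solar irradiance at 1 AU, W/m^2
ALPHA = 0.75       # effective absorptance, net of electrical output (assumed)
EPS = 0.85         # hemispherical emissivity of front and back faces (assumed)

# Per unit area: ALPHA * S_1AU = 2 * EPS * SIGMA * T^4
t_kelvin = (ALPHA * S_1AU / (2.0 * EPS * SIGMA)) ** 0.25
t_celsius = t_kelvin - 273.15
```

The same balance makes clear why passive cooling fails close to the sun: with hundreds of times the irradiance, the equilibrium temperature climbs far past what solar cells can survive.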
The photovoltaic panels on the Parker Solar Probe are different. Their solar cells must be actively cooled. For that, the cells were mounted to sheets of titanium containing a large number of narrow channels through which cooling water flows. It’s not unlike the system used to keep the engine in your car from overheating. The cooling system on the Parker probe doesn’t use antifreeze, though. It uses ordinary deionized water, just as you might use in a steam iron. And just like the cooling system in your car, the system is pressurized to prevent the cooling fluid from boiling at high temperatures.
At the probe’s closest approach to the sun, the cooling system must be able to handle about 5,900 watts of heat load on the two solar wings. This heat is shed using four separate radiators, each about one meter square, located in the shadow of the probe’s heat shield. Those radiators, along with the coolant pumps and other critical components, all have on-board backups that can be activated should a primary component fail.
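The stated heat load lets us estimate how much water the pumps must circulate, using the standard relation between heat, flow rate, and temperature rise. Only the 5,900 W figure comes from the article; the 20 K coolant temperature rise is an assumption chosen purely for illustration.

```python
# Water flow needed to carry the stated heat load from the solar wings
# to the radiators: mass_flow = Q / (c_p * delta_T).
# The 20 K temperature rise is an assumed value, not from the article.

Q_WATTS = 5900.0    # heat load at closest approach (from the article)
CP_WATER = 4186.0   # specific heat of liquid water, J/(kg K)
DELTA_T = 20.0      # assumed coolant temperature rise across the wings, K

mass_flow = Q_WATTS / (CP_WATER * DELTA_T)  # kg/s
liters_per_min = mass_flow * 60.0           # water is ~1 kg per liter
```

Under these assumptions the required flow works out to only a few liters per minute, a reminder that water's unusually high specific heat is a large part of why it was chosen as the coolant.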
Although we focused most of our energies on how to combat the danger of overheating, there would be equally dire consequences if things ever got too cold. Because the spacecraft would not be exposed to the warming rays of the sun for almost an hour after launch, and would be in the shadow of Venus for a brief time later, we had to take care that the temperature of the water in the cooling system would not drop below freezing. As anyone who has experienced a burst pipe at home during a chilly winter night can tell you, water expands when frozen. So an ice plug in the probe’s cooling system could cause a rupture, which would end the mission. We clearly needed to avoid any chance of that.
At launch, most of the cooling system was evacuated, with the water stored in a reservoir that we call the accumulator. That water was heated before launch to about 50 °C—enough to prevent it from freezing should it encounter any cold spots in the spacecraft’s plumbing when the cooling system was initially flooded.
Before activation, the solar arrays were deployed and warmed by the sun. At the same time, two of the four radiators were similarly heated by turning the spacecraft so that they weren’t in the shadow of the heat shield. After these components became sufficiently warm, valves opened to allow the preheated water to flow from the accumulator. After half the cooling system was loaded with water, spacecraft operations commenced. Some weeks later, after the probe was closer to the sun, the other two radiators were warmed in the same manner and filled with coolant. This cooling system ensures that the highest temperature that any photovoltaic cell will ever reach is 120 °C.
The Parker probe, which has been in space for almost nine months now, has really just embarked on its multiyear mission to explore the sun’s mysterious corona, a part of the solar system previously considered too hostile for spacecraft to explore directly. That the probe is functioning well and deepening scientific interest in the workings of the sun is a tribute to the many men and women who have together contributed some 10 million hours to the immense engineering effort that was needed to make the mission a success.
Sure, the measurements taken during the first two of the probe’s 24 planned passes close to the sun are generating as many questions as they are answering. But that’s to be expected—it’s a hallmark of all the best scientific endeavors.
This article appears in the May 2019 print issue as “Journey to the Center of the Solar System.”