The first targets will be the sun and Jupiter, which are expected to have strong emissions at low frequencies. But the team also hopes to pick up much weaker signals from the ‘Cosmic Dawn’—when the first stars lit up around 12 billion years ago—and even ultra-faint signals from the preceding Cosmic Dark Ages. Detections would give unprecedented insights into these formative periods of the universe.
New supercomputer simulations have successfully modeled a mysterious process believed to produce some of the hottest and most dangerous solar flares—flares that can disrupt satellites and telecommunications networks, cause power outages, and otherwise wreak havoc on the grid. And what researchers have learned may also help physicists design more efficient nuclear fusion reactors.
In the past, solar physicists have had to get creative when trying to understand and predict flares and solar storms. It’s difficult, to put it mildly, to simulate the surface of the sun in a lab. Doing so would involve creating and then containing an extended region of dense plasma with extremely high temperatures (between thousands of degrees and one million degrees Celsius) as well as strong magnetic fields (of up to 100 Tesla).
However, a team of researchers based in the United States and France developed a supercomputer simulation (originally run on Oak Ridge National Lab’s recently retired Titan machine) that successfully modeled a key part of a mysterious process that produces solar flares. The group presented its results last month at the annual meeting of the American Physical Society’s (APS) Division of Plasma Physics, in Fort Lauderdale, Fla.
On Earth, no natural phenomenon is quite as dependable as gravity. Even a child playing on a beach knows that the sand she is excavating will just sit there in her trowel, pulled downward by this powerful force.
But on small, low-gravity celestial bodies like asteroids, the rules of gravity that we know so well no longer apply—at least, not in the ways that we’re used to. And that’s a problem for the scientists who collect samples of regolith, the dusty or pebbly material found on the surfaces of these bodies.
Asteroids are remnants of the early solar system: essentially chunks of material that did not become planets. Regolith samples from asteroids and other small celestial bodies are critical for researchers to better understand how the solar system began, and how it has evolved since.
In the absence of strong gravitational influences, even electrostatic forces that would be considered weak to negligible on Earth may hold outsized importance in space. Hartzell, a participating scientist on the OSIRIS-REx mission currently orbiting the asteroid Bennu, studies these electrostatic forces. A better understanding of electrostatic forces on particles improves understanding of the natural evolution of asteroids and helps inform the design of sampling methods and instruments on future asteroid exploration missions.
Electrostatic forces arise when charged particles interact: opposite charges attract, and like charges repel. On an asteroid, these forces cause regolith particles to behave curiously in three ways.
First, they cause dusty particles that rub against each other to stick together, or clump. Second, dust exposed to the flow of charged particles from solar wind plasma can detach, or loft away from the surface, drawn to opposite charges in the solar wind flowing past. Third, particles can levitate after being kicked up by a small meteorite impact or blasted by a visiting spacecraft, because the electrostatic forces on those particles cancel out any gravitational pull.
And because gravity on asteroids is so weak, it may not be just tiny dust particles that behave unusually, but larger grains as well.
The catch, however, is that none of these behaviors has been directly observed in space, nor have the forces causing them been measured there. Though Hartzell’s work has demonstrated these forces in laboratory experiments, many questions remain: what they look like on an asteroid, to what degree electrostatic forces affect dust behavior, how strong those forces are, and how the presence of a spacecraft close to an asteroid’s surface might change the environment.
Whether or not lofting occurs depends on the strength of the forces causing particles to stick together and, by extension, to other objects, such as spacecraft surfaces and optics. Hartzell is developing an experimental method to measure this cohesion.
How the method will work: an electrically charged plate is placed at a set distance above a surface with dusty particles, in an area of known gravity. By controlling the height and electrical charge of the plate, the electrostatic forces on the dust grains can be controlled. A camera is used to observe the size of dust grains and when they begin to be drawn to the plate. By controlling the electrostatic force and knowing the gravity, the unknown, cohesive force can be mathematically derived.
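The force balance behind this measurement can be sketched in a few lines. All numbers below are hypothetical illustrations (the article gives no grain sizes, charges, or field strengths): at the instant a grain detaches, the upward electrostatic pull equals the grain’s weight plus the unknown cohesive force.

```python
import math

def cohesive_force(q_grain, e_field, radius, density, g):
    """Cohesive force inferred at the moment a grain detaches:
    electrostatic pull upward balances weight plus cohesion."""
    mass = density * (4.0 / 3.0) * math.pi * radius**3
    f_electrostatic = q_grain * e_field  # upward pull from the charged plate
    f_gravity = mass * g                 # downward weight
    return f_electrostatic - f_gravity   # whatever is left must be cohesion

# Hypothetical 50-micron basalt-like grain under Earth gravity:
f_coh = cohesive_force(q_grain=5e-14, e_field=1e6, radius=50e-6,
                       density=3000.0, g=9.81)
print(f"Inferred cohesive force: {f_coh:.2e} N")
```

In a real run, the plate charge and height would be swept until the camera records detachment, and only then would the balance be solved for the cohesive term.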
Hartzell’s method could potentially be used for actual sampling, as well. She suggests that charged plates could be used to attract dust samples, then drop them into sample collectors or directly onto analysis instruments by removing the plate’s charge.
More likely, however, is that the method might be employed to better characterize the surface of a site intended for longer-term use by, for example, an asteroid mining mission. Early planning stages would involve understanding the chemistry and behavior of any dusty surface, including how its cohesive properties may affect the function of tools like drill bits.
Harnessing electrostatic forces to control dusty particles might also mean cleaner, better functioning solar panels on Mars. An electrostatic dust shield could use coils embedded in solar arrays to “bounce” dust grains off the surface via alternating electrical charges.
For now, Hartzell’s work involves a lot of creative lab experimentation and lab-based modeling, all with one goal in mind.
“We want to keep the spacecraft safe during operations,” she says.
A newly developed graphene-based telescope detector may usher in a new wave of astronomical observations in a band of radiation between microwaves and infrared light. Applications including medical imaging, remote sensing, and manufacturing could ultimately be beneficiaries of this detector, too.
Microwave and radio wave radiation oscillate at frequencies measured in gigahertz or megahertz—slow enough to be manipulated and electronically processed in conventional circuits and computer systems. Light in the infrared range (with frequencies beginning around 20 THz) can be manipulated by traditional optics and imaged by conventional CCDs.
But the no-man’s-land between microwaves and infrared (known as the “terahertz gap”) has been a challenging, though not impossible, band in which astronomers can observe the universe.
To observe terahertz waves from astronomical sources first requires getting up above the atmosphere or at least up to altitudes where the Earth’s atmosphere hasn’t completely quenched the signal. The state-of-the-art in THz astronomy today is conducted with superconducting detectors, says Samuel Lara-Avila, associate research professor in the Department of Microtechnology and Nanoscience at Chalmers University of Technology in Sweden.
Observatories like the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile and the South Pole Telescope might use such detectors combined with local oscillators pumping out reference signals at frequencies very close to the target signal the astronomers are trying to detect. If a telescope is looking for radiation at 1 THz, adding a local oscillator at 1.001 THz would produce a combined signal with beat frequencies in the 1 GHz (0.001 THz) range, for instance. And gigahertz signals represent a stream of data that won’t overwhelm a computer’s ability to track it.
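The down-conversion arithmetic in that example can be sketched as follows; the 1-THz and 1.001-THz values are the illustrative figures from the text, not any particular instrument’s settings:

```python
# Heterodyne down-conversion: mixing the sky signal with a nearby local
# oscillator (LO) produces a beat note at the difference frequency, slow
# enough for conventional electronics to digitize.

def beat_frequency_hz(f_signal_hz: float, f_lo_hz: float) -> float:
    """Intermediate (beat) frequency produced by an ideal mixer."""
    return abs(f_signal_hz - f_lo_hz)

THZ = 1e12
f_if = beat_frequency_hz(1.000 * THZ, 1.001 * THZ)
print(f"Intermediate frequency: {f_if / 1e9:.3f} GHz")
```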
Sounds simple. But here’s the rub: According to Lara-Avila, superconducting detectors require comparatively powerful local oscillators—ones that operate in the neighborhood of a microwatt of power. (That may not sound like much, but the detectors operate at cryogenic temperatures. So a little bit of local oscillator power goes a long way.)
By contrast, the new graphene detector would require less than a nanowatt of local oscillator power, or three orders of magnitude less. The upshot: A superconducting detector in this scenario might generate a single pixel of resolution on the sky, whereas the new graphene technology could enable detectors with as many as 1000 pixels.
“It’s possible to dream about making [THz] detector arrays,” Lara-Avila says.
Probably the most famous project in THz or near-THz astronomy is the Event Horizon Telescope, which earlier this month won the Breakthrough Prize in Fundamental Physics. Some of the frequencies it operated at, according to Wikipedia, were between 0.23 and 0.45 THz.
The graphene detector pioneered by Lara-Avila and colleagues in Sweden, Finland, and the UK is described in a recent issue of the journal Nature Astronomy.
The group doped its graphene by adding acceptor molecules (like good old 2,3,5,6-tetrafluoro-7,7,8,8-tetracyanoquinodimethane, or F4-TCNQ) atop the pure carbon sheets. Tuned just right, these dopants can bring the ensemble to a delicate quantum balance state (the so-called “Dirac point”) in which the system is highly sensitive to a broad range of electromagnetic frequencies from 0.09 to 0.7 THz and, they speculate, potentially higher frequencies still.
All of which adds up to a potential THz detector that, the researchers say, could represent a new standard for THz astronomy. Yet astronomical applications often represent just the first wave of a technology that labs and companies spin off for many more down-to-earth uses. The CCD detector powering the cameras on your cellphone originated in no small part from the work of engineers in the 1970s and ’80s developing sensitive CCDs whose first applications were in astronomy.
Terahertz technologies for medical applications, remote sensing, and manufacturing are already works in progress. This latest graphene detector could be a next-gen development in these or other as yet unanticipated applications.
At this point, says Lara-Avila, his group’s graphene-based detector version 1.0 is still a sensitive and refined piece of kit. It won’t directly beget THz technology that would find its way into consumers’ pockets. More likely, he says, is that this detector could be lofted into space for next-generation THz orbital telescopes.
“It’s like the saying that you shouldn’t shoot a mosquito with a cannon,” Lara-Avila says. “In this case, the graphene detector is a cannon. We need a range and a target for that.”
Astronomers need a quiet place to observe the cosmic dawn
The far side of the moon offers a unique opportunity to radio astronomers: an observatory built there could peer into the early universe, shielded from electromagnetic interference from Earth. Illustration: Peter Sanitra
Early designs for far-side radio observatories envisioned large parabolic antennas nestled in craters, much as Earth’s Arecibo telescope is nestled into a sinkhole in Puerto Rico.
But modern plans for moon-based astronomy focus on the low-frequency signals from the cosmic dawn, when the first stars and galaxies formed. These frequencies, which are below 100 megahertz, can best be detected by a large array of antennas.
In one construction approach, dipole antennas would be attached to spools of flexible film. Then a teleoperated rover would unroll the spools on the lunar surface.
The lunar regolith doesn’t conduct electricity, so antennas won’t short out on the ground, as they would on Earth. But scientists still need to study how the regolith might otherwise affect radio waves.
The central electronics box would sift and compress signals from the antennas before transmitting data back to Earth via a relay satellite. The equipment will have to withstand extremes of heat and cold during the moon’s month-long day/night cycle.
Because the relay satellite’s radio uses much higher frequencies than 100 MHz, it won’t interfere with the observatory.
Thousands of dipole antennas would be attached to the film, along with the wires carrying the signals they pick up. (An alternate approach would deposit many individual pizza-box-size antennas across the surface.)
For decades, astronomers have gazed up at the moon and dreamed about what they would do with its most unusual real estate. Because the moon is gravitationally locked to our planet, the same side of the moon always faces us. That means the lunar far side is the one place in the solar system where you can never see Earth—or, from a radio astronomer’s point of view, the one place where you can’t hear Earth. It may therefore be the ideal location for a radio telescope, as the receiver would be shielded by the bulk of the moon from both human-made electromagnetic noise and emissions from natural occurrences like Earth’s auroras.
Early plans for far-side radio observatories included telescopes that would use a wide range of frequencies and study many different phenomena. But as the years rolled by, ground- and satellite-based telescopes improved, and the scientific rationale for such lunar observatories weakened. With one exception: A far-side telescope would still be best for observing phenomena that can be detected only at low frequencies, which in the radio astronomy game means below 100 megahertz. Existing telescopes run into trouble below that threshold, when Earth’s ionosphere, radio interference, and ground effects begin to play havoc with observations; by 30 MHz, ground-based observations are precluded.
In recent years, scientific interest in those low frequencies has exploded. Understanding the very early universe could be the “killer app” for a far-side radio observatory, says Jack Burns, an astrophysics professor at the University of Colorado and the director of the NASA-funded Network for Exploration and Space Science. After the initial glow of the big bang faded, no new light came into the universe until the first stars formed. Studying this “cosmic dawn [PDF],” when the first stars, galaxies, and black holes formed, means looking at frequencies between 10 and 50 MHz, Burns says; this is where signature emissions from hydrogen are to be found, redshifted to low frequencies by the expansion of the universe.
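The 10-to-50-MHz window follows from simple redshift arithmetic. The sketch below assumes the standard rest frequency of the neutral-hydrogen 21-centimeter line, about 1420.4 MHz, a textbook value the article does not state explicitly:

```python
# Where in cosmic history do 10-50 MHz observations point? The neutral-
# hydrogen 21-cm line is emitted at ~1420.4 MHz; cosmic expansion stretches
# its frequency down by a factor of (1 + z) before it reaches us.

HYDROGEN_LINE_MHZ = 1420.4  # rest frequency of the 21-cm hydrogen line

def redshift_for_observed(freq_observed_mhz: float) -> float:
    """Redshift z at which the 21-cm line lands on the given frequency."""
    return HYDROGEN_LINE_MHZ / freq_observed_mhz - 1.0

for f_mhz in (50.0, 10.0):
    print(f"{f_mhz:4.0f} MHz -> z ~ {redshift_for_observed(f_mhz):.0f}")
```

Observed frequencies of 50 and 10 MHz correspond to redshifts of roughly 27 and 141, spanning the cosmic dawn back into the dark ages.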
With preliminary funding from NASA, Burns is developing a satellite mission that will orbit the moon and observe the early universe while it travels across the far side. But to take the next step scientifically requires a far larger array with thousands of antennas. That’s not practical in orbit, says Burns, but it is feasible on the far side. “The lunar surface is stable,” he says. “You just put these things down. They stay where they need to be.”
This article appears in the July 2019 print issue as “The View From the Far Side.”
Technological enhancements to the Nobel Prize-winning detectors include ultra-efficient mirrors and “squeezed” laser light
At 4:18 a.m. Eastern time on 25 April, according to preliminary observations, a gravitational wave that had been traveling through deep space for many millions of years passed through the Earth. Like a patient spider sensitive to every jiggle in its web, a laser gravitational wave detector in the United States detected this subtle passing ripple in spacetime. Computer models of the event concluded the tiny wobbles were consistent with two neutron stars that co-orbited and then collided 500 million light-years away.
Next came scientific proof that when it rains it pours. The very next day at 11:22 am ET, the Laser Interferometer Gravitational-Wave Observatory (LIGO) picked up another gravitational wave signal. This time, computer models pointed to a potential first-ever observation of a black hole drawing in a neutron star and swallowing it whole. This second spacetime ripple, preliminary models suggest, crossed some 1.2 billion light years of intergalactic space before it arrived at Earth.
In both cases, LIGO could thank a recent series of enhancements to its detectors for its ability to sense such groundbreaking events crossing its threshold.
LIGO’s laser facilities, in Louisiana and Washington State, are separated by 3,002 kilometers (3,030 km over the Earth’s surface). Each LIGO facility splits a laser beam in two, sending the twinned streams of light down two perpendicular arms 4 km long. The light in the interferometer arms bounces back and forth between carefully calibrated mirrors and optics that then recombine the rays, producing a delicate interference pattern.
The pattern is so distinct that even the tiniest warps in spacetime that occur along the light rays’ travel paths—the very warps of spacetime that a passing gravitational wave would produce—will produce a noticeable change. One problem: The interferometer is also extremely sensitive to thermal noise in the mirrors and optics, electronics noise in the equipment, and even seismic noise from nearby vehicle traffic and earthquakes around the globe.
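To give a sense of scale, here is the back-of-the-envelope strain arithmetic; the 1e-21 strain is a widely cited order of magnitude for a strong event, not a figure from this article:

```python
# Rough scale of what the interferometer measures: strain h = delta_L / L.
ARM_LENGTH_M = 4000.0        # each LIGO arm is 4 km long
STRAIN = 1e-21               # order of magnitude of a strong passing wave
delta_l = STRAIN * ARM_LENGTH_M   # arm-length change LIGO must resolve
PROTON_RADIUS_M = 0.84e-15   # proton charge radius, for comparison
print(f"Arm-length change: {delta_l:.1e} m, "
      f"about {PROTON_RADIUS_M / delta_l:.0f} times smaller than a proton")
```

A length change that small is easily swamped by the noise sources described above, which is why suppressing them dominates the instrument’s design.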
Noise was so significant an issue that LIGO researchers observed no gravitational waves from 2006 to 2014. On 14 September 2015, however, LIGO detected its first black hole collision, a discovery that netted three of LIGO’s chief investigators the 2017 Nobel Prize in Physics.
Over the ensuing 394 days of operations between September 2015 and August 2017, LIGO observed 11 gravitational wave events. That averages out to one detection every 35 days.
Then, after the latest round of enhancements to its instruments, LIGO’s current run of observations began at the start of this month. In April alone, it has observed five likely gravitational wave events: three black hole mergers, plus the latest two signals, a neutron star collision and a possible neutron star-black hole merger.
This once-per-week frequency may indeed represent the new normal for LIGO. (Readers can bookmark this page to follow LIGO’s up-to-the-minute progress.)
Most promisingly, both of last week’s LIGO chirps involve at least one neutron star. Unlike black holes, neutron stars don’t gobble up the light their collisions emit, so such an impact offers the promise of Earth being bathed in detectable gravitational and electromagnetic radiation. (Such dual-pronged observations constitute what’s called “multi-messenger astronomy.”)
“Neutron stars also emit light, so a lot of telescopes around the world chimed in to look for that and locate it in the sky in all different wavelengths of light,” says Sheila Dwyer, staff scientist at LIGO in Richland, Wash. “One of the big goals and motivations for LIGO was to make that possible—to see something with both gravitational waves and light.”
The first such multi-messenger observation made by LIGO began in August 2017 with a gravitational wave detection. Soon thereafter came a stunning 84 scientific papers examining the electromagnetic radiation from the collision across the spectrum, from gamma rays to radio waves. The science spawned by this event, known as GW170817, led to a precise measurement of the speed of gravitational waves (equal to the speed of light, as Einstein predicted), a solution to the mystery of gamma-ray bursts, and an overnight updating of models of the cosmic source of the heavy elements on the periodic table. (Studies of the collision’s gravitational and electromagnetic radiation concluded that a large fraction of the universe’s elements heavier than iron originate from neutron star collisions just like GW170817.)
When the S190425z and S190426c signals came in, telescopes around the world pointed to the regions of the sky that the gravitational wave observations suggested. As of press time, however, no companion source in the sky has yet been found for either.
Yet because of LIGO’s increased sensitivity, the promise of yet more observations increases the likelihood that another multi-messenger watershed event like GW170817 is imminent.
Dwyer says LIGO’s latest incarnation uses high-efficiency mirrors that reflect light back with low mechanical or thermal energy transfer from the light ray to the mirror. This is especially significant because, on average, the laser light bounces back and forth along the interferometer arms 1000 times before recombining and forming the detector’s interference pattern.
“Right now we have a very low-absorption coating,” she says. “A very small absorption of that [laser light] can heat up the optics in a way that causes a distortion.”
If the LIGO team can design even lower-loss mirror coatings (which of course could have spinoff applications in photonics, communications and optics) they can increase the power of the laser light traveling through the interferometer arms from the current 200 kilowatts to a projected 3 megawatts.
And according to Daniel Sigg, a LIGO lead scientist in Richland, Wash., another enhancement involves “squeezing” the laser light so that the uncertainty in its phase is narrower than Heisenberg’s uncertainty principle would normally allow, at the cost of greater uncertainty in its amplitude.
“We can’t measure both the phase and the amplitude or intensity of photons [with high precision] simultaneously,” Sigg says. “But that gives you a loophole. Because we’re only counting photons, we don’t really care about their phase and frequency.”
LIGO’s lasers therefore use “squeezed light” beams that accept higher noise in one observable (amplitude) in order to narrow the uncertainty in the other (phase, or equivalently frequency). Between these two photon observables, Heisenberg is kept happy.
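Schematically, the trade-off Sigg describes is a heuristic number-phase uncertainty relation (not a formula from the article, and exact conventions vary): Δn · Δφ ≳ ½, where Δn is the uncertainty in photon number (amplitude) and Δφ the uncertainty in phase. Squeezing cannot shrink the product, but it can redistribute it, narrowing Δφ while letting Δn grow.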
And that keeps LIGO’s ear tuned to more and more of the most energetic collisions in the universe—and allows it to turn up new science and potential spinoff technologies each time a black hole or neutron star goes bump in the night.