The coronavirus outbreak has sent the global economy reeling as businesses shutter and billions of people hunker down. Air travel, vehicle traffic, and industrial production have swiftly declined in recent weeks, with much of the world frozen in place until the virus—which has killed more than 39,000 people globally—can be safely contained. One consequence of the crisis may be a sizable, if temporary, decline in heat-trapping emissions this year.
Global carbon dioxide emissions could fall by 0.3 percent to 1.2 percent in 2020, says Glen Peters, research director of the Center for International Climate Research in Norway. He based his estimates on new projections for slower economic growth in 2020. In Europe, CO2 emissions from large sources could plunge by more than 24 percent this year, according to an early assessment of the Emissions Trading Scheme, which sets a cap on the European Union’s emissions. In Italy, France, and other nations under quarantine, power demand has dropped considerably since early March.
As experts look to the future, Lauri Myllyvirta is tracking how the new coronavirus is already affecting China—the world’s largest carbon emitter, where more than a dozen cities were on lockdown for nearly two months. Myllyvirta is an analyst at the Centre for Research on Energy and Clean Air, an independent organization. Previously based in Beijing, he now lives in Helsinki, where I recently reached him by phone. Our conversation is edited and condensed for clarity.
Battery makers have for years been trying to replace the graphite anode in lithium-ion batteries with a version made of silicon, which would give electric vehicles a much longer range. Some batteries with silicon anodes are getting close to market for wearables and electronics. The recipes for these silicon-rich anodes that a handful of companies are developing typically use silicon oxide or a mix of silicon and carbon.
But Irvine, CA-based Enevate is using an engineered porous film made mainly of pure silicon. In addition to being inexpensive, the new anode material, which founder and chief technology officer Benjamin Park has spent more than 10 years developing, could yield an electric vehicle (EV) with 30 percent more range on a single charge than today’s EVs. What’s more, the battery Enevate envisions could be charged up enough in five minutes to deliver 400 km of driving range.
Big names in the battery and automotive business are listening. Carmakers Renault, Nissan, and Mitsubishi, as well as battery-makers LG Chem and Samsung, are investors. And lithium battery pioneer and 2019 Chemistry Nobel Prize winner John Goodenough is on the company’s Advisory Board.
When lithium-ion batteries are charged, lithium ions move from the cathode to the anode. The more ions the anode can hold, the higher its energy capacity, and the longer the battery can run. Silicon can in theory hold ten times the energy of graphite. But it also expands and contracts dramatically, falling apart after a few charge cycles.
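That factor of roughly ten can be checked from Faraday’s law: graphite stores one lithium per LiC6 unit, while silicon alloys to about Li3.75Si, the commonly cited room-temperature limit. A minimal sketch of the arithmetic (the stoichiometries are textbook values, not figures from this article):

```python
F = 96485.0  # Faraday constant, C/mol of electrons

def specific_capacity_mah_per_g(ions_per_formula, molar_mass_g_per_mol):
    """Theoretical specific capacity in mAh per gram of host material."""
    # C/g divided by 3.6 converts to mAh/g
    return ions_per_formula * F / (3.6 * molar_mass_g_per_mol)

graphite = specific_capacity_mah_per_g(1, 6 * 12.011)   # LiC6: one Li per six carbons
silicon = specific_capacity_mah_per_g(3.75, 28.085)     # Li3.75Si alloying limit

print(f"graphite: {graphite:.0f} mAh/g")   # ~372 mAh/g
print(f"silicon:  {silicon:.0f} mAh/g")    # ~3,580 mAh/g
print(f"ratio:    {silicon / graphite:.1f}x")
```

The ratio comes out near 9.6, which is where the "ten times" rule of thumb originates.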
To get around that, battery makers such as Tesla today add just a tiny bit of silicon to graphite powder. The powder is mixed with a glue-like plastic called a binder and is coated on a thin copper foil to make the anode. But, says Park, lithium ions react with silicon first, before graphite. “The silicon still expands quite a bit, and that plastic binder is weak,” he says, explaining that the whole electrode is more likely to degrade as the amount of silicon is ramped up.
Enevate does not use plastic binders. Instead, its patented process creates the porous 10- to 60-µm-thick silicon film directly on a copper foil. The cherry on top is a nanometers-thick protective coating, which, says Park, “prevents the silicon from reacting with the electrolyte.” That type of reaction can also damage a battery.
The process does not require high-quality silicon, so anodes of this type cost less than their graphite counterparts of the same capacity. And because the material is mostly silicon, lithium ions can slip in and out very quickly, charging the battery to 75 percent of its capacity in five minutes, without causing much expansion. Park likens it to a high-capacity movie theater. “If you have a full movie theater it takes a long time to find the one empty seat. We have a theater with ten times more capacity. Even if we fill that theater halfway, [it still doesn’t take long] to find empty seats.”
The company’s roll-to-roll processing techniques can make silicon anodes quickly enough for high-volume manufacturing, says Park. By coupling the silicon anode with conventional cathode materials such as nickel-manganese-cobalt, they have made battery cells with energy densities as high as 350 watt-hours per kilogram, which is about 30 percent more than the specific energy of today’s lithium-ion batteries. Enevate says it is now working with multiple major automotive companies to develop standard-size battery cells for 2024-25 model year EVs.
Back in the early 1990s, when local firefighters received a call from Moli Energy, they knew exactly where to head: the company’s battery warehouse. The Vancouver-based firm was the first to mass produce rechargeable lithium-metal batteries. But the batteries had a nasty habit of exploding, which eventually led to a huge recall that bankrupted the firm.
Thirty years have passed, but today’s lithium-ion batteries are still prone to blowing up. One culprit is the liquid electrolyte, typically a flammable organic solvent that facilitates the flow of ions between a battery’s electrodes. Replacing this combustible material with a solid, some argue, could produce safer batteries.
The reality, however, is never as simple. Solid-state electrolytes, while certainly less flammable than their liquid counterparts, aren’t entirely immune to fires either. But that could now change, thanks to new technology developed by a team led by Yi Cui, a materials scientist at Stanford University.
As South Sudan emerges from the wreckage of civil war, its leaders are beginning to build the nation’s electric sector from the ground up. With only a handful of oil-fired power plants and crumbling poles and wires in place, the country is striving for a system that runs primarily on renewable energy and reaches more homes and businesses.
Today, only about 1 percent of South Sudan’s 12.5 million people can access the electric grid, according to the state-run utility. Many people use rooftop solar arrays or noisy, polluting diesel generators to keep the lights on; many more are simply left in the dark. Those who can access the grid pay some of the highest electricity rates in the world for spotty, unreliable service.
Kilometers off the coast of Basque Country in northern Spain, a new twist on offshore wind energy will soon face its final test. The Spanish firm Saitec Engineering made headlines late last year with its distinctive floating turbine concept, and promised to deploy a prototype in April. Last week, that launch took on new significance when Saitec announced a partnership with the renewables division of the German energy titan RWE.
The potential to harvest wind from beyond the shoreline is substantial. “The farther from shore [the wind farm is located], the bigger the wind resource is,” said Luis González-Pinto, chief operating officer of Saitec Offshore Technologies.
As the sun sets across the Netherlands, streetlights twinkle on, town by town. But it’s not in lockstep: some city managers can set their lights to respond to local sunset times, follow a schedule of their own, or control individual lights for local events. That’s because in 2017 those cities adopted a smart grid software platform built by Dutch public utility Alliander that may be the first open smart grid platform in everyday use.
Before, these cities could only operate their lights collectively because they used ripple control technology, a widespread control method that sends a pulse over the grid. While smarter control of streetlights may be handy for cities and save them some energy and cash, Alliander has also re-used the platform to manage a growing number of additional services and, earlier this month, passed control of the platform to LF Energy, part of the Linux Foundation.
“Utilities want to get rid of the black box,” says Shuli Goodman, executive director of LF Energy. Alliander started developing its own black box in 2013 but took it open source in 2015 thanks to lobbying by Sander Jansen, a data architect there.
“What I saw was the big [grid software] vendors had their own roadmap, their own product managers, their own vision and it doesn’t always align with what clients want,” Jansen recalls. Developing their own solution gave Alliander more options and prevented it from being stuck with any one provider’s service. Now that it is open source, it also allows third parties to develop their own uses for the platform.
So far, most of the outside interest has been in smart meters, Jansen says. Another project involves interfacing with municipal charging stations for electric cars. Other projects focus on more traditional grid management concerns such as distribution automation.
The electricity grid’s relationship to open source actually dates back to 1997, if not before, when some North American utilities and research organizations used it to simulate local grid management scenarios. Academics also developed their own open source research tools, such as the 2005 open source grid tool PSAT, developed by Federico Milano at University College Dublin, in Ireland.
But there wasn’t much collaboration between academia and utilities, Milano says: “The [electric utility] community is very closed and not willing to help at all except for some, few individuals. The problem is [the people who use] open source tools are PhD students… Then, when they are hired by some company, they are forced to use some commercial software tool and do not have time to spare to contribute to the community with their code.”
Today, most major transmission and system operators still use commercial software, often from companies such as Siemens and ABB, with custom modifications. They also focus heavily on security, to ensure reliable electricity for hospitals and other critical infrastructure.
But changes in electricity supply may be favoring smarter grids and a more software-focused approach. As energy grids take on more intermittent sources of power, such as solar and wind, it can get harder for ripple control technology to send a reliable signal across the whole grid, Jansen says.
Other changes may also favor more openness, Milano says: “If power system ‘granularity’ is going to increase (e.g., grid-connected microgrids, smart building, aggregators, etc.), then there will be many small companies that will get into the power business from scratch and some of them might be attracted by the ‘open source software’ model.”
Renewables are poised to expand by 50 percent in the next five years, according to the International Energy Agency. Much of that wind and solar power will need to be stored. But a growing electric-vehicle market might not leave enough lithium and cobalt for lithium-ion grid batteries.
Some battery researchers are taking a fresh look at lithium’s long-ignored cousin, potassium, for grid storage. Potassium is abundant, inexpensive, and could in theory enable a higher-power battery. However, efforts have lagged behind research on lithium and sodium batteries.
But potassium could catch up quickly, says Shinichi Komaba, who leads potassium-ion battery research at the Tokyo University of Science: “Although potassium-battery development has just been going on for five years, I believe that it is already competitive with sodium-ion batteries and expect it to be comparable and superior to lithium-ion.”
People have historically shied away from potassium because the metal is highly reactive and dangerous to handle. What’s more, finding electrode materials to hold the much heftier potassium ions is difficult.
Yet a flurry of reports in the past five years detail promising candidates for the cathode. Among the leaders are iron-based compounds with a crystalline structure similar to Prussian blue particles, which have wide open spaces for potassium ions to fill. A group from the University of Texas at Austin led by John Goodenough, coinventor of the lithium-ion battery and a winner of the 2019 Nobel Prize in Chemistry, has reported Prussian blue cathodes with an exceptionally high energy density of 510 watt-hours per kilogram, comparable to that of today’s lithium batteries.
But Prussian blue isn’t perfect. “The problem is, we don’t know how water content in the material affects energy density,” says Haegyeom Kim, a materials scientist at Lawrence Berkeley National Laboratory. “Another issue is that it’s difficult to control its chemical composition.”
Kim is placing bets on polyanionic compounds, which are made by combining potassium with any number of elements plucked from the periodic table. Potassium vanadium fluorophosphate seems to hold special promise. Kim and his colleagues have developed a cathode with the compound that has an energy density of 450 Wh/kg.
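Cathode-level energy density is, to first order, the product of average voltage and specific capacity. The 4.3 V and 105 mAh/g below are illustrative assumptions chosen to land near the reported figure, not values from Kim’s study:

```python
def energy_density_wh_per_kg(avg_voltage_v, capacity_mah_per_g):
    """Cathode-level energy density as voltage times specific capacity.

    Units work out directly: V * mAh/g == mWh/g == Wh/kg.
    """
    return avg_voltage_v * capacity_mah_per_g

# Hypothetical values in the range reported for potassium vanadium
# fluorophosphate cathodes (assumed for illustration)
print(energy_density_wh_per_kg(4.3, 105))  # ~450 Wh/kg
```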
Other researchers are looking at organic compounds for cathodes. These cost less than inorganic compounds, and their chemical bonds can stretch to take up potassium ions more easily.
While Goodenough is giving potassium a chance, his fellow lithium-battery inventor and Nobel Prize winner M. Stanley Whittingham, professor of chemistry at Binghamton University, in New York, isn’t sold. “It’s a scientific curiosity,” he says. “There’s no startup looking at potassium batteries.”
Potassium batteries, says Whittingham, are not practical because of the metal’s heft and volatility. Potassium also melts at a lower temperature than lithium or sodium, which can trigger reactions that lead to thermal runaway.
Those are valid concerns, says Vilas Pol, a professor of chemical engineering at Purdue University, in West Lafayette, Ind. But he points out that in a battery, potassium ions shuttle back and forth, not reactive potassium metal. Special binders on the electrode can tame the heat-producing reactions.
Developing the right electrolyte will be key to battery life and safety, says Komaba, of the Tokyo University of Science. Conventional electrolytes contain flammable solvents that, when combined with potassium’s reactivity, could be dangerous. Selecting the right solvents, potassium salts, salt concentration, and additives can prevent fires.
Komaba’s group has made electrolytes using potassium-fluoride salts, superconcentrated electrolytes that contain less solvent than traditional mixes, and ionic liquid electrolytes that use no solvent at all. In January, materials scientist Zaiping Guo and her team from the University of Wollongong, Australia, reported a nonflammable electrolyte for potassium batteries, made by adding a flame retardant to the solvent.
Potassium enthusiasts point out that the technology is still at an early stage. It’s never going to match the high energy density of lithium, or be suitable for electric cars. Yet for immense grid batteries, cheap potassium might have an upper hand. “Potassium-ion [batteries] could have worked earlier, but there was no need for [them],” says Pol. “Lithium isn’t enough now.”
In the end, the sum will have to be as good as its parts. Most research has focused on the materials that go into the electrodes and the electrolyte. Put it all together in a battery cell and the energy density drops after just 100 charging cycles or so; practical batteries will need to withstand several hundred.
“It will take time to figure out the exact combination of electrolyte, cathode, and anode,” Pol says. “It might take another 15 years from now to get to the market.”
This article appears in the March 2020 print issue as “Potassium Batteries Show Promise.”
The ancient Romans were the first to mix sand and gravel with water and a bonding agent to make concrete. Although they called it opus cementitium, the bonding agent differed from that used in modern cement: It was a mixture of gypsum, quicklime, and pozzolana, a volcanic sand from Puteoli, near Mount Vesuvius, that made an outstanding material fit for massive vaults. Rome’s Pantheon, completed in 126 C.E., still spans a greater distance than any other structure made of nonreinforced concrete.
The modern cement industry began in 1824, when Joseph Aspdin, of England, patented his firing of limestone and clay at high temperatures. Lime, silica, and alumina are the dominant constituents of modern cement; adding water, sand, and gravel produces a slurry that hardens into concrete as it cures. The typical ratios are 7 to 15 percent cement, 14 to 21 percent water, and 60 to 75 percent sand and gravel.
Concrete is remarkably strong under compression. Today’s formulations can resist a crushing pressure of more than 100 megapascals (14,500 pounds per square inch)—about the weight of an African bull elephant balanced on a coin. However, a pulling force of just 2 to 5 MPa can tear concrete apart; human skin is far stronger in this respect.
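The elephant comparison is easy to sanity-check. Assuming a coin about 24 millimeters across (roughly a U.S. quarter; the article doesn’t specify), 100 MPa over that area supports a mass in the range of a large African elephant:

```python
import math

compressive_strength = 100e6  # Pa (100 MPa)
coin_diameter = 0.024         # m, assumed coin size (roughly a quarter)
g = 9.81                      # m/s^2, standard gravity

area = math.pi * (coin_diameter / 2) ** 2  # ~4.5e-4 m^2
force = compressive_strength * area        # N the coin face can bear
mass = force / g                           # equivalent supported mass, kg

print(f"{mass:.0f} kg")  # ~4,600 kg, within the range of an adult bull elephant
```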
This tensile weakness can be offset by reinforcement. This technique was first used in iron-reinforced troughs for plants built by Joseph Monier, a French gardener, during the 1860s. Before the end of the 19th century, steel reinforcement was common in construction. In 1903 the Ingalls Building, in Cincinnati, became the world’s first reinforced-concrete skyscraper. Eventually engineers began pouring concrete into forms containing steel wires or bars that were tensioned just before or after the concrete was cast. Such pre- or poststressing further enhances the material’s tensile strength.
1. Three Gorges Dam, China: 65.5 million metric tons
2. Grand Coulee Dam, Washington: 21.7 million metric tons
3. Panama Canal: 6.8 million metric tons
4. Hoover Dam, Arizona and Nevada: 6.0 million metric tons
5. King Fahd Causeway, Saudi Arabia and Bahrain: 0.84 million metric tons
6. The Pentagon, Washington, D.C.: 0.80 million metric tons
7. Petronas Twin Towers, Malaysia: 0.39 million metric tons
8. Burj Khalifa Tower, United Arab Emirates: 0.11 million metric tons
9. The Venetian Hotel, Las Vegas: 0.039 million metric tons
10. Wilshire Grand Hotel, Los Angeles: 0.037 million metric tons
Data sources: Architizer.com, U.S. Bureau of Reclamation, Panama Canal Museum/University of Florida; Photos: iStockphoto
Today concrete is everywhere. It can be seen in the Burj Khalifa Tower in Dubai, the world’s tallest building, and in the sail-like Sydney Opera House, perhaps the most visually striking application. Reinforced concrete has made it possible to build massive hydroelectric dams, long bridges, and gigantic offshore drilling platforms, as well as to pave roads, freeways, parking lots, and airport runways.
From 1900 to 1928, the U.S. consumption of cement (recall that cement makes up no more than 15 percent of concrete) rose tenfold, to 30 million metric tons. The postwar economic expansion, including the construction of the Interstate Highway System, raised consumption to a peak of about 128 million tons by 2005; recent rates are around 100 million tons a year. China became the world’s largest producer in 1985, and its output of cement—above 2.3 billion metric tons in 2018—now accounts for nearly 60 percent of the global total. In 2017 and 2018 China made slightly more cement (about 4.7 billion tons) than the United States had made throughout the entire 20th century.
But concrete does not last forever, the Pantheon’s extraordinary longevity constituting a rare exception. Concrete deteriorates in all climates in a process that is accelerated by acid deposition, vibration, structural overloading, and salt-induced corrosion of the reinforcing steel. As a result, the concretization of the world has produced tens of billions of tons of material that will soon have to be replaced, destroyed, or simply abandoned.
The environmental impact of concrete is another worry. The industry burns low-quality coal and petroleum coke, producing roughly a ton of carbon dioxide per ton of cement, which works out to about 5 percent of global carbon emissions from fossil fuels. This carbon footprint can be reduced by recycling concrete, by using blast-furnace slag and fly ash captured in power plants, or by adopting one of the several new low-carbon or no-carbon processes. But these improvements would make only a small dent in a business whose global output now surpasses 4 billion metric tons.
This article appears in the March 2020 print issue as “Concrete Facts.”
A rapid-charging and non-flammable battery developed in part by 2019 Nobel Prize winner John Goodenough has been licensed for development by the Canadian electric utility Hydro-Québec. The utility says it hopes to have the technology ready for one or more commercial partners in two years.
For years, experts have predicted that solid-state batteries will be the next-generation technology for electric vehicles (EVs). These batteries promise to be safer by relying on a solid electrolyte instead of the flammable liquids used in today’s lithium-ion batteries. They could also last longer and weigh less, because they swap the graphite anode for lithium metal, which can store roughly 10 times as much charge per gram.
Ford, Hyundai, Nissan, Toyota, and Volkswagen are all investing in solid-state battery research. And startups in the space abound.
But Eric Wachsman says his company, Ion Storage Systems, stands out for a few reasons. The company’s strong, dense ceramic electrolyte is only about 10 micrometers thick, which is the same thickness as the plastic separators used in today’s lithium-ion batteries, and it conducts lithium ions as well as current liquid electrolytes. And according to Wachsman, it overcomes two key issues with solid-state batteries: high electrolyte resistance and a low current capability.
Scientists continue to tinker with recipes for turning sunlight into electricity. By testing new materials and components, in varying sizes and combinations, their goal is to produce solar cells that are more efficient and less expensive to manufacture, allowing for wider adoption of renewable energy.
The latest development in that effort comes from researchers in St. Petersburg, Russia. The group recently created a tiny prototype of a high-efficiency solar cell using gallium phosphide and nitrogen. If successful, the cells could nearly double today’s efficiency rates—that is, the degree to which incoming solar energy is converted into electrical power.
The new approach could theoretically achieve efficiencies of up to 45 percent, the scientists said. By contrast, conventional silicon cells are typically less than 20 percent efficient.
In “5 Big Ideas for Making Fusion Power a Reality,” I described how building a fusion reactor capable of producing electricity for the power grid may be engineering’s biggest challenge. But fusing atoms in home-built reactors is well within the reach of many amateur scientists. Indeed, it’s something of a trend. The website fusor.net, for example, lists hundreds of people who are active in this area.
Working in well-shielded basements and garages, most fusioneers are in it for their own edification. Carl Greninger, a data center manager at Microsoft, decided to take his project a step further. In 2010, he built a 60-kilovolt Farnsworth-Hirsch fusion reactor—commonly known as a fusor—in his Seattle-area basement.
A fusor consists of a spherical vacuum chamber surrounding a negatively charged spherical grid. When the reactor is fueled with deuterium and electrified, the high voltage strips electrons off the deuterium atoms, converting them into positively charged nuclei that fly toward the negatively charged inner grid. With the right combination of fuel, vacuum pressure, and voltage, some of the nuclei will collide violently enough to fuse, releasing high-energy neutrons.
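The energetics are straightforward: a singly charged deuteron falling through the grid’s full potential gains one electronvolt per volt, so a 60-kV fusor accelerates ions to 60 keV. A quick sketch of the resulting speed, using the deuteron mass from standard tables:

```python
import math

e = 1.602176634e-19        # elementary charge, C
m_deuteron = 3.343584e-27  # deuteron mass, kg

grid_voltage = 60e3  # V, as in Greninger's 60-kV fusor

energy_j = e * grid_voltage                   # kinetic energy gained, J (== 60 keV)
speed = math.sqrt(2 * energy_j / m_deuteron)  # classical speed is fine at this energy

print(f"ion energy: {grid_voltage / 1e3:.0f} keV")
print(f"ion speed:  {speed / 1e3:.0f} km/s")  # roughly 2,400 km/s
```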
Unlike in a tokamak or a laser-fusion reactor, there is little hope of a fusor ever producing a breakeven reaction, where the energy output exceeds the energy input. Still, it’s a useful machine for running experiments that require neutrons and for learning about high-energy physics.
Shortly after he built his fusor, Greninger began inviting students in the area to come by and use it. To date, about 85 students have accepted his offer. After his workweek at Microsoft is done, he typically spends his Friday evenings with groups of students, who run experiments on the reactor or other high-tech equipment in what he calls the Northwest Nuclear Laboratories.
“Basically, I want to give them the chance to step into the persona of a scientist in a way they can’t at their schools,” says Greninger. “The experience is designed to inspire them, not necessarily to pour in a bunch more knowledge.”
And yet, some of his protégés have done impressive research. Collectively, the students have won more than US $660,000 in scholarships and other awards.
“Having that experience in high school led to a huge number of opportunities,” says Jake Hecla, who with two teammates developed an experiment that won second place in physics and astronomy at the International Science and Engineering Fair in 2013. The project, titled “Investigation of Anisotropic Neutron Radiation from a Farnsworth IEC Fusion Reactor,” revealed that in fusors with a geometry like Greninger’s, more fusion may take place in the walls than in the center well.
After graduating from high school, Hecla got a scholarship to MIT, where as a freshman he was awarded an elite research position based on his extensive experience working with high voltages. He’s now pursuing a Ph.D. in nuclear engineering at the University of California, Berkeley, where his research focuses on next-generation radiation detectors.
“Carl’s basement is where I developed the passion for the kinds of things I’m doing now,” Hecla says. “He gave us tools and direction and opportunities to pursue things we were curious about. For me, that has turned out to be an enormous advantage.”
A vast supply of heat lies beneath our feet. Yet today’s drilling methods can barely push through dense rocks and high-pressure conditions to reach it. A new generation of “enhanced” drilling systems aims to obliterate those barriers and unlock unprecedented supplies of geothermal energy.
AltaRock Energy is leading an effort to melt and vaporize rocks with millimeter waves. Instead of grinding away with mechanical drills, scientists use a gyrotron—a specialized high-frequency microwave-beam generator—to open holes in slabs of hard rock. The goal is to penetrate rock at faster speeds, to greater depths, and at a lower cost than conventional drills do.
The Seattle-based company recently received a US $3.9 million grant from the U.S. Department of Energy’s Advanced Research Projects Agency–Energy (ARPA-E). The three-year initiative will enable scientists to demonstrate the technology at increasingly larger scales, from burning through hand-size samples to room-size slabs. Project partners say they hope to start drilling in real-world test sites before the grant period ends in September 2022.
AltaRock estimates that just 0.1 percent of the planet’s heat content could supply humanity’s total energy needs for 2 million years. Earth’s core, at a scorching 6,000 °C, radiates heat through layers of magma, continental crust, and sedimentary rock. At extreme depths, that heat is available in constant supply anywhere on the planet. But most geothermal projects don’t reach deeper than 3 kilometers, owing to technical or financial restrictions. Many wells tap heat from geysers or hot springs close to the surface.
That’s one reason why, despite its potential, geothermal energy accounts for only about 0.2 percent of global power capacity, according to the International Renewable Energy Agency.
“Today we have an access problem,” says Carlos Araque, CEO of Quaise, an affiliate of AltaRock. “The promise is that, if we could drill 10 to 20 km deep, we’d basically have access to an infinite source of energy.”
The ARPA-E initiative uses technology first developed by Paul Woskov, a senior research engineer at MIT’s Plasma Science and Fusion Center. Since 2008, Woskov and his colleagues have used a 10-kilowatt gyrotron to produce millimeter waves at frequencies between 30 and 300 gigahertz. Elsewhere, millimeter waves are used for many purposes, including 5G wireless networks, airport security, and astronomy. Those applications require only milliwatts of power; drilling through rock takes several megawatts.
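The 30-to-300-gigahertz band is what defines millimeter waves: free-space wavelengths run from about 10 mm down to 1 mm. A one-line check:

```python
c = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_mm(freq_ghz):
    """Free-space wavelength in millimeters for a frequency given in GHz."""
    return c / (freq_ghz * 1e9) * 1e3

# The gyrotron band spans 30-300 GHz, i.e. wavelengths of ~10 mm down to ~1 mm
print(f"{wavelength_mm(30):.2f} mm")   # ~9.99 mm
print(f"{wavelength_mm(300):.2f} mm")  # ~1.00 mm
```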
To start, MIT researchers place a piece of rock in a test chamber, then blast it with high-powered, high-frequency beams. A metallic waveguide directs the beams to form holes. Compressed gas is injected to keep the vaporized rock from breaking down into a plasma and igniting, which would hamper the process. In trials, millimeter waves have bored holes through granite, basalt, sandstone, and limestone.
The ARPA-E grant will allow the MIT team to develop their process using megawatt-size gyrotrons at Oak Ridge National Laboratory, in Tennessee. “We’re trying to bring forward a disruption in technology to open up the way for deep geothermal energy,” Araque says.
Other enhanced geothermal systems now under way use mechanical methods to extract energy from deeper wells and hotter sources. In Iceland, engineers are drilling 5 km deep into magma reservoirs, boring down between two tectonic plates. Demonstration projects in Australia, Japan, Mexico, and the U.S. West—including one by AltaRock—involve drilling artificial fractures into continental rocks. Engineers then inject water or liquid biomass into the fractures and pump it to the surface. When the liquid surpasses 374 °C and 22,100 kilopascals of pressure, it becomes a “supercritical” fluid, meaning it can transfer energy more efficiently and flow more easily than water from a typical well.
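Those thresholds are simply water’s critical point (373.95 °C and 22.064 MPa in the IAPWS reference values). A minimal predicate for whether a well’s output qualifies as supercritical:

```python
# Critical point of ordinary water (IAPWS reference values)
T_CRIT_C = 373.95     # degrees Celsius
P_CRIT_KPA = 22_064   # kilopascals

def is_supercritical(temp_c, pressure_kpa):
    """True when both temperature and pressure exceed water's critical point."""
    return temp_c > T_CRIT_C and pressure_kpa > P_CRIT_KPA

print(is_supercritical(380, 22_500))  # True: the regime deep wells target
print(is_supercritical(300, 22_500))  # False: hot, but below the critical temperature
```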
However, such efforts can trigger seismic activity, and projects in Switzerland and South Korea were shut down after earthquakes rattled surrounding cities. Such risks aren’t expected for millimeter-wave drilling. Araque says that while beams could spill outside their boreholes, any damage would be confined deep below ground.
Maria Richards, coordinator at Southern Methodist University’s Geothermal Laboratory, in Dallas, says that one advantage of using millimeter waves is that the drilling can occur almost anywhere—including alongside existing power plants. At shuttered coal facilities, deep geothermal wells could produce steam to drive the existing turbines.
The Texas laboratory previously explored using geothermal power to help natural-gas plants operate more efficiently. “In the end, it was too expensive. But if we could have drilled deeper and gotten higher temperatures, a project like ours would’ve been more profitable,” Richards says. She notes that millimeter-wave beams could also reach high-pressure offshore oil and gas reservoirs that are too dangerous for mechanical drills to tap.
This article appears in the March 2020 print issue as “AltaRock Melts Rock For Geothermal Wells.”
For owners of electric vehicles, range anxiety—the fear of running out of power before the next charging station—is real. Car manufacturers, keen to bring EVs to the mass market, have for years sought alternatives that could store more charge than today’s lithium-ion batteries.
One option is lithium-air, and a team of researchers has invented a new type of cathode that they claim can lengthen the life of such batteries. In a study published in Applied Catalysis B: Environmental, the team from South Korea and Thailand describe how they coated nickel cobalt sulfide nanoflakes onto a graphene cathode doped with sulfur. The result: an electrode that boasts both improved electrical conductivity and catalytic activity.
On a vast grassy field in northern Wyoming, a coal-fired power plant will soon do more than generate electricity. The hulking facility will also create construction materials by supplying scientists with carbon dioxide from its exhaust stream.
A team from the University of California, Los Angeles, has developed a system that transforms “waste CO2” into gray blocks of concrete. In March, the researchers will relocate to the Wyoming Integrated Test Center, part of the Dry Fork power plant near the town of Gillette. During a three-month demonstration, the UCLA team plans to siphon half a ton of CO2 per day from the plant’s flue gas and produce 10 tons of concrete daily.
“We’re building a first-of-a-kind system that will show how to do this at scale,” said Gaurav Sant, a civil engineering professor who leads the team.
Carbon Upcycling UCLA is one of 10 teams competing in the final round of the NRG COSIA Carbon XPrize. The global competition aims to develop breakthrough technologies for converting carbon emissions into valuable products. Four more finalists are demonstrating projects in Wyoming, including CarbonCure, a Canadian startup making greener concrete, and Carbon Capture Machine, a Scottish venture focused on building materials. (Five other teams are competing at a natural gas plant in Alberta, Canada.)
Worldwide, hundreds of companies and research groups are working to keep CO2 out of the atmosphere and store it someplace else—including in deep geologic formations, soils, soda bubbles, and concrete blocks. By making waste CO2 into something marketable, entrepreneurs can begin raising revenues needed to scale their technologies, said Giana Amador, managing director of Carbon180, a nonprofit based in Oakland, California.
The potential global market for waste-CO2 products could be $5.9 trillion a year, including $1.3 trillion for cements, concretes, asphalts, and aggregates, according to Carbon180 [PDF]. Amador noted the constant and growing worldwide demand for building materials, and a rising movement within U.S. states and other countries to reduce construction-related emissions.
Cement, a key ingredient in concrete, has a particularly big footprint. It’s made by heating limestone with other materials, and the resulting chemical reactions can produce significant CO2 emissions. Scorching, energy-intensive kilns add even more. The world produces 4 billion tons of cement every year, and as a result, the industry generates about 8 percent of global CO2 emissions, according to think tank Chatham House.
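Those two figures are roughly consistent with each other. A back-of-envelope check, assuming an emissions intensity of about 0.7 tonne of CO2 per tonne of cement and global CO2 emissions of roughly 37 gigatonnes a year (both assumed round numbers, not figures from this article):

```python
# Rough check of cement's share of global CO2 emissions.
cement_production_t = 4e9        # tonnes of cement per year (from the article)
co2_per_tonne_cement = 0.7       # tonnes CO2 per tonne cement (assumed)
global_co2_t = 37e9              # tonnes CO2 per year (assumed)

cement_co2 = cement_production_t * co2_per_tonne_cement
share = cement_co2 / global_co2_t
print(f"cement's share: {share:.0%}")  # ~8%, matching the Chatham House figure
```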
“The cement industry is one that’s really difficult to decarbonize, and we don’t have a lot of cost-effective solutions today,” Amador said. Carbon “utilization” projects, she added, can start to fill that gap.
The UCLA initiative began about six years ago, as researchers contemplated the chemistry of Hadrian’s Wall—the nearly 1,900-year-old Roman structure in northern England. Masons built the wall by mixing calcium oxide with water, then letting it absorb CO2 from the atmosphere. The resulting reactions produced calcium carbonate, or limestone. But that cementation process can take years or decades to complete, an unimaginably long wait by today’s standards. “We wanted to know, ‘How do you make these reactions go faster?’” Sant recalled.
The answer was portlandite, or calcium hydroxide. The compound is combined with aggregates and other ingredients to create the initial building element. That element then goes into a reactor, where it comes in contact with the flue gas coming directly out of a power plant’s smokestack. The resulting carbonation reaction forms a solid building component akin to concrete.
Sant likened the process to baking cookies. By tinkering with the ingredients, curing temperatures, and the flow of CO2, the researchers found a way to, essentially, transform the wet dough into baked goods. “You stick it in a convection oven, and when they come out they’re ready to eat. This is exactly the same,” he said.
The UCLA system is unique among green concrete technologies because it doesn’t require the expensive step of capturing and purifying CO2 emissions from power plants. Sant said his team’s approach is the only one so far that directly uses the flue gas stream. The group has formed a company, CO2Concrete, to commercialize their technology with construction companies and other industrial partners.
After Wyoming, Sant and colleagues will dismantle the system and haul it to Wilsonville, Alabama. Starting in July, they’ll repeat the three-month pilot at the National Carbon Capture Center, a research facility sponsored by the U.S. Department of Energy.
The UCLA team will learn in September if they’ve won a $7.5 million Carbon XPrize, though Sant said he’s not fretting about the outcome. “Winning is great, but what we’re really focused on is making a difference and [achieving] commercialization,” he said.
New batteries are often described with comparatives: they’re safer, lighter, or longer-lived than today’s versions. Solid-state batteries—those that contain no liquid—can make two such claims. With inorganic electrolytes, they’re much less likely to catch fire than traditional lithium-ion batteries, which have organic electrolytes. And by swapping out graphite for lithium as the anode, you can get a massive increase (up to 10-fold) in energy density, making solid-state batteries look especially promising for electric vehicles.
“That’s the Holy Grail. Lithium metal has the highest gravimetric density of all materials,” says Adam Best, who’s in charge of battery research at the Commonwealth Scientific and Industrial Research Organisation (CSIRO), Australia’s national science agency.
But a major snag remains in bringing solid-state batteries to market—how to manufacture electrolytes that are strong and durable, yet thin enough to be good ion conductors. Ideally, these electrolytes should be tens of microns thick, similar to the separators in today’s lithium-ion batteries, says materials scientist Ping Liu from the University of California, San Diego. “But because most solid electrolytes are ceramic, when you make a thin layer, they’re inherently brittle,” he says.
The generations-old trend toward lower electricity prices now appears to have ended. In many affluent countries, prices tilted upward at the turn of the century, and they continue to rise, even after adjusting for inflation.
Even so, the price we pay for electricity is an extraordinary bargain, and that’s why this form of power has become ubiquitous in the modern world. When expressed in constant 2019 dollars, the average price of electricity in the United States fell from $4.79 per kilowatt-hour in 1902 (the first year for which the national mean is available) to 32 cents in 1950. The price had dropped to 10 cents by 2000, and in late 2019 it was just marginally higher, at about 11 cents per kilowatt-hour. This represents a relative decline of more than 97 percent. A dollar now buys nearly 44 times as much electricity as it did in 1902.
Because average inflation-adjusted manufacturing wages have quadrupled since 1902, blue-collar households now find electricity about 175 times as affordable as it was nearly 120 years ago. And it gets better: We buy electricity in order to convert it into light, kinetic energy, or heat, and the improved efficiency of such conversions has made the end uses of electricity an even greater bargain.
Lighting offers the most impressive gain: In 1902, a lightbulb with a tantalum filament produced 7 lumens per watt; in 2019 a dimmable LED light delivered 89 lm/W (see “Luminous Efficacy,” IEEE Spectrum, April 2019). That means a lumen of electric light is now three orders of magnitude (approximately 2,220 times) more affordable for a working-class household than it was in the early 20th century. Lower but still impressive reductions in end-use costs apply in the case of electric motors that run kitchen appliances and force hot air into the ducts to heat houses using natural-gas furnaces.
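That lighting figure is just the product of three ratios quoted above: the fall in the electricity price, the rise in wages, and the gain in luminous efficacy. A short sketch reproduces the arithmetic (all inputs are the article’s own figures; small discrepancies from the quoted 175 and roughly 2,220 are rounding):

```python
# Reproducing the affordability arithmetic, in constant 2019 dollars.
price_1902 = 4.79   # $/kWh in 1902
price_2019 = 0.11   # $/kWh in late 2019

decline = 1 - price_2019 / price_1902
buying_power = price_1902 / price_2019
print(f"relative price decline: {decline:.1%}")        # more than 97 percent
print(f"a dollar buys {buying_power:.0f}x as much electricity")  # nearly 44x

# Manufacturing wages roughly quadrupled over the same span.
wage_factor = 4
print(f"affordability gain for wage earners: {buying_power * wage_factor:.0f}x")

# Lighting adds the efficacy gain: 7 lm/W (1902) vs. 89 lm/W (2019).
efficacy_gain = 89 / 7
lumen_gain = buying_power * wage_factor * efficacy_gain
print(f"cost of a lumen fell roughly {lumen_gain:.0f}-fold")
```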
An international perspective shows some surprising differences. The United States has cheaper residential electricity than other affluent nations, with the exception of Canada and Norway, which derive high shares of their power from hydroelectric generation (60 percent and 95 percent, respectively).
When using prevailing exchange rates, the U.S. residential price is about 45 percent of the European Union’s mean, about half the Japanese average, and about a third of the German rate. Electricity prices in India, Mexico, Turkey, and South Africa are lower than in the United States in terms of the official exchange rates, but they are considerably higher in terms of purchasing power parity—more than twice the U.S. level in India and nearly three times as much in Turkey.
A naive observer, reading the reports of falling prices for photovoltaic cells and wind turbines, might conclude that the rising shares of solar and wind power will bring a new era of falling electricity prices. Just the opposite has been true.
Before the year 2000, when Germany embarked on its large-scale and expensive Energiewende (energy transition), that country’s residential electricity prices were low and declining, bottoming at less than €0.14/kWh ($0.13/kWh, using the prevailing exchange rate) in 2000.
By 2015, Germany’s combined solar and wind capacity of nearly 84 gigawatts had surpassed the total installed in fossil-fueled plants, and by March 2019 more than 20 percent of all electricity came from the new renewables. However, over an 18-year period (2000 to 2018) electricity prices more than doubled, to €0.31/kWh. The E.U.’s largest economy thus has the continent’s highest electricity prices, followed by heavily wind-dependent Denmark, at €0.30/kWh.
A similar contrast can be seen in the United States. In California, where the new renewables have taken an increasing share, electricity prices have been rising five times as fast as the national mean and are now nearly 60 percent higher than the countrywide average.
This article appears in the February 2020 print issue as “Electricity Prices: A Changing Bargain.”
Electric three-wheelers ferry people and goods around Bangladesh but are banned in its capital. Batteries and motors could accelerate the bicycle rickshaws that gum up Dhaka’s traffic and eliminate exhaust from tuk tuks, gas-powered three-wheelers. But charging such EVs would further burden already strained power lines.
That’s just one of many opportunity costs that Bangladesh pays for a weak electrical grid. Frequent power outages hurt businesses and deter foreign investment. A sweeping grid-modernization program promises to alleviate such troubles.
In 2018, the government-run Power Grid Company of Bangladesh (PGCB) doubled the capacity of its first international transmission link—a high-voltage DC connection delivering 1 gigawatt from India. Next month, it hopes to finalize requirements for generators that promise to stabilize the voltage and frequency of the grid’s alternating current.
And next year, Bangladesh expects to achieve universal electricity access for the country’s 160 million people, only half of whom had electricity a decade ago. “It’s a whole social transformation,” says Tawfiq-e-Elahi Chowdhury, special advisor on energy to Bangladesh prime minister Sheikh Hasina.
However, it’s not clear what the grid revamp will mean for Bangladesh’s energy mix. Domestic natural gas is running out, and the country is scrambling to replace it and maintain rapid economic growth.
A nuclear power plant is now under construction, and Bangladesh is importing liquefied natural gas. But the government sees coal-fired and imported electricity as its cheapest options, and both come with challenges and risks.
Coal delivered less than 2 percent of Bangladesh’s electricity last year, but plants burning imported coal could soon match the scale of its gas-fired generation. Three coal plants under construction are each capable of serving about 10 percent of the country’s current 13-GW peak power demand. And Chowdhury expects similar projects in development to lift total coal capacity to about 10 GW by 2030.
The government expects to boost imports fivefold, to 5 GW, by 2030. Importing more electricity will provide access to relatively low-cost and renewable hydropower. A deal struck with Nepal should provide 500 megawatts, and more interconnections to India, as well as Bhutan, China, and Myanmar, are under discussion.
To convey these new power flows around the country, PGCB is building a network of 400-kilovolt lines atop its existing 230-kV and 132-kV lines, with several 765-kV circuits on the drawing board [see map]. The firm is simultaneously improving power quality—which will allow Bangladesh to accommodate more imported power and operate the nuclear plant.
Imports will be costlier if high-voltage DC converter stations must be erected at each border crossing. Instead, the government has agreed to synchronize its AC grid with India’s, enabling power to flow freely between the two. Synchronization will not be possible, however, until PGCB eliminates its grid’s large voltage and frequency deviations.
Sahbun Nur Rahman, PGCB’s executive engineer for grid planning, says most private generators don’t properly adjust the power they produce to maintain the grid’s voltage and frequency. Stability has improved over the last two years, however, as government plants have stepped up. He says the grid could be ready for synchronization in as little as five years.
Coal power will push the country’s annual per capita greenhouse gas emissions up to about 1 metric ton—still tiny, Chowdhury says, since the average developed economy generates 12 metric tons. Still, betting on coal is controversial for a low-lying country contending with climate change. By some estimates, global coal use needs to drop by 80 percent within a decade to hold global warming to 1.5 °C this century. And one of Bangladesh’s first coal-plant projects is 14 kilometers upstream from the Sundarbans, the world’s largest contiguous mangrove forest, which serves as a buffer against cyclones and sea level rise.
What’s missing from the grid push, say critics, is wind and solar. Bangladesh pioneered the use of solar power to electrify rural communities. At the peak, at least 20 million Bangladeshis relied on off-grid solar systems, and millions still do. But Mahmood Malik, CEO of the country’s Infrastructure Development Company, says the expanding national grid means there’s “not much need” to build more.
Off-grid solar still contributes more than half of Bangladesh’s renewable electricity, which makes up less than 3 percent of its power supply. Meanwhile on-grid solar is growing slowly, and wind development has barely begun. As a result, the government will miss its commitment to source 10 percent of the nation’s electricity from renewable sources by 2021.
Abdul Hasib Chowdhury, a grid expert at the Bangladesh University of Engineering and Technology, in Dhaka, says the best long-term bet for Bangladesh is imported power from beyond South Asia. He looks to the rich winds and sunshine in sparsely populated Central Asia. “South Asia is nearly 2 billion people crammed into this small space,” says A.H. Chowdhury. “They will require a lot of energy in the next 50 years.”
This article appears in the February 2020 print issue as “Bangladesh Scrambles to Grow Power Supply.”
Dozens of electric general aviation projects are underway around the world, not counting the urban air taxis that dominate the electric propulsion R&D scene. The first all-electric commercial aircraft, a seaplane intended for short flights, completed a 15-minute test flight in December.
Shortly after, luxury icon Rolls Royce unveiled what it hopes will be the world’s fastest electric aircraft. The current speed record for that type of plane is 335 kilometers per hour (210 mph). The new one-seater craft, slated to fly this spring, will top out at 480 km/h (300 mph). It should also be able to fly from London to Paris, about 320 km (200 miles), on a single charge.
That’s thanks to “the world’s most energy-dense flying battery pack,” according to Rolls Royce. The aircraft has three batteries powering three motors that will deliver 750 kW to spin the propellers. Each 72-kilowatt-hour battery pack weighs 450 kg and has 6,000 densely packed lithium-ion cells.
Getting all this power on board wasn’t easy, says Matheu Parr, project manager for the ACCEL project, short for Accelerating the Electrification of Flight. Careful thought and engineering went into each step, right from selecting the type of battery cell. Lithium-ion cells come in many forms, including pouches as well as prismatic and cylindrical cells. Cylindrical ones turn out to be best for holding a lot of energy and discharging it quickly at high power, he says.
Next came the critical task of assembling the cells into a pack. Rolls Royce’s partner, Electroflight, a startup specializing in aviation batteries, began that effort by analyzing innovations in the relatively new all-electric auto-racing space.
“Really, the challenge for electric aviation is one of packaging,” Parr says. “So we’ve looked at how Formula E tackles packaging and then taken it a step further.” By using lightweight materials, and only the bare minimum of those, the Formula E racing teams manage to cut their cars’ packaging-to-battery-cell weight ratio in half compared with the amount of battery packaging an ordinary electric car has to carry around for each kilogram of battery cell.
The high-power, closely packed cells get pretty hot. So, designing an advanced active-cooling system was important. Instead of the air-cooling used in car batteries, Rolls Royce engineers chose a liquid-cooling system. All the cells directly contact a cooling plate through which a water-and-glycol mixture is piped.
Finally, the engineers built in safety features such as an ultra-strong outside case and continual monitoring of each battery’s temperature and voltage. Should something go wrong with one of the batteries, it would automatically be shut off. Better still, the airplane can land even if two of its three batteries are turned off.
The ACCEL battery comes out to a specific energy of 165 watt-hours per kilogram, which puts it on par with the battery pack powering the Tesla Model 3. That’s still a long way from the 500 Wh/kg needed to compete with traditional jet-propulsion aircraft for commercial flights (aviation batteries are not expected to store that much energy per unit mass until 2030). For now, Rolls Royce and others believe all-electric propulsion will power smaller aircraft while larger planes will have hybrid fuel-electric systems. The company has teamed up with Airbus and Siemens to develop a hybrid airplane.
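That pack-level figure can be roughly recovered from the numbers quoted earlier (72 kWh and 450 kg per pack). A minimal sketch, with the caveat that the small gap from the quoted 165 Wh/kg likely reflects how cell versus pack mass is counted:

```python
# Rough specific-energy estimate from the quoted pack figures.
pack_energy_wh = 72_000   # 72 kWh per pack
pack_mass_kg = 450        # 450 kg per pack

specific_energy = pack_energy_wh / pack_mass_kg
print(f"{specific_energy:.0f} Wh/kg")  # 160 Wh/kg, close to the quoted 165
```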
With its high-speed racing aircraft, Rolls Royce wants to pioneer the transition to the “third age of aviation, from propeller aircraft to jet aircraft to electric,” says Parr. The project will also provide know-how that will shape future designs. “We’re learning an awful lot that we want to see packed into a future aircraft. Innovations in the battery and system integration, packaging and management will all help us shape any future electric product, be it all-electric or hybrid.”
The joke has been around almost as long as the dream: Nuclear fusion energy is 30 years away…and always will be. But now, more than 80 years after Australian physicist Mark Oliphant first observed deuterium atoms fusing and releasing dollops of energy, it may finally be time to update the punch line.
Over the past several years, more than two dozen research groups—impressively staffed and well-funded startups, university programs, and corporate projects—have achieved eye-opening advances in controlled nuclear fusion. They’re building fusion reactors based on radically different designs that challenge the two mainstream approaches, which use either a huge, doughnut-shaped magnetic vessel called a tokamak or enormously powerful lasers.
What’s more, some of these groups are predicting significant fusion milestones within the next five years, including reaching the breakeven point at which the energy produced surpasses the energy used to spark the reaction. That’s shockingly soon, considering that the mainstream projects pursuing the conventional tokamak and laser-based approaches have been laboring for decades and spent billions of dollars without achieving breakeven.
In Cambridge, Mass., MIT-affiliated researchers at Commonwealth Fusion Systems say their latest reactor design is on track to exceed breakeven by 2025. In the United Kingdom, a University of Oxford spin-off called First Light Fusion claims it will demonstrate breakeven in 2024. And in Southern California, the startup TAE Technologies has issued a breathtakingly ambitious five-year timeline for commercialization of its fusion reactor.
Irrational exuberance? Maybe. Fusion research is among the most costly of endeavors, depending on high inflows of cash just to pay a lab’s electricity bills. In the pursuit of funding, the temptation to overstate future achievements is strong. And past expectations of impending breakthroughs have repeatedly been dashed. What’s changed now is that advances in high-speed computing, materials science, and modeling and simulation are helping to topple once-recalcitrant technical hurdles, and significant amounts of money are flowing into the field.
Some of the new fusion projects are putting the newest generation of supercomputers to work to better understand and tweak the behavior of the ultrahigh-temperature plasma in which hydrogen nuclei fuse to form helium. Others have reopened promising lines of inquiry that were shelved decades ago. Still others are exploiting new superconductors or hybridizing the mainstream concepts.
Despite their powerful tools and creative approaches, many of these new ventures will fail. But if just one succeeds in building a reactor capable of producing electricity economically, it could fundamentally transform the course of human civilization. In a fusion reaction, a single gram of the hydrogen isotopes that are most commonly used could theoretically yield the same energy as 11 metric tons of coal, with helium as the only lasting by-product.
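That coal-equivalence claim checks out under standard assumptions: deuterium-tritium fusion releases about 17.6 MeV per reacting pair, and thermal coal carries roughly 29 gigajoules per tonne (both are textbook values, not figures from this article):

```python
# Sanity check: energy in 1 gram of deuterium-tritium fuel vs. coal.
MEV_TO_J = 1.602e-13      # joules per MeV
AMU_TO_KG = 1.661e-27     # kilograms per atomic mass unit

pair_mass_kg = 5.03 * AMU_TO_KG    # one deuteron (~2.014 u) + one triton (~3.016 u)
pairs_per_gram = 1e-3 / pair_mass_kg
energy_j = pairs_per_gram * 17.6 * MEV_TO_J   # 17.6 MeV released per D-T pair

coal_j_per_tonne = 29e9            # ~29 GJ/t, an assumed average for coal
equivalent_coal_t = energy_j / coal_j_per_tonne
print(f"{equivalent_coal_t:.0f} tonnes of coal")  # ~12, consistent with the ~11 cited
```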
As climate change accelerates and demand for electricity soars, nuclear fusion promises a zero-carbon, low-waste baseload source of power, one that is relatively clean and comes with no risk of meltdowns or weaponization. This tantalizing possibility has kept the fusion dream alive for decades. Could one of these scrappy startups finally succeed in making fusion a practical reality?
Not so long ago, the outlook for fusion power was pretty bleak, with two of the biggest projects seemingly stalled. In 2016, the U.S. Department of Energy admitted that its US $3.5 billion National Ignition Facility (NIF) had failed to meet its goal of using lasers to “ignite” a self-sustaining fusion reaction. A DOE report suggested [PDF] that NIF’s research should shift from investigating laser-sparked ignition to determining whether such ignition is even possible.
The same year, the U.S. and several other governments began debating whether to pull their support from the International Thermonuclear Experimental Reactor (ITER). First proposed in 1985 and now under construction in southern France, ITER is the world’s biggest fusion experiment. It is a type of tokamak, which uses magnetic forces to confine and isolate the ferociously hot, energetic plasma needed to initiate and sustain fusion. But the project has been plagued by delays and cost overruns that have quintupled its original $5 billion price tag and pushed its projected completion date to 2035. (And even if it makes that date, it could be decades after that before commercial plants based on the design are in operation.) The setbacks and enormous expense of NIF and ITER had the effect of draining not just money but also enthusiasm from the field.
Even as the government-backed megaprojects foundered, alternative fusion-energy research began to gain momentum. The hope of those pursuing these new efforts is that their novel and smaller-scale approaches can accelerate past the decades-long incremental slog. Investors are finally taking notice and pouring money into the field. Over the past five years, private capitalists have injected about $1.5 billion into small-scale fusion-energy companies. Among those who have made significant bets on fusion are Amazon’s Jeff Bezos, Microsoft’s Bill Gates, and venture capitalist Peter Thiel. A few major corporations, including Lockheed Martin, have launched their own small-fusion projects.
Jesse Treu, a Ph.D. physicist who spent much of his career investing in biotech and med-tech startups, says he realized in 2016 that “wonderful things were starting to happen in fusion energy, but funding wasn’t catching up. It’s clear that private equity and venture capital are part of the solution to develop this technology, which is clearly the best energy answer for the planet.” He cofounded the Stellar Energy Foundation to connect fusion researchers with funding sources and to provide support and advocacy.
And public money has started to follow private: U.S. Department of Energy grant makers, who for decades funneled most nondefense fusion allocations to ITER, are now channeling some funding to projects at the fringes of mainstream research. The federal budget includes a $107 million increase for fusion projects in fiscal year 2020, including a research partnership program that allows small companies to conduct major experiments at the DOE’s national laboratories.
The U.S. government’s renewed interest stems in part from a perceived need to keep up with China, which recently restarted its fusion-energy program after a three-year moratorium. The Chinese government plans to switch on a new fusion reactor in Sichuan province this year. Meanwhile, the Chinese energy company ENN Energy Holdings has been investing in research programs abroad and is building a duplicate of Princeton Fusion Systems’ compact reactor in central China, with help from top U.S. scientists.
“Now that it’s looking like China will gobble up every idea the U.S. has failed to fund,” says Matthew J. Moynihan, a nuclear engineer and fusion consultant to investors, “that’s serving as a wake-up for the U.S. government.”
For all this activity and investment, fusion power remains as tough a problem as ever.
Unlike nuclear fission, in which a large, unstable nucleus is split into smaller elements, a fusion reaction occurs when the nuclei of a lightweight element, typically hydrogen, collide with enough force to fuse and form a heavier element. In the process, some of the mass is released and converted into energy, as laid out in Albert Einstein’s famous formula: E = mc².
There’s an abundance of fusion energy in our universe—the sun and other stable stars are powered by thermonuclear fusion—but the task of triggering and controlling a self-sustaining fusion reaction and harnessing its power is arguably the most difficult engineering challenge humans have ever attempted.
To fuse hydrogen nuclei, earthbound reactor designers need to find ways to overcome the positively charged ions’ mutual repulsion—the Coulomb force—and get them close enough to bind via what’s known as the strong nuclear force. Most methods involve temperatures that are so high—several orders of magnitude hotter than the sun’s core temperature of 15 million °C—that matter can exist only in the plasma state, in which electrons break free of their atomic nuclei and circulate freely in gaslike clouds.
But a high-energy-density plasma is notoriously unstable and difficult to control. It wriggles and writhes and attempts to break free, migrating to the edges of the field that contains it, where it quickly cools and dissipates. Most of the challenges surrounding fusion energy center on the plasma: how to heat it, how to contain it, how to shape it and control it. The two mainstream approaches are magnetic confinement and inertial confinement. Magnetic-confinement reactors such as ITER attempt to hold the plasma steady within a tokamak, by means of powerful magnetic fields. Inertial-confinement approaches, such as NIF’s, generally use lasers to compress and implode the plasma so quickly that it’s held in place long enough for the reaction to get going.
Scientists have long thought that bigger is better when it comes to creating stable and energy-dense plasma fields. But with recent advances in supercomputing and complex modeling, researchers are unraveling more of the mysteries underlying plasma behavior and developing new tricks for handling it without huge, complex machinery.
Among the researchers at the forefront of this work is physicist C. Wendell Horton Jr. of the University of Texas Institute of Fusion Studies. He uses the university’s Stampede supercomputer to build simulations of plasma flow and turbulence inside magnetic-confinement reactors. “We’re making calculations that were impossible just a few years ago and modeling data about plasma in three dimensions and in time,” Horton says. “Now we can see what’s happening with much more nuance and detail than we would get with analytic theories and even the most advanced probes and diagnostic measurements. That’s giving us a more holistic picture of what’s needed to improve reactor design.”
Horton’s findings have informed the design of large-scale experiments such as ITER, as well as small-scale projects. “The problem with ITER is that no matter how well they get the plasma to behave, they haven’t figured out how to get the reaction to self-sustain,” he says. “It’s still going to burn out in a matter of minutes, and that’s obviously not solving the energy problem.” He and other researchers believe that some of the small-scale efforts are much closer to achieving a steady-state reaction that could generate baseload electricity.
Among the most mature of the fusion startups is California-based TAE Technologies (formerly Tri Alpha Energy), which launched in 1998.
The TAE reactor is designed to make use of what’s called a field-reversed configuration (FRC) to create a swirling ring of plasma that contains itself in its own magnetic field. (Princeton Fusion Systems’ design is also an FRC.) Instead of using deuterium and tritium—the hydrogen-isotope blend that fuels most fusion reactors—the TAE reactor injects beams of high-energy neutral hydrogen particles into hydrogen-boron fuel, forcing a reaction that produces alpha particles (ionized helium nuclei). Heat generated in the containment vessel by the deposited soft X-ray energy will be converted into electricity using a conventional thermal conversion system, which heats water into steam to drive a turbine.
Hydrogen-boron fusion is aneutronic, meaning that the primary reaction does not produce damaging neutron radiation. The drawback is that burning the fuel requires extraordinary temperatures, as high as 3 billion °C. “When you’re that hot, the electrons are radiating like crazy,” says William Dorland, a physics professor at the University of Maryland. “They’re going to cool off the plasma faster than you can heat it.” Although FRC machines seem to be less prone to plasma instabilities than some other magnetic-confinement methods, no one has yet demonstrated an FRC reactor that can create a stable plasma.
TAE cofounder and CEO Michl Binderbauer says the company’s latest machine, dubbed Norman (in honor of company cofounder Norman Rostoker), is achieving “significant improvements in plasma containment and stability over the previous-generation machine.” Driving those improvements are advances in artificial intelligence and machine learning, enabled by an algorithm developed by Google called Optometrist. Working with Google, TAE adapted the algorithm to analyze plasma-behavior data and home in on the combination of settings that yields the best conditions for fusion. The researchers described the approach in a 2017 paper in Scientific Reports.
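The core idea of the Optometrist approach is a pairwise choice, like an eye exam: show an evaluator the current reference shot and a slightly perturbed one, keep whichever is preferred, and repeat. Here is a minimal, hypothetical sketch of that loop in Python; in TAE’s experiments the evaluator was a human expert judging real plasma shots, whereas here a toy scoring function stands in, and all names and parameters are illustrative, not TAE’s actual code.

```python
import random

def optometrist_search(prefer, init, step=0.1, iters=200, seed=0):
    """Toy Optometrist-style search: repeatedly offer a pairwise choice
    between the current reference setting and a random perturbation,
    and keep whichever the evaluator prefers.

    `prefer(a, b)` returns True if setting `a` is judged better than `b`;
    it stands in for the human expert comparing two plasma shots.
    """
    rng = random.Random(seed)
    ref = list(init)
    for _ in range(iters):
        # Perturb every knob slightly to propose the next candidate shot.
        cand = [x + rng.gauss(0, step) for x in ref]
        # "Which looks better: option 1 or option 2?"
        if prefer(cand, ref):
            ref = cand
    return ref

# Toy objective with a best operating point at (0.3, -0.7).
def score(v):
    return -((v[0] - 0.3) ** 2 + (v[1] + 0.7) ** 2)

best = optometrist_search(lambda a, b: score(a) > score(b), init=[0.0, 0.0])
```

The appeal of the pairwise formulation is that the evaluator never has to assign a numerical score to a shot, only say which of two looks better, which is a much easier judgment for a human to make reliably.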
“We’re doing things we could have never done 10 years ago, and that’s driving faster and faster cycles of learning,” says Binderbauer.
Advanced computing is also breathing new life into promising lines of inquiry that were abandoned years ago due to budget cuts or technical roadblocks. General Fusion, based near Vancouver, was founded by Canadian plasma physicist Michel Laberge. He quit a lucrative job developing laser printers to pursue an approach called magnetized target fusion (MTF). The company has attracted more than $200 million, including investments from Jeff Bezos and the governments of Canada and Malaysia.
General Fusion’s design combines features of magnetic-confinement and inertial-confinement fusion. It injects pulses of magnetically confined plasma fuel into a sphere filled with a vortex of molten lead and lithium. Pistons surrounding the reactor drive shock waves toward the center, compressing the fuel and forcing the particles into a fusion reaction. The resulting heat is absorbed in the liquid metal and used to produce steam to spin a turbine and generate electricity.
“You can think of it in some ways as the opposite of a tokamak,” says Laberge. “Tokamaks work with a big plasma field that’s [relatively] low density. We’re trying to make a mini-size plasma that’s extremely high density, by squashing it in with the shock waves. Because the field is so dense and small, we only need to keep it together for a millisecond for it to react.”
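Laberge’s millisecond figure reflects the Lawson criterion, which (roughly) requires the product of fuel density and confinement time to exceed a threshold: push the density up, and the time you must hold the plasma together drops in proportion. A hedged back-of-envelope sketch, using an approximate deuterium-tritium threshold of 10²⁰ s/m³ near optimal temperature (the constant and densities below are order-of-magnitude illustrations, not design numbers):

```python
# Approximate D-T Lawson threshold: n * tau must exceed ~1e20 s/m^3
# (at a temperature near the optimum, ~15 keV). Illustrative only.
LAWSON_NTAU = 1e20  # seconds per cubic meter

def required_confinement_time(density):
    """Confinement time (s) needed at a given ion density (m^-3)."""
    return LAWSON_NTAU / density

# A low-density tokamak-like plasma (~1e20 m^-3) needs ~1 s confinement,
tau_tokamak = required_confinement_time(1e20)
# while a compressed MTF-style plasma (~1e23 m^-3) needs only ~1 ms.
tau_mtf = required_confinement_time(1e23)
```

This inverse trade-off is the whole bet behind magnetized target fusion: buy a thousandfold shorter confinement requirement by compressing the plasma to a thousandfold higher density.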
In the 1970s, the U.S. Naval Research Laboratory experimented with a piston system to trigger nuclear fusion. Those experiments failed, due in large part to an inability to precisely control the timing of the shock waves. Laberge’s team has developed advanced algorithms and highly precise control systems to fine-tune the speed and timing of the shock waves and compression.
“In those experiments in the 1970s, the problem was symmetry,” says Laberge. “We’ve now achieved the accuracy and force we need, so that part’s solved.”
Using liquid metal could solve another of fusion energy’s primary challenges: Neutron radiation erodes a reactor’s walls, which must be replaced frequently and disposed of as low-level radioactive waste. The liquid metal protects the solid outer wall from damage. There’s some irradiation of the liquid metal, but there’s no need to regularly replace it, and so the reactor doesn’t produce a steady stream of low-level waste.
General Fusion’s newest reactor, which generated plasma for the first time in late 2018, is the centerpiece of a facility that Laberge says will demonstrate an end-to-end capability to produce electricity from nuclear fusion. “Now that we’ve successfully created a stable, long-lived plasma, we can see that we have a viable path toward having the plasma generate more energy than it consumes,” he says. “In terms of commercialization, our timeline is now a matter of years, not decades.”
Virginia-based HyperJet Fusion Corp. has an approach similar to General Fusion’s, but instead of pistons, some 600 plasma guns fire jets of plasma into the reactor. The merging of the jets forms a plasma shell, or liner, which then implodes and ignites a magnetized target plasma. The system needs no separate heating stage to bring the fuel to fusion temperatures, says HyperJet CEO and chief scientist F. Douglas Witherspoon. “The imploding plasma liner contains the target plasma and provides the energy to elevate the temperature to fusion conditions. And because we’re using a much higher-density plasma than a magnetic-confinement system would, it reduces the size of the fusing plasma from meter scale to centimeter scale.”
Witherspoon says the advantage of the HyperJet approach over tokamaks is that it doesn’t require expensive superconducting magnets to generate the enormous magnetic fields needed to confine the fusion-burning plasma.
Tokamaks themselves are also getting a reboot, thanks to the use of different superconducting materials that could make magnetic confinement more viable. MIT spin-off Commonwealth Fusion Systems is employing yttrium-barium-copper oxide (YBCO), a high-temperature superconductor, in the magnets on its Sparc reactor.
Commonwealth cofounder Martin Greenwald, who is also the deputy director of MIT’s Plasma Science and Fusion Center, calculates that the Sparc reactor’s YBCO magnets will be able to generate a field of about 21 teslas at their surface and 12 T at the center of the plasma, roughly doubling the field strength of tokamak magnets made of niobium-tin. Stronger magnetic fields produce a stronger confining force on the charged particles in the plasma, improving insulation and enabling a much smaller, cheaper, and potentially better-performing fusion device.
“If you can double the magnetic field and cut the size of the device in half, with identical performance, that will be a game changer,” Greenwald says.
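The arithmetic behind Greenwald’s claim comes from a standard rough scaling: at fixed plasma pressure ratio (beta) and shape, a tokamak’s fusion power goes roughly as the fourth power of the magnetic field times the cube of the linear size. A quick sketch of that scaling (an approximation for intuition, not a design code; the function name is my own):

```python
def relative_fusion_power(b_factor, size_factor):
    """Fusion power relative to a reference tokamak, under the rough
    scaling P ~ B^4 * R^3 (fixed beta and plasma shape):
    b_factor scales the magnetic field, size_factor the linear size."""
    return b_factor ** 4 * size_factor ** 3

# Doubling the field while halving the linear size:
# 2^4 * (1/2)^3 = 16 / 8 = 2, i.e. comparable-or-better power
# from a machine one-eighth the volume.
gain = relative_fusion_power(2.0, 0.5)
```

The fourth-power dependence on field is why a jump in magnet technology, rather than a jump in size, is the lever Commonwealth is betting on.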
Indeed, one advantage of the newer, small-scale fusion projects is that they can concentrate on the novel aspects of their designs, while taking advantage of decades of hard-won knowledge about the fundamentals of fusion science. As Greenwald puts it, “We think we can get to commercial deployment of fusion power plants faster by accepting the conventional physics basis developed around the ITER experiment and focusing on our collaborations between physicists and magnet engineers who have been setting records for decades.”
Some promising startups, though, aren’t content to accept the conventional wisdom, and they’re tackling the underlying physics of fusion in new ways. One of the more radical approaches is that of First Light Fusion. The British company intends to produce fusion using an inertial-confinement reactor design inspired by a very noisy crustacean.
The pistol shrimp’s defining feature is its oversize pistol-like claw, which it uses to stun prey. After cocking back the “hammer” part of its claw, the shrimp snaps it against the opposite side, ejecting a jet of water at roughly 25 meters per second. The resulting rapid pressure drop produces vapor-filled voids in the water called cavitation bubbles. As these bubbles collapse, they emit shock waves powerful enough to take out small marine animals.
“The shrimp just wants to use the pressure wave to stun its prey,” says Nicholas Hawker, First Light’s cofounder and CEO. “It doesn’t care that as the cavity implodes, the vapor inside is compressed so forcefully that it causes plasma to form—or that it has created the Earth’s only example of inertial-confinement fusion.” The plasma reaches temperatures of over 4,700 °C, and it creates a 218-decibel bang.
Hawker focused on the pistol shrimp’s extraordinary claw in his doctoral dissertation at the University of Oxford, and he began studying whether it might be possible to mimic and scale up the shrimp’s physiology to spark a fusion reaction that could produce electricity.
After raising £25 million (about $33 million) and teaming up with international engineering group Mott MacDonald, First Light is building an ICF reactor in which the “claw” consists of a metal disk-shaped projectile and a cube with a cavity filled with deuterium-tritium fuel. The projectile’s impact creates shock waves, which produce cavitation bubbles in the fuel. As the bubbles collapse, the fuel within them is compressed long enough and forcefully enough to fuse.
Hawker says First Light hopes to initiate its first fusion reaction this year and to demonstrate net energy gain by 2024. But he acknowledges that those achievements won’t be enough. “Fusion energy doesn’t just need to be scientifically feasible,” he says. “It needs to be commercially viable.”
No one believes it will be easy, but the extraordinary challenge of fusion energy—not to mention the pressing need—is part of the attraction for the many scientists and engineers who’ve recently been drawn to the field. And increasingly, they have the resources to finance their work.
“This notion that you hear about fusion being another 30 or 40 or 50 years away is wrong,” says TAE’s Binderbauer, whose company has raised more than $600 million. “We’re going to see commercialization of this technology in time frames of a half decade.”
Veteran fusion researchers such as Dorland and Horton tend to have a more tempered outlook. They worry that grand promises that fall short may undercut public and investor support, as has happened in the past. Any claims of commercialization within the decade “are just not true,” says Dorland. “We’re still a lot more than one breakthrough away from having a pathway to fusion power.”
What few will argue with, though, is the dire need for nuclear fusion in the near future.
“I think it’s not going too far to say that fusion is having its Kitty Hawk moment,” says MIT’s Greenwald. “We don’t have a 747 jet, but we’re flying.”
This article appears in the February 2020 print issue as “5 Big Ideas for Fusion Power.”
About the Author
Tom Clynes is a freelance writer and photojournalist who covers science and environmental issues. His 2015 book The Boy Who Played With Fusion (Houghton Mifflin Harcourt) tells the unlikely tale of a 14-year-old who became the youngest person to build a working fusion reactor.