Lucas Joppa thinks big. Even while gazing down into his cup of tea in his modest office on Microsoft’s campus in Redmond, Washington, he seems to see the entire planet bobbing in there like a spherical tea bag.
As Microsoft’s first chief environmental officer, Joppa came up with the company’s AI for Earth program, a five-year effort that’s spending US $50 million on AI-powered solutions to global environmental challenges.
The program is not just about specific deliverables, though. It’s also about mindset, Joppa told IEEE Spectrum in an interview in July. “It’s a plea for people to think about the Earth in the same way they think about the technologies they’re developing,” he says. “You start with an objective. So what’s our objective function for Earth?” (In computer science, an objective function describes the parameter or parameters you are trying to maximize or minimize for optimal results.)
AI for Earth launched in December 2017, and Joppa’s team has since given grants to more than 400 organizations around the world. In addition to receiving funding, some grantees get help from Microsoft’s data scientists and access to the company’s computing resources.
In a wide-ranging interview about the program, Joppa described his vision of the “ultimate optimization problem”—figuring out which parts of the planet should be used for farming, cities, wilderness reserves, energy production, and so on.
Every square meter of land and water on Earth has an infinite number of possible utility functions. It’s the job of Homo sapiens to describe our overall objective for the Earth. Then it’s the job of computers to produce optimization results that are aligned with the human-defined objective.
I don’t think we’re close at all to being able to do this. I think we’re closer from a technology perspective—being able to run the model—than we are from a social perspective—being able to make decisions about what the objective should be. What do we want to do with the Earth’s surface?
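Joppa’s framing can be made concrete with a toy sketch. Here, each land cell carries a made-up utility score per use, and a human-defined constraint (at least 30 percent wilderness, an assumption chosen purely for illustration) is enforced on top of the per-cell optimum:

```python
import math

# Toy sketch of the "objective function for Earth" idea. All utility
# scores and the 30% wilderness floor are illustrative assumptions.
cells = {
    "cell_a": {"farming": 0.9, "city": 0.2, "wilderness": 0.5},
    "cell_b": {"farming": 0.3, "city": 0.8, "wilderness": 0.6},
    "cell_c": {"farming": 0.4, "city": 0.1, "wilderness": 0.9},
}

def allocate(cells, min_wild_frac=0.3):
    # Unconstrained optimum: each cell takes its highest-utility use.
    plan = {c: max(u, key=u.get) for c, u in cells.items()}
    # Enforce the human-defined constraint by converting the cells that
    # lose the least utility when flipped to wilderness.
    need = math.ceil(min_wild_frac * len(cells))
    wild = [c for c, use in plan.items() if use == "wilderness"]
    if len(wild) < need:
        flips = sorted(
            (c for c in cells if plan[c] != "wilderness"),
            key=lambda c: cells[c][plan[c]] - cells[c]["wilderness"],
        )
        for c in flips[: need - len(wild)]:
            plan[c] = "wilderness"
    return plan

print(allocate(cells))
```

The point of the sketch is Joppa’s own: the optimization step is mechanical; choosing the utility scores and the constraint is the hard, human part.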
In 1896, Svante Arrhenius, of Sweden, became the first scientist to quantify the effects of man-made carbon dioxide on global temperatures. He calculated that doubling the atmospheric level of the gas from its concentration in his time would raise the average midlatitude temperature by 5 to 6 degrees Celsius. That’s not too far from the latest results, obtained by computer models running more than 200,000 lines of code.
The United Nations held its first Framework Convention on Climate Change in 1992, and this was followed by a series of meetings and climate treaties. But the global emissions of carbon dioxide have been rising steadily just the same.
At the beginning of the 19th century, when the United Kingdom was the only major coal producer, global emissions of carbon from fossil fuel combustion were minuscule, at less than 10 million metric tons a year. (To express them in terms of carbon dioxide, just multiply by 3.66.) By century’s end, emissions surpassed half a billion metric tons of carbon. By 1950, they had topped 1.5 billion metric tons. The postwar economic expansion in Europe, North America, the U.S.S.R., and Japan, along with the post-1980 economic rise of China, quadrupled emissions thereafter, to about 6.5 billion metric tons of carbon by the year 2000.
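The carbon-to-CO2 conversion mentioned above is a fixed molar-mass ratio (CO2 weighs 44 grams per mole, carbon 12), which is where the factor of 3.66 comes from:

```python
# Convert emissions expressed as metric tons of carbon into metric
# tons of carbon dioxide, using the molar-mass ratio 44/12 ≈ 3.66.
C_TO_CO2 = 44.0 / 12.0

def carbon_to_co2(tonnes_carbon):
    """Metric tons of carbon -> metric tons of CO2."""
    return tonnes_carbon * C_TO_CO2

# Year-2000 emissions: ~6.5 billion t of carbon ≈ 23.8 billion t of CO2
print(round(carbon_to_co2(6.5e9) / 1e9, 1))
```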
The new century has seen a significant divergence. By 2017, emissions had declined by about 15 percent in the European Union, with its slower economic growth and aging population, and also in the United States, thanks largely to the increasing use of natural gas instead of coal. However, all these gains were outbalanced by Chinese carbon emissions, which rose from about 1 billion to about 3 billion metric tons—enough to increase the worldwide total by nearly 45 percent, to 10.1 billion metric tons.
By burning huge stocks of carbon that fossilized ages ago, human beings have pushed carbon dioxide concentrations to levels not seen for about 3 million years. The sampling of air locked in tiny bubbles in cores drilled into Antarctic and Greenland ice has enabled us to reconstruct carbon dioxide concentrations going back some 800,000 years. Back then the atmospheric levels of the gas fluctuated between 180 and 280 parts per million (that is, from 0.018 to 0.028 percent). During the past millennium, the concentrations remained fairly stable, ranging from 275 ppm in the early 1600s to about 285 ppm before the end of the 19th century. Continuous measurements of the gas began near the top of Mauna Loa, in Hawaii, in 1958: The 1959 mean was 316 ppm, the 2015 average reached 400 ppm, and 415 ppm was first recorded in May 2019.
Emissions will continue to decline in affluent countries, and the rate at which they grow in China has begun to slow down. However, emissions growth is speeding up in India and Africa, and hence it is unlikely that we will see any substantial global declines anytime soon.
The Paris agreement of 2015 was lauded as the first accord containing specific national commitments to reduce future emissions, but even if all its targets were met by 2030, carbon emissions would still rise to nearly 50 percent above the 2017 level. According to a 2018 study by the Intergovernmental Panel on Climate Change, the only way to keep the average world temperature rise to no more than 1.5 °C would be to put emissions almost immediately into a decline steep enough to bring them to zero by 2050.
That is not impossible—but it is very unlikely. The contrast between the expressed concerns about global warming and the continued releases of record volumes of carbon could not be starker.
This article appears in the October 2019 print issue as “The Carbon Century.”
Buildings use more than one-third of the world’s energy, most of it for heating spaces and water. Most of this heat is generated by burning natural gas, oil, or propane. And where these fossil fuels are consumed, greenhouse gas emissions are a given.
Electric heat pumps, first widely used in the 1970s in Europe, could be the best solution to cut that fossil fuel use. They could slash the carbon emissions of buildings by half. And if powered by renewables, emissions can potentially go down to zero.
Keeping food cold is an energy-gobbling endeavor. Refrigerated food warehouses and factories consume immense amounts of energy, and this cooling demand is expected to increase as the climate warms and global incomes and food consumption rise. A team of researchers and companies in Europe is now developing a cryogenic energy storage system that could reduce carbon emissions from the food sector while providing a convenient way to store wind and solar power.
The CryoHub project will use surplus wind and solar electricity to chill air to cryogenic temperatures, at which point it becomes liquid and shrinks to about 1/700th of its original volume. The liquid air is stored in insulated low-pressure tanks similar to those used for liquid nitrogen and natural gas.
When the grid needs electricity, the subzero liquid is pumped into an evaporator where it expands back into a gas that can spin a turbine for electricity. As it expands, the liquid also sucks heat from surrounding air. “So you can basically provide free cooling for food storage,” says Judith Evans, a professor of air conditioning and refrigeration engineering at London South Bank University who is coordinating the CryoHub project.
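A rough sketch of the economics Evans describes: the charging cost of liquefying air is partly recovered by the turbine on discharge, and the waste cold offsets refrigeration electricity at the warehouse. All three per-kilogram figures below are illustrative assumptions, not CryoHub numbers:

```python
# Back-of-the-envelope model of a liquid-air energy store coupled to a
# cold warehouse. Every constant here is an assumed, illustrative value.
LIQUEFACTION_KWH_PER_KG = 0.45   # assumed electricity to liquefy 1 kg of air
RECOVERY_KWH_PER_KG = 0.20       # assumed turbine output on discharge
FREE_COOLING_KWH_PER_KG = 0.10   # assumed refrigeration electricity avoided

def round_trip_value(kg_air):
    """Effective round-trip ratio: useful energy out per energy in."""
    spent = kg_air * LIQUEFACTION_KWH_PER_KG
    recovered = kg_air * (RECOVERY_KWH_PER_KG + FREE_COOLING_KWH_PER_KG)
    return recovered / spent

print(f"{round_trip_value(1000):.0%}")
```

The design insight is visible in the arithmetic: the free cooling term raises the effective round-trip value above what the turbine alone would deliver.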
However, there are at least two reasons why data centers will likely play a key role in any attempt to curb global emissions. First, as cloud computing becomes more energy-efficient and increasingly relies on renewable sources, other sectors such as manufacturing, transportation, and buildings could turn to green data centers to reduce their own emissions. For example, a car manufacturer might outsource all of its in-house computing to zero-emission data centers.
Even without such partnerships, though, data centers will likely play an important part in the climate’s future. The rise of AI, machine learning, big data, and the Internet of Things means that data centers’ global electricity consumption will continue to increase. By one estimate, consumption could jump to as much as 13 percent of the world’s total electricity demand by 2030.
For these reasons, says Johan Falk, senior innovation fellow at the Stockholm Resilience Center in Sweden, data centers will have outsized importance in climate change mitigation efforts. And the more progress society makes in the near term, the sooner the benefits will begin to multiply.
Amid the sand dunes of Egypt’s Western Desert, workers are putting the finishing touches on one of the world’s largest solar installations. There, as many as 7.2 million photovoltaic panels will make up Benban Solar Park, a renewable energy project so massive it will be visible from space.
The 1.8-gigawatt installation is the first utility-scale PV plant in Egypt, a nation blessed with some of the best solar resources on the planet. The ambitious project is part of Egypt’s efforts to increase its generation capacity and incorporate more renewable sources into the mix.
“I think Benban Solar Park is the first real step to put Egypt on the solar production world map,” says Mohamed Orabi, a professor of power electronics at Aswan University.
Last week, the city of Los Angeles inked a deal for a solar-plus-storage system at a record-low price. The 400-MW Eland solar power project will be capable of storing 1,200 megawatt-hours of energy in lithium-ion batteries to meet demand at night. The project is a part of the city’s climate commitment to reach 100 percent renewable energy by 2045.
Electricity and heat production are the largest sources of greenhouse gas emissions in the world. Carbon-free electricity will be critical for keeping the average global temperature rise within the United Nations’ target of 1.5 degrees Celsius and avoiding the worst effects of climate change. As world leaders meet at the United Nations Climate Action Summit next week, boosting renewable energy and energy storage will be major priorities.
Berkeley, California, banned the installation of natural gas pipes in new residential construction projects last month. The city is committed to slashing its carbon footprint, and natural gas is a carbon double whammy: when burned, it releases carbon dioxide, and when leaked, its main ingredient, methane, is a far more potent greenhouse gas than CO2.
Those leaks, meanwhile, may soon have nowhere to hide thanks to a growing wave of private, methane-detecting satellites being placed in orbit. Canada’s GHGSat led the charge in 2016 with its carbon-tracking Claire microsatellite, and the company now has a second-generation microsat ready to launch. Several more methane-detecting satellites are coming, including one from the Environmental Defense Fund. If gas producers don’t find and squelch their own pollution, this proliferation of remote observers will make it increasingly likely that others will shine a spotlight on it.
Transmission lines in the United States and Canada require approval from every state and province traversed, and that political fragmentation hinders deployment of long power links of the type connecting vast swaths of territory in regions such as China, India, and Brazil. As a result, few studies detail how technologies that efficiently move power over thousands of kilometers, such as ultrahigh-voltage direct current (UHV DC) systems, might perform in North America. Earlier this week, the Beijing-based Global Energy Interconnection Development and Cooperation Organization (GEIDCO) stepped in to fill that gap, outlining an ambitious upgrade for North America’s grids.
GEIDCO’s plan promises to greatly shrink North America’s carbon footprint, but its boldest prescriptions represent technical and economic optimizations that run counter to political interests and recent trends. “Thinking out of the box is how you solve complicated, difficult problems,” said former Southern California Edison CEO Ted Craver in response to the plan. But GEIDCO’s approach, he said, raises concerns about energy sovereignty that could prove difficult to settle. As Craver put it: “There’s theory and then there’s practice.”
Through GEIDCO, Liu is proselytizing for UHV deployment worldwide. At the Vancouver meeting, Liu warned of “unimaginable damage to mankind” if greenhouse gas emissions continued at their current pace. He argued that beefy grids moving power across and between continents are a prerequisite for accessing and sharing the world’s best wind, solar, and hydropower resources, and thus dialing down fossil fuel consumption.
GEIDCO’s plan for North America mirrors the combination of UHV AC and DC transmission deployed by State Grid in China. In that scheme, a series of 800-kV UHV DC lines running east to west across the U.S. would share wind and solar power widely, while north-south lines would provide continent-wide access to Canada’s giant hydropower plants [see map below]. One more UHV DC line—a 5,200-kilometer stretch from Mexico to Peru—would enable power exchanges with South America.
As in China, UHV AC lines would be added atop North America’s five existing AC grids, strengthening them so they could safely absorb the 8-gigawatt output of each DC line. In two cases, UHV AC would eventually cross boundaries between distinct AC grids to create larger and more stable synchronous zones: Mexico’s grid would fold into the larger grid that currently covers the Western U.S. and Canada; and Texas’ grid would merge with North America’s big Eastern grid.
Consolidating grids helps explain the benefit of GEIDCO’s plan. Texas’ grid is only weakly linked to its neighbors at present, which limits its ability to share resources such as extra wind power that may go to waste for lack of in-state demand, or to import renewable power when its wind farms are still. GEIDCO’s UHV build-out unlocks such resources by enabling each power plant to serve a larger area.
The payoffs are numerous. Electrification to decarbonize vehicles, home heating, and industries would proceed 60 percent faster than the most ambitious U.S. electrification scenarios projected by the Electric Power Research Institute. Renewable energy installation would also accelerate, pushing the share of zero-carbon energy on the continent from 40 percent of the power supply in 2017, to 64 percent in 2035, and 74 percent in 2050. Nuclear energy’s contribution would fall by nearly half to just 11 percent of generation in 2050—mostly due to its higher cost, according to GEIDCO director of energy planning Gao Yi. Overall, energy-related greenhouse gas emissions would drop 80 percent by 2050, relative to a business-as-usual scenario—even as average power generating costs drop.
However, consolidating grids is also where the political sensitivity begins. Texas fiercely guards its independent AC grid, which shields its power industry from oversight by federal regulators. Craver’s take on the prospects for bringing Texas into the fold: “Good luck on that.”
Broader political concerns, meanwhile, could hold up international UHV links. Deeper integration of power supplies across borders implies a high level of trust between nations. That cuts against the recent trend toward populist governments advocating more nationalist agendas, as exemplified by the U.K.’s ongoing effort to leave the European Union.
A populist Mexican President elected last year has blocked international investment in renewable energy and could undo recent efforts to expand the country’s grid interconnections. U.S. President Trump has decreased trust in the United States as an international partner with his America-first trade policies and plans to withdraw the U.S. from global pacts such as the Paris Agreement on Climate Change.
At the Vancouver forum, organized by the Edison Electric Institute, a Washington-based trade group, most grid experts identified politics and social acceptance as the greatest challenges to power network expansion. In the U.S., climate and energy policy has made it more difficult for government researchers to even study North American grid integration.
Last year, political appointees at the U.S. Department of Energy blocked publication of a modeling study exploring integration of the Eastern and Western grids via DC lines. The study, led by the grid modeling group at the National Renewable Energy Lab (NREL) in Boulder, Colo., projected that long DC lines linking the grids would reduce power costs and accelerate renewable energy development. According to the study’s leader, it showed that “building national-scale transmission makes sense.”
NREL’s grid modelers are now wrapping up a larger continent-wide study looking at the challenges and opportunities for grid integration of very large levels of wind, solar and hydropower. That North American Renewable Integration Study is a collaborative effort by the U.S., Mexican, and Canadian governments. It was supposed to be completed this month, but remains under review by the three governments.
Japanese scientists have developed a thermal battery that converts heat into electricity when buried in a geothermal zone
You can fry an egg on the ground in Las Vegas in August, but try that in Iceland or Alaska and you’ll just end up with the stuff on your face—unless you know how to tap into the Earth’s vast reservoirs of geothermal energy.
Researchers at the Tokyo Institute of Technology have developed a new kind of battery that can reliably generate electric power from heat in environments with temperatures ranging from 60 °C to 100 °C, which is low enough to mimic geothermal heat.
In an earlier experiment, the researchers developed sensitized thermal cells (STCs) that employed dye-sensitized solar cells to convert light into electric power. In their latest advance, team leader Sachiko Matsushita, an associate professor at Tokyo Tech, explained that they replaced the dye with a semiconductor to enable the cells to operate using heat instead of light.
Legislation passed by the U.S. House and Senate supports isolating power equipment to block cyber attacks
Late last week, the U.S. House of Representatives passed legislation to mandate federal research on a radically ‘retro’ approach to protect power grids from cyber attack: unplugging or otherwise isolating the most critical equipment from grid operators’ digital control systems. Angus King, an independent senator from Maine whose identical bill passed the Senate last month, says such a managed retreat from networked controls may be required to thwart the grid’s most sophisticated online adversaries.
Grid cyber experts say the Securing Energy Infrastructure Act moving through Congress is a particular testament to Michael Assante, a gifted and passionate cybersecurity expert who died earlier this month from leukemia at the age of 48. “If you were to point to just one person as the primary driver, it would have to be Michael,” says colleague Andrew Bochman, senior cyber and energy security strategist at Idaho National Laboratory (INL). Senator King recently told The Washington Post that research at INL kicked off by Assante had inspired the bill.
A Canadian company emerges from stealth mode to provide grid-scale energy storage with its high-density battery tech
A new Canadian company with roots in Vermont has emerged from stealth mode and has ambitious plans to roll out a new grid-scale battery in the year ahead. The longshot storage technology, targeted at utilities, offers four times the energy density and four times the lifetime of lithium-ion batteries, the company says, and will be available for half the price.
The new company’s CEO, a former Democratic nominee for governor of Vermont, founded Cross Border Power in the wake of her electoral loss last November. Within days of the election, she was at her computer writing a thesis (since posted on her campaign website) that she boldly calls “[The] North American Solution to Climate Change.”
One of Christine Hallquist’s planks as gubernatorial candidate was to set the Green Mountain State on a path to obtaining 90 percent of its energy from renewable sources by 2050. In the final weeks of the election, the Republican Governors Association attacked Hallquist’s campaign by claiming her vision would raise taxes on Vermonters and hike gasoline prices at the pump.
Today, she might agree that economics may indeed shape the future of renewable energy—but through low prices, not high ones. “I think we’re at the point, especially with our batteries, that renewables are going to be cheaper than any of the fossil fuels,” she says.
A material called ZIF-8 swells up when carbon dioxide molecules are trapped inside, new images reveal
A new kind of molecular-scale microscope has been trained for the first time on a promising wonder material for carbon capture and storage. The results, researchers say, suggest a few tweaks to this material could further enhance its ability to scrub greenhouse gases from emissions produced by traditional power plants.
The announcement comes in the wake of a separate study concerning carbon capture published in the journal Nature. The researchers involved in that study found that keeping the average global temperature change to below 1.5 degrees C (the goal of the Paris climate accords) may require more aggressive action than previously anticipated. It will not be enough, they calculated, to stop building new greenhouse-gas-emitting power stations and allow existing plants to age out of existence. Some existing plants will also need to be shuttered or retrofitted with carbon capture and sequestration technology.
Säntis Tower in the Swiss Alps is struck by lightning more than 100 times a year
Atop a rocky peak in the Swiss Alps sits a telecommunications tower that gets struck by lightning more than 100 times a year, making it perhaps the world’s most frequently struck object. Taking note of the remarkable consistency with which lightning hits this 124-meter structure, researchers have adorned it with instruments for a front-row view of these violent electric discharges.
On Wednesday, a small team installed a new gadget near Säntis Tower in their years-long quest to better understand how lightning forms and why it behaves the way it does. About two kilometers from the tower, they set up a broadband interferometer that one member, Mark Stanley of New Mexico Tech, had built back in his lab near Jemez, New Mexico.
“You can’t really go to a company and find an instrument that’s built just for studying lightning,” says Bill Rison, Stanley’s collaborator who teaches electrical engineering at New Mexico Tech. “You have to build your own.”
The one Stanley built has three antennas with bandwidth from 20 to 80 megahertz (MHz) to record powerful electromagnetic pulses in the very high-frequency range that lightning is known to produce. The device also has a fourth antenna to measure sferics, which are low-frequency signals that result from the movement of charge that occurs with a strike or from storm activity within clouds. “Basically, lightning is a giant spark,” Rison explains. “Sparks give off radio waves and the interferometer detects the radio waves.”
To anyone who has witnessed a lightning strike, everything seems to happen all at once. But Stanley’s sensor captures several gigabytes of data about the many separate pulses that occur within each flash. Those data can be made into a video that replays, microsecond by microsecond, how “channels” of lightning form in the clouds.
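The core step behind such mapping is estimating how much later a pulse arrives at one antenna than another. A minimal sketch of that step, using cross-correlation on synthetic channels (the pulse, sample rate, and 17-sample delay are all assumptions, not parameters of Stanley’s instrument):

```python
import numpy as np

# Estimate the time difference of arrival (TDOA) of a VHF pulse at two
# antennas by locating the cross-correlation peak. Synthetic data only.
rng = np.random.default_rng(0)
fs = 200e6                       # assumed sample rate, samples/s
pulse = rng.standard_normal(64)  # stand-in for a broadband VHF pulse

chan_a = np.zeros(1024)
chan_b = np.zeros(1024)
chan_a[300:364] = pulse          # pulse reaches antenna A first...
chan_b[317:381] = pulse          # ...and antenna B 17 samples later

# The lag of the cross-correlation peak is the delay between channels.
corr = np.correlate(chan_b, chan_a, mode="full")
lag = corr.argmax() - (len(chan_a) - 1)
print(lag, lag / fs)             # 17 samples -> 85 ns
```

A real interferometer repeats this across antenna pairs to turn delays into arrival angles, which is what lets each flash be replayed microsecond by microsecond.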
By mapping lightning in this way, the Säntis team, which hired Stanley and Rison to haul their interferometer to Switzerland, hopes to better understand what prompts lightning’s “initiation”—that mysterious moment when it cracks into existence.
So far, measurements have raised more questions than they’ve answered. One sticking point: for a thunderstorm to produce a lightning strike, the electric field within it must build to an intensity on the order of several megavolts per meter. But while researchers have sent balloons into thunderstorms, no one has measured a field beyond 200 kilovolts per meter, or one-tenth of the required value, says Farhad Rachidi of the Swiss Federal Institute of Technology (EPFL), who co-leads the Säntis research team.
“The conditions required for lightning to be started within the clouds never seem to exist based on the measurements made in the clouds,” says Marcos Rubinstein, a telecommunications professor at Switzerland’s School of Business and Engineering Vaud and co-leader of the Säntis team with Rachidi. “This is a big, big question.”
In his own research at New Mexico Tech, Rison has laid some groundwork that could explain how small electric fields can produce such big sparks. In 2016, he and his colleagues published a paper in Nature Communications that described experimental evidence showing that a process known as fast positive breakdown can create a series of streamers, or tiny sparks, and may arise from much stronger local electric fields that occur in small pockets within a storm.
If enough streamers occur in quick succession and within close vicinity to one another, they make more streamers, adding up to a streamer “avalanche” that turns into positive leaders, or mini-bolts that branch toward clouds or the ground.
“We haven’t hit any roadblocks yet to say, this is something that isn’t the process for the initiation of lightning,” Rison says. With his evidence in hand, theorists are now trying to explain exactly how and why these fast positive breakdowns occur in the first place.
Meanwhile, the Säntis team wants to adapt a mathematical technique called time-reversal, which was originally pioneered for acoustics, to better understand lightning’s initiation. With this method, they intend to use data gathered by the tower’s many instruments (which include a collection of six antennas called a lightning mapping array, two Rogowski coils to measure current, two B-Dot sensors to measure the current time-derivative, broadband electric and magnetic field sensors, and a high-speed camera) to reconstruct the total path of strikes soon after they happen, tracing the electromagnetic radiation all the way back to its original source.
As has been true of past lightning research, their findings may someday inform the design of airplanes or electric grids, and help protect people and equipment against lightning strikes and other sudden power surges. The Säntis team’s work has held particular relevance for wind farm operators. That’s because most strikes recorded at the tower are examples of upward lightning—which travels from ground-to-cloud instead of cloud-to-ground.
Upward lightning often originates from tall buildings and structures, which can themselves trigger a bolt that shoots skyward, and this process can damage wind turbines. In 2013, the team published one of the most extensive descriptions to date of this type of flash.
More recently, their work has raised questions about why industry safety certifications for aircraft are based on data about downward strikes rather than upward ones, which commonly occur with aircraft and cause damage more like that reported by pilots and mechanics.
By the end of this year, the Säntis team expects to record its 1,000th lightning strike at the tower. And there’s one more elusive scientific matter with massive practical implications they hope to someday resolve. “If we understand how lightning is initiated, we could take a big step forward on one of the other questions we’ve been trying to solve for a long time, and that’s to be able to predict lightning before it happens,” says Rubinstein.
Which algorithms are best at integrating solar arrays with electrical grid storage?
By analyzing the kinds of algorithms that control the flow of electricity between solar cells and lithium-ion batteries, scientists have identified the best types of algorithms to govern electrical grid storage of solar power.
A dizzying number of algorithms exist to help manage the flow of electricity between photovoltaic cells and lithium-ion batteries in the most profitable manner. These come in a variety of complexities and have diverse computational power requirements.
“Lithium-ion batteries are expensive components, and photovoltaic plant owners have to pay large amounts of money in order to install lithium-ion batteries on plant,” says study lead author Alberto Berrueta, an engineering researcher at the Public University of Navarre’s Institute of Smart Cities in Pamplona, Spain. “Management algorithms are of capital importance in order to preserve a long lifetime for the batteries to make the most out of the batteries.”
To see which types of these algorithms work best at getting the most out of lithium-ion batteries, the researchers developed models based on the amount of power generated over the course of a year by a medium-size, roughly 100-kilowatt solar cell array located in Navarre. They focused on concerns such as computational requirements, the price of electricity, battery life, battery costs, and battery charging and discharging rates.
The researchers looked at three families of algorithms currently used in managing electricity from commercial solar cell arrays: dynamic, quadratic, and linear. Dynamic algorithms tackle complex, sequential optimization problems by breaking them down into several simpler subproblems. Quadratic algorithms each involve at least one squared variable and often find use in calculating areas, computing the profit of a product, and pinpointing the speed and position of an object. Linear algorithms each involve variables that are not squared and have the simplest computational requirements.
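To make the simplest family concrete: a lossless battery scheduled against hourly prices is a linear problem, and in this toy case (no losses, no grid limits) the linear optimum reduces to pairing the cheapest charging hours with the priciest discharging hours. The prices, capacity, and power limit below are made-up assumptions, not figures from the study:

```python
# Toy battery arbitrage: a special case of the linear formulation where
# greedy price pairing finds the optimum. All numbers are illustrative.
prices = [30, 25, 20, 40, 60, 55]  # assumed hourly prices, EUR/MWh
E_MAX = 2.0    # assumed battery capacity, MWh
P_MAX = 1.0    # assumed charge/discharge limit, MWh per hour

def arbitrage_profit(prices, e_max, p_max):
    # Pair the cheapest charge hours with the priciest discharge hours.
    order = sorted(range(len(prices)), key=prices.__getitem__)
    n_pairs = int(min(e_max / p_max, len(prices) // 2))
    profit = 0.0
    for k in range(n_pairs):
        buy, sell = order[k], order[-1 - k]
        margin = (prices[sell] - prices[buy]) * p_max
        if margin > 0 and buy < sell:  # must charge before discharging
            profit += margin
    return profit

print(arbitrage_profit(prices, E_MAX, P_MAX))
```

Quadratic and dynamic formulations earn their extra computational cost by modeling what this sketch ignores, such as losses and battery degradation that grow nonlinearly with current.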
The scientists found the dynamic algorithms required far more computational power than the other two families of algorithms; as the number of variables grew, they experienced an exponential increase in problem complexity. A commercial PC that would take about 10 seconds to compute the energy flow between the solar cells and lithium-ion batteries using the linear and quadratic algorithms would take 42 minutes with the dynamic algorithms.
Linear algorithms had the lowest computational requirements but suffered in terms of accuracy. For instance, their simplified models did not account for how electrical current can reduce battery lifetime. All in all, the linear algorithms provided an average of 20 percent lower profits than the maximum achievable.
The researchers concluded that quadratic algorithms provided the best trade-off between accuracy and computational simplicity for solar power applications. Quadratic algorithms had about the same low computational requirements as linear algorithms while achieving revenues similar to dynamic algorithms for all battery sizes.
In the future, scientists can investigate which management algorithms might work best with hybrid energy storage systems, Berrueta says. Future research can also investigate which computer models work best at calculating all the factors affecting the lifetime of lithium-ion batteries, including batteries discarded from electric vehicles that might find a second life working in renewable energy plants, he adds.