Fossil-fuel power plants are one of the largest emitters of the greenhouse gases that cause climate change. Collectively, these 18,000 or so plants account for 30 percent of global greenhouse gas emissions, including an estimated 15 billion metric tons of carbon dioxide per year. The pollutants produced by burning fossil fuels also seriously degrade air quality and public health. They contribute to heart and respiratory diseases and lung cancer and are responsible for nearly 1 in 10 deaths worldwide.
Averting the most severe impacts of air pollution and climate change requires understanding the sources of emissions. The technology exists to measure CO2 and other gases in the atmosphere, but not with enough granularity to pinpoint who emitted what and how much. Last month, a new initiative called Climate TRACE was unveiled, with the aim of accurately tracking man-made CO2 emissions right to the source, no matter where in the world that source is. The coalition of nine organizations and former U.S. Vice President Al Gore has already begun to track such emissions across seven sectors, including electricity, transportation, and forest fires.
I’m a machine-learning researcher, and in conjunction with the nonprofits WattTime, Carbon Tracker, and the World Resources Institute (with funding from Google.org), I’m working on the electricity piece of Climate TRACE. Using existing satellite imagery and artificial intelligence, we’ll soon be able to estimate emissions from every fossil-fuel power plant in the world. Here’s how we’re doing it.
The current limits of monitoring emissions from space
The United States is one of the few countries that publicly releases high-resolution data on emissions from individual power plants. Every major U.S. plant has on-site emissions monitoring equipment and reports data to the Environmental Protection Agency. But the costs of installing and maintaining these systems make them impractical for use in many countries. Monitoring systems can also be tampered with. Other countries report annual emissions totals that may be rough estimates instead of actual measurements. These estimates lack verification, and they may under-report emissions.
Greenhouse gas emissions are surprisingly difficult to estimate. For one thing, not all of them are man-made. CO2 and methane releases from the ocean, volcanoes, decomposition, and soil, plant, and animal respiration also put greenhouse gases into the atmosphere. Then there are the non-obvious man-made contributors, such as cement production and fertilizers. Even if you know the source, quantities can be tricky to estimate because emissions fluctuate: power plants burning fossil fuels adjust their generation depending on local demand and electricity prices, among other factors.
Concentrations of CO2 are measured locally at observatories such as Mauna Loa, in Hawaii, and globally by satellites such as NASA’s OCO-2. Rather than directly measuring the concentration, satellites estimate it based on how much of the sunlight reflected from Earth is absorbed by carbon dioxide molecules in the air. The European Space Agency’s Sentinel-5P uses similar technology for measuring other greenhouse gases. These spectral measurements are great for creating regional maps of atmospheric CO2 concentrations. Such regional estimates have been particularly revealing during the pandemic, as stay-at-home orders led to decreased pollutant levels around cities, driven largely by reductions in transportation.
But the resolution of these measurements is too low. Each measurement from OCO-2, for example, represents a 1.1-square-mile (2.9-square-kilometer) area on the ground, so it can’t reveal how much an individual power plant emitted (not to mention CO2 from natural sources in the area). OCO-2 provides daily observations of each location, but with a great deal of noise due to clouds, wind, and other atmospheric changes. To get a reliable signal and suppress noisy data points, multiple observations of the same site should be averaged over a month.
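As a rough illustration of why monthly averaging helps, here is a toy numpy sketch. All numbers are invented for the example, not actual OCO-2 data: single noisy passes scatter widely, but averaging roughly 30 observations per month tightens the spread by about the square root of the sample size.

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented numbers: a site's true column-averaged CO2 (ppm), observed
# daily with heavy noise from clouds, wind, and other atmospheric changes.
true_ppm = 412.0
daily_obs = true_ppm + rng.normal(0.0, 4.0, size=360)

# Averaging ~30 observations per site per month shrinks the noise by
# roughly sqrt(30), pulling a usable signal out of the scatter.
monthly_means = daily_obs.reshape(12, 30).mean(axis=1)

print(round(float(daily_obs.std()), 2))      # wide scatter, single passes
print(round(float(monthly_means.std()), 2))  # much tighter after averaging
```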
To estimate emissions at the source, we need both spatial resolution that’s high enough to see plant operations and frequent observations to see how those measurements change over time.
How to model power plant emissions with AI
We’re fortunate that at any given moment, dozens of satellite networks and hundreds of satellites are capturing the kind of high-resolution imagery we need. Most of these Earth-observing satellites observe in the visible spectrum. We also use thermal infrared to detect heat signatures.
Having human analysts review images from multiple satellites and cross-reference them with other data would be too time-consuming, expensive, and error-prone. Our prototype system is starting with data from three satellite networks, from which we collect about 5,000 non-cloudy images per day. The number of images will grow as we incorporate data from additional satellites. Some observations contain information at multiple wavelengths, which means even more data to analyze and a finely tuned eye needed to interpret it accurately. No human team could process that much data within a reasonable time frame.
With AI, the game has changed. Using the same deep-learning approach being applied to speech recognition and to obstacle avoidance in self-driving cars, we’re creating algorithms that lead to much faster prediction of emissions and an enhanced ability to extract patterns from satellite images at multiple wavelengths. The exact patterns the algorithm learns are dependent on the type of satellite and the power plant’s technology.
We start by matching historical satellite images with plant-reported power generation to create machine-learning models that can learn the relationship between them. Given a novel image of a plant, the model can then predict the plant’s power generation and emissions.
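The real models are deep networks over raw multispectral imagery, but the supervised setup can be sketched with a toy linear model on hypothetical image-derived features. Every feature name and number below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the supervised setup. The two "features" here
# (say, plume brightness and thermal signal) and all numbers are made up.
n_train = 200
X = rng.uniform(0.0, 1.0, size=(n_train, 2))
true_w = np.array([300.0, 500.0])                # hypothetical MW per unit signal
y = X @ true_w + rng.normal(0.0, 10.0, n_train)  # plant-reported generation (MW)

# Fit the image-to-generation relationship by least squares.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Given a novel image's features, predict the plant's power generation.
new_image_features = np.array([0.6, 0.4])
predicted_mw = float(new_image_features @ w)
print(round(predicted_mw))  # close to 0.6*300 + 0.4*500 = 380
```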
We have enough ground truth on power generation to train the models. The United States and Taiwan are two of the few countries that report both plant emissions and power generation at hourly intervals. Australia and countries in Europe report generation only, while still other countries report daily aggregated generation. Knowing the power generation and fuel type, we can estimate emissions where that data isn’t reported.
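Where only generation and fuel type are known, the conversion to emissions is essentially a lookup of a fuel-specific emission factor. A minimal sketch, using round illustrative factors rather than the project's actual values:

```python
# Illustrative emission factors in metric tons of CO2 per megawatt-hour.
# Real factors vary by plant technology and fuel quality; these are
# round numbers for the sketch, not the project's actual values.
EMISSION_FACTOR_T_PER_MWH = {"coal": 1.0, "gas": 0.45, "oil": 0.75}

def estimate_emissions(generation_mwh: float, fuel: str) -> float:
    """Estimate CO2 emissions (metric tons) from generation and fuel type."""
    return generation_mwh * EMISSION_FACTOR_T_PER_MWH[fuel]

# A coal plant running at 500 MW around the clock for a 30-day month:
monthly_mwh = 500 * 24 * 30
print(round(estimate_emissions(monthly_mwh, "coal")))  # -> 360000
```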
Once our models have been trained on plants with known power generation, we can apply the models worldwide to any power plant. Our algorithms create predictive models for various satellites and various types of power plants, and we can aggregate the predictions to estimate emissions over a period of time—say, one month.
What our deep-learning models look for in satellite images
In a typical fossil-fuel power plant, greenhouse gases exhaust through a chimney called the flue stack, producing a telltale smoke plume that our models can spot. Plants that are more efficient or have secondary collection measures to reduce emissions may have plumes that are difficult to see. In those cases, our models look for other visual and thermal indicators when the power plant’s characteristics are known.
Another sign the models look for is cooling. Fossil-fuel plants burn fuel to boil water that creates steam to spin a turbine that generates electricity. The steam must then be cooled back into water so that it can be reused to produce more electricity. Depending on the type of cooling technology, a large water vapor plume may be produced from cooling towers, or heat may be released as warm water discharged to a nearby source. We use both visible and thermal imaging to quantify these signals.
Applying our deep-learning models to power plant emissions worldwide
So far, we have created and validated an initial set of models for coal-burning plants using generation data from the United States and Europe. Our cross-disciplinary team of scientists and engineers continues to gather and analyze ground-truth data for other countries. As we begin to test our models globally, we will also validate them against reported annual country totals and fuel consumption data. We are starting with CO2 emissions but hope to expand to other greenhouse gases.
Our goal is global coverage of fossil-fuel power plant emissions—that is, for any fossil fuel plant in any country, we will be able to accurately predict its emissions of greenhouse gases. Our work for the energy sector is not happening in isolation. Climate TRACE grew out of our project on power plants, and it now has a goal to cover 95 percent of man-made greenhouse gas emissions in every sector by mid-2021.
What comes next? We will make the emissions data public. Renewable energy developers will be able to use it to pinpoint locations where new wind or solar farms will have the most impact. Regulatory agencies will be able to create and enforce new environmental policy. Individual citizens can see how much their local power plants are contributing to climate change. And it may even help track progress toward the Paris Agreement on climate, which is set to be renegotiated in 2021.
About the Author
Heather D. Couture is the founder of the machine-learning consulting firm Pixel Scientia Labs, which guides R&D teams to fight cancer and climate change more successfully with AI.
The seas had turned rough as a sudden squall whipped up the winds enough to howl through the rigging. And with those winds came a powerful smell of oil. Soon I could see the characteristic rainbow sheen from my position on the rail of this fishing trawler. It was May of 2016, and we were in the Gulf of Mexico, about 16 kilometers off the southeast coast of Louisiana.
“Skimmer in the water,” bellowed Kevin Kennedy, an Alaskan fisherman turned oil-spill remediation entrepreneur. Ropes groaned as the boat’s winches lowered his prototype oil-recovery system into the heaving seas. As the trawler bobbed up and down, Kennedy’s contraption rode the waves, its open mouth facing into the slick, gulping down a mix of seawater and crude oil.
The stomach of Kennedy’s device, to continue the analogy, was a novel separator, which digested the mixture of seawater and oil. By virtue of its clever engineering, it excreted essentially nothing but water. At the top of the separator’s twin tanks, the collected oil began to swell. When enough had accumulated, the oil was sucked safely into a storage tank. Then the cycle would begin again.
How much oil there was on the water here was a subject of great dispute. But its source was clear enough. In 2004, Hurricane Ivan blasted through the Gulf of Mexico, triggering submarine landslides that toppled a drilling platform erected by New Orleans–based Taylor Energy. The mangled tops of Taylor’s subsea oil wells were then buried under some 20 meters of mud. But all that loose mud didn’t do much to stem the flow of oil and gas from many of these wells.
Efforts to contain the flow from the shattered wells, conducted between 2009 and 2011, saw only partial success. Oil continued to flow from some of these wells and rise to the surface for years to come.
While this oil spill posed a nasty threat to the marine environment, it served as a valuable testing ground for Kennedy’s invention. This former fisherman has spent a small fortune to prove he has created an effective system for cleaning up oil spilled on the water, one that works well in real-world conditions. But for all he can tell so far, it’s a system that nobody wants.
“I thought if I built a better mousetrap, everyone would want one,” Kennedy says. “Instead, the world has decided they’re okay with mice.”
There are countless oil tankers, barges, rigs, and pipelines that operate in, around, and through U.S. coastal waters. Every year, some of them leak some of their contents. In a typical year the leakage amounts to no more than a million gallons or so. But every now and then a monster mishap spills considerably more: In 1989, the Exxon Valdez tanker ran aground on a reef and gushed some 11 million U.S. gallons (42,000 cubic meters) of oil into the pristine waters of Prince William Sound, Alaska. In 2005, Hurricane Katrina unleashed more than 8 million gallons (30,000 cubic meters) from Louisiana storage facilities. And even those incidents pale in comparison with the Deepwater Horizon disaster in 2010, in which a drilling rig leased by BP exploded in the Gulf of Mexico, killing 11 people and ultimately releasing some 210 million gallons (almost 800,000 cubic meters) of oil.
Such disasters not only ravage huge, complex, and delicate marine ecosystems, but they are also economically devastating: The damage to tourism and commercial fisheries is often measured in the hundreds of millions of dollars.
To deal with such fiascoes, engineers, chemists, and inventors have devised, sometimes on the fly, a grab bag of equipment, systems, chemicals, and procedures for collecting the oil and removing it, or for breaking it up or burning it in place to lessen its environmental impacts.
Today, the oil-spill-management market is a roughly US $100-billion-a-year industry with hundreds of companies. But multiple studies of the biggest episodes, including the Deepwater Horizon disaster, have questioned the industry’s motives, methods, track record, and even its utility.
After decades in the industry, Kennedy, a small player in a big business, has unique perspectives on what ails it. His involvement with oil spills stretches back to 1989, when he bought his first fishing boat and paid for a license to trawl for shrimp near Prince William Sound. He couldn’t have chosen a worse time to begin a fishing career. On the first day of the shrimping season, the Exxon Valdez ran aground on Bligh Reef, and Kennedy found himself drafted as a first responder. He spent more than four months corralling oil with his nets and using his fish pumps to transfer the unending gobs of sticky mess into his boat’s hold.
Meanwhile, millions of dollars of oil-skimming equipment was airlifted to the nearby port of Valdez, most of it ultimately proving useless. Kennedy has witnessed something similar every time there’s a spill nearby: There may be lots of cleanup activity, but often, he insists, it’s just to put on a good show for the cameras. In the end, most of the oil winds up on the beach—“Nature’s mop,” he calls it.
In 2004, Kennedy participated in the cleanup that followed the grounding of the cargo ship Selendang Ayu—a tragic accident that cost six sailors their lives and released 336,000 gallons (1,272 cubic meters) of fuel oil into Alaskan waters. After that incident, he became convinced he could design gear himself that could effectively recover the oil spilled on the water before it hit the beach. His design employed fishing nets and fish pumps, normally used to transfer fish from the nets into the holds of fishing vessels. (Fish pumps use vacuum instead of whirling impellers, meaning no chopped-up fish.)
Fast forward to 2010 and the Deepwater Horizon disaster. The amount of oil released into the water by that well blowout seemed limitless—as did the money available to try to clean things up. Desperate for solutions, BP was exploring every avenue, including leasing oil-water separators built by actor Kevin Costner’s oil-cleanup company, Ocean Therapy Solutions, now defunct. BP ultimately spent some $69 billion on its response to the disaster, including legal fees.
In the midst of those frenzied cleanup efforts, Kennedy packed up a hastily assembled oil-recovery system and drove from Anchorage to Louisiana. He presented himself to BP, which worked out a contract with Kennedy. But before he could sign it, the oil well was capped.
Although enormous oil slicks still covered the sea, Kennedy was no longer allowed to participate in the cleanup. Only an emergency, the relevant regulators felt, gave them the flexibility to try out new technology to address a spill. With the emergency now officially over, cleanup would be done strictly by the book, following specific U.S. Coast Guard guidelines.
First and foremost, only equipment from a very short list of certified systems would be used. So Kennedy watched from the shore as others went to work. Less than 3 percent of the oil from the BP spill was ever recovered, despite billions spent on efforts that mostly involved burning the oil in place or applying chemical dispersants—measures for addressing the problem that pose environmental hazards of their own.
In 2011, in the wake of the Deepwater Horizon spill, the XPrize Foundation mounted the Wendy Schmidt Oil Cleanup XChallenge, named for the philanthropist and wife of Eric Schmidt, the former executive chairman of Google. The purpose of the contest was to foster technical advances in the oil-spill cleanup industry. Kennedy submitted his system on something of a lark and was startled to learn he was chosen as one of 10 finalists from among hundreds of entrants, including some of the biggest names in the oil-spill field.
“All the global players were there: Lamor, Elastec, Nofi. Some of these are hundred-million-dollar companies,” says Kennedy. “When I finished packing up the shipping container to go down to the competition, I think I had $123 left in my checking account.” His life savings depleted, Kennedy was forced to ask friends for donations to afford the plane ticket to New Jersey, where the competition was being held.
Located in Leonardo, N.J., the Department of the Interior’s Ohmsett facility houses a giant pool more than 200 meters long and almost 20 meters wide. Researchers there use computerized wave generators to simulate a variety of open-water environments. Whereas the industry standard for an oil skimmer was 1,100 gallons of oil per minute, the organizers of this XPrize competition sought a machine that could recover upwards of 2,500 gallons per minute with no more than 30 percent water in the recovered fluid.
Kennedy had cobbled together his skimmer from used fishing gear, including a powerful 5,000-gallon-per-minute fish pump. In addition, Kennedy’s system used lined fishing nets to capture the oil at the surface. This equipment would be familiar to just about anyone who has worked on fishing boats, which are often the first on the scene of an oil spill. So there would be a minimal learning curve for such first responders.
When the XPrize competition began, Kennedy’s team was the second of the 10 finalists to be tested. Perhaps due to inexperience, perhaps to carelessness, Ohmsett staff left the valves to the collection tank closed. Kennedy’s equipment roared to full power and promptly exploded. The massive fish pump had been trying to force 5,000 gallons a minute through a sealed valve. The pressure ruptured pipes, bent heavy steel drive shafts, and warped various pressure seals.
Replacement parts arrived with just an hour to spare, narrowly allowing Kennedy to finish his test runs. Although his damaged pump could no longer run at full capacity, his skimmer delivered impressive efficiency numbers. “On some of the runs, we got 99 percent oil-to-water ratio,” he says.
Kennedy didn’t win the contest—or the $1 million prize his fledgling company sorely needed. The team that took first place in this XPrize competition, Elastec, fielded a device that could pump much more fluid per minute, but what it collected was only 90 percent oil. The second-prize winner’s equipment, while also pumping prodigious volumes of fluid, collected only 83 percent oil.
Although Kennedy’s system demonstrated the best efficiency at the XPrize competition, buyers were not forthcoming. It wasn’t surprising. “The real problem is you don’t get paid by the gallon for recovered oil,” says Kennedy, who soon discovered that the motivations of the people carrying out oil-spill remediation often aren’t focused on the environment. “It’s a compliance industry: You’re required to show you have enough equipment to handle a spill on paper, regardless of whether the stuff actually works or not.”
The problem, in a nutshell, is this: When there’s an oil spill, responders are typically hired by the day instead of being paid for the amount of oil they collect. So there’s little incentive for them to do a better job or upgrade their equipment to a design that can more efficiently separate oil from water. If anything, there’s a reverse incentive, argues Kennedy: Clean up a spill twice as quickly, and you’ll make only half as much money.
The key term used by regulators in this industry is EDRC, which stands for Effective Daily Recovery Capacity. This is the official estimate of what a skimmer can collect when deployed on an oil spill. According to the regulations of the Bureau of Safety and Environmental Enforcement, EDRC is computed by “multiplying the manufacturer’s rated throughput capacity over a 24-hour period by 20 percent…to determine if you have sufficient recovery capacity to respond to your worst-case discharge scenario.”
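That formula is simple enough to spell out. A small sketch of the calculation, applied to the 1,100-gallon-per-minute skimmer rating mentioned earlier:

```python
def edrc_gallons_per_day(rated_throughput_gpm: float) -> float:
    """Effective Daily Recovery Capacity per the quoted BSEE formula:
    the rated throughput over a 24-hour period, multiplied by 20 percent."""
    gallons_per_24_hours = rated_throughput_gpm * 60 * 24
    return gallons_per_24_hours * 0.20

# The 1,100-gallon-per-minute industry-standard skimmer mentioned earlier:
print(round(edrc_gallons_per_day(1100)))  # -> 316800
```

Note that nothing in the formula accounts for how much of that throughput is actually oil rather than water, which is precisely Kennedy's complaint.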
Reliance on the equipment’s rated throughput, as determined by tank testing, and assumed effectiveness is at the heart of the agreement hammered out between government and oil companies in the wake of the Exxon Valdez disaster. It’s a calculation of what theoretically would work to clean up a spill. Unfortunately, as Kennedy has seen time and again on actual spills, performance in the field rarely matches those paper estimates, which are based on tests that don’t reflect real-world conditions.
Even though he thought the rules made no sense, Kennedy needed to get his equipment certified according to procedures established by ASTM International (an organization formerly known as American Society for Testing and Materials). So in 2017 he paid to have his equipment tested to establish its official ratings.
Those recovery ratings are determined by placing skimmers in a test tank with a 3-inch-thick (almost 8-centimeter) layer of floating oil. They are powered up for a minimum of 30 seconds, and the amount of oil they transfer is measured. It’s an unrealistic test: Oil spills almost never result in a 3-inch layer of oil. Oil slicks floating on the ocean are usually measured in millimeters. And the thickness of an oil sheen, like that seen at the Taylor spill, is measured in micrometers.
“So many tests are really just a pumping exercise,” says Robert Watkins, a consultant with Anchorage-based ASRC Energy Services who specializes in spill response. “But that isn’t a true demonstration of response.” The value of ASTM ratings, he explains, is allowing a reproducible “apples-to-apples” comparison of oil-spill equipment. He doesn’t argue, however, that apples are the right choice in the first place.
Kennedy knows that it’s not difficult to game the system to get high numbers. According to ASTM’s testing rules, even a monstrous pump that doesn’t separate oil from water at all can still get credit for doing the job. If a company stockpiles enough of those pumps in a warehouse somewhere—or maintains enough barges loaded with oil-absorbent pads—it will be certified as having a compliant spill-response plan, he contends. In the event of an actual spill, Kennedy says, most of that gear is useless: “Good luck cleaning anything up with pumps and diapers!”
In recent years, the Bureau of Safety and Environmental Enforcement (BSEE) and a consultancy called Genwest have worked to develop a better guide, hoping to replace Effective Daily Recovery Capacity with a different metric: Estimated Recovery System Potential (ERSP). This new measure looks at the entire system functionality and delivers far more realistic numbers.
A highly efficient system like Kennedy’s would stack up favorably according to ERSP calculations. But according to Elise DeCola, an oil-spill contingency planning expert with Alaska-based Nuka Research and Planning Group, there has been limited adoption of the ERSP calculator by the industry.
“While BSEE recommends ERSP as an industry best practice, they do not require its use,” says DeCola. “Operators that have deep inventories of low-efficiency skimmers—equipment that is still compliant with current guidelines—could suddenly be forced to invest in new skimmers.” For many, moving the goal posts would simply cost too much.
The current rules, with their lack of emphasis on efficiency, accept pumping a large amount of oily water into your tanks—a mixture that must then be disposed of as hazardous waste. The better goal is to remove only the oil, and Kennedy’s equipment is about as good as you can get in this regard, with its most recent ASTM-certified oil-to-water rating being 99 percent.
What’s more, that “test tank” rating matches Kennedy’s experiences with his equipment under real-world conditions. Whether on the Taylor slick with its micrometer-thick sheen, a Lake Superior Superfund site with spilled creosote as viscous as peanut butter, or a toxic spill in California’s Long Beach Harbor, his efficiency numbers have always been very high, he claims.
Kennedy attributes this performance to his unique separation system. It uses a pair of collection vessels, in which the oil floats to the top of the mixture taken in. A specially designed float valve closes once the oil is drawn off the top. That extraction is done by a vacuum pump, which has the virtue of creating a partial vacuum, causing any water that gets caught up in the oil to boil off. The resultant water vapor is exhausted to the air before it can condense and dilute the recovered oil. The fuel oil his system collects often has even lower moisture content than it did when it came fresh out of the refinery.
Yet even with a skimmer that has remarkable performance, Kennedy has faced an uphill climb to find buyers. In 2016, he offered his equipment to Taylor Energy, only to be turned down. For the next two years, he repeatedly approached the Coast Guard, offering evidence that the Taylor Spill was larger than reported and insisting he had a potential solution. Even on a no-cure-no-pay basis, the Coast Guard wasn’t interested.
“The Coast Guard shines when it tackles major crises, like Hurricane Katrina or the devastation in Puerto Rico,” says retired Coast Guard Capt. Craig Barkley Lloyd, now president and general manager of Alaska Clean Seas. “But this was a slowly boiling frog.”
It wasn’t until 2018 that the Coast Guard was finally goaded to act. The Justice Department had hired Oscar Garcia-Pineda, a consultant who had studied the Taylor Spill, to do an independent assessment, which found the spill to be far more expansive than previously reported. According to the new calculations, the total volume of oil released over time rivaled that of the epic Deepwater Horizon spill. Picking up on that analysis, in October 2018 a Washington Post story labeled it “one of the worst offshore disasters in U.S. history.”
In response to that newspaper article, the Coast Guard began to look for solutions in earnest. It quickly hired the Louisiana-based Couvillion Group to build a giant collection apparatus that could be lowered onto the seafloor to capture the leaking oil before it reached the surface. In the spring of 2019, Couvillion installed this system, which has since been collecting more than 1,000 gallons of oil a day.
For 14 years after the Taylor spill commenced, oil covered large swaths of the sea, and not a single gallon of that oil was recovered until Kennedy demonstrated that it could be done. The incentives just weren’t there. Indeed, there were plenty of disincentives. That’s the situation that some regulators, environmentalists, and spill-cleanup entrepreneurs, including this former fisherman, are trying to change. With the proper equipment, oil recovered from a spill at sea might even be sold at a profit.
During Kennedy’s trial runs at the site of the Taylor spill in 2016, the crew of the shrimp boat he hired began to realize that spilled oil could be worth more than shrimp. With the right technology and a market to support them, those same men might someday be fishing for oil.
This article appears in the June 2020 print issue as “Fishing for Oil.”
About the Author
Larry Herbst is a filmmaker and videographer with Cityline Inc., in Pasadena, Calif.
Predictions of when we would run out of oil have been around for a century, but the idea that peak production was rapidly approaching and that it would be followed by ruinous decline gained wider acceptance thanks to the work of M. King Hubbert, an American geologist who worked for Shell in Houston.
In 1956, he predicted that U.S. oil output would top out during the late 1960s; in 1969 he placed it in the first half of the 1970s. In 1970, when the actual peak came—or appeared to come—it was nearly 20 percent above Hubbert’s prediction. But few paid attention to the miss—the timing was enough to make his name as a prophet and give credence to his notion that the production curve of a mineral resource was symmetrical, with the decline being a mirror image of the ascent.
But reality does not follow perfect models, and by the year 2000, actual U.S. oil output was 2.3 times as much as indicated by Hubbert’s symmetrically declining forecast. Similarly, his forecasts of global peak oil (either in 1990 or in 2000) had soon unraveled. But that made no difference to a group of geologists, most notably Colin Campbell, Kenneth Deffeyes, L.F. Ivanhoe, and Jean Laherrère, who saw global oil peaking in 2000 or 2003. Deffeyes set it, with ridiculous accuracy, on Thanksgiving Day, 24 November 2005.
These analysts then predicted unprecedented economic disruptions. Ivanhoe went so far as to speak of “the inevitable doomsday” followed by “economic implosion” that would make “many of the world’s developed societies look more like today’s Russia than the United States.” Richard C. Duncan, an electrical engineer, proffered his “Olduvai theory,” which held that declining oil extraction would plunge humanity into a life comparable to that of the hunter-gatherers who lived near the famous Tanzanian gorge some 2.5 million years ago.
In 2006, I reacted [PDF] to these prophecies by noting that “the recent obsession with an imminent peak of oil extraction has all the marks of a catastrophist apocalyptic cult.” I concluded that “conventional oil will become relatively a less important part of the world’s primary energy supply. But this spells no imminent end of the oil era as very large volumes of the fuel, both from traditional and nonconventional sources, will remain on the world market during the first half of the 21st century.”
And so they have. With the exception of a tiny (0.2 percent) dip in 2007 and a larger (2.5 percent) decline in 2009 (following the economic downturn), global crude oil extraction has set new records every year. In 2018, at nearly 4.5 billion metric tons, it was nearly 14 percent higher than in 2005.
A large part of this gain has been due to expansion in the United States, where the combination of horizontal drilling and hydraulic fracturing made the country, once again, the world’s largest producer of crude oil, about 16 percent ahead of Saudi Arabia and 19 percent ahead of Russia. Instead of following a perfect bell-shaped curve, since 2009 the trajectory of U.S. crude oil extraction has been on the rise, and it is now surpassing the record set in 1970.
As for the global economic product, in 2019 it was 82 percent higher, in current prices, than in 2005, a rise enabled by the adequate supply of energy in general and crude oil in particular. I’d like to think that there are many lessons to be learned from this peak oil–mongering, but I have no illusions: Those who put models ahead of reality are bound to make the same false calls again and again.
This article appears in the May 2020 print issue as “Peak Oil: A Retrospective.”
Crusoe’s Digital Flare Mitigation technology is a fancy term for rugged, modified shipping containers that contain temperature-controlled racks of computers and data servers. The company launched in 2018 to mine cryptocurrency, which requires a tremendous amount of computing power. But when the novel coronavirus started spreading around the world, Lochmiller and his childhood friend Cully Cavness, who is the company’s president and co-founder, knew it was a chance to help.
Coronaviruses get their name from their crown of spiky proteins that attach to receptors on human cells. Proteins are complicated beasts that undergo convoluted twists and turns to take on unique structures. A recent Nature study showed that the new coronavirus the world is now battling, known as SARS-CoV-2, has a narrow ridge at its tip that helps it bind more strongly to human cells than previous similar viruses.
Understanding how spike proteins fold will help scientists find drugs that can block them. Stanford University’s Folding@home project is simulating these protein-folding dynamics. Studying the countless folding permutations and protein shapes requires enormous amounts of computations, so the project relies on crowd-sourced computing.
The ancient Romans were the first to mix sand and gravel with water and a bonding agent to make concrete. Although they called it opus cementitium, the bonding agent differed from that used in modern cement: It was a mixture of gypsum, quicklime, and pozzolana, a volcanic sand from Puteoli, near Mount Vesuvius, that made an outstanding material fit for massive vaults. Rome’s Pantheon, completed in 126 C.E., still spans a greater distance than any other structure made of nonreinforced concrete.
The modern cement industry began in 1824, when Joseph Aspdin, of England, patented his firing of limestone and clay at high temperatures. Lime, silica, and alumina are the dominant constituents of modern cement; adding water, sand, and gravel produces a slurry that hardens into concrete as it cures. The typical ratios are 7 to 15 percent cement, 14 to 21 percent water, and 60 to 75 percent sand and gravel.
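Those typical proportions can be turned into a rough batch calculation. The sketch below is illustrative only, using midpoints of the ranges cited above; real mix designs depend on the desired strength and application:

```python
def concrete_batch(total_kg, cement_frac=0.11, water_frac=0.175):
    """Split a concrete batch into cement, water, and aggregate by mass,
    using assumed midpoint fractions from the typical ranges."""
    cement = total_kg * cement_frac
    water = total_kg * water_frac
    # Sand and gravel (aggregate) make up the remainder, roughly 60-75 percent.
    aggregate = total_kg - cement - water
    return {"cement": cement, "water": water, "aggregate": aggregate}

# A 1,000-kg batch works out to roughly 110 kg cement, 175 kg water,
# and 715 kg of sand and gravel.
batch = concrete_batch(1000)
```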
Concrete is remarkably strong under compression. Today’s formulations can resist a crushing pressure of more than 100 megapascals (14,500 pounds per square inch)—about the weight of an African bull elephant balanced on a coin. However, a pulling force of just 2 to 5 MPa can tear concrete apart; human skin [PDF] is far stronger in this respect.
This tensile weakness can be offset by reinforcement. This technique was first used in iron-reinforced troughs for plants built by Joseph Monier, a French gardener, during the 1860s. Before the end of the 19th century, steel reinforcement was common in construction. In 1903 the Ingalls Building, in Cincinnati, became the world’s first reinforced-concrete skyscraper. Eventually engineers began pouring concrete into forms containing steel wires or bars that were tensioned just before or after the concrete was cast. Such pre- or poststressing further enhances the material’s tensile strength.
1. Three Gorges Dam, China: 65.5 million metric tons
2. Grand Coulee Dam, Washington: 21.7 million metric tons
3. Panama Canal: 6.8 million metric tons
4. Hoover Dam, Arizona and Nevada: 6.0 million metric tons
5. King Fahd Causeway, Saudi Arabia and Bahrain: 0.84 million metric tons
6. The Pentagon, Washington, D.C.: 0.80 million metric tons
7. Petronas Twin Towers, Malaysia: 0.39 million metric tons
8. Burj Khalifa Tower, United Arab Emirates: 0.11 million metric tons
9. The Venetian Hotel, Las Vegas: 0.039 million metric tons
10. Wilshire Grand Hotel, Los Angeles: 0.037 million metric tons
Data sources: Architizer.com, U.S. Bureau of Reclamation, Panama Canal Museum/University of Florida; Photos: iStockphoto
Today concrete is everywhere. It can be seen in the Burj Khalifa Tower in Dubai, the world’s tallest building, and in the sail-like Sydney Opera House, perhaps the most visually striking application. Reinforced concrete has made it possible to build massive hydroelectric dams, long bridges, and gigantic offshore drilling platforms, as well as to pave roads, freeways, parking lots, and airport runways.
From 1900 to 1928, the U.S. consumption of cement (recall that cement makes up no more than 15 percent of concrete) rose tenfold, to 30 million metric tons. The postwar economic expansion, including the construction of the Interstate Highway System, raised consumption to a peak of about 128 million tons by 2005; recent rates are around 100 million tons a year. China became the world’s largest producer in 1985, and its output of cement—above 2.3 billion metric tons in 2018—now accounts for nearly 60 percent of the global total. In 2017 and 2018 China made slightly more cement (about 4.7 billion tons) than the United States had made throughout the entire 20th century.
But concrete does not last forever, the Pantheon’s extraordinary longevity constituting a rare exception. Concrete deteriorates in all climates in a process that is accelerated by acid deposition, vibration, structural overloading, and salt-induced corrosion of the reinforcing steel. As a result, the concretization of the world has produced tens of billions of tons of material that will soon have to be replaced, destroyed, or simply abandoned.
The environmental impact of concrete is another worry. The industry burns low-quality coal and petroleum coke, producing roughly a ton of carbon dioxide per ton of cement, which works out to about 5 percent of global carbon emissions from fossil fuels. This carbon footprint can be reduced by recycling concrete, by using blast-furnace slag and fly ash captured in power plants, or by adopting one of the several new low-carbon or no-carbon processes. But these improvements would make only a small dent in a business whose global output now surpasses 4 billion metric tons.
This article appears in the March 2020 print issue as “Concrete Facts.”
On a vast grassy field in northern Wyoming, a coal-fired power plant will soon do more than generate electricity. The hulking facility will also create construction materials by supplying scientists with carbon dioxide from its exhaust stream.
A team from the University of California, Los Angeles, has developed a system that transforms “waste CO2” into gray blocks of concrete. In March, the researchers will relocate to the Wyoming Integrated Test Center, part of the Dry Fork power plant near the town of Gillette. During a three-month demonstration, the UCLA team plans to siphon half a ton of CO2 per day from the plant’s flue gas and produce 10 tons of concrete daily.
“We’re building a first-of-a-kind system that will show how to do this at scale,” said Gaurav Sant, a civil engineering professor who leads the team.
Carbon Upcycling UCLA is one of 10 teams competing in the final round of the NRG COSIA Carbon XPrize. The global competition aims to develop breakthrough technologies for converting carbon emissions into valuable products. Four more finalists are demonstrating projects in Wyoming, including CarbonCure, a Canadian startup making greener concrete, and Carbon Capture Machine, a Scottish venture focused on building materials. (Five other teams are competing at a natural gas plant in Alberta, Canada.)
Worldwide, hundreds of companies and research groups are working to keep CO2 out of the atmosphere and store it someplace else—including in deep geologic formations, soils, soda bubbles, and concrete blocks. By making waste CO2 into something marketable, entrepreneurs can begin raising revenues needed to scale their technologies, said Giana Amador, managing director of Carbon180, a nonprofit based in Oakland, California.
The potential global market for waste-CO2 products could be $5.9 trillion a year, of which $1.3 trillion includes cements, concretes, asphalts, and aggregates, according to Carbon180 [PDF]. Amador noted the constant and growing worldwide demand for building materials, and a rising movement within U.S. states and other countries to reduce construction-related emissions.
Cement, a key ingredient in concrete, has a particularly big footprint. It’s made by heating limestone with other materials, and the resulting chemical reactions can produce significant CO2 emissions. Scorching, energy-intensive kilns add even more. The world produces 4 billion tons of cement every year, and as a result, the industry generates about 8 percent of global CO2 emissions, according to think tank Chatham House.
“The cement industry is one that’s really difficult to decarbonize, and we don’t have a lot of cost-effective solutions today,” Amador said. Carbon “utilization” projects, she added, can start to fill that gap.
The UCLA initiative began about six years ago, as researchers contemplated the chemistry of Hadrian’s Wall—the nearly 1,900-year-old Roman structure in northern England. Masons built the wall by mixing calcium oxide with water, then letting it absorb CO2 from the atmosphere. The resulting reactions produced calcium carbonate, or limestone. But that cementation process can take years or decades to complete, an unimaginably long wait by today’s standards. “We wanted to know, ‘How do you make these reactions go faster?’” Sant recalled.
The answer was portlandite, or calcium hydroxide. The compound is combined with aggregates and other ingredients to create the initial building element. That element then goes into a reactor, where it comes in contact with the flue gas coming directly out of a power plant’s smokestack. The resulting carbonation reaction forms a solid building component akin to concrete.
Sant likened the process to baking cookies. By tinkering with the ingredients, curing temperatures, and the flow of CO2, they found a way to, essentially, transform the wet dough into baked goods. “You stick it in a convection oven, and when they come out they’re ready to eat. This is exactly the same,” he said.
The UCLA system is unique among green concrete technologies because it doesn’t require the expensive step of capturing and purifying CO2 emissions from power plants. Sant said his team’s approach is the only one so far that directly uses the flue gas stream. The group has formed a company, CO2Concrete, to commercialize their technology with construction companies and other industrial partners.
After Wyoming, Sant and colleagues will dismantle the system and haul it to Wilsonville, Alabama. Starting in July, they’ll repeat the three-month pilot at the National Carbon Capture Center, a research facility sponsored by the U.S. Department of Energy.
The UCLA team will learn in September if they’ve won a $7.5 million Carbon XPrize, though Sant said he’s not fretting about the outcome. “Winning is great, but what we’re really focused on is making a difference and [achieving] commercialization,” he said.
Electric three-wheelers ferry people and goods around Bangladesh but are banned in its capital. Batteries and motors could accelerate the bicycle rickshaws that gum up Dhaka’s traffic and eliminate exhaust from tuk tuks, gas-powered three-wheelers. But charging such EVs would further burden already strained power lines.
That’s just one of many opportunity costs that Bangladesh pays for a weak electrical grid. Frequent power outages hurt businesses and deter foreign investment. A sweeping grid-modernization program promises to alleviate such troubles.
In 2018, the government-run Power Grid Company of Bangladesh (PGCB) doubled the capacity of its first international transmission link—a high-voltage DC connection delivering 1 gigawatt from India. Next month, it hopes to finalize requirements for generators that promise to stabilize the voltage and frequency of the grid’s alternating current.
And next year, Bangladesh expects to achieve universal electricity access for the country’s 160 million people, only half of whom had electricity a decade ago. “It’s a whole social transformation,” says Tawfiq-e-Elahi Chowdhury, special advisor on energy to Bangladesh prime minister Sheikh Hasina.
However, it’s not clear what the grid revamp will mean for Bangladesh’s energy mix. Domestic natural gas is running out, and the country is scrambling to replace it and maintain rapid economic growth.
A nuclear power plant is now under construction, and Bangladesh is importing liquefied natural gas. But the government sees coal-fired and imported electricity as its cheapest options, and both come with challenges and risks.
Coal delivered less than 2 percent of Bangladesh’s electricity last year, but plants burning imported coal could soon match the scale of its gas-fired generation. Three coal plants under construction are each capable of serving about 10 percent of the country’s current 13-GW peak power demand. And Chowdhury expects similar projects in development to lift total coal capacity to about 10 GW by 2030.
The government expects to boost imports fivefold, to 5 GW, by 2030. Importing more electricity will provide access to relatively low-cost and renewable hydropower. A deal struck with Nepal should provide 500 megawatts, and more interconnections to India, as well as Bhutan, China, and Myanmar, are under discussion.
To convey these new power flows around the country, PGCB is building a network of 400-kilovolt lines atop its existing 230-kV and 132-kV lines, with several 765-kV circuits on the drawing board [see map]. The firm is simultaneously improving power quality—which will allow Bangladesh to accommodate more imported power and operate the nuclear plant.
Imports will be costlier if high-voltage DC converter stations must be erected at each border crossing. Instead, the government has agreed to synchronize its AC grid with India’s, enabling power to flow freely between the two. Synchronization will not be possible, however, until PGCB eliminates its grid’s large voltage and frequency deviations.
Sahbun Nur Rahman, PGCB’s executive engineer for grid planning, says most private generators don’t properly adjust the power they produce to maintain the grid’s voltage and frequency. Stability has improved over the last two years, however, as government plants have stepped up. He says the grid could be ready for synchronization in as little as five years.
Coal power will push the country’s annual per capita greenhouse gas emissions up to about 1 metric ton—still tiny, Chowdhury says, since the average developed economy generates 12 metric tons. Still, betting on coal is controversial for a low-lying country contending with climate change. By some estimates, global coal use needs to drop by 80 percent within a decade to hold global warming to 1.5 °C this century. And one of Bangladesh’s first coal-plant projects is 14 kilometers upstream from the Sundarbans, the world’s largest contiguous mangrove forest, which serves as a buffer against cyclones and sea level rise.
What’s missing from the grid push, say critics, is wind and solar. Bangladesh pioneered the use of solar power to electrify rural communities. At the peak, at least 20 million Bangladeshis relied on off-grid solar systems, and millions still do. But Mahmood Malik, CEO of the country’s Infrastructure Development Company, says the expanding national grid means there’s “not much need” to build more.
Off-grid solar still contributes more than half of Bangladesh’s renewable electricity, which makes up less than 3 percent of its power supply. Meanwhile on-grid solar is growing slowly, and wind development has barely begun. As a result, the government will miss its commitment to source 10 percent of the nation’s electricity from renewable sources by 2021.
Abdul Hasib Chowdhury, a grid expert at the Bangladesh University of Engineering and Technology, in Dhaka, says the best long-term bet for Bangladesh is imported power from beyond South Asia. He looks to the rich winds and sunshine in sparsely populated Central Asia. “South Asia is nearly 2 billion people crammed into this small space,” says A.H. Chowdhury. “They will require a lot of energy in the next 50 years.”
This article appears in the February 2020 print issue as “Bangladesh Scrambles to Grow Power Supply.”
Eighty years ago, the world’s first industrial gas turbine began to generate electricity in a municipal power station in Neuchâtel, Switzerland. The machine, installed by Brown Boveri, vented the exhaust without making use of its heat, and the turbine’s compressor consumed nearly three-quarters of the generated power. That resulted in an efficiency of just 17 percent and a net output of about 4 MW.
The disruption of World War II and the economic difficulties that followed made the Neuchâtel turbine a pioneering exception until 1949, when Westinghouse and General Electric introduced their first low-capacity designs. There was no rush to install them, as the generation market was dominated by large coal-fired plants. By 1960 the most powerful gas turbine reached 20 MW, still an order of magnitude smaller than the output of most steam turbo generators.
In November 1965, the great power blackout in the U.S. Northeast changed many minds: Gas turbines could operate at full load within minutes. But rising oil and gas prices and a slowing demand for electricity prevented any rapid expansion of the new technology.
The shift came only during the late 1980s. By 1990 almost half of all new installed U.S. capacity was in gas turbines of increasing power, reliability, and efficiency. But even efficiencies in excess of 40 percent—matching today’s best steam turbo generators—produce exhaust gases of about 600 °C, hot enough to generate steam in an attached steam turbine. These combined-cycle gas turbines (CCGTs) arrived during the late 1960s, and their best efficiencies now top 60 percent. No other prime mover is less wasteful.
Gas turbines are now much more powerful. Siemens now offers a CCGT for utility generation rated at 593 MW, nearly 40 times as powerful as the Neuchâtel machine and operating at 63 percent efficiency. GE’s 9HA delivers 571 MW in simple-cycle generation and 661 MW (63.5 percent efficiency) by CCGT.
Their near-instant availability makes gas turbines the ideal suppliers of peak power and the best backups for new intermittent wind and solar generation. In the United States they are now by far the most affordable choice for new generating capacities. The levelized cost of electricity—a measure of the lifetime cost of an energy project—for new generation entering service in 2023 is forecast to be about US $60 per megawatt-hour for coal-fired steam turbo generators with partial carbon capture, $48/MWh for solar photovoltaics, and $40/MWh for onshore wind—but less than $30/MWh for conventional gas turbines and less than $10/MWh for CCGTs.
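The levelized cost of electricity mentioned above is, in its standard form, the discounted lifetime cost of a plant divided by its discounted lifetime generation. A minimal sketch follows; the plant figures are hypothetical, not the forecast's actual inputs:

```python
def lcoe(capital_cost, annual_cost, annual_mwh, lifetime_years, discount_rate):
    """Levelized cost of electricity in $/MWh: total discounted costs
    divided by total discounted generation over the plant's lifetime."""
    discounted_costs = capital_cost + sum(
        annual_cost / (1 + discount_rate) ** t
        for t in range(1, lifetime_years + 1)
    )
    discounted_energy = sum(
        annual_mwh / (1 + discount_rate) ** t
        for t in range(1, lifetime_years + 1)
    )
    return discounted_costs / discounted_energy

# Hypothetical CCGT-like plant: $500M capital cost, $30M/year in fuel and
# operations, 4.5 million MWh/year, 30-year life, 5 percent discount rate.
cost_per_mwh = lcoe(500e6, 30e6, 4.5e6, 30, 0.05)
```

Discounting both costs and generation in the same way is what lets projects with very different cost structures, such as capital-heavy wind farms and fuel-heavy gas plants, be compared on a single $/MWh figure.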
Gas turbines are also used for the combined production of electricity and heat, which is required in many industries and is used to energize central heating systems in many large European cities. These turbines have even been used to heat and light extensive Dutch greenhouses, which additionally benefit from their use of the generated carbon dioxide to speed up the growth of vegetables. Gas turbines also run compressors in many industrial enterprises and in the pumping stations of long-distance pipelines. The verdict is clear: No other combustion machines combine so many advantages as do modern gas turbines. They’re compact, easy to transport and install, relatively silent, affordable, and efficient, offering nearly instant-on power and able to operate without water cooling. All this makes them the unrivaled stationary prime mover.
And their longevity? The Neuchâtel turbine was decommissioned in 2002, after 63 years of operation—not due to any failure in the machine but because of a damaged generator.
This article appears in the December 2019 print issue as “Superefficient Gas Turbines.”
In 1896, Svante Arrhenius, of Sweden, became the first scientist to quantify the effects [PDF] of man-made carbon dioxide on global temperatures. He calculated that doubling the atmospheric level of the gas from its concentration in his time would raise the average midlatitude temperature by 5 to 6 degrees Celsius. That’s not too far from the latest results, obtained by computer models running more than 200,000 lines of code.
The United Nations held its first Framework Convention on Climate Change in 1992, and this was followed by a series of meetings and climate treaties. But the global emissions of carbon dioxide have been rising steadily just the same.
At the beginning of the 19th century, when the United Kingdom was the only major coal producer, global emissions of carbon from fossil fuel combustion were minuscule, at less than 10 million metric tons a year. (To express them in terms of carbon dioxide, just multiply by 3.66.) By century’s end, emissions surpassed half a billion metric tons of carbon. By 1950, they had topped 1.5 billion metric tons. The postwar economic expansion in Europe, North America, the U.S.S.R., and Japan, along with the post-1980 economic rise of China, quadrupled emissions thereafter, to about 6.5 billion metric tons of carbon by the year 2000.
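The multiply-by-3.66 conversion noted above follows directly from molecular masses: a CO2 molecule weighs about 44 atomic mass units to carbon's 12. A one-line sketch:

```python
# Ratio of the molecular mass of CO2 (~44.01) to the atomic mass of
# carbon (~12.011), approximately 3.66.
CO2_PER_C = 44.01 / 12.011

def carbon_to_co2(tons_carbon):
    """Convert metric tons of carbon to metric tons of carbon dioxide."""
    return tons_carbon * CO2_PER_C

# The roughly 6.5 billion metric tons of carbon emitted in 2000 thus
# correspond to about 24 billion metric tons of CO2.
co2_in_2000 = carbon_to_co2(6.5)
```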
The new century has seen a significant divergence. By 2017, emissions had declined by about 15 percent in the European Union, with its slower economic growth and aging population, and also in the United States, thanks largely to the increasing use of natural gas instead of coal. However, all these gains were outbalanced by Chinese carbon emissions, which rose from about 1 billion to about 3 billion metric tons—enough to increase the worldwide total by nearly 45 percent, to 10.1 billion metric tons.
By burning huge stocks of carbon that fossilized ages ago, human beings have pushed carbon dioxide concentrations to levels not seen for about 3 million years. The sampling of air locked in tiny bubbles in cores drilled into Antarctic and Greenland ice has enabled us to reconstruct carbon dioxide concentrations going back some 800,000 years. Back then the atmospheric levels of the gas fluctuated between 180 and 280 parts per million (that is, from 0.018 to 0.028 percent). During the past millennium, the concentrations remained fairly stable, ranging from 275 ppm in the early 1600s to about 285 ppm before the end of the 19th century. Continuous measurements of the gas began near the top of Mauna Loa, in Hawaii, in 1958: The 1959 mean was 316 ppm, the 2015 average reached 400 ppm, and 415 ppm was first recorded in May 2019.
Emissions will continue to decline in affluent countries, and the rate at which they grow in China has begun to slow down. However, it is speeding up in India and Africa, and hence it is unlikely that we will see any substantial global declines anytime soon.
The Paris agreement of 2015 was lauded as the first accord containing specific national commitments to reduce future emissions, but even if all its targets were met by 2030, carbon emissions would still rise to nearly 50 percent above the 2017 level. According to a 2018 study by the Intergovernmental Panel on Climate Change, the only way to keep the average world temperature rise to no more than 1.5 °C would be to put emissions almost immediately into a decline steep enough to bring them to zero by 2050.
That is not impossible—but it is very unlikely. The contrast between the expressed concerns about global warming and the continued releases of record volumes of carbon could not be starker.
This article appears in the October 2019 print issue as “The Carbon Century.”
Berkeley, California, banned the installation of natural gas pipes in new residential construction projects last month. The city is committed to slashing its carbon footprint, and natural gas is a carbon double whammy: When burned, it releases carbon dioxide, and when leaked, its main ingredient, methane, acts as a far more potent greenhouse gas than CO2.
Those leaks, meanwhile, may soon have nowhere to hide thanks to a growing wave of private, methane-detecting satellites being placed in orbit. Canada’s GHGSat led the charge in 2016 with its carbon-tracking Claire microsatellite, and the company now has a second-generation microsat ready to launch. Several more methane-detecting satellites are coming, including one from the Environmental Defense Fund. If gas producers don’t find and squelch their own pollution, this proliferation of remote observers will make it increasingly likely that others will shine a spotlight on it.