
Tomorrow’s Hydropower Begins With Retrofitting Existing Dams

Post Syndicated from Rahul Rao original https://spectrum.ieee.org/energywise/energy/renewables/tomorrows-hydropower-begins-retrofitting-dams

With wind and solar prices dropping, it can be easy to forget that two-thirds of the globe’s renewable energy comes from hydropower. But hydro’s future is clouded by steep costs—and sometimes dubious environmental consequences.

Most dams, in fact, weren’t built for hydropower. They help stop floods and supply farms and families with water, but they don’t generate electricity—especially in developing countries. Nearly half of Europe’s large dams are primarily used for hydropower, but fewer than a sixth of Asian dams and a tenth of African dams generate substantial amounts of electricity, according to Tor Haakon Bakken, a civil engineer at the Norwegian University of Science and Technology (NTNU).

People like Bakken see such dams as opportunities. He’s one of a few researchers proposing to retrofit old, non-generating dams by installing turbines at their bases. That, he thinks, would create electricity without adding an ecological burden.

Bakken and one of his graduate students, Nora Rydland Fjøsne, modeled doing just that for numerous dams in a part of southern Spain. Their study found that, in many cases, retrofitting was an economically viable approach.

Bakken hopes that hydro developers consider retrofitting before building anew. “I think, for the case in Spain, we have proven that this is both a technically and economically viable alternative,” he says. “And I think it’s the case in many other places too.”

Even power-generating dams could be productively retrofitted. The Brazilian Energy Research Office estimates that updating aging hydro plants could add 3 to 11 gigawatts of generating capacity—above and beyond Brazil’s existing 87 GW hydropower base. Meanwhile, scientists at the National Renewable Energy Laboratory (NREL) in the US have proposed using reservoirs as beds for floating solar panels, something they think could theoretically generate terawatts of power.

But if you do need to build new hydropower facilities, other scientists believe the best course of action is to stay small: focus on establishing so-called “run-of-the-river” dams, which try to keep the river and its environmental conditions intact.

“This kind of turbine, you don’t need to flood large areas,” says Michael Craig, an energy systems researcher at the University of Michigan, and formerly of NREL. 

Craig and some of his colleagues from NREL and the private-sector Natel Energy modeled a sequence of run-of-the-river dams on a river in California, all linked such that they could be easily controlled. They found that this approach was, like retrofitting dams, economically viable.

“The smaller the facility, the more energy you can get out of that, I think is definitely a great strategy,” says Ilissa Ocko, a climate scientist at the Environmental Defense Fund.

Righting the course of old hydro

Hydropower’s behemoth dams of old can singlehandedly rewrite the courses of rivers, creating winding reservoirs in their wake. They can prevent floods and supply water, but they can also displace countless communities upstream and constrict river flow downstream. Such dams disproportionately hurt rural and indigenous people who rely on rivers for a living.

Moreover, some hydro plants generate alarming amounts of greenhouse gases. The culprits, scientists now know, are those very reservoirs—and the biological material trapped underneath. “You’re basically flooding a whole area that has all this vegetation on it that now is just decomposing underwater,” says Ocko.

The result? Greenhouse gases. On top of carbon dioxide, that underwater decomposition can generate methane, which—while not as long-lasting in the atmosphere—is more potent at warming. Reservoirs that are larger in surface area, and reservoirs that have warmer waters—such as those near the equator—are especially prone to burping up copious amounts of methane.

Take the Brazilian Amazon, for instance. “Over the last decades, basically all the potential for hydropower expansion that Brazil had close to consuming markets has been exhausted, and Amazonia became the new frontier,” says Dailson José Bertassoli, Jr., a geochemist at the University of São Paulo.

But many dam reservoirs in the Amazon are comparable to fossil fuel plants in their capacity to generate greenhouse gases. So too are their counterparts in Western Africa. In fact, one Environmental Defense Fund study found that nearly seven percent of the 1500 hydro plants around the world they examined emit more greenhouse gases per unit energy than fossil fuel plants.

That’s ample reason to eschew building new dams and instead look to what we have. “In terms of environmental impacts, it makes sense to focus on areas that already are not in their natural conditions,” says Bertassoli.

So, in the face of its mounting environmental costs, does hydro have a future? 

Yes, say climate scientists, but with a caveat. The key is to minimize reliance on those large, greenhouse-gas-emitting reservoirs.

“I would never say that we should stay away from hydropower. From a climate perspective, I think we need all the solutions we can get,” says Ocko. But, she adds, “We can’t make an assumption and put it into this bucket of being a renewable energy source just like solar and wind. It’s not.”

Engineers: You Can Disrupt Climate Change

Post Syndicated from David Fork original https://spectrum.ieee.org/energy/renewables/engineers-you-can-disrupt-climate-change

Seven years ago, we published an article in IEEE Spectrum titled “What It Would Really Take to Reverse Climate Change.” We described what we had learned as Google engineers who worked on a well-intentioned but ultimately failed effort to cut the cost of renewable energy. We argued that incremental improvements to existing energy technologies weren’t enough to reverse climate change, and we advocated for a portfolio of conventional, cutting-edge, and might-seem-crazy R&D to find truly disruptive solutions. We wrote: “While humanity is currently on a trajectory to severe climate change, this disaster can be averted if researchers aim for goals that seem nearly impossible. We’re hopeful, because sometimes engineers and scientists do achieve the impossible.”

Today, still at Google, we remain hopeful. And we’re happy to say that we got a few things wrong. In particular, renewable energy systems have come down in price faster than we expected, and adoption has surged beyond the predictions we cited in 2014.

Our earlier article referred to “breakthrough” price targets (modeled in collaboration with the consulting firm McKinsey & Co.) that could lead to a 55 percent reduction in U.S. emissions by 2050. Since then, wind and solar power prices have met the targets set for 2020, while battery prices did even better, plummeting to the range predicted for 2050. These better-than-expected price trends, combined with cheap natural gas, caused U.S. coal usage to drop by half. The result: By 2019, U.S. emissions had fallen to the level that the McKinsey scenario forecast for 2030—a decade sooner than our model predicted.

And thanks to this progress in decarbonizing electricity production, engineers are seeking and finding numerous opportunities to switch existing systems based on the combustion of fossil fuels to lower-carbon electricity. For example, electric heat pumps are becoming a cost-effective replacement for heating fuel, and electric cars are coming down in price and going up in range.

Even with all this progress, though, we’re still on a trajectory to severe climate change: a 3 °C rise by 2100. Many countries are not meeting the emissions reductions they pledged in the 2015 Paris Agreement. Even if every country were to meet its pledge, it would not be enough to limit planetwide warming to 1.5 °C, which most experts consider necessary to avoid environmental disaster. Meeting pledges today would require a drastic slashing of emissions. If these wholesale emission reductions don’t happen, as we think likely, then other strategies will be needed to keep temperatures within bounds.

Here are some key numbers: To reverse climate change, even partially, we’ll need to bring atmospheric carbon dioxide levels down to a safer threshold of 350 parts per million; on Earth Day 2021 the figure stood at 417 ppm. We estimate that meeting that target will require removing on the order of 2,000 gigatonnes of CO2 from the atmosphere over the next century. That wholesale removal is necessary both to draw down existing atmospheric CO2 as well as the CO2 that will be emitted while we transition to a carbon-negative society (one that removes more carbon from the atmosphere than it emits).

Our opening battles in the war on climate change need engineers to work on the many existing technologies that can massively scale up. As already illustrated with wind, solar, and batteries, such scale-ups often bring dramatic drops in costs. Other industrial sectors require technological revolutions to reduce emissions. If you experiment with your own mix of climate-mitigation techniques using the En-ROADS interactive climate tool, you’ll see how many options you have to max out to change our current trajectory and achieve 350 ppm CO2 levels and a global temperature rise of no more than 1.5 °C.

So what’s an engineer who wants to save the planet to do? Even as we work on the changeover to a society powered by carbon-free energy, we must get serious about carbon sequestration, which is the stashing of CO2 in forests, soil, geological formations, and other places where it will stay put. And as a stopgap measure during this difficult transition period, we will also need to consider techniques for solar-radiation management—deflecting some incoming sunlight to reduce heating of the atmosphere. These strategic areas require real innovation over the coming years. To win the war on climate change we need new technologies too.

We’re optimistic that the needed technology will emerge within a couple of decades. After all, engineers of the past took mere decades to design engines of war, build ships that could circle the globe, create ubiquitous real-time communication, speed up computation over a trillionfold, and launch people into space and to the moon. The 1990s, 2000s, and 2010s were the decades when wind power, solar power, and grid-scale batteries respectively started to become mainstream. As for which technologies will define the coming decades and enable people to live sustainably and prosperously on a climate-stable planet, well, in part, that’s up to you. There’s plenty to keep engineers hard at work. Are you ready?

Before we get to the technology challenges that need your attention, allow us to talk for a moment about policy. Climate policy is essential to the engineering work of decarbonization, as it can make the costs of new energy technologies plummet and shift markets to low-carbon alternatives. For example, by 2005, Germany was offering extremely generous long-term contracts to solar-energy producers (at about five times the average price of electricity in the United States). This guaranteed demand jump-started the global market for solar photovoltaic (PV) panels, which has since grown exponentially. In short, Germany’s temporary subsidies helped create a sustainable global market for solar panels. People often underestimate how much human ingenuity can be unleashed when it’s propelled by market forces.

This surge in solar PV could have happened a decade earlier. Every basic process was ready by 1995: Engineers had mastered the technical steps of making silicon wafers, diffusing diode junctions, applying metal grids to the solar-cell surfaces, passivating the semiconductor surface to add an antireflective coating, and laminating modules. The only missing piece was supportive policy. We can’t afford any more of these “lost decades.” We want engineers to look at energy systems and ask themselves: Which technologies have everything they need to scale up and drive costs down—except the policy and market?

Economics Nobel laureate William Nordhaus argues that carbon pricing is instrumental to tackling climate change in his book The Climate Casino (Yale University Press, 2015). Today, carbon pricing applies to about 22 percent of global carbon emissions. The European Union’s large carbon market, which currently prices carbon at above €50 per ton (US $61), is a major reason why its airlines, steel manufacturers, and other industries are currently developing long-term decarbonization plans. But economist Mark Jaccard has pointed out that while carbon taxes are economically most efficient, they often face outsize political opposition. Climate-policy pioneers in Canada, California, and elsewhere have therefore resorted to flexible (albeit more complicated) regulations that provide a variety of options for industries to meet decarbonization objectives.

Engineers may appreciate the simplicity and elegance of carbon pricing, but the simplest approach is not always the one that enables progress. While we engineers aren’t in the business of making policy, it behooves us to stay informed and to support policies that will help our industries flourish.

Tough decarbonization challenges abound for ambitious engineers. There are far too many to enumerate in this article, so we’ll pick a few favorites and refer the reader to Project Drawdown, an organization that assesses the impact of climate efforts, for a more complete list.

Let’s consider air travel. It accounts for 2.5 percent of global carbon emissions, and decarbonizing it is a worthy goal. But you can’t simply capture airplane exhaust and pipe it underground, nor are engineers likely to develop a battery with the energy density of jet fuel anytime soon. So there are two options: Either pull CO2 directly from the air in amounts that offset airplane emissions and then stash it somewhere, or switch to planes that run on zero-carbon fuels, such as biofuels.

One interesting possibility is to use hydrogen for aviation fuel. Airbus is currently working on designs for a hydrogen-powered plane that it says will be in commercial service in 2035. Most of today’s hydrogen is decidedly bad for the climate, as it’s made from fossil methane gas in a process that emits CO2. But clean hydrogen production is a hot research topic, and the 200-year-old technique of water electrolysis—in which H2O is split into oxygen and hydrogen gas—is getting a new look. If low-carbon electricity is used to power electrolysis, the clean hydrogen produced could be used to manufacture chemicals, materials, and synthetic fuels.

Policy, particularly in Europe, Japan, and Australia, is driving hydrogen research forward. For example, the European Union published an ambitious strategy for 80 gigawatts of capacity in Europe and neighboring countries by 2030. Engineers can help drive down prices; the first goal is to reach $2 per kilogram (down from about $3 to $6.50 per kilogram now), at which point clean hydrogen would be cheaper than a combination of natural gas with carbon capture and sequestration.

Climate-friendly hydrogen could also lead to another great accomplishment: decarbonizing the production of metals. The Stone Age gave way to the Iron Age only when people figured out how to deploy energy to remove the oxygen from the metal ores found in nature. Europe was deforested in part to provide charcoal to burn in the crucibles where metalsmiths heated iron ore, so it was considered an environmental win when they switched from charcoal to coal in the 18th century. Today, thanks to the European Union’s carbon market, engineers are piloting exciting new methods to remove oxygen from metal ore using hydrogen and electric arc furnaces.

There’s still much work to do in decarbonizing the generation of electricity and production of clean fuels. Worldwide, humans use roughly one zettajoule per year—that’s 10^21 joules every year. Satisfying that demand without further contributing to climate change means we’ll have to drastically speed up deployment of zero-carbon energy sources. Providing 1 ZJ per year with only solar PV, for example, would require covering roughly 1.6 percent of the world’s land area with panels. Doing it with nuclear energy alone would necessitate building three 1-gigawatt plants every day between now and 2050. It’s clear that we need a host of cost-effective and environmentally friendly options, particularly in light of large regional variations in resources.

While we consider those options, we’ll also need to make sure those sources of energy are steady and reliable. Critical infrastructure such as hospitals, data centers, airports, trains, and sewage plants need around-the-clock electricity. (Google, for one, is aggressively pursuing 24/7 carbon-free energy for its data centers by 2030.) Most large industrial processes, such as the production of glass, fertilizer, hydrogen, synthesized fuels, and cement, are currently cost-effective only when plants are operated nearly continuously, and often need high-temperature process heat.

To provide that steady carbon-free electricity and process heat, we should consider new forms of nuclear power. In the United States and Canada, new policies support advanced nuclear-energy development and licensing. Dozens of advanced nuclear-fission companies offer engineers a variety of interesting challenges, such as creating fault-tolerant fuels that become less reactive as they heat up. Other opportunities can be found in designing reactors that recycle spent fuel to reduce waste and mining needs, or that destroy long-lived waste components via new transmutation technologies.

Engineers who are drawn to really tough quests should consider nuclear fusion, where the challenges include controlling the plasma within which the fusion occurs and achieving net electric power output. This decade’s competition in advanced nuclear-energy technologies may produce winners that get investors excited, and a new round of policies could push these technologies down the cost curve, avoiding a lost decade for advanced nuclear energy.

Global-scale climate preservation is an idea that engineers should love, because it opens up new fields and career opportunities. Earth’s climate has run open loop for over 4 billion years; we are lucky that our planet’s wildly fluctuating climate was unusually stable over the 10,000 years that modern civilization arose and flourished. We believe that humankind will soon start wrapping a control loop around Earth’s climate, designing and introducing controlled changes that preserve the climate.

The basic rationale for climate preservation is to avoid irreversible climate changes. The melting of the Greenland ice sheet could raise sea levels by 6 meters, or the runaway thawing of permafrost could release enough greenhouse gas to add an additional degree of global warming. Scientists agree that continuation of unchecked emissions will trigger such tipping points, although there’s uncertainty about when that would happen. The economist Nordhaus, applying the conservative precautionary principle to climate change, argues that this uncertainty justifies earlier and larger climate measures than if tipping-point thresholds were precisely known.

We believe in aggressively pursuing carbon dioxide removal because the alternative is both too grim and too expensive. Some approaches to carbon dioxide removal and sequestration are technically feasible and are now being tried. Others, such as ocean fertilization of algae and plankton, caused controversy when attempted in early experiments, but we need to learn more about these as well.

The Intergovernmental Panel on Climate Change’s recommendation for capping warming at 1.5 °C requires cutting net global emissions almost in half by 2030, and to zero by 2050, but nations are not making the necessary emission cuts. (By net emissions, we mean actual CO2 emissions minus the CO2 that we pull out of the air and sequester.) The IPCC estimates that achieving the 1.5 °C peak temperature goal and, over time, drawing CO2 concentrations down to 350 ppm actually requires negative emissions of more than 10 Gt of CO2 per year within several decades—and this may need to continue as long as there remain atmospheric litterbugs who continue to emit CO2.

The En-ROADS tool, which can be used to model the impact of climate-mitigation strategies, shows that limiting warming to 1.5 °C requires maxing out all options for carbon sequestration—including biological means, such as reforestation, and nascent technological methods that aren’t yet cost effective.

We need to sequester CO2, in part, to compensate for activities that can’t be decarbonized. Cement, for example, has the largest carbon footprint of any man-made material, creating about 8 percent of global emissions. Cement is manufactured by heating limestone (mostly calcite, or CaCO3), to produce lime (CaO). Making 1 tonne of cement releases about 1 tonne of CO2. If all the CO2 emissions from cement manufacturing were captured and pumped underground at a cost of $80 per tonne, we estimate that a 50-pound bag (about 23 kg) of concrete mix, one component of which is cement, will cost about 42 cents more. Such a price change would not stop people from using concrete nor significantly add to building costs. What’s more, the gas coming out of smokestacks at cement plants is rich in CO2 compared with the diluted amount in the atmosphere, which means it’s easier to capture and store.

Capturing cement’s emissions will be good practice as we get ready for the bigger lift of removing 2,000 Gt of CO2 directly from the atmosphere over the next 100 years. Therein lies one of the century’s biggest challenges for scientists and engineers. A recent Physics Today article estimated the costs of directly capturing atmospheric CO2 at between $100 and $600 per tonne. The process is expensive because it requires a lot of energy: Direct air capture involves forcing enormous volumes of air over sorbents, which are then heated to release concentrated CO2 for storage or use.

We need a price breakthrough in carbon capture and sequestration that rivals what we have seen in wind power, solar energy, and batteries. We estimate that at $100 per tonne, removing those 2,000 Gt of CO2 would account for roughly 2.8 percent of global GDP for 80 years. Compare that cost with the toll of hitting a climate tipping point, which no amount of spending could undo.
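That 2.8 percent figure is easy to sanity-check. Below is a minimal back-of-envelope script; the global GDP value is our assumption for illustration (roughly the pre-pandemic world total), not a number stated in the article.

```python
# Sanity check of the "~2.8 percent of global GDP for 80 years" estimate.
# Assumption: global GDP of about $88 trillion per year; the article
# does not state the GDP figure behind its estimate.
COST_PER_TONNE = 100        # dollars per tonne of CO2 removed
TOTAL_CO2_TONNES = 2_000e9  # 2,000 Gt of CO2, expressed in tonnes
YEARS = 80
GLOBAL_GDP = 88e12          # dollars per year (assumed)

total_cost = COST_PER_TONNE * TOTAL_CO2_TONNES  # $200 trillion
annual_cost = total_cost / YEARS                # $2.5 trillion per year
share_of_gdp = annual_cost / GLOBAL_GDP

print(f"Annual cost: ${annual_cost / 1e12:.1f} trillion")
print(f"Share of GDP: {share_of_gdp:.1%}")
```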

In principle, there are enough subterranean rock formations to store not just gigatonnes but teratonnes of CO2. But the scale of the sequestration required, and the urgency of the need for it, calls for outside-the-box thinking. For example, massive-scale, low-cost carbon removal may be possible by giving nature an assist. During the planet’s Carboniferous period, 350 million years ago, nature sequestered so much carbon that it reduced atmospheric CO2 from over 1,000 ppm to our preindustrial level of 260 ppm (and created coal in the process). The mechanism: Plants evolved the fibrous carbon-containing material lignin for their stems and bark, millions of years before other creatures evolved ways to digest it.

Now consider that the ocean absorbs and almost completely reemits about 200 Gt of CO2 per year. If we could prevent 10 percent of this reemission for 100 years, we would meet the goal of sequestering 2,000 Gt of CO2. Perhaps some critter in the ocean’s food chain could be altered to excrete an organic biopolymer like lignin that’s hard to metabolize, which would settle to the seafloor and sequester carbon. Phytoplankton reproduce quickly, offering a quick path to enormous scale. If our legacy of solving climate change is a few millimeters of indigestible carbon-rich poop at the bottom of the ocean, we’d be okay with that.
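The ocean arithmetic above is simple enough to restate as a short sketch:

```python
# 10 percent of the ocean's ~200 Gt/yr CO2 reemission, retained for a century.
OCEAN_REEMISSION_GT = 200   # Gt of CO2 reemitted by the ocean each year
FRACTION_RETAINED = 0.10
YEARS = 100

retained_per_year = OCEAN_REEMISSION_GT * FRACTION_RETAINED  # 20 Gt/yr
total_retained = retained_per_year * YEARS

print(f"{total_retained:.0f} Gt CO2 sequestered over {YEARS} years")
```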

Altering radiative forcing—that is, reflecting more sunlight to space—could be used as a temporary stopgap measure to limit warming until we’ve made a dent in reducing atmospheric CO2 levels. Such efforts could avoid the worst physical and economic impacts of temperature rise, and would be decommissioned once the crisis has passed. For example, we could reduce the formation of airplane contrails, which trap heat, and make roofs and other surfaces white to reflect more sunlight. These two measures, which could reduce our expected planetary warming by about 3 percent, would help the public better appreciate that our collective actions affect climate.

There are more ambitious proposals that would reflect more sunlight, but there is much to debate about the positive and negative consequences of such actions. We believe that the most responsible path forward is for engineers, chemists, biologists, and ecologists to test all the options, particularly those that can make a difference at a planetary scale.

We don’t claim to know which technologies will prevent a dystopian world that’s over 2° C warmer. But we fervently believe that the world’s engineers can find ways to deliver tens of terawatts of carbon-free energy, radically decarbonize industrial processes, sequester vast amounts of CO2, and temporarily deflect the necessary amounts of solar radiation. Effective use of policies that support worthy innovations can help move these technologies into place within the next three or four decades, putting us well on our way to a stable and livable planet. So, engineers, let’s get to work. Whether you make machines or design algorithms or analyze numbers, whether you tinker with biology, chemistry, physics, computers, or electrical engineering, you have a role to play.

The views expressed here are solely those of the authors and do not represent the positions of Google or the IEEE.

About the Authors

In 2014, two distinguished Google engineers wrote for IEEE Spectrum about the sobering lessons they’d learned while trying to develop renewable-energy systems that were as cheap as coal. That article, titled “What It Would Really Take to Reverse Climate Change,” struck a chord. By the metric of online readership, it was the seventh most popular article Spectrum published in the 2010s. The piece bluntly described the enormous scale of the challenge.  

Seven years later, the authors, David Fork [on right in photo] and Ross Koningstein, are back with a new message, and it’s surprisingly hopeful. “It’s stunning how rapidly things have been moving since the first article was published,” says Fork. The scope of the challenge is still enormous, of course, but experts now have a better understanding of how a variety of technologies could be combined to prevent catastrophic climate change, the coauthors say. Many renewable-energy systems, for example, are already mature and just need to be scaled up. Some innovations need significant development, including new processes to produce steel and concrete, and geoengineering techniques to sequester carbon and temporarily reduce solar radiation. The one commonality among all these promising technologies, they conclude, is that engineers can make a difference on a planetary scale.

“We need engineers to recognize where these opportunities are, and then not step on the gas pedal but step on the accelerator of an electric vehicle,” says Koningstein. Concerned about the pessimistic tone of most climate coverage, the authors argue that wise policies, market pressure, and human creativity can get the job done. “When you put the right incentives in place, you capture the ingenuity of the masses,” says Fork. “All of us are smarter than any of us.”

  • Disrupting Climate Change Is a Math Problem

    And we’re showing our work

    We, the authors of this article, work at Google on the renewable energy team, and in our jobs we could never get away with presenting a bold idea if we didn’t have the math to back it up. So we’re presenting here some data and calculations to support the biggest claims in our article.

    We need to remove about 2,000 gigatonnes of CO2 from the atmosphere

    The Intergovernmental Panel on Climate Change (IPCC) special report “Global Warming of 1.5 °C” states that cumulative CO2 emissions from 1876 to the end of 2010 were 1,930 gigatonnes of CO2, and that by the end of 2017, the amount had reached 2,220 Gt CO2. These emissions were accompanied by an estimated 1 °C surface temperature change over that time span. So to reverse the effects of climate change, we need to remove at least 2,000 Gt of CO2 from the atmosphere and oceans. Add to that total our emissions going forward, which are currently 40 Gt of CO2 per year.

    There are other ways to get to a number of the same scale. That same IPCC report also states: “Pathways that aim for limiting warming to 1.5 °C by 2100 after a temporary temperature overshoot rely on large-scale deployment of carbon dioxide removal (CDR) measures.” This acknowledgment has led to including large-scale (as in tens of Gt CO2 per year) carbon removal in models for reducing net carbon emissions. It’s important to understand that net carbon emissions means actual CO2 emissions minus the CO2 that we pull out of the air and sequester.

    The IPCC report states a goal of reducing net emissions to zero by 2050, using both significant emissions cuts and carbon removal to limit global warming to 1.5 °C. A graphic shows net emissions gradually declining to almost negative 20 Gt CO2 per year by the end of the century, and they would clearly have to continue at that scale. If about 20 Gt CO2 per year is removed for the next 100 years, that would be 2,000 Gt CO2.

    We don’t need to know the precise amount of necessary CO2 removal to evaluate the suitability of potential approaches to this problem: Whether the number is about 2,000 Gt or even higher, net negative emissions would need to continue for perhaps a century at 20 Gt CO2 per year to restore the atmosphere (and oceans) to desired levels. So when setting targets for carbon removal, think tens of gigatonnes of CO2 per year—think big!

    Humans use about 1 zettajoule of energy per year

    In 2017 global energy consumption was about 160,000 terawatt-hours, which is about 6×10^20 joules. Over time, energy consumption has always increased as development has improved the quality of people’s lives. It therefore seems likely that later this century, humans will be using even more than 6×10^20 joules of energy. In our article, we round up humanity’s energy use to 1 zettajoule (10^21 joules) for simplicity.

    To supply 1 ZJ of energy per year with photovoltaic solar panels, we’d need to cover 1.6 percent of the planet’s land surface

    Solar installations are rated in terms of their peak capacity, which is their power production in full sun. To determine the output of an installation, we need to know its capacity factor, which is the average utilization of its peak capacity. In a good location for solar PV, the capacity factor can be about 20 percent. For every watt of peak capacity with a 20 percent capacity factor, one year of operation will produce 6,307,200 joules of energy (365 days × 24 hours × 60 minutes × 60 seconds × 0.2 capacity factor). Dividing this number into 1 ZJ reveals that it would require solar panels with a peak capacity of 160 terawatts to produce 1 ZJ per year of electricity generation.

    A utility-scale solar farm typically has a ⅓ ratio of panel area to land area, which translates to about 66 peak megawatts of power generation per square kilometer. To produce 160 TW of peak solar generation capacity, we would need about 2.4 million square kilometers, or about 1.6 percent of Earth’s land area. For comparison, farming claims about 40 percent of Earth’s land surface. About one-third of the planet’s land surface is desert; hence, in a scenario where desert land is used for solar energy generation, 1 ZJ of energy per year could be produced without significantly impacting the food supply.
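    The solar arithmetic above can be reproduced in a few lines of Python, using only the assumptions stated in this sidebar (a 20 percent capacity factor and 66 peak megawatts per square kilometer):

```python
# Peak solar capacity and land area needed for 1 ZJ/year, under the
# sidebar's assumptions: 20% capacity factor, 66 peak MW per km^2.
SECONDS_PER_YEAR = 365 * 24 * 3600       # 31,536,000 s
CAPACITY_FACTOR = 0.20
TARGET_JOULES = 1e21                     # 1 zettajoule
PEAK_W_PER_KM2 = 66e6                    # 66 peak MW per km^2, in watts
EARTH_LAND_KM2 = 149e6                   # ~149 million km^2 of land

joules_per_peak_watt = SECONDS_PER_YEAR * CAPACITY_FACTOR  # 6,307,200 J
peak_watts = TARGET_JOULES / joules_per_peak_watt          # ~1.6e14 W
area_km2 = peak_watts / PEAK_W_PER_KM2
land_fraction = area_km2 / EARTH_LAND_KM2

print(f"Peak capacity needed: {peak_watts / 1e12:.0f} TW")
print(f"Land area: {area_km2 / 1e6:.1f} million km^2 ({land_fraction:.1%})")
```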

    To supply 1 ZJ of energy per year with nuclear power, we’d have to build three 1-gigawatt plants per day for 30 years

    A typical nuclear power plant today generates about 1 gigawatt of power. Let’s say that the plant operates with a capacity factor of 95 percent, meaning that it’s up and running at its design capacity 95 percent of the time. This one plant will produce about 3×10^16 joules per year. One zettajoule is 10^21 joules. So it would take a little over 33,000 1-GW plants to provide the capacity for generating 1 ZJ per year. Given 30 years to build this quantity of nuclear power plants, the average rate of construction would be about 3 plants per day.
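The same style of back-of-envelope calculation, for nuclear:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600
plant_joules = 1e9 * 0.95 * SECONDS_PER_YEAR  # one 1-GW plant at 95% capacity factor
plants = 1e21 / plant_joules                  # plants needed for 1 ZJ per year
print(plants)                                 # a little over 33,000
print(plants / (30 * 365))                    # about 3 plants built per day
```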

    A 50-pound bag of concrete mix will cost about 42 cents more if the emissions from cement manufacturing are captured and stored

    Making a tonne of cement by burning fossil fuels to provide process heat releases CO2 in two ways: It’s released during the combustion process and also comes from the heated feedstock of carbonate rock. Combined, the emissions are about 0.93 tonnes of carbon dioxide per tonne of cement. The roughly “1 tonne per tonne” rule of thumb makes it rather easy to compute the cost of producing zero-carbon cement. As of this writing, the price of cement averages around $125 per tonne. If a cement plant were to attach machinery that captured and sequestered its carbon emissions for a cost of $80 per tonne of CO2 (an optimistic cost estimate), this process would add 60 percent to the cost of cement (0.93 x $80 / $125 is about 0.6).
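The 60 percent figure can be checked in one line; all values come from the paragraph above:

```python
CO2_PER_TONNE_CEMENT = 0.93  # tonnes of CO2 emitted per tonne of cement
CAPTURE_COST = 80            # dollars per tonne of CO2 (optimistic estimate)
CEMENT_PRICE = 125           # dollars per tonne of cement
added = CO2_PER_TONNE_CEMENT * CAPTURE_COST / CEMENT_PRICE
print(added)                 # about 0.6, i.e. a 60 percent cost increase
```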

    A 50-pound bag of cement weighs 0.023 tonnes and produces emissions of about 0.021 tonnes of CO2; hence, at a sequestration cost of $80 per tonne, the added cost would be about $1.69. Ready-mix concrete is about 25 percent cement by weight. That cement would add about $0.42 (0.25 x $1.69) to the cost of a 50-pound bag of concrete. Cement is such a valuable material that even if its price were to increase significantly more than this amount, or even double, humanity would arguably still use lots of it.
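And the per-bag arithmetic:

```python
POUNDS_TO_TONNES = 0.4536 / 1000
bag_cement_t = 50 * POUNDS_TO_TONNES      # about 0.023 tonnes of cement per bag
bag_co2_t = bag_cement_t * 0.93           # about 0.021 tonnes of CO2
added_cement = bag_co2_t * 80             # about $1.69 per bag of cement
added_concrete = 0.25 * added_cement      # concrete is ~25% cement: about $0.42
print(round(added_cement, 2), round(added_concrete, 2))
```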

    Removing 2,000 Gt of CO2 would cost roughly 2.8 percent of global GDP each year for 80 years

    In the article, we suggest that 2.8 percent of global gross domestic product would be a reasonable amount of money to spend to pull down the levels of CO2 in the atmosphere. In 2021, global gross domestic product will be about US $90 trillion. Choosing the timeframe for drawing down 2,000 Gt of CO2 is a very uncertain task; 80 years seems like a reasonable number to get the planet into decent shape by early next century. This time horizon gives us an annual CO2 capture and sequestration target of about 25 Gt of CO2 per year.

    The cost of mechanical direct air capture of CO2 and sequestration is currently hundreds of dollars per tonne; in the article, we note that the area is ripe for creative R&D. If the price could be reduced to $100 per tonne, drawing down those 25 Gt of CO2 per year works out to $2.5 trillion per year, or about 2.8 percent of current global GDP.
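Putting the GDP numbers together:

```python
TOTAL_CO2_GT = 2000     # gigatonnes of CO2 to remove
YEARS = 80
annual_gt = TOTAL_CO2_GT / YEARS                 # 25 Gt per year
COST_PER_TONNE = 100                             # dollars, an optimistic future price
annual_cost = annual_gt * 1e9 * COST_PER_TONNE   # about $2.5 trillion per year
GDP = 90e12                                      # 2021 global GDP in dollars
print(annual_cost / GDP)                         # about 0.028, i.e. 2.8 percent
```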

    Carbon capture is perhaps the best lever to use for stabilizing Earth’s climate in the long term—meaning a time scale of centuries. Drawing down the greenhouse gas concentrations to below current levels will eventually cool the land and ocean temperatures, and will reverse ocean acidification as well.

Perovskite Solar Out-Benches Rivals in 2021

Post Syndicated from Maria Gallucci original https://spectrum.ieee.org/tech-talk/energy/renewables/oxford-pv-sets-new-record-for-perovskite-solar-cells

In October, when the International Energy Agency pronounced solar energy the “cheapest electricity in history,” the claim rested largely on panel technologies with efficiencies in the 15 to 25 percent range. Imagine, say perovskite PV advocates, what new standards solar power could set with efficiencies of 30 percent or more.

As of December, photovoltaic panel efficiency ratings north of 30 percent no longer seemed quite so theoretical either. 

December was the month the U.S. National Renewable Energy Laboratory certified U.K. startup Oxford PV’s new record: A single solar cell coated with the mineral perovskite, NREL confirmed, can now convert 29.52 percent of incident solar energy into electricity. According to NREL’s own benchmarking, conventional silicon cells appear to have maxed out at 27.6 percent. 

Oxford PV, a ten-year-old spinoff from the University of Oxford, in England, says it expects to cross over into the 30s soon, too.

At its current pace, the company says it expects to be manufacturing cells with 33 percent efficiency within four years. One competitor in the perovskite-silicon PV race, the German research institute Helmholtz-Zentrum Berlin, has already achieved 29.15 percent efficiency with their perovskite cell and expects to be able to push its rating up to 32.4 percent. 

“We hope [this technology] will change the face of photovoltaics and accelerate the adoption of solar to address climate change,” Chris Case, Oxford PV’s chief technology officer, says of his company’s tandem perovskite-silicon cells. 

The tandem approach—coating an ordinary silicon wafer with a thin-film layer of perovskite material—enables Oxford PV to capture more available solar radiation. The perovskite layer absorbs shorter wavelengths, while the silicon layer absorbs longer wavelengths. To improve efficiency, the company expects to refine the cells’ coatings and antireflection layers and remove defects and impurities, Case says. 

Companies and universities worldwide are looking to perovskites as a potential future replacement for silicon, in the hopes of making renewable energy more affordable and accessible.

Early prototypes of perovskite solar cells were unstable and degraded quickly. But over the past decade, researchers have steadily improved the stability and durability of perovskite materials for both indoor and outdoor applications.

Oxford PV expects to start selling its perovskite-silicon cells to the public in early 2022, says CEO Frank Averdung. That would make it the first company to bring such a product to the global solar market. 

The startup is expanding its pilot plant in Germany into a 100-megawatt-capacity solar cell factory. Oxford PV began producing small volumes of cells there in 2017. The company has field-tested the technology for more than a year, Case says.

So far, he adds, data suggest the cells perform about the same as commercial silicon panels. “We see no degradation that’s any different than reference commercial panels that we’re comparing to,” Case says. 

He says it typically takes a couple of years for efficiency achievements in the lab to appear in factory-produced cells. Thus the first devices off Oxford PV’s manufacturing line will have an efficiency of 26 percent—higher than any other commercially available solar cell, Averdung says. The company expects residential rooftop solar projects using its technology will generate 20 percent more power using the same number of cells as existing installations.

Some solar researchers remain skeptical of perovskites, pointing to the material’s potential to degrade when exposed to moisture, harsh temperatures, salt spray, oxygen, and other elements. Case says Oxford PV’s cells have passed a battery of accelerated stress tests, both internally and by third parties.

“They will definitely be expected to last as long or longer than any of the best silicon modules that are out there,” he says of the cells.

Germany’s Energiewende, 20 Years Later

Post Syndicated from Vaclav Smil original https://spectrum.ieee.org/energy/renewables/germanys-energiewende-20-years-later

In 2000, Germany launched a deliberately targeted program to decarbonize its primary energy supply, a plan more ambitious than anything seen anywhere else. The policy, called the Energiewende, is rooted in Germany’s naturalistic and romantic tradition, reflected in the rise of the Green Party and, more recently, in public opposition to nuclear electricity generation. These attitudes are not shared by the country’s two large neighbors: France built the world’s leading nuclear industrial complex with hardly any opposition, and Poland is content burning its coal.

The policy worked through the government subsidization of renewable electricity generated with photovoltaic cells and wind turbines and by burning fuels produced by the fermentation of crops and agricultural waste. It was accelerated in 2011 when Japan’s nuclear disaster in Fukushima led the German government to order that all its nuclear power plants be shut down by 2022.

During the past two decades, the Energiewende has been praised as an innovative miracle that will inexorably lead to a completely green Germany and criticized as an expensive, poorly coordinated overreach. I will merely present the facts.

The initiative has been expensive, and it has made a major difference. In 2000, 6.6 percent of Germany’s electricity came from renewable sources; in 2019, the share reached 41.1 percent. In 2000, Germany had an installed capacity of 121 gigawatts and it generated 577 terawatt-hours, which is 54 percent as much as it theoretically could have done (that is, 54 percent was its capacity factor). In 2019, the country produced just 5 percent more (607 TWh), but its installed capacity was 80 percent higher (218.1 GW) because it now had two generating systems.
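Smil's capacity-factor figures check out; here is the arithmetic, sketched in Python with the numbers from the paragraph above:

```python
HOURS_PER_YEAR = 8760
# Germany, 2000: 121 GW installed, 577 TWh (577,000 GWh) generated
cf_2000 = 577_000 / (121 * HOURS_PER_YEAR)  # generated / theoretically possible
print(round(cf_2000, 2))            # 0.54: the 54 percent capacity factor cited
# Germany, 2019: 607 TWh generated from 218.1 GW installed
print(round(607 / 577 - 1, 2))      # 0.05: just 5 percent more generation...
print(round(218.1 / 121 - 1, 2))    # 0.8: ...from 80 percent more capacity
```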

The new system, using intermittent power from wind and solar, accounted for 110 GW, nearly 50 percent of all installed capacity in 2019, but operated with a capacity factor of just 20 percent. (That included a mere 10 percent for solar, which is hardly surprising, given that large parts of the country are as cloudy as Seattle.) The old system stood alongside it, almost intact, retaining nearly 85 percent of net generating capacity in 2019. Germany needs to keep the old system in order to meet demand on cloudy and calm days and to produce nearly half of total demand. In consequence, the capacity factor of this sector is also low.

It costs Germany a great deal to maintain such an excess of installed power. The average cost of electricity for German households has doubled since 2000. By 2019, households had to pay 34 U.S. cents per kilowatt-hour, compared to 22 cents per kilowatt-hour in France and 13 cents in the United States.

We can measure just how far the Energiewende has pushed Germany toward the ultimate goal of decarbonization. In 2000, the country derived nearly 84 percent of its total primary energy from fossil fuels; this share fell to about 78 percent in 2019. If continued, this rate of decline would leave fossil fuels still providing nearly 70 percent of the country’s primary energy supply in 2050.

Meanwhile, during the same 20-year period, the United States reduced the share of fossil fuels in its primary energy consumption from 85.7 percent to 80 percent, cutting almost exactly as much as Germany did. The conclusion is as surprising as it is indisputable. Without anything like the expensive, target-mandated Energiewende, the United States has decarbonized at least as fast as Germany, the supposed poster child of emerging greenness.

This article appears in the December 2020 print issue as “Energiewende, 20 Years Later.”

Iron Powder Passes First Industrial Test as Renewable, Carbon Dioxide-Free Fuel

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/energywise/energy/renewables/iron-powder-passes-first-industrial-test-as-renewable-co2free-fuel

Simple question: What if we could curb this whole fossil fuel-fed climate change nightmare and burn something else as an energy source instead? As a bonus, what if that something else is one of the most common elements on Earth?

Simple answer: Let’s burn iron.

While setting fire to an iron ingot is probably more trouble than it’s worth, fine iron powder mixed with air is highly combustible. When you burn this mixture, you’re oxidizing the iron. Whereas a carbon fuel oxidizes into CO2, an iron fuel oxidizes into Fe2O3, which is just rust. The nice thing about rust is that it’s a solid that can be captured post-combustion. And that’s the only byproduct of the entire business—in goes the iron powder, and out comes energy in the form of heat and rust powder. Iron has an energy density of about 11.3 kWh/L, which is better than gasoline. Its specific energy, however, is a relatively poor 1.4 kWh/kg, meaning that for a given amount of energy, iron powder takes up a little less space than gasoline but weighs almost ten times as much.
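A quick comparison makes the trade-off concrete. The iron figures are from the article; the gasoline figures (roughly 9.5 kWh/L and 12.9 kWh/kg) are assumed typical values, not from the text:

```python
iron_kwh_per_l, iron_kwh_per_kg = 11.3, 1.4   # iron powder, from the article
gas_kwh_per_l, gas_kwh_per_kg = 9.5, 12.9     # gasoline, assumed typical values
# Per kilowatt-hour delivered, iron relative to gasoline:
volume_ratio = gas_kwh_per_l / iron_kwh_per_l    # iron volume / gasoline volume
mass_ratio = gas_kwh_per_kg / iron_kwh_per_kg    # iron mass / gasoline mass
print(round(volume_ratio, 2))   # 0.84: iron takes a bit less space
print(round(mass_ratio, 1))     # 9.2: but weighs almost ten times as much
```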

It might not be suitable for powering your car, in other words. It probably won’t heat your house either. But it could be ideal for industry, which is where it’s being tested right now.

Researchers from TU Eindhoven have been developing iron powder as a practical fuel for the past several years, and last month they installed an iron powder heating system at a brewery in the Netherlands, which is turning all that stored up energy into beer. Since electricity can’t efficiently produce the kind of heat required for many industrial applications (brewing included), iron powder is a viable zero-carbon option, with only rust left over.

So what happens to all that rust? This is where things get clever, because the iron isn’t just a fuel that’s consumed—it’s energy storage that can be recharged. And to recharge it, you take all that Fe2O3, strip out the oxygen, and turn it back into Fe, ready to be burned again. It’s not easy to do this, but much of the energy and work that it takes to pry those Os away from the Fes gets returned to you when you burn the Fe the next time. The idea is that you can use the same iron over and over again, discharging it and recharging it just like you would a battery.

To maintain the zero-carbon nature of the iron fuel, the recharging process has to be zero-carbon as well. There are a variety of different ways of using electricity to turn rust back into iron, and the TU/e researchers are exploring three different technologies based on hot hydrogen reduction (which turns iron oxide and hydrogen into iron and water), as they described to us in an email:

Mesh Belt Furnace: In the mesh belt furnace the iron oxide is transported by a conveyor belt through a furnace in which hydrogen is added at 800-1000°C. The iron oxide is reduced to iron, which sticks together because of the heat, resulting in a layer of iron. This can then be ground up to obtain iron powder.
Fluidized Bed Reactor: This is a conventional reactor type, but its use in hydrogen reduction of iron oxide is new. In the fluidized bed reactor the reaction is carried out at lower temperatures around 600°C, avoiding sticking, but taking longer.
Entrained Flow Reactor: The entrained flow reactor is an attempt to implement flash ironmaking technology. This method performs the reaction at high temperatures, 1100-1400°C, by blowing the iron oxide through a reaction chamber together with the hydrogen flow to avoid sticking. This might be a good solution, but it is a new technology and has yet to be proven.

Both production of the hydrogen and the heat necessary to run the furnace or the reactors require energy, of course, but it’s grid energy that can come from renewable sources. 

If renewing the iron fuel requires hydrogen, an obvious question is why not just use hydrogen as a zero-carbon fuel in the first place? The problem with hydrogen is that as an energy storage medium, it’s super annoying to deal with, since storing useful amounts of it generally involves high pressure and extreme cold. In a localized industrial setting (like you’d have in your rust reduction plant) this isn’t as big of a deal, but once you start trying to distribute it, it becomes a real headache. Iron powder, on the other hand, is safe to handle, stores indefinitely, and can be easily moved with existing bulk carriers like rail.

Which is why its future looks to be in applications where weight is not a primary concern and collection of the rust is feasible. In addition to industrial heat generation (which will eventually include retrofitting coal-fired power plants to burn iron powder instead), the TU/e researchers are exploring whether iron powder could be used as fuel for large cargo ships, which are extraordinarily dirty carbon emitters that are also designed to carry a lot of weight. 

Philip de Goey, a professor of combustion technology at TU/e, told us that he hopes to be able to deploy 10 MW iron powder high-temperature heat systems for industry within the next four years, with 10 years to the first coal power plant conversion. There are still challenges, de Goey tells us: “the technology needs refinement and development, the market for metal powders needs to be scaled up, and metal powders have to be part of the future energy system and regarded as [a] safe and clean alternative.” De Goey’s view is that iron powder has a significant but well-constrained role in energy storage, transport, and production that complements other zero-carbon sources like hydrogen. For a zero-carbon energy future, de Goey says, “there is no winner or loser—we need them all.”

Why Does the U.S. Have Three Electrical Grids?

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/energy/renewables/why-does-the-us-have-three-electrical-grids

Steven Cherry Hi, this is Steven Cherry for Radio Spectrum.

If you look at lists of the 100 greatest inventions of all time, electricity figures prominently. Once you get past some key enablers that can’t really be called inventions—fire, money, the wheel, calendars, the alphabet—you find things like light bulbs, the automobile, refrigeration, radios, the telegraph and telephone, airplanes, computers and the Internet. Antibiotics and the modern hospital would be impossible without refrigeration. The vaccines we’re all waiting for depend on electricity in a hundred different ways.

It’s the key to modern life as we know it, and yet, universal, reliable service remains an unsolved problem. By one estimate, a billion people still do without it. Even in a modern city like Mumbai, generators are commonplace, because of an uncertain electrical grid. This year, California once again saw rolling blackouts, and with our contemporary climate producing heat waves that can stretch from the Pacific Coast to the Rocky Mountains, they won’t be the last.

Electricity is hard to store and hard to move, and electrical grids are complex, creaky, and expensive to change. In the early 2010s, Europe began merging its distinct grids into a continent-wide supergrid, an algorithm-based project that IEEE Spectrum wrote about in 2014. The need for a continent-wide supergrid in the U.S. has been almost as great, and by 2018 the planning of one was pretty far along—until it hit a roadblock that, two years later, still stymies any progress. The problem is not the technology, and not even the cost. The problem is political. That’s the conclusion of an extensively reported investigation jointly conducted by The Atlantic magazine and InvestigateWest, a watchdog nonprofit that was founded in 2009 after one of Seattle’s daily newspapers stopped publishing. The resulting article, with the heading, “Who Killed the Supergrid?”, was written by Peter Fairley, who has been a longtime contributing editor for IEEE Spectrum and is my guest today. He joins us via Skype.

Peter, welcome to the podcast.

Peter Fairley It’s great to be here, Steven.

Steven Cherry Peter, you wrote that 2014 article in Spectrum about the Pan-European Hybrid Electricity Market Integration Algorithm, which you say was needed to tie together separate fiefdoms. Maybe you can tell us what was bad about the separate fiefdoms that served Europe nobly for a century.

Peter Fairley Thanks for the question, Steven. That story was about a pretty wonky development that nevertheless was very significant. Europe, over the last century, has amalgamated its power systems to the point where the European grid now exchanges electricity, literally across the continent, north, south, east, west. But until fairly recently, there have been sort of different power markets operating within it. So even though the different regions are all physically interconnected, there’s a limit to how much power can actually flow all the way from Spain up to Central Europe. And so there are these individual regional markets that handle keeping the power supply and demand in balance, and putting prices on electricity. And that algorithm basically made a big step towards integrating them all. So that you’d have one big market and a more competitive, open market and the ability to, for example, if you have spare wind power in one area, to then make use of that in some place a thousand kilometers away.

Steven Cherry The U.S. also has separate fiefdoms. Specifically, there are three that barely interact at all. What are they? And why can’t they share power?

Peter Fairley Now, in this case, when we’re talking about the U.S. fiefdoms, we’re talking about big zones that are physically divided. You have the Eastern—what’s called the Eastern Interconnection—which is a huge zone of synchronous AC power that’s basically most of North America east of the Rockies. You have the Western Interconnection, which is most of North America west of the Rockies. And then you have Texas, which has its own separate grid.

Steven Cherry And why can’t they share power?

Peter Fairley Everything within those separate zones is synched up. So you’ve got your 60 hertz AC wave; 60 times a second the AC power flow is changing direction. And all of the generators, all of the power consumption within each zone is doing that synchronously. But the east is doing it on its own. The West is on a different phase. Same for Texas.

Now you can trickle some power across those divides, across what are called “seams” that separate those, using DC power converters—basically, sort of giant substations with the world’s largest electronic devices—which are taking some AC power from one zone, turning it into DC power, and then producing a synthetic AC wave, to put that power into another zone. So to give you a sense of just what the scale of the transfers is and how small it is, the East and the West interconnects have a total of about 950 gigawatts of power-generating capacity together. And they can share a little over one gigawatt of electricity.

Steven Cherry So barely one-tenth of one percent. There are enormous financial benefits and reliability benefits to uniting the three. Let’s start with reliability.

Peter Fairley Historically, when grids started out, you would have literally a power system for one neighborhood and a separate power system for another. And then ultimately, over the last century, they have amalgamated. Cities connected with each other and then states connected with each other. Now we have these huge interconnections. And reliability has been one of the big drivers for that because you can imagine a situation where if you’re in city X and your biggest power generator goes offline, you know, burns out or whatever. If you’re interconnected with your neighbor, they probably have some spare generating capacity and they can help you out. They can keep the system from going down.

So similarly, if you could interconnect the three big power systems in North America, they could support each other. So, for example, if you have a major blackout or a major weather event like we saw last month—there was this massive heatwave in the West, and much of the West was struggling to keep the lights on. It wasn’t just California. If they were more strongly interconnected with Texas or the Eastern Interconnect, they could have leaned on those neighbors for extra power supply.

Steven Cherry Yeah, your article imagines, for example, the sun rising in the West during a heatwave sending power east; the sun setting in the Midwest, wind farms could send power westward. What about the financial benefits of tying together these three interconnects? Are they substantial? And are they enough to pay for the work that would be needed to unify them into a supergrid?

Peter Fairley The financial benefits are substantial and they would pay for themselves. And there’s really two reasons for that. One is as old as our systems, and that is, if you interconnect your power grids, then all of the generators in the amalgamated system can, in theory, serve that total load. And what that means is they’re all competing against each other. And power plants that are inefficient are more likely to be driven out of the market or to operate less frequently. And so the whole system becomes more efficient, more cost-effective, and prices tend to go down. You see that kind of savings when you look at interconnecting the big grids in North America. Consumers benefit—not necessarily all the power generators, right? There you get more winners and losers. And so that’s the old part of transmission economics.

What’s new is the increasing reliance on renewable energy and particularly variable renewable energy supplies like wind and solar. Their production tends to be more kind of bunchy, where you have days when there’s no wind and you have days when you’ve got so much wind that the local system can barely handle it. So there are a number of reasons why renewable energy really benefits economically when it’s in a larger system. You just get better utilization of the same installations.

Steven Cherry And that’s all true, even though you’re sending power 1,000 miles or 3,000 miles? You lose a fair amount of that generation, don’t you?

Peter Fairley It’s less than people imagine, especially if you’re using the latest high-voltage direct current power transmission equipment. DC power lines transmit power more efficiently than AC lines do, because the physics are actually pretty straightforward. An AC current will ride on the outside of a power cable, whereas a DC current will use the entire cross-section of the metal. And so you get less resistance overall, less heating, and less loss. And the power electronics that you need on either side of a long power line like that are also becoming much more efficient. So you’re talking about losses of a couple of percent on lines that, for example in China, span over 3000 kilometers.

Steven Cherry The reliability benefits, the financial benefits, the way a supergrid would be an important step for helping us move off of our largely carbon-based sources of power—we know all this in part because in the mid-2010s a study was made of the feasibility—including the financial feasibility—of unifying the U.S. in one single supergrid. Tell us about the Interconnections Seams Study.

Peter Fairley So the Interconnection Seams Study [Seams] was one of a suite of studies that got started in 2016 at the National Renewable Energy Laboratory in Colorado, which is one of the national labs operated by the U.S. Department of Energy. And the premise of the Seams study was that the electronic converters sitting between the east and the west grids were getting old; they were built largely in the 70s; they are going to start to fail and need to be replaced.

And the people at NREL were saying, this is an opportunity. Let’s think—and the power operators along the seam were thinking the same thing—we’re gonna have to replace these things. Let’s study our strategic options rather than have them go out of service and just automatically replace them with similar equipment. So what they posited was, let’s look at some longer DC connections to tie the East and the West together—and maybe some bigger ones. And let’s see if they pay for themselves. Let’s see if they have the kind of transformative effects that one would imagine that they would, just based on the theory. So they set up a big simulation modeling effort and they started running the numbers…

Now, of course, this got started in 2016 under President Obama. And it continued to 2017 and 2018 under a very different president. And basically, they affirmed that tying these grids, together with long DC lines, was a great idea, that it would pay for itself, that it would make much better use of renewable energy. But it also showed that it would accelerate the shutdown of coal-fired power. And that got them in some hot water with the new masters at the Department of Energy.

Steven Cherry By 2018 the study was largely completed, and researchers began to share its conclusions with other energy experts and policymakers. For example, there was a meeting in Iowa you describe where there was a lot of excitement over the Seams study. You write that things took a dramatic turn at one such gathering in Lawrence, Kansas.

Peter Fairley Yes. So the study was complete as far as the researchers were concerned. And they were working on their final task under their contract from the Department of Energy, which was to write and submit a journal article in this case. They were targeting an IEEE journal. And they, as you say, had started making some presentations. The second one was in August, in Kansas, and there’s a DOE official—a political appointee—who’s sitting in the audience and she does not like what she’s hearing. She, while the talk is going on, pulls out her cell phone, writes an email to DOE headquarters, and throws a red flag in the air.

Steven Cherry The drama moved up the political chain to a pretty high perch.

Peter Fairley According to an email from one of the researchers that I obtained and is presented in the InvestigateWest version of this article, it went all the way to the current secretary of energy, Daniel Brouillette, and perhaps to the then-Secretary of Energy, former Texas Governor [Rick] Perry.

Steven Cherry And the problem you say in that article was essentially the U.S. administration’s connections to—devotion to—the coal industry.

Peter Fairley Right. You’ve got a president who has made a lot of noise both during his election campaign and since then about clean, beautiful coal. He is committed to trying to stop the bleeding in the U.S. coal industry, to slow down or stop the ongoing mothballing of coal-fired power plants. His Secretary of Energy, Rick Perry, is doing everything he can to deliver on Trump’s promises. And along comes this study that says we can have a cleaner, more efficient power system with less coal. And yes, so it just ran completely counter to the political narrative of the day.

Steven Cherry You said earlier the financial benefits to consumers are unequivocal. But in the case of the energy providers, there would be winners and losers, and the losers would largely come from the coal industry.

Peter Fairley I would just add one thing to that, and that is and this depends on really the different systems. You’re looking at the different conditions and scenarios and assumptions. But, you know, in a scenario where you have more renewable energy, there are also going to be impacts on natural gas. And the oil and gas industry is definitely also a major political backer of the Trump administration.

Steven Cherry The irony is that the grid is moving off of coal anyway, and to some extent, oil and even natural gas, isn’t it?

Peter Fairley Definitely oil. It’s just a very expensive and inefficient way to produce power. So we’ve been shutting that down for a long time. There’s very little left. We are shutting down coal at a rapid rate in spite of every effort to save it. Natural gas is growing. So natural gas has really been—even more so than renewables—the beneficiary of the coal shutdown. Natural gas is very cheap in the U.S. thanks to widespread fracking. And so it’s coming on strong and it’s still growing.

Steven Cherry Where is the Seams study now?

Peter Fairley The Seams study is sitting at the National Renewable Energy Lab. Its leaders, under pressure from the political appointees at DOE, have kept it under wraps. It appears that there may have been some additional work done on the study since it got held up in 2018. But we don’t know what the nature of that work was. Yeah, so it’s just kind of missing in action at this point.

My sources tell me that there is an effort underway at the lab to get it out. And I think the reason for that is that they’ve taken a real hit in terms of the morale of their staff. The NREL Seams study is not the only one being held up. In fact, it’s one of dozens, according to my follow-up reporting. And, you know, NREL researchers are feeling pretty hard done by, and I think the management is trying to show its staff that it has some scientific integrity.

But I think it’s important to note that there are other political barriers to building a supergrid. It might be a no brainer on paper, but in addition to the pushback from the fossil-fuel industry that we’re seeing with Seams, there are other political crosscurrents that have long stood in the way of long-distance transmission in the U.S. For example—and this is a huge one—that, in the U.S., most states have their own public utility commission that has to approve new power lines. And when you’re looking at the kind of lines that Seams contemplated, or that would be part of a supergrid, you’re talking about long lines that have to span, in some cases, a dozen states. And so you need to get approval from each of those states to transit— to send power from point A to point X. And that is a huge challenge. There’s a wonderful book that really explores that side of things called Superpower [Simon & Schuster, 2019] by the Wall Street Journal’s Russell Gold.

Steven Cherry The politics that led to the suppression of the publication of the Seams study go beyond Seams itself, don’t they? There are consequences, for example, at the Office of Energy Efficiency and Renewable Energy.

Peter Fairley Absolutely. Seams is one of several dozen studies that I know of right now that are held up and they go way beyond transmission. They get into energy efficiency upgrades to low-income housing, prices for solar power… So, for example—and I believe this hasn’t been reported yet; I’m working on it—the Department of Energy has hitherto published annual reports on renewable energy technologies like wind and solar. And, in those, they provide the latest update on how much it costs to build a solar power plant, for example. And they also update their goals for the technology. Those annual reports have now been scaled back; they will appear every other year, if not less frequently. That’s an example of politics getting in the way because the cost savings from delaying those reports are not great, but the potential impact on the market is. There are many studies, not just those performed by the Department of Energy, that use those official price numbers in their simulations. And so if you delay updating those prices for something like solar, where the prices are coming down rapidly, you are making renewable energy look less competitive.

Steven Cherry And even beyond the Department of Energy, the EPA, for example, has censored itself on the topic of climate change, removing information and databases from its own Web sites.

Peter Fairley That’s right. The way I think of it is, when you tell a lie, it begets other lies. And you have to tell more lies to cover your initial lie and to maintain the fiction. And I see the same thing at work here with the Trump administration. When the president says that climate change is a hoax, when the president says that coal is a clean source of power, it then falls to the people below him on the political food chain to somehow make the world fit his fantastical and anti-science vision. And so, you just get this proliferation of information control in a hopeless bid to try and bend the facts to somehow make the great leader look reasonable and rational.

Steven Cherry You say even e-mails related to the Seams study have disappeared, something you found in your Freedom of Information Act requests. What about the national labs themselves? Historically, they have been almost academic research organizations or at least a home for unfettered academic freedom style research.

Peter Fairley That’s the idea. There has been this presumption or practice in the past, under past administrations, that the national labs had some independence. And that’s not to say that there’s never been political oversight or influence on the labs. Certainly, the Department of Energy decides what research it’s going to fund at the labs. And so that in itself shapes the research landscape. But there was always this idea that you fund the study and then it’s up to the labs to do the best work they can and to publish the results. The idea that you are deep-sixing studies that are simply politically inconvenient, or altering the content of the studies to fit the politics: that’s new. That’s what people at the lab say is new under the Trump administration. It violates DOE’s own scientific integrity policies in some cases. For example, with the Lawrence Berkeley National Laboratory, it violates the lab’s scientific integrity policy and the contract language under which the University of California system operates that lab for the Department of Energy. So, yeah, the independence of the national labs is under threat today. And there are absolutely concerns among scientists that precedents are being set that could affect how the labs operate, even if, let’s say, President Trump is voted out of office in November.

Steven Cherry Along those lines, what do you think the future of grid unification is?

Peter Fairley Well, Steven, I’ve been writing about climate and energy for over 20 years now, and I would have lost my mind if I wasn’t a hopeful person. So I still feel optimistic about our ability to recognize the huge challenge that climate change poses and to change the way we live and to change our energy system. And so I do think that we will see longer power lines helping regions share energy in the future. I am hopeful about that. It just makes too much sense to leave that on the shelf.

Steven Cherry Well, Peter, it’s an amazing investigation of the sort that reminds us why the press is important enough to democracy to be called the fourth estate. Thanks for publishing this work and for joining us today.

Peter Fairley Thank you so much. Steven. It’s been a pleasure.

Steven Cherry We’ve been speaking with Peter Fairley, a journalist who focuses on energy and the environment, about his researching and reporting on the suspension of work on a potential unification of the U.S. energy grid.

This interview was recorded September 11, 2020. Our audio engineering was by Gotham Podcast Studio; our music is by Chad Crouch.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers.

For Radio Spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.

We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

Exclusive: Airborne Wind Energy Company Closes Shop, Opens Patents

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/energywise/energy/renewables/exclusive-airborne-wind-energy-company-closes-shop-opens-patents

This week, a 13-year experiment in harnessing wind power using kites and modified gliders finally closes down for good. But the technology behind it is open-sourced and is being passed on to others in the field.

As of 10 September, the airborne wind energy (AWE) company Makani Technologies has officially announced its closure. A key investor, the energy company Shell, also released a statement to the press indicating that “given the current economic environment” it would not be developing any of Makani’s intellectual property either. Meanwhile, Makani’s parent company, X, Alphabet’s moonshot factory, has made a non-assertion pledge on Makani’s patent portfolio. That means anyone who wants to use Makani patents, designs, software, and research results can do so without fear of legal reprisal.

Makani’s story, recounted last year on this site, is now the subject of a 110-minute documentary called Pulling Power from the Sky—also free to view.

When she was emerging from graduate studies at MIT in 2009, Paula Echeverri (once Makani’s chief engineer) said the company was a compelling team to join, especially for a former aerospace engineering student.

“Energy kite design is not quite aircraft design and not quite wind turbine design,” she said.

The idea behind the company’s technology is to raise the altitude of the wind energy harvesting to hundreds of meters in the sky—where the winds are typically both stronger and more steady. Because a traditional windmill reaching anything approaching these heights would be impractical, Makani was looking into kites or gliders that could ascend to altitude first—fastened to the ground by a tether. Only then would the flyer begin harvesting energy from wind gusts.

Pulling Power recounts Makani’s story from its very earliest days, circa 2006, when kites like the ones kite surfers use were the wind energy harvester of choice. However, using kites also means drawing power out of the tug on the kite’s tether, which, as the company’s early experiments revealed, couldn’t compete with propellers on a glider plane.

What became the Makani basic flyer, the M600 Energy Kite, looked like an oversized hobbyist’s glider but with a bank of propellers across the wing. These props would first be used to loft the glider to its energy-harvesting altitude. Then the engine would shut off and the glider would ride the air currents—using the props as mini wind turbines.

According to a free 1,180-page ebook, The Energy Kite (which Makani is also releasing online, in three parts), the company soon found a potentially profitable niche in operating offshore.

Just in terms of tonnage, AWE had a big advantage over traditional offshore wind farms. Wind turbines (in shallow water) fixed to the seabed might require 200 to 400 tons of metal for every megawatt of power the turbine generated. And floating deep-water turbines, anchored to seabed by cables, typically involve 800 tons or more per megawatt. Meanwhile, a Makani AWE platform—which can be anchored in even deeper water—weighed only 70 tons per rated megawatt of generating capacity.
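The tonnage advantage reduces to simple ratios. A quick sketch using the figures above (the midpoint is assumed for the 200-to-400-ton range):

```python
# Structural mass per megawatt for offshore wind options, from the
# figures cited above (midpoint assumed for the 200-400 t/MW range).
tons_per_mw = {
    "fixed-bottom turbine": 300,  # shallow water, 200-400 t/MW
    "floating turbine": 800,      # deep water, ~800 t/MW or more
    "Makani AWE platform": 70,    # anchorable in even deeper water
}

# How many times more metal each alternative needs per rated megawatt
advantage = {name: mass / tons_per_mw["Makani AWE platform"]
             for name, mass in tons_per_mw.items()}
print(advantage)  # floating turbines need roughly 11x the metal per MW
```

By this rough measure, a floating deep-water turbine carries more than eleven times the structural mass per megawatt of the Makani platform.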

Yet, according to the ebook, in real-world tests, Makani’s M600 proved difficult to fly at optimum speed. In high winds, it couldn’t fly fast enough to pull as much power out of the wind as the designers had hoped. In low winds, it often flew too fast. In all cases, the report says, the rotors just couldn’t operate at peak capacity through much of the flyer’s maneuvers. The upshot: The company had a photogenic oversized model airplane, but not the technology that’d give regular wind turbines a run for their money.

Don’t take Makani’s word for it, though, says Echeverri. Not only is the company releasing its patents into the wild, it’s also giving away its code base, flight logs, and a Makani flyer simulation tool called KiteFAST.

“I think that the physics and the technical aspects are still such that, in floating offshore wind, there’s a ton of opportunity for innovation,” says Echeverri.

One of the factors the Makani team didn’t anticipate in the company’s early years, she said, was how precipitously electricity prices would continue to drop, leaving precious little room at the margins for new technologies like AWE to blossom and grow.

“We’re thinking about the existing airborne wind industry,” Echeverri said. “For people working on the particular problems we’d been working on, we don’t want to bury those lessons. We also found this to be a really inspiring journey for us as engineers—a joyful journey… It is worthwhile to work on hard problems.”

Solar Closing in on “Practical” Hydrogen Production

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/energywise/energy/renewables/solar-closing-in-on-practical-hydrogen-production

Israeli and Italian scientists have developed a renewable energy technology that converts solar energy to hydrogen fuel — and it’s reportedly at the threshold of “practical” viability.

The new solar tech would offer a sustainable way to turn water and sunlight into storable energy for fuel cells, whether that stored power feeds into the electrical grid or goes to fuel-cell powered trucks, trains, cars, ships, planes or industrial processes.

Think of this research as a sort of artificial photosynthesis, said Lilac Amirav, associate professor of chemistry at the Technion — Israel Institute of Technology in Haifa. (If it could be scaled up, the technology could eventually be the basis of “solar factories” in which arrays of solar collectors split water into stores of hydrogen fuel, as well as, for reasons discussed below, one or more other industrial chemicals.)

“We [start with] a semiconductor that’s very similar to what we have in solar panels,” says Amirav. But rather than taking the photovoltaic route of using sunlight to liberate a current of electrons, the reaction they’re studying harnesses sunlight to efficiently and cost-effectively peel off hydrogen from water molecules.

The big hurdle to date has been that hydrogen and oxygen just as readily recombine once they’re split apart—that is, unless a catalyst can be introduced to the reaction that shunts water’s two component elements away from one another.

Enter the rod-shaped nanoparticles Amirav and co-researchers have developed. The wand-like rods (50-60 nanometers long and just 4.5 nm in diameter) are all tipped with platinum spheres 2–3 nm in diameter, like nano-size marbles fastened onto the ends of drinking straws.

Since 2010, when the team first began publishing papers about such specially tuned nanorods, they’ve been tweaking the design to maximize its ability to extract as much hydrogen and excess energy as possible from “solar-to-chemical energy conversion.”

Which brings us back to those “other” industrial chemicals. Because creating molecular hydrogen out of water also yields oxygen, they realized they had to figure out what to do with that byproduct. “When you’re thinking about artificial photosynthesis, you care about hydrogen—because hydrogen’s a fuel,” says Amirav. “Oxygen is not such an interesting product. But that is the bottleneck of the process.”

There’s no getting around the fact that oxygen liberated from split water molecules carries energy away from the reaction, too. So, unless it’s harnessed, it ultimately represents just wasted solar energy—which means lost efficiency in the overall reaction.

So, the researchers added another reaction to the process. Not only does their platinum-tipped nanorod catalyst use solar energy to turn water into hydrogen, it also uses the liberated oxygen to convert the organic molecule benzylamine into the industrial chemical benzaldehyde (commonly used in dyes, flavoring extracts, and perfumes).

All told, the nanorods convert 4.2 percent of the energy of incoming sunlight into chemical bonds. Considering the energy in the hydrogen fuel alone, they convert 3.6 percent of sunlight energy into stored fuel.

These might seem like minuscule figures. But 3.6 percent is still considerably better than the 1-2 percent range that previous technologies had achieved. And according to the U.S. Department of Energy, 5-10 percent efficiency is all that’s needed to reach what the researchers call the “practical feasibility threshold” for solar hydrogen generation.
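To put those percentages in context, here is a minimal sketch using only the numbers quoted above:

```python
# Reported solar-to-chemical conversion efficiencies versus the 5-10%
# "practical feasibility threshold" attributed to the U.S. DOE above.
total_to_chemical = 4.2  # % of incoming sunlight stored in all chemical bonds
hydrogen_only = 3.6      # % stored in the hydrogen fuel alone
previous_best = 2.0      # upper end of the earlier 1-2% range
threshold = (5.0, 10.0)  # practical feasibility band, in percent

improvement = hydrogen_only / previous_best  # factor over prior technologies
shortfall = threshold[0] - hydrogen_only     # points left to close
print(f"{improvement:.1f}x prior art; {shortfall:.1f} points below threshold")
```

In other words, the hydrogen-only figure is nearly double the best earlier results, yet still about 1.4 percentage points shy of the low end of the practical band.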

Between February and August of this year, Amirav and her colleagues published about the above innovations in the journals Nano Energy and Chemistry Europe. They also recently presented their research at the fall virtual meeting of the American Chemical Society.

In their presentation, which hinted at future directions for their work, they teased further efficiency improvements courtesy of new work with AI data-mining experts.

“We are looking for alternative organic transformations,” says Amirav. This way, she and her collaborators hope, their solar factories can produce hydrogen fuel plus an array of other useful industrial byproducts. In the future, their artificial photosynthesis process could yield low-emission energy, plus some beneficial chemical extracts as a “practical” and “feasible” side-effect.

South Africa’s Lights Flicker as its Electric Utility Ponders a Future Without Carbon

Post Syndicated from David Wagman original https://spectrum.ieee.org/energywise/energy/renewables/south-africas-lights-flicker-as-its-electric-utility-ponders-a-future-without-carbon

Optimists look to the future of Eskom, South Africa’s electric utility, and see a glass half full. Pessimists see a glass nearly empty.

The optimists look to what they say is a rare opportunity for a national utility to fully embrace renewable energy resources. In so doing, the power provider could decarbonize its fleet of electric generating plants as well as Africa’s largest economy. Tens of thousands of green energy jobs could be created and gigawatts of renewable generating resources could be added each year for the next 10 or 20 years.

The pessimists—some might call them realists—point to a utility buckling under a massive debt load, singed by a legacy of government cronyism, disappointed by the performance problems of two of the world’s newest and biggest coal-fired power plants, and swamped beneath a backlog of deferred maintenance projects.

In particular, deferred maintenance across the utility’s 37,000 megawatts (MW) of installed capacity has led to forced outages and rolling blackouts in recent weeks during this, South Africa’s winter season.

In late July, the utility urged consumers to turn off lights after four of six generating units at the 3,600-MW Tutuka power station shut down following equipment failures. Only days earlier, the utility had restored power after four other power plants suffered unexpected outages.

Eskom’s problems have been decades in the making and are closely tied to the country’s transition away from Apartheid, the policy of racial segregation that ruled South Africa from the late 1940s until the early 1990s.

As a state-owned entity, Eskom made massive investments in coal-fired generating capacity during the 1970s and 1980s. That capacity led to low electricity tariffs, driven still lower by policymakers’ decision not to set aside money for future investment.

But as economic growth and energy consumption accelerated during the 1990s, the government see-sawed over whether to pursue new generating capacity in a restructured energy sector, led by private-sector investment, or by government-led investment, through Eskom.

It ultimately opted for government investment, and in the early 2000s, accepted proposals to build two enormous coal-fired power stations, known as Medupi (meaning “a gentle rain”) and Kusile (meaning “the dawn has come”).

Those decisions to build came at a time when South African President Jacob Zuma was entering office, and utility oversight was weakened when Zuma installed senior managers who “ran amok,” says Mark Swilling, who directs the Sustainable Development Program at Stellenbosch University. Eskom then did “the worst possible thing,” says Swilling: It took on enormous debt loads at a time when fiscal and management oversight was being reduced.

At the same time, the utility took on the role of project manager for both power plants. One source familiar with the projects said that no engineering firm wanted the work because of “all the variables” around the government’s role. And, because few big infrastructure projects had been undertaken since the end of Apartheid, little insight existed into factors such as labor productivity.

The government anted up billions of dollars in guarantees to support construction of both power plants. But engineering flaws, coupled with inexperienced project managers, led to delays and cost overruns at both sites.

What’s more, in 2015, a U.S. District Court fined equipment supplier Hitachi $19 million for improper payments it made to secure contracts to provide boilers for the power plants. The Japan-based industrial conglomerate—which fell under the reach of U.S. securities law because the company did business in the U.S.—agreed to pay the fine but did not admit guilt.

Hitachi has since become part of Mitsubishi. Mitsubishi Hitachi Power Systems-Africa (MHPS-A) did not reply to requests for an interview.

Earlier this year, a South African government probe questioned fees Eskom paid to U.S.-based engineering group Black & Veatch as far back as 2005. The probe questioned alleged price escalations along with supposedly no-bid contracts for work related to the Kusile power plant and other energy infrastructure projects.

In an email response to a request for comment about the inquiry, Black & Veatch spokesperson Patrick MacElroy said, “We were selected by Eskom in an open and transparent process to provide engineering, project management, and construction management support services for the Kusile power station by the Eskom board, the Tender Committee, and by National Treasury. This initial project award was extended, and the scope increased through the addition of approved annual Task Orders including the current agreement governing our operations on the project.”

MacElroy said that over time, the engineering firm’s role expanded, “which required additional resources compared to the initial project plan.” Using boldface, underlined type in his email, MacElroy said that “at no point was Black & Veatch responsible for the design of the boilers or coal ash systems often highlighted as a source of cost overruns and technical issues.”

One source calls the power plants “an albatross around the country’s neck.”

Medupi Power Station is a dry-cooled coal-fired power station with an installed capacity of 4,764 MW, placing it among the largest coal-fired power stations in the world. When work first began on the plant in 2007, Eskom said it could be built for around $4.7 billion. By last year, that estimate had grown to nearly $14 billion.

The first 794 MW unit was commissioned and handed over to Eskom Generation in August 2015. Another five units—each also approaching 800 MW in capacity—were completed and delivered at roughly nine-month intervals. The final unit is expected to achieve commercial status later this year.

A series of technical problems have plagued the plant, including steam-piping pressure issues, boiler weld defects, ash-system blockages, and damage to mill crushers used to prepare coal for burning.

Most troubling, however, may be design defects with the power plant’s boilers, which were supplied by Hitachi prior to its joining Mitsubishi Hitachi Power Systems-Africa.

“There are fundamental problems” with the boilers, says Mark Swilling. In particular, design engineers “got the boilers wrong” for the type of coal that was to be burned. A May 2020 system status briefing given by two senior Eskom executives reported that technical solutions had been agreed to between Eskom and MHPS-A for boiler defects at both the Medupi and Kusile power stations.

The repairs were first made to Medupi Unit 3 during a 75-day shutdown earlier this year. Medupi unit 6 was next for a 75-day repair outage. Three additional units are slated to be modified during the remainder of the year.

The World Bank, which provided some of the financing for the two power plants, issued a report this past February saying the boilers’ defects were hurting two key plant performance metrics: energy availability factor (EAF) and unplanned capability loss factor (UCLF). The plant’s numbers were “far from ideal values” of 92% for EAF and 2% for UCLF, the report said.

The World Bank report said that Eskom’s EAF target for Medupi was 75%. That was a full 17 points below where it should have been. Even with the reduced target, some of the plant’s generating units were still operating at a substandard level.

Meanwhile, the target for UCLF, which represents unplanned availability and power capacity losses, stood at 18%, nine times as high as the optimal value. As of February, all units except one were operating above this target, the World Bank report said.
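The gaps quoted in the report follow directly from those numbers. A quick check, taking 92% EAF and 2% UCLF as the ideal values:

```python
# Medupi performance gaps implied by the World Bank figures above.
ideal_eaf, ideal_uclf = 92.0, 2.0     # percent: ideal availability / loss
target_eaf, target_uclf = 75.0, 18.0  # Eskom's relaxed targets for Medupi

eaf_gap = ideal_eaf - target_eaf          # percentage points short of ideal
uclf_multiple = target_uclf / ideal_uclf  # multiple of the optimal loss rate
print(f"EAF target trails ideal by {eaf_gap:.0f} points; "
      f"UCLF target is {uclf_multiple:.0f}x optimal")
```

That is, even Eskom’s own relaxed targets sit 17 points below the ideal availability figure and allow nine times the optimal unplanned losses, and some units were missing the relaxed targets too.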

The report blamed the poor performance on “major plant defects” along with what it said was an inadequate maintenance and operating regime and difficulty managing spare parts.

The bottom line, says Swilling, is that the Medupi plant will “never, ever” achieve the performance targets that were specified before the generating units were built.

Further substantiating his outlook is another setback: It may not be until the early 2030s before flue gas desulfurization equipment is installed on each unit. A July oversight report by the African Development Bank said that retrofits to add those environmental controls, which were specified back in 2007, could cost another $2.5 billion.

Amid the turmoil, Andre de Ruyter took over as Eskom CEO in early January. Appointed by South African President Cyril Ramaphosa, de Ruyter is tasked with overseeing a plan to split Eskom into three units: for generation, transmission, and distribution, in an attempt to achieve efficiencies and attract investment. The executive has also undertaken a series of initiatives that some say are particularly bold.

First, de Ruyter committed to a schedule of power plant maintenance work “no matter the consequences,” as Swilling puts it. The result has been razor-thin reserve margins that have led to frequent electric service disruptions as some generating units are repaired while still others suffer forced outages.

Second, de Ruyter has said that Eskom will never build another coal-fired power plant. And third, he has called for the utility to lead an energy transformation in South Africa to decarbonize the grid and the economy.

The utility may have no other choice but to move away from fossil fuels. The average age of its coal-dominated power generating fleet is approaching 40 years. The legacy of deferred maintenance has not been kind. As a result, the price tag to repair the existing assets could exceed the cost of shutting them down and replacing the capacity with renewable energy resources.

That’s a tall order. After all, the utility has around 37 gigawatts of installed generating capacity, much of it coal-fired. By comparison, the country’s nascent, largely private renewable energy sector has installed around 5 GW of capacity over the past half-decade. An unprecedented boom in utility-scale wind and solar projects would need to ramp up rapidly just to keep pace with an expected surge of coal plant retirements.

But here, too, politics may get in the way. The country’s Minister of Public Enterprises supports a shift to renewable energy while the Minister of Mineral Resources and Energy does not.

“There is a lot of political uncertainty,” Swilling observes.

Whether or not Eskom—and, by extension, South Africa—shoulders this effort to shift from coal to renewables remains unclear. For now, at least, the utility’s glass appears to be nearly empty, drained by decades of bad management, meddlesome politicians, and mishandled infrastructure investments.

However, if Eskom in particular, and South Africans in general, embrace a plan to decarbonize the grid and build massive amounts of renewable energy, then the country may be able to fill its glass to overflowing as it shows the world how to restructure an economy in a way that fully embraces renewable energy.

Spherical Solar Cells Soak Up Scattered Sunlight

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/energywise/energy/renewables/spherical-solar-cells-soak-up-scattered-sunlight

Flat solar panels still face big limitations when it comes to making the most of the available sunlight each day. A new spherical solar cell design aims to boost solar power harvesting potential from nearly every angle without requiring expensive moving parts to keep tracking the sun’s apparent movement across the sky. 

Dethroned! Renewables Generated More Power than King Coal in April

Post Syndicated from Sandy Ong original https://spectrum.ieee.org/energywise/energy/renewables/renewables-generated-more-power-than-coal-in-april

For the first time ever, renewable energy supplied more power to the U.S. electricity grid than coal-fired plants for 47 days straight. The run is impressive because it trounces the previous record of nine continuous days last June and exceeds the total number of days renewables beat coal in all of 2019 (38 days).

In a recent report, the Institute for Energy Economics and Financial Analysis (IEEFA) details how the streak was first observed on 25 March and continued through to 10 May, the day the data was last analyzed. 

“We’ll probably track it again at the end of May, so the period could actually be longer,” says Dennis Wamsted, an energy analyst at IEEFA. Already, the figures for April speak volumes: wind, hydropower, and utility-scale solar sources produced 58.7 terawatt-hours (TWh) of electricity compared with coal’s 40.6 TWh—or 22.2% and 15.3% of the market respectively.  

In reality, the gap between the two sources is likely to be much larger, says Wamsted. That’s because the U.S. Energy Information Administration (EIA) database, where IEEFA obtains its data from, excludes power generated by rooftop solar panels, which itself is a huge power source.

The news that renewables overtook coal in the month of April isn’t surprising, says Brian Murray, director of the Duke University Energy Initiative. The first time this happened was last year, also in April. The month marks “shoulder season,” he says, “when heating is coming off but air-conditioning hasn’t really kicked in yet.” It’s when electricity demand is typically the lowest, which is why many power plants schedule their yearly maintenance during this time.

Spring is also when wind and hydropower generation peak, says Murray. Various thermal forces come into play with the Sun’s new positioning, and the melting snowpacks feed rivers and fill up reservoirs. 

“Normally you would expect some sort of rebound of coal generation in the summer, but I think there’s a variety of reasons why that’s not going to happen this year,” he says. “One has to do with coronavirus.”

With the pandemic placing most of the country in lockdown and economic activity declining, the EIA estimates that U.S. demand for electric power will fall by 5% in 2020. This, in turn, will drive coal production down by a quarter. In contrast, renewables are still expected to grow by 11%. The reason behind this is partly due to how energy is dispatched to the grid. Because of cheaper costs, renewables are used first if available, followed by nuclear power, natural gas, and then finally coal.
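That dispatch order can be sketched as a simple merit-order loop: the cheapest sources serve demand first, and coal, as the most expensive, is served last. The costs and capacities below are hypothetical, chosen only to illustrate the mechanism:

```python
# Minimal merit-order dispatch sketch: cheapest sources serve demand first.
# All costs and capacities are hypothetical, for illustration only.
def dispatch(demand_mw, sources):
    """Fill demand from sources in ascending cost order; return MW served."""
    served = {}
    for name, cost, capacity in sorted(sources, key=lambda s: s[1]):
        take = min(capacity, demand_mw)
        served[name] = take
        demand_mw -= take
        if demand_mw <= 0:
            break
    return served

sources = [                  # (name, $/MWh marginal cost, MW available)
    ("renewables", 0, 300),  # near-zero marginal cost: dispatched first
    ("nuclear", 10, 200),
    ("natural gas", 30, 400),
    ("coal", 40, 500),       # most expensive here: dispatched last
]
print(dispatch(700, sources))  # when demand is low, coal is never called on
```

When demand falls, as it did during the lockdowns, the loop simply never reaches the most expensive plants, which is why reduced electricity consumption hits coal hardest.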

Coronavirus aside, the transition has been a long time coming. “Renewables have been on an inexorable rise for the last 10 years, increasingly eating coal’s lunch,” says Mike O’Boyle, director of electricity policy at Energy Innovation, a San Francisco-based think tank. The average coal plant in the U.S. is 40 years old, and these aging, inefficient plants are finding it increasingly difficult to compete against ever-cheaper renewable energy sources.

A decade ago, the average coal plant ran at as much as 67% of its rated capacity. Today, that figure has dropped to 48%. And in the next five years, coal generating capacity is expected to fall to two-thirds of its 2014 level, a decline of 90 gigawatts (GW), as increasing numbers of plants shut.

“And that’s without policy changes that we anticipate will strengthen in the U.S., in which more than a third of people are in a state, city, or utility with a 100% clean energy goal,” says O’Boyle. Already, 30 states have renewable portfolio standards, or policies designed to increase electricity generation from renewable resources.

The transition toward renewables is being observed all across the world. Global use of coal-powered electricity fell 3% last year, the biggest drop in nearly four decades of record-keeping. In Europe, the decline was 24%. The region has been remarkably progressive in its march toward renewable energy—last month saw both Sweden and Austria close their last remaining coal plants, while the U.K. went through its longest coal-free stretch (35 days) since the Industrial Revolution more than 230 years ago.

But coal is still king in many parts of the world. For developing countries where electricity can be scarce and unreliable, the fossil fuel is often seen as the best option for power. 

The good news, however, is that the world’s two largest consumers of coal are investing heavily in renewables. Although China is still heavily reliant on coal, it also boasts the largest capacity of wind, solar, and hydropower in the world today. India, with its abundant sunshine, is pursuing an aggressive solar plan. It is building the world’s largest solar park, and Prime Minister Narendra Modi has pledged that the country will produce 100 GW of solar power—five times what the U.S. generates—by 2022.

Today, renewable energy sources offer the cheapest form of power in two-thirds of the world, and they look set to get cheaper. They now meet up to 30% of global electricity demand, a figure that is expected to grow to 50% by 2050. As a recent United Nations report put it: renewables are now “looking all grown up.”

Next-Gen Solar Cells Can Harvest Indoor Lighting for IoT Devices

Post Syndicated from Maria Gallucci original https://spectrum.ieee.org/energywise/energy/renewables/next-gen-solar-cells-harvest-indoor-lighting-iot-devices

Billions of Internet-connected devices now adorn our walls and ceilings, sensing, monitoring, and transmitting data to smartphones and far-flung servers. As gadgets proliferate, so too does their electricity demand and need for household batteries, most of which wind up in landfills. To combat waste, researchers are devising new types of solar cells that can harvest energy from the indoor lights we’re already using.

The dominant material used in today’s solar cells, crystalline silicon, doesn’t perform as well under lamps as it does beneath the blazing sun. But emerging alternatives—such as perovskite solar cells and dye-sensitized materials—may prove to be significantly more efficient at converting artificial lighting to electrical power. 

A group of researchers from Italy, Germany, and Colombia is developing flexible perovskite solar cells specifically for indoor devices. In recent tests, their thin-film solar cell delivered power conversion efficiencies of more than 20 percent under 200 lux, the typical amount of illuminance in homes. That’s about triple the indoor efficiency of polycrystalline silicon, according to Thomas Brown, a project leader and engineering professor at the University of Rome Tor Vergata.

A Bright Spot for Solar Windows Powered By Perovskites

Post Syndicated from Sandy Ong original https://spectrum.ieee.org/energywise/energy/renewables/solar-windows-powered-perovskites

To most of us, windows are little more than glass panes that let light in and keep bad weather out. But to some scientists, windows represent possibility—the chance to take passive parts of a building and transform them into active power generators.

Anthony Chesman is one researcher working to develop such solar windows. “There are a lot of windows in the world that aren’t being used for any other purpose than to allow lighting in and for people to see through,” says Chesman, who is from Australia’s national science agency CSIRO. “But really, there’s a huge opportunity there in turning those windows into a space that can also generate electricity,” he says.

EnergySails Aim to Harness Wind and Sun To Clean Up Cargo Ships

Post Syndicated from Maria Gallucci original https://spectrum.ieee.org/energywise/energy/renewables/energysails-harness-wind-sun-clean-up-cargo-ships

The global shipping industry is experiencing a wind-powered revival. Metal cylinders now spin from the decks of a half-dozen cargo ships, easing the burden on diesel engines and curbing fuel consumption. Devices like giant towing kites, vertical suction wings, and telescoping masts are in development or early trials, while canvas sails flutter once more on smaller vessels. 

The latest development in “wind-assisted propulsion” comes from Japan. Eco Marine Power (EMP) recently unveiled a full-scale version of its EnergySail system at the Onomichi Marine Tech Test Center in Hiroshima Prefecture. The rigid, rectangular device is slightly curved and can be positioned into the wind to create lift, helping propel vessels forward. Marine-grade solar panels along the face can supply electricity for onboard lighting and equipment.

Greg Atkinson, EMP’s chief technology officer, says the 4-meter-tall sail will undergo shore-based testing this year, in preparation for sea trials. The device will deliver 1 kilowatt of peak solar power (kWp), though the startup is still evaluating which type of photovoltaic panel to use. The sail’s potential propulsive power is yet to be determined, he says.

The EnergySail is one piece of EMP’s larger technology platform. The Fukuoka-based firm is also developing an integrated system that includes deck-mounted solar panels; recyclable marine batteries; charging systems; and computer programs that automatically rotate sails to capture optimal amounts of wind, or lower the devices when not in use or during bad weather. Atkinson notes that moving an EnergySail (mainly to optimize its wind collection) may affect how much sunlight it receives, though the panels can still collect solar power when lying flat.

The startup’s ultimate goal is to hoist about a dozen EnergySails on a tanker or freighter that has the available deck space. An array of that size could deliver power savings of up to 15 percent, depending on wind conditions and the vessel’s size, models show.

Gavin Allwright, secretary of the International Windship Association, says that figure is in line with projections for other wind-assisted technologies, which can help watercraft achieve between 5 and 20 percent fuel savings compared to typical ships. (EMP is not a member of the association.) For instance, the Finnish company Norsepower recently outfitted a Maersk oil tanker with two spinning rotor sails. The devices lowered the vessel’s fuel use by 8.2 percent on average during a 12-month trial period.

Shipping companies are increasingly investing in clean energy as international regulators move to slash global greenhouse gas emissions. Nearly all commercial cargo ships use oil or gas to carry goods across the globe; together, they contribute up to 3 percent of the world’s total annual fossil fuel emissions. Zero-emission alternatives like hydrogen fuel cells and ammonia-burning engines are still years from commercialization. But wind-assisted propulsion represents a more immediate, if partial, solution. 

For its EnergySail unit, EMP partnered with Teramoto Iron Works, which built the first rigid sails in the 1980s. Those devices—called JAMDA sails after the Japan Marine Machinery Development Association—were shown to reduce fuel use by 10 to 30 percent on smaller coastal vessels, despite some technical issues. However, the experiment was short-lived. Plunging oil prices eroded the business case for efficiency upgrades, and shipowners later took them down.

EMP is currently talking with several shipowners to start installing its full energy system, potentially later this year. For the sea trial, the startup plans to install a deck-mounted solar array with up to 25 kWp; battery packs; computer systems; and one or two EnergySails. Atkinson says it may take two to three years of testing to verify whether the equipment can weather harsh conditions, including fierce winds and corrosive saltwater. 

Separately, EMP has started testing the non-sail portion of its platform. In May 2019, the company installed a 1.2-kWp solar array on a large crane vessel owned by Singaporean carrier Masterbulk. The setup also includes a 3.6-kilowatt-hour VRLA (valve regulated lead acid) battery pack made by Furukawa Battery Co. An onboard monitoring system automatically reports and logs fuel-consumption data in real time and calculates daily emissions of carbon and sulfur dioxide.

EMP previously tested Furukawa’s batteries on a vessel in Greece. During the day, solar panels recharged the batteries, which kept the voltage stable and could directly power the vessel’s lighting load. The batteries also stored excess solar power to keep the lights on at night. It took the partners about five years of testing to ensure the system was stable. 

Atkinson says that, so far, the COVID-19 pandemic hasn’t disrupted the company’s work or halted its plans for the year.

“We can do much of the design work remotely and by using cloud-based applications,” he says. “Also, we can use virtual wind tunnels and [Computer Aided Design] applications for much of the initial design work for the sea trials phase.”

Across the industry, however, the coronavirus outbreak is wreaking economic havoc. Allwright says that shipowner interest in wind-assisted propulsion was “absolutely crazy” until a few weeks ago. “Now, shipping companies are saying, ‘Look, we can’t invest in new technology right now because we’re trying to survive,’” he says. 

Still, some technology developers are nonetheless accelerating their design work, in the hopes of launching projects as soon as the industry bounces back. “This pause gives the providers an extra 12 months to get these things tested and ready for action,” Allwright says.

Redox-Flow Cell Stores Renewable Energy as Hydrogen

Post Syndicated from Sandy Ong original https://spectrum.ieee.org/energywise/energy/renewables/storing-renewable-energy-hydrogen-redoxflow-cell

When it comes to renewables, the big question is: How do we store all that energy for use later on? Because such energy is intermittent in nature, storing it when there is a surplus is key to ensuring a continuous supply—for rainy days (literally), at night, or when the wind doesn’t blow.

Using today’s lithium-ion batteries for long-term grid storage isn’t feasible for a number of reasons. For example, they have fixed charge capacities and don’t hold charge well over extended periods of time. 

The solution, some propose, is to store energy chemically—in the form of hydrogen fuel—rather than electrically. This involves using devices called electrolyzers that make use of renewable energy to split water into hydrogen and oxygen gas. 

Saitec Teams Up With German Utility RWE to Test Floating Wind Turbines in Bay of Biscay

Post Syndicated from Lynne Peskoe-Yang original https://spectrum.ieee.org/energywise/energy/renewables/saitec-rwe-test-floating-wind-turbines

Kilometers off the coast of Basque Country in northern Spain, a new twist on offshore wind energy will soon face its final test. The Spanish firm Saitec Engineering made headlines late last year with its distinctive floating turbine concept, and promised to deploy a prototype in April. Last week, that launch took on new significance when Saitec announced a partnership with the renewables division of the German energy titan RWE.

The potential to harvest wind from beyond the shoreline is substantial. “The farther from shore [the wind farm is located], the bigger the wind resource is,” said Luis González-Pinto, chief operating officer of Saitec Offshore Technologies.

Prototype Offers High Hopes for High-Efficiency Solar Cells

Post Syndicated from Maria Gallucci original https://spectrum.ieee.org/energywise/energy/renewables/prototype-high-efficiency-solar-cells-news

Scientists continue to tinker with recipes for turning sunlight into electricity. By testing new materials and components, in varying sizes and combinations, their goal is to produce solar cells that are more efficient and less expensive to manufacture, allowing for wider adoption of renewable energy. 

The latest development in that effort comes from researchers in St. Petersburg, Russia. The group recently created a tiny prototype of a high-efficiency solar cell using gallium phosphide and nitrogen. If successful, the cells could nearly double today’s efficiency rates—that is, the degree to which incoming solar energy is converted into electrical power.

The new approach could theoretically achieve efficiencies of up to 45 percent, the scientists said. By contrast, conventional silicon cells are typically less than 20 percent efficient. 

AltaRock Energy Melts Rock With Millimeter Waves for Geothermal Wells

Post Syndicated from Maria Gallucci original https://spectrum.ieee.org/energy/renewables/altarock-energy-melts-rock-with-millimeter-waves-for-geothermal-wells

A vast supply of heat lies beneath our feet. Yet today’s drilling methods can barely push through dense rocks and high-pressure conditions to reach it. A new generation of “enhanced” drilling systems aims to obliterate those barriers and unlock unprecedented supplies of geothermal energy.

AltaRock Energy is leading an effort to melt and vaporize rocks with millimeter waves. Instead of grinding away with mechanical drills, scientists use a gyrotron—a specialized high-frequency microwave-beam generator—to open holes in slabs of hard rock. The goal is to penetrate rock at faster speeds, to greater depths, and at a lower cost than conventional drills do.

The Seattle-based company recently received a US $3.9 million grant from the U.S. Department of Energy’s Advanced Research Projects Agency–Energy (ARPA-E). The three-year initiative will enable scientists to demonstrate the technology at increasingly larger scales, from burning through hand-size samples to room-size slabs. Project partners say they hope to start drilling in real-world test sites before the grant period ends in September 2022.

AltaRock estimates that just 0.1 percent of the planet’s heat content could supply humanity’s total energy needs for 2 million years. Earth’s core, at a scorching 6,000 °C, radiates heat through layers of magma, continental crust, and sedimentary rock. At extreme depths, that heat is available in constant supply anywhere on the planet. But most geothermal projects don’t reach deeper than 3 kilometers, owing to technical or financial restrictions. Many wells tap heat from geysers or hot springs close to the surface.

That’s one reason why, despite its potential, geothermal energy accounts for only about 0.2 percent of global power capacity, according to the International Renewable Energy Association.

“Today we have an access problem,” says Carlos Araque, CEO of Quaise, an affiliate of AltaRock. “The promise is that, if we could drill 10 to 20 km deep, we’d basically have access to an infinite source of energy.”

The ARPA-E initiative uses technology first developed by Paul Woskov, a senior research engineer at MIT’s Plasma Science and Fusion Center. Since 2008, Woskov and his colleagues have used a 10-kilowatt gyrotron to produce millimeter waves at frequencies between 30 and 300 gigahertz. Elsewhere, millimeter waves are used for many purposes, including 5G wireless networks, airport security, and astronomy. While producing those waves requires only milliwatts of power, it takes several megawatts to drill through rocks.

To start, MIT researchers place a piece of rock in a test chamber, then blast it with high-powered, high-frequency beams. A metallic waveguide directs the beams to form holes. Compressed gas is injected to prevent plasma from breaking down and bursting into flames, which would hamper the process. In trials, millimeter waves have bored holes through granite, basalt, sandstone, and limestone.

The ARPA-E grant will allow the MIT team to develop their process using megawatt-size gyrotrons at Oak Ridge National Laboratory, in Tennessee. “We’re trying to bring forward a disruption in technology to open up the way for deep geothermal energy,” Araque says.

Other enhanced geothermal systems now under way use mechanical methods to extract energy from deeper wells and hotter sources. In Iceland, engineers are drilling 5 km deep into magma reservoirs, boring down between two tectonic plates. Demonstration projects in Australia, Japan, Mexico, and the U.S. West—including one by AltaRock—involve drilling artificial fractures into continental rocks. Engineers then inject water or liquid biomass into the fractures and pump it to the surface. When the liquid surpasses 374 °C and 22,100 kilopascals of pressure, it becomes a “supercritical” fluid, meaning it can transfer energy more efficiently and flow more easily than water from a typical well.
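The supercritical threshold cited above is just water's critical point, which makes it easy to express as a simple check. A minimal sketch, using the temperature and pressure figures from the text:

```python
# Water becomes supercritical above its critical point:
# roughly 374 degrees C and 22,100 kPa, as cited in the text.

CRITICAL_TEMP_C = 374
CRITICAL_PRESSURE_KPA = 22_100

def is_supercritical(temp_c, pressure_kpa):
    """True if both temperature and pressure exceed water's critical point."""
    return temp_c > CRITICAL_TEMP_C and pressure_kpa > CRITICAL_PRESSURE_KPA

# Hypothetical well conditions for illustration:
print(is_supercritical(400, 25_000))  # deep enhanced-geothermal fluid
print(is_supercritical(300, 10_000))  # a typical shallower well
```

Only the first example clears both limits, which is why such projects must drill deep enough to reach extreme temperatures and pressures simultaneously.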

However, such efforts can trigger seismic activity, and projects in Switzerland and South Korea were shut down after earthquakes rattled surrounding cities. Such risks aren’t expected for millimeter-wave drilling. Araque says that while beams could spill outside their boreholes, any damage would be confined deep below ground.

Maria Richards, coordinator at Southern Methodist University’s Geothermal Laboratory, in Dallas, says that one advantage of using millimeter waves is that the drilling can occur almost anywhere—including alongside existing power plants. At shuttered coal facilities, deep geothermal wells could produce steam to drive the existing turbines.

The Texas laboratory previously explored using geothermal power to help natural-gas plants operate more efficiently. “In the end, it was too expensive. But if we could have drilled deeper and gotten higher temperatures, a project like ours would’ve been more profitable,” Richards says. She notes that millimeter-wave beams could also reach high-pressure offshore oil and gas reservoirs that are too dangerous for mechanical drills to tap.

This article appears in the March 2020 print issue as “AltaRock Melts Rock For Geothermal Wells.”

The Pros and Cons of the World’s Biggest Solar Park

Post Syndicated from Peter Fairley original https://spectrum.ieee.org/energy/renewables/the-pros-and-cons-of-the-worlds-biggest-solar-park

It’s 10 a.m. and Indian peanut farmer Venkeapream is relaxing at his family compound in Pavagada, an arid area north of Bangalore. The 67-year-old retired three years ago upon leasing his land to the Karnataka state government. That land is now part of a 53-square-kilometer area festooned with millions of solar panels. As his fields yield carbon-free electricity, Venkeapream pursues his passion full time: playing the electric harmonium, a portable reed organ.

With a capacity of 2 gigawatts and counting, Pavagada’s arrays represent the world’s largest cluster of photovoltaics. It’s also one of the most successful examples of a solar “park,” whereby governments provide multiple companies land and transmission—two big hurdles that slow solar development. Solar parks account for much of the 25.5 GW of solar capacity India has added in the last five years. The states of Rajasthan and Gujarat have, respectively, 2.25-GW and 5.29-GW solar parks under way, and Egypt’s 1.8-GW installation is one of several new international projects.

Alas, even as they speed the growth of renewable energy, solar parks also concentrate some of solar energy’s liabilities.

Sheshagiri Rao, an agricultural researcher and farmer based near Pavagada, says lease payments give peanut farmers such as Venkeapream a steadier income. But Rao says shepherds who held traditional rights to graze their fields were fenced out without compensation, and many have sold out. In Venkeapream’s village, flocks once totaled 2,000 to 3,000 sheep. There are now only about 600 left.

The constant need to keep dust off the panels, meanwhile, has put more strain on already overtapped groundwater supplies. Local farmers bring water to clean the more than 400,000 panels at the Pavagada site of Indian energy developer Acme Cleantech Solutions. “At least 2 liters of water is required to clean one panel. This is huge,” says B. Prabhakar, Acme’s site manager. Robotic dusters allow Acme to clean just twice a month, but most operators lack such equipment.
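The scale of that water demand follows directly from the figures Prabhakar cites:

```python
# Water needed to clean the Acme site's panels, from the figures in the text.
panels = 400_000
liters_per_panel = 2      # "at least 2 liters" per panel
cleanings_per_month = 2   # robotic dusters allow twice-monthly cleaning

liters_per_cleaning = panels * liters_per_panel
liters_per_year = liters_per_cleaning * cleanings_per_month * 12
print(f"{liters_per_cleaning:,} L per cleaning")
print(f"{liters_per_year:,} L per year")
```

That works out to 800,000 liters per cleaning and more than 19 million liters a year at this one site alone, even at the reduced robotic-cleaning cadence.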

Then there are the power surges and drops created as clouds pass over Pavagada—generation swings that must be countered with coal-fired and hydropower plants. Balancing renewable energy swings is a growing challenge for grid operators in Karnataka, which leads India in solar capacity and also has more than 4 GW of variable wind power.

Karnataka capped new solar parks at 0.2 GW after launching Pavagada. Analysts heralded the state’s apparent shift toward distributed installations, such as rooftop solar systems, during a November 2019 meeting on sustainable energy in neighboring state Tamil Nadu. As Saptak Ghosh, who leads renewable energy programs at the Bangalore-based Center for Study of Science, Technology & Policy (CSTEP), put it: “Pavagada will be the end of big solar parks in Karnataka. Smaller is the future.”

Just a few days later, though, news broke that Karnataka’s renewable energy arm was acquiring land for three 2.5-GW solar megaparks. The state’s move may reflect pressure from the national government to accelerate solar installations, as well as confidence that Pavagada’s shortcomings can be fixed.

Instead of harming shepherds, for example, solar operators could open their gates. Grass and weeds growing amidst the panels pose a serious fire risk, according to Acme’s Prabhakar. Increasingly, operators in other countries rely on sheep to keep vegetation down.

Higher-tech solutions may ultimately address Pavagada’s water consumption and cloud-induced power swings. Israeli robotics firm Ecoppia is already providing what it calls “water free” cleaning at the Pavagada site operated by Fortum, a Finnish energy company.

Karnataka’s solution for power swings at its new megaparks, meanwhile, is to plug the parks straight into the national grid’s biggest power lines. The trio of plants is a joint project with the national-government-owned Solar Energy Corporation of India and is designed to export renewable electricity to other states. Power stations outside of Karnataka will balance the solar parks’ generation, according to Ghosh’s colleague, CSTEP senior research engineer and power-grid specialist Milind R.

India’s government is eager to help, having promised to boost renewable capacity to 175 GW by March 2022 and to 450 GW by 2030. As Thomas Spencer, research fellow at the Energy and Resources Institute, a New Delhi–based nonprofit, noted at the November meeting in Tamil Nadu, India is “well off the track” for meeting either target.

This article appears in the February 2020 print issue as “India Grapples With Vast Solar Park.”

Bon Voyage for the Autonomous Ship Mayflower

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/energy/renewables/bon-voyage-for-the-autonomous-ship-mayflower

In September a modern-day Mayflower will launch from Plymouth, a seaside town on the English Channel. And as its namesake did precisely 400 years earlier, this boat will set a course for the other side of the Atlantic.

Weather permitting, the 2020 voyage will follow the same course, but that’s about the only thing the two ships will have in common. Instead of carrying pilgrims intent on beginning a new life in the New World, this ship will be fully autonomous, with no crew or passengers on board. It will be powered in part by solar panels and a wind turbine at its stern. The boat also carries a backup electrical generator, although there are no plans to refuel at sea if that backup runs dry.

The ship will cross the Atlantic to Plymouth, Mass., in 12 days instead of the 60 days of the 1620 voyage. It’ll be made of aluminum and composite materials. And it will measure 15 meters and weigh 5 metric tons—half as long and 1/36 as heavy as the original wooden boat. Just as a spacefaring mission would, the new Mayflower will contain science bays for experiments to measure oceanographic, climate, and meteorological data. And its trimaran design makes it look a little like a sleek, scaled-down, seagoing version of the Battlestar Galactica, from the TV series of the same name.

“It doesn’t conform to any specific class, regulations, or rules,” says Rachel Nicholls-Lee, the naval architect designing the boat. But because the International Organization for Standardization has a set of standards for oceangoing vessels under 24 meters in length, Nicholls-Lee is designing the boat as close to those specs as she can. Of course, without anyone on board, the new Mayflower also sets some of its own standards, too. For instance, the tightly waterproofed interior will barely have room for a human to crawl around in and to access its computer servers.

“It’s the best access we can have, really,” says Nicholls-Lee. “There won’t be room for people. So it is cozy. It’s doable, but it’s not somewhere you’d want to spend much time.” She adds that there’s just one meter between the waterline and the top of the boat’s hull. Atop the hull will also be a “sail fin” that juts up vertically to harness the wind for propulsion, while a vertical turbine converts wind into electricity.

Nicholls-Lee’s architectural firm, Whiskerstay, based in Cornwall, England, provides the nautical design expertise for a team that also includes the project’s cofounders, Greg Cook and Brett Phaneuf. Cook (based in Chester, Conn.) and Phaneuf (based in Plymouth, England) jointly head up the marine exploration nonprofit Promare.

Phaneuf, who’s also the managing director of the Plymouth-based submersibles consulting company MSubs, says the idea for the autonomous Mayflower quadricentennial voyage came up at a Plymouth city council meeting in 2016. With the 400th anniversary of the original voyage fast approaching, Phaneuf said Plymouth city councillors were chatting about ideas for commemorating the historical event.

“Someone said, ‘We’re thinking about building a replica,’ ” Phaneuf says. “[I said], ‘That’s not the best idea. There’s already a replica in Massachusetts, and I grew up not far from it and Plymouth Rock.’ Instead of building something that is a 17th-century ship, we should build something that represents what the marine enterprise for the next 400 years is going to look like.” The town’s officials liked the idea, and they gave him the resources to start working on it.

The No. 1 challenge was clear from the start: “How do you get a ship across the ocean without sinking?” Phaneuf says. “The big issue isn’t automation, because automation is all around us in our daily lives. Much of the modern world is automated—people just don’t realize it. Reliability at sea is really the big challenge.”

But the team’s budget constrained its ability to tackle the reliability problem head-on. The ship will be at sea on its own with no crewed vessel tailing it. In fact, its designers are assuming that much of the Atlantic crossing will have to be done with spotty satellite communications at best.

Phaneuf says that the new Mayflower will have little competition in the autonomous sailing ship category. “There are lots of automated boats, ranging in size from less than one meter to about 10 meters,” he says. “But are they ships? Are they fully autonomous? Not really.” Not that the team’s Mayflower is going to be vastly larger than 10 meters in length. “We only have enough money to build a boat that’s 15 meters long,” he says. “Not big, by the ocean’s standards. And even if it was as big as an aircraft carrier, there’s a few of them at the bottom of the ocean from the years gone by.”

Cook, who consults with Phaneuf and the Mayflower project from a distance in his Connecticut office, says the 400-year-anniversary deadline was always on researchers’ minds.

“There are a lot of days when you think we’ll never get this done,” Cook says. “And you just keep your head down and power through it. You go over it, you go under it, you go around it, or you go through it. Because you’ve got to get it. And we will.”

When IEEE Spectrum contacted Cook in October, he was negotiating with the shipyard in Gdańsk, Poland, that’s building the new Mayflower’s hull. The yard needed plans executed to a level of detail that the team was not quite ready to provide. But parts of the boat needed to be completed promptly, so the day’s balancing act was already under way.

The next day’s challenge, Cook says, involved the output from the radar systems on the boat. The finest commercial radars in the world, he says, are worthless if they can’t output raw radar data—which the computers on the ship will need to process. So finding a radar system that represents the best mix of quality, affordability, and versatility with respect to output was another struggle.

Nicholls-Lee specializes in designing sustainable energy systems for boats, so she was up to the challenge of developing a boat that one day might not need to refuel. The ship will have 15 solar panels, each just 3 millimeters thick, which means they’ll follow the curve of the hull. “They’re very low profile; they’re not going to get washed off or anything like that,” Nicholls-Lee says. On a clear day, the panels could potentially generate some 2.5 kilowatts.
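A back-of-the-envelope energy budget shows what that 2.5-kilowatt figure buys. The peak output comes from the text; the five equivalent full-sun hours per day is an illustrative assumption, not a figure from the project:

```python
# Rough daily solar harvest for the Mayflower's hull-mounted panels.
# peak_kw comes from the article; sun_hours is an assumed illustrative value.

peak_kw = 2.5   # clear-day peak output cited in the text
sun_hours = 5   # assumed equivalent full-sun hours per clear day (hypothetical)

daily_kwh = peak_kw * sun_hours
print(f"{daily_kwh:.1f} kWh per clear day")
```

Under those assumptions the panels would bank about 12.5 kWh on a clear day, which is why the boat pairs them with a wind turbine, a large battery bank, and a backup generator rather than relying on solar alone.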

The sail fin is expected to propel the boat to its currently projected average cruising speed of 10 knots. When it operates just on electricity—to be stored in a half-ton battery bank in the hull—the Mayflower should make from 4 to 5 knots.

The ship’s eyes and ears sit near the stern, Nicholls-Lee says. Radar, cameras, lights, antennas, satellite-navigation equipment, and sonar pods will all be perched above the hull on a specially outfitted mast.

Nicholls-Lee says she’s been negotiating “with the AI team, who want the mast with all the equipment on it as high up as possible.” The mast really can’t be placed further forward on the boat, she says, because anything that’s closer to the bow gets the worst of the waves and the weather. And although the boat could keep moving if its sail fin snapped off, the loss of the mast would leave the Mayflower unable to navigate, leaving it more or less dead in the water.

The problem with putting the sensors behind the sail fin, Nicholls-Lee says, is that it means losing a fair portion of the field of view. That’s a trade-off the engineers are willing to work with if it helps to reduce their chances of being dismasted by a particularly nasty wave or swell. In the worst case, in which the sail fin gets stuck in one position, blocking the radar, sonar, and cameras, the fin has an emergency clutch. Resorting to that clutch would deprive the ship of the wind’s propulsive power, but at least it wouldn’t blind the ship.

Behind all that hardware is the software, which of course ultimately does the piloting. IBM supplies the AI package, together with cloud computing power.

The 8 to 10 core team members are now adapting the hardware and software to the problem of transatlantic navigation, Phaneuf says. An example of what they’re tweaking is an element of the Mayflower’s software stack called the operational decision manager.

“It’s a thing that parses rules,” Phaneuf says. “It’s used in fiscal markets. It looks at bank swaps or credit card applications, making tens of thousands or millions of decisions, over and over again, all day. You put in a set of rules textually, and it keeps refining as you give it more input. So in this case we can put in all the collision regulations and all sorts of exceptions and alternative hypotheses that take into account when people don’t follow the rules.”

Eric Aquaronne, a cloud-and-AI strategist for IBM in Nice, France, says that ultimately the Mayflower’s software must output a perhaps deceptively simple set of decisions. “In the end, it has to decide, Do I go right, left, or change my speed?” Aquaronne says.

Yet within those options, at every instant during the boat’s voyage are hidden a whole universe of weather, sensor, and regulatory data, as well as communications with the IBM systems onshore that continue to train the AI algorithms. (The boat will sometimes lose the satellite connection, Phaneuf notes, at which point it is really on its own, running its AI inference algorithms locally.)

Today very little weather data is collected from the ocean’s surface, Phaneuf notes. A successful Mayflower voyage that gathered such data for months on end could therefore make a strong case for having more such autonomous ships out in the ocean.

“We can help refine weather models, and if you have more of these things out on the ocean, you could make weather prediction ever more resolute,” he says. But, he adds, “it’s the first voyage. So we’re trying not to go too crazy. I’m really just worried about getting across. I’m letting the other guys worry about the science packages. I’m mostly concerned with the ‘not sinking’ part now—and the ‘get there relatively close to where it’s supposed to be’ part. After that, the sky’s the limit.”