
Fukushima’s Legacy: Japan’s Hard Turn Toward Renewables

Post Syndicated from John Boyd original https://spectrum.ieee.org/energy/batteries-storage/fukushimas-legacy-japans-hard-turn-toward-renewables

When the tsunami generated by the Great East Japan Earthquake struck the Fukushima Daiichi Nuclear Power Plant on 11 March 2011, it not only knocked out the plant but eventually led to the shutdown of all the country’s 54 nuclear reactors as a safety precaution. Ten years on, just nine reactors have come back on line. And while nuclear energy in Japan today is anything but dead (the central government now hopes nuclear could provide 20 percent of the nation’s power by 2030), the prospect of a zero-carbon future in Japan still leaves the lion’s share to renewables.

The magnitude 9.0 earthquake also killed nearly 20,000 people, with 2,500 still missing. As of last December, some 42,000 of the 470,000 people originally evacuated remained displaced, even as the disaster’s 10th anniversary loomed. The government has directed its decontamination efforts to reducing an individual’s radiation dose to 1 millisievert a year, a generally accepted international standard. Nevertheless, some 337 square kilometers within seven Fukushima municipalities continue to be designated “difficult-to-return zones,” while a critical Greenpeace radiation survey report published in 2019 warned that forests in the region, which have never been decontaminated, “will continue to be long-term sources of recontamination.”

To help both revitalize the stricken area and advance the country’s decarbonization efforts, the government in 2014 established the Fukushima Renewable Energy Institute, AIST (FREA) in Koriyama, Fukushima prefecture, says Masaru Nakaiwa, FREA’s director-general. (“AIST” stands for the National Institute of Advanced Industrial Science and Technology.) FREA’s mandate is to work with industry and academia to improve photovoltaic and wind-turbine performance, optimize ground-source heat pumps and geothermal resources, and develop technologies for hydrogen-energy carriers and hydrogen-energy systems.

“Fukushima prefectural government has set a target of producing all of Fukushima’s energy demands from renewable sources by 2040,” says Nakaiwa. To do this, the government is working with FREA, industry, and universities to help commercialize research in renewable technologies and increase the use of solar, biomass, and wind generation in the prefecture. Hydrogen is also viewed as an important new energy resource. The prefecture is now home to the Fukushima Hydrogen Energy Research Field, the world’s largest green-hydrogen production facility, capable of supplying 1,200 cubic meters of hydrogen an hour. This new focus is in keeping with past and recent central government announcements on hydrogen and the goal to make Japan carbon neutral by 2050.
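For a sense of scale, the facility’s stated output of 1,200 cubic meters of hydrogen an hour can be converted into mass and energy terms using standard physical constants for hydrogen; the sketch below is illustrative arithmetic, not a figure from FREA.

```python
# Back-of-the-envelope conversion of the Fukushima Hydrogen Energy Research
# Field's stated output (1,200 cubic meters of hydrogen per hour).
# Constants are standard values for hydrogen gas at 0 degrees C and 1 atm.
H2_DENSITY_KG_PER_M3 = 0.0899   # density of hydrogen, kg per cubic meter
H2_LHV_KWH_PER_KG = 33.3        # lower heating value of hydrogen, kWh per kg

flow_m3_per_hour = 1_200
mass_kg_per_hour = flow_m3_per_hour * H2_DENSITY_KG_PER_M3    # ~108 kg/h
energy_mw = mass_kg_per_hour * H2_LVH_KWH_PER_KG / 1_000 if False else \
    mass_kg_per_hour * H2_LHV_KWH_PER_KG / 1_000              # ~3.6 MW

print(f"{mass_kg_per_hour:.0f} kg of hydrogen per hour")       # 108
print(f"~{energy_mw:.1f} MW equivalent chemical-energy flow")  # ~3.6
```

In other words, the world’s largest green-hydrogen plant delivers on the order of a few megawatts of hydrogen energy, a useful yardstick against Japan’s gigawatt-scale grid.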

Achieving the 2050 target won’t be easy. Whereas nuclear accounted for 30 percent of the country’s energy use before the accident, today it provides just 6 percent. Making up the shortfall, Japan now relies more on coal (25 percent), natural gas (23 percent), and oil (39 percent), with renewables and hydro accounting for the rest, as of April 2018.

To encourage industry to work toward carbon neutrality, the government will provide capital investment, tax relief, and deregulation in areas such as wind power; carbon capture, utilization, and storage; and the mass production of storage batteries.

At the end of 2018, some 55 gigawatts of solar power equipment had been installed around Japan, putting the country on track to surpass the government’s target of 64 GW by 2030. Wind power is another story: Japan had only 3.6 GW of wind capacity installed in 2018, which is why Japan’s Ministry of Economy, Trade and Industry has flagged it as a technology to invest in.

More notable is the country’s embrace of hydrogen as a versatile energy-storage medium. Hydrogen can be produced from various natural resources, in particular from water via electrolysis, a process that emits no carbon dioxide when powered by renewable electricity, says Satoshi Hirano, FREA’s deputy director-general. And hydrogen can be compressed, stored, transported, and converted into electricity or heat when needed, without emitting CO2.

Hydrogen’s major downside is the high cost of production. Hence FREA and other national research institutes are developing efficient, low-cost hydrogen-production technologies powered by renewable energies, says Manabu Ihara, director of the Tokyo Tech Academy of Energy and Informatics at the Tokyo Institute of Technology.

FREA has already demonstrated a green-hydrogen supply chain and a hydrogen cofiring generator system, as well as the successful synthesis of ammonia (NH3) from green hydrogen, and its use to fuel a modified micro gas-turbine generator. (Hydrogen could also be used in ammonia-powered cargo ships.) Currently FREA is working with IHI Corp. and Tohoku University to develop larger generator systems using liquid ammonia spray injection, says Hirano.

Other countries are also developing green-hydrogen projects. China has a major project underway in Inner Mongolia slated to produce 454,000 metric tons annually; the European Union estimates spending €430 billion (about US $520 billion) over the next 10 years on hydrogen technologies, while South Korea is aiming to become a leader in developing clean hydrogen.

Meanwhile, Japan is creating international supply chains for shipping green hydrogen and “blue” hydrogen (produced with carbon capture and storage) to the country, and has established pilot projects in Brunei and Australia to test the feasibility of the scheme. These overseas and domestic sources of clean hydrogen, fueling large-scale modified gas turbines, will eventually supply the base-load power to the electric grid that can replace nuclear power, says Ihara, of the Tokyo Institute of Technology. “And we should see this partly realized before 2030.”

This article appears in the March 2021 print issue as “Japan’s Renewables Renaissance.”

Reversing Climate Change by Pulling Carbon Out of the Air

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/energy/environment/reversing-climate-change-by-pulling-carbon-out-of-the-air

Steven Cherry Hi, this is Steven Cherry for Radio Spectrum.

Let’s face it. The United States, and, really, the entire world, has squandered much of the time that has elapsed since climate change first became a concern more than forty years ago.

Increasingly, scientists are warning that taking coal plants off line, building wind and solar farms here and there, and planting trees, even everywhere, aren’t going to keep our planet from heating to the point of human misery. Twenty years from now, we’re going to wish we had started thinking about not just carbon-zero technologies, but carbon-negative ones.

Last year we spoke with the founder of Air Company, which makes carbon-negative vodka by starting with liquid CO2 and turning it into ethanol, and then further refining it into a product sold in high-end liquor stores. Was it possible to skip the final refining steps and just use the ethanol as fuel? Yes, we were told, but that would be a waste of what was already close to being a premium product.

Which leads to the question, are there any efforts under way to take carbon out of the atmosphere on an industrial scale? And if so, what would be the entire product chain?

One company already doing that is Global Thermostat, and its CEO is our guest today.

Graciela Chichilnisky is, in addition to leading the startup, an Argentine-born Professor of Economics and Mathematical Statistics at Columbia University and Director of the school’s Consortium for Risk Management. She’s also co-author of a July 2020 book, Reversing Climate Change.

Welcome to the podcast.

Graciela Chichilnisky Thank you, Steven. Pleasure to be here.

Steven Cherry Graciela, you have two pilot facilities in California; they will each have the capacity to remove 3,000 to 4,000 metric tons of CO2 per year. How exactly do they operate?

Graciela Chichilnisky The actual capacity varies depending on the equipment, but you are right on the whole, and the facility is at SRI, which used to be the Stanford Research Institute. They work by removing CO2 directly from the air. The technology is called “direct-air-capture” and our firm, Global Thermostat, is the only American firm doing that. And it is the world leader.

The technology, essentially, scrubs air. So you move a lot of air over capture equipment and chemicals that have a natural affinity for CO2; as the air moves by, the CO2 is absorbed by the solvents, and then you separate the combination of the solvent with the CO2 and, lo and behold, you’ve got yourself 98 percent pure CO2 coming out as a gas at one atmosphere. That is, at a very, very, very high level, how it works.

And the details are, of course, much more complex and very, very interesting. What is most interesting, perhaps, is that chemists who are used to working with constrained capture in limited facilities—hence limited volumes—find that the natural chemical and physical properties of the process change when you are acting in an unconstrained area (in fact, the whole atmosphere). You are using the air directly from the atmosphere to remove the CO2. And that’s why it is possible to do it in a way that we have patented—we have about 70 patents right now—in a way that actually is economically feasible. It is possible to do it, sell the CO2, and make money. And that is, in fact, the business plan for our company, which includes reversing climate change through this process.

Steven Cherry Yes, so let’s take the next step of the process, what happens with the CO2 once it’s at its 98 percent purity?

Graciela Chichilnisky The CO2—and this is perhaps a well-kept secret for most people—you see, CO2 is a very valuable gas, and even though it’s a nuisance and is dangerous depending on its concentration in the atmosphere, here on Earth it sells for anywhere between $100 a tonne and $1,500 to $1,800 a tonne. So if you think about that, all you need to know is that the cost of obtaining the CO2 from the air should be lower than the price you can sell it for.

The question is what markets would satisfy that. And I’m going to give you a case in which we are already working and selling, and one in which we are not working yet. We’re already working on the production of synthetic fuels, in particular synthetic gasoline. Gasoline can be produced by combining CO2 and hydrogen—the CO2 from the air, the hydrogen from water; the hydrogen is produced using electrolysis, and the CO2 comes from the air using our technology. Combining those two gives you hydrocarbons, and when properly mixed, you obtain a chemical which is molecule by molecule identical to gasoline, except it comes from water and air instead of coming from petroleum. So if you burn it, you still produce CO2, but the CO2 that is emitted came from the atmosphere in the production of the gasoline, and therefore you have a closed circle. In net terms you’re emitting nothing, using gasoline that is produced from CO2 and hydrogen—from air and water. These markets, in our case, in addition to synthetic gasoline, include the water-desalination market. We work with a company that is the largest desalinator of water in the world, in Saudi Arabia.
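The “closed circle” Chichilnisky describes can be checked with simple combustion stoichiometry. Treating gasoline as pure octane is a simplifying assumption (real fuel is a blend), but it shows why burning air-derived fuel is carbon neutral in net terms:

```python
# Carbon balance for synthetic gasoline, approximating gasoline as octane.
# Combustion: C8H18 + 12.5 O2 -> 8 CO2 + 9 H2O
M_C, M_H, M_O = 12.011, 1.008, 15.999   # atomic masses, g/mol

m_octane = 8 * M_C + 18 * M_H           # ~114.2 g/mol
m_co2 = M_C + 2 * M_O                   # ~44.0 g/mol

# CO2 released per kilogram of fuel burned -- and therefore the CO2 that
# must have been captured from the air to synthesize that kilogram.
co2_per_kg_fuel = 8 * m_co2 / m_octane
print(f"{co2_per_kg_fuel:.2f} kg CO2 per kg of fuel")   # 3.08
```

Capture roughly 3.08 kilograms of CO2 from the air, and burning the resulting kilogram of fuel returns those same 3.08 kilograms to the atmosphere: net zero.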

And they need a lot of CO2, because the process of desalinating water for human consumption requires the use of CO2. In addition to those two examples—those commercial uses, synthetic gasoline and desalinated water—there are carbonated beverages, for example beer and Coca-Cola. Indeed, we work with Coca-Cola, and we work with Siemens, with AME, and with automobile companies such as Porsche, to produce clean gasoline—the synthetic gasoline I mentioned.

From the CO2, you can actually produce elements of cement and other building materials. So as a whole, McKinsey has documented that there is a $1 trillion market per year globally for CO2. So CO2 is a very valuable chemical on Earth, even though it’s a nuisance and dangerous in the atmosphere. So the notion is—the notion of Global Thermostat is—bring it down. In other words, take it from the atmosphere where it is dangerous; bring it down to earth, where it is valuable.

Steven Cherry I love that our first carbon negative podcast involved vodka and our second one now involves beer. So that’s the economic case for what you’re doing. There’s also the question of the carbon budget. There’s a certain amount of energy used in the processes of removing CO2 from the air and then using it for some of these applications; what would be a typical net carbon budget?

Graciela Chichilnisky Negative. In other words, what happens is that we don’t use electricity, which is mostly produced from fossil fuels right now. We use heat, and our heat can be waste heat from other processes; it doesn’t have to be electricity. In fact, we use very little electricity.

But think of it this way: In the year 2020, for the first time in history, humans are able to produce electricity directly from the sun less expensively than by using fossil fuels. Two-and-a-half cents per kilowatt-hour or less, and trending continually downward, is the going price for solar photovoltaic electricity. It’s the lowest cost. Two cents a kilowatt-hour is really the lowest possible cost.

Steven Cherry One wonderful thing about this is that you’re an economist and so you’re determined not just to develop technologies, but ensure that they find a home in the marketplace because that’s the most practical way to implement them at scale.

In 2019, Global Thermostat started working with Exxon Mobil. I understand they provided some money and I believe initially 10 employees. I gather the idea is for them to be one organization commercializing this technology further. How would that work?

Graciela Chichilnisky Well, first of all, I do have two Ph.D.s; I started in pure mathematics at MIT. That was my first Ph.D. My second Ph.D. was in economics at UC Berkeley. So I do have the mathematics as well as the economics in my background. What we’re doing requires several forms of expertise. You said it: Global Thermostat has made a joint development agreement with Exxon, is working with Coca-Cola, is working now with Siemens, and is working with a company called HIF, which is in Chile.

So, how does that work? As you probably know, Exxon Mobil is a multifaceted company. In addition to fossil fuels, they have huge expertise in carbon capture technology of the old-fashioned, I would say traditional, type. And by that I mean capture of CO2 from the fumes of power plants, for example.

They have the resources and the know-how, and we are a small company and we want to expand our production. So they offered an opportunity for us to work with a high-level, advanced company in the area of carbon capture of the more traditional kind, one that is willing to experiment and willing to advance commercially the removal of CO2 directly from the atmosphere.

So within that contract, we intend to build a one-gigaton plant; that’s what we contracted to do. It means that we will scale up our technology so that every year it can eventually remove one billion—with a ‘b’ as in boy—tons of CO2 from the atmosphere. That’s the scale-up I’m talking about, and that is the main purpose of our partnership with Exxon Mobil.

And if you think about it—you said it yourself—you want to know what the carbon budget really is. Roughly speaking, and don’t forget that I worked on the Kyoto Protocol and created the carbon market of the Kyoto Protocol, so I know a lot about carbon budgets, how demanding they are, and how far we are from what we need to do: We need to remove essentially 40 gigatons of CO2 every year from the atmosphere in order to reverse climate change. And what I’m telling you is that with these types of partnerships with companies like Exxon, we can do one gigaton—we’re within shooting distance of that goal. And that’s why our contract with Exxon is to scale up our technology to remove one gigaton of CO2 per year. And then, if we had 40 of those plants, we would be removing all the CO2 that humans need to remove from the atmosphere right now in order to reverse climate change.
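Her scale-up arithmetic is straightforward to verify; the 40-gigaton removal target is her figure, and the rest is division:

```python
# How many gigaton-scale plants does the stated removal target imply?
target_gt_per_year = 40   # Chichilnisky's removal target, gigatons CO2/year
plant_gt_per_year = 1     # contracted capacity of one scaled-up plant

plants_needed = target_gt_per_year / plant_gt_per_year
print(f"{plants_needed:.0f} gigaton-scale plants needed")   # 40

# For a sense of what one such plant means: averaged over a year,
# one gigaton works out to roughly 32 metric tons of CO2 every second.
tons_per_second = 1e9 / (365 * 24 * 3600)
print(f"~{tons_per_second:.0f} tons of CO2 per second per plant")  # ~32
```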

Steven Cherry It seems paradoxical that it would make more sense to take carbon directly out of the air, the direct air capture, rather than focusing on concentrated sources of carbon and carbon dioxide, such as a power plant smokestack. How is that paradox resolved? How is it more sensible to take it directly out of the atmosphere?

Graciela Chichilnisky First of all, it is not sensible, it’s very creative, very unique—what we’re doing has never been done. And there is a good reason why it wasn’t done: because, as you point out, it’s actually more difficult and more expensive to remove CO2 from the air than to remove it from a concentrated source. So why would we be doing that? The answer is, if you remove CO2 from the chimneys of any industrial facility, the best you can do—the best best best possible—is to make that facility carbon neutral; to remove all the CO2 that it is emitting.

That’s the best, if you’re really lucky. Okay, that’s not enough anymore. When I used to be a lead author of the IPCC, the Intergovernmental Panel on Climate Change, working on this topic, I found—and it is well known now—that going carbon neutral does not suffice. I think you said that in your introduction. Now we have to go carbon negative, which means we have to remove, in net terms, more CO2 than what is emitted. And that CO2 that we remove should be stabilized on Earth. I’m not saying sequestered in the ground, but I’m saying stabilized. You know, it could be in materials or instruments or whatever, stabilized on Earth after it’s removed.

If you need to remove more CO2 than what you emit, and we need to remove 40 gigatons more than what we emit right now, you cannot do it from industrial facilities; the best that you can achieve there is carbon neutrality. You need to go carbon negative. For that you have to go and remove CO2 from the air.

Steven Cherry I said that 20 years from now, we’ll wish we had started all this 20 years earlier, but you actually started this process a decade ago, you already foresaw that we would need carbon negative processes. But at the same time, as you mentioned, you were also working to develop the Kyoto Protocols, specifically creating carbon markets. Was that just a stopgap before getting to this point that you’re at now?

Graciela Chichilnisky No. No, no. The carbon market solution was the solution, an easy solution. Let me explain. The problem is that our prices are all wrong, and when we try to maximize economic performance, we maximize our GDP, which doesn’t take into account the enormous damage that excessive CO2 emissions are causing to humans, to our economy, to our world, and even to our survival as a species. So the invention of the carbon market—I invented it and designed it and rolled it into the Kyoto Protocol in 1997—was done with the purpose of changing the system of values.

In other words, introducing prices and values that make it more desirable to be clean rather than to over-emit. Right now, if we were to cut down all the trees in the United States and produce toilet paper, our system of measuring economic performance would say that we are much better off. After all, more trees are being cut down and used to produce toilet paper than before.

So I decided that this had to change. And in fact, the carbon market that I designed and created in the Kyoto Protocol became international law in 2005. It is now what’s called the European Union Emissions Trading System, which encompasses 27 nations; it is also used in China and in 14 states in the United States, and essentially 25 percent of humankind is now using the carbon market that I designed and wrote into the protocol originally in 1997. But the most important statistic for me comes from an article on the carbon market in the December 2019 Physics Today, which says the carbon market has succeeded: since 2005, when it became international law, the nations that use it have decreased their emissions by 30 percent from the base year.

Another way of saying it is that if the whole world were using the carbon market, not just the 25 percent that I mentioned, we would be 30 percent below the level of emissions of 2005. And you know what? We really wouldn’t have the climate disaster, the catastrophe, that we fear. We would not have it, because we would be containing the emissions of CO2 through the use of the carbon market, as was done in all the nations that adopted the carbon market when it became international law in 2005.

So that’s a solution, but we haven’t adopted it; only 25 percent of the world succeeded. The rest of the world went south; we emitted even more. So now, beyond decreasing emissions, which we still cannot avoid doing—that’s critical—you have to remove the legacy CO2 that we put into the atmosphere and which is still there after all these years. From the physical point of view, you have to know that CO2 doesn’t decay, or at least doesn’t decay as fast as other gases; once emitted, it remains in the atmosphere for decades, even hundreds of years in some cases. As a result, we have a lot of legacy CO2 that doesn’t decay.

Steven Cherry The title of your book is Reversing Climate Change. The subtitle is How Carbon Removals Can Resolve Climate Change and Fix the Economy. Perhaps you want to say another word about the fix the economy part.

Graciela Chichilnisky Yeah, I will do it with two sentences. Sentence No. 1: I just want to quote the new President Biden, who said, “When I think about climate change, I think jobs, jobs, jobs.” A technological evolution of this nature, one that could even be a revolution, creates a lot of jobs, and it is creating the infrastructure that will allow us to solve the problem and grow the economy at the same time, because every time you remove CO2, you make money now. It doesn’t cost money; you have to invest initially, but you make money.

The second issue—which [Biden] doesn’t address, because he doesn’t go into this level of detail or have this type of focus—is that the problem of the environment and resources is very closely tied to the problem of inequity. And you must be aware, because there have been a number of books prominently published and reviewed about it, of the increase in inequity in the global economy, not just internationally, where we know it is huge (it has increased 20 times since 1945), but also within nations, like the United States. Well, what’s interesting is that these new technologies not only solve the problem at the technological level and not only can bring jobs, as I mentioned in quoting Biden, but in addition, these technologies promote equity. And I will give you two examples very quickly. The first I have already mentioned: the solar photovoltaic revolution, in which the cost of producing electricity from photovoltaics has decreased 80 percent in the last 20 years.

That revolution has created the most accessible form of energy ever, because while fossil fuels were the main raw material for the production of electricity in the $60 trillion power-plant economy, they are really not very equitable at all. Fossil fuels come from a few regions of the world; they have to be extracted from under the earth, and so on. The result is that our whole energy-production system lies at the foundation of the inequity of the modern economy, the industrial revolution. If you replace fossil fuels (natural gas, petroleum, and coal) with the sun as an input, you have a major equalizer, because everybody in the world has access to the sun in the same amount. The input is no longer fossil fuels that come from a few places that make a lot of money; the input now is the sun, which comes from everywhere, and everybody has access to it. That input creates energy, and that it is more equitable makes a huge difference, a huge difference.

And the other difference comes with the new technology that transforms CO2 into materials for construction, or even into clean forms of energy like the synthetic gasoline I explained before. That is based on air as an input, and the air has a property: it has the same concentration of CO2 all over the planet, and this means an equalizer again. So we can now produce cement, let’s say, beverages, food. You can even produce protein from CO2, of course, because of the carbon molecules; you can actually produce all the materials that we need, and even food and drinks, from air. And the air is equitably distributed—it’s one of the last few public goods that everybody has access to, as is the sun. So we are now going into a new economy, powered by the sun and with resources coming from air. And you know what? That solves the problem of equity in a big way. I should say inequity, which is so paralyzing to economies and to the world as a whole. So I wanted to say that this is not only an environmental change, some may say a revolution, but in addition a social and economic change, and some would say revolution.

Steven Cherry Yeah, we could do an entire show on things like the resource paradox: countries that are rich in oil, for example, end up poorer through the extraction processes than when they started.

Graciela Chichilnisky Great. Thank you very, very much for your time and for your insightful questions.

Steven Cherry Well Graciela, it’s going to take economists, businesspeople, scientists, and politicians to lead us out of this crisis, and we’re fortunate to have in you someone who is two of those things working with the other two. Thanks for your research, your book, your company, and your teaching—and for joining us today.

We’ve been speaking with Graciela Chichilnisky: Columbia University economist, co-author of the 2020 book, Reversing Climate Change, and CEO of Global Thermostat, a startup devoted to pulling carbon out of the air cost-effectively.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers, a professional organization dedicated to advancing technology for the benefit of humanity.

This interview was recorded February 2, 2021 via Zoom and Adobe Audition. Our theme music is by Chad Crouch.

You can subscribe to Radio Spectrum on Spotify, Apple Podcasts, and wherever else you get your podcasts, or listen on the Spectrum website, where you can also sign up for alerts of new episodes. We welcome your feedback on the web or in social media.

For Radio Spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.

 

We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

What the Texas-Freeze Fiasco Tells Us About The Future of the Grid

Post Syndicated from Robert Hebner original https://spectrum.ieee.org/energywise/energy/the-smarter-grid/what-texas-freeze-fiasco-tells-us-about-future-of-the-grid

“Don’t Mess with Texas” started life as part of an anti-litter campaign, back in 1985, and soon became an internationally recognized slogan. Too bad nature cares not a whit about slogans. In mid-February, a wintry blast hit the state, leaving more than 4 million people without power, most of them in homes not designed to shelter against bitter cold. The prolonged icy temperatures triggered a public health emergency and killed several dozen people in the state, according to press accounts.

So what actually happened, and why? The first question is a lot easier to answer than the second. What everyone agrees on is that the whole state experienced record cold, preceded by ice storms, which were followed by snow.  Central Texas, for example, recorded the coldest temperatures in more than three decades and the most snow—about 15 centimeters—in more than seven decades.  Moreover, the number of hours below freezing was in the triple digits—in a state in which dips below freezing very seldom last more than a few hours.

And bad things happened to the grid.  Ice storms caused tree limbs to fall onto distribution lines, causing power outages.  Wind turbines were taken off line due to icing of their blades.  Distribution of natural gas to power plants was shut off or curtailed when key components in the gas system froze up.  Even a nuclear plant had a cold-weather-related failure.  At the South Texas Project Electrical Generating Station in Bay City, Texas, a 1,300-megawatt unit went off line on 15 February after a pressure sensor in a feedwater line malfunctioned.

At the same time, the frigid weather triggered soaring demand for electricity.  Unfortunately, some plants were off line for maintenance and others were unavailable because of the cold. As the crisis went on, and on, nervous grid operators recognized that surging demand would outstrip supply, causing major parts of the state’s grid—or perhaps its entire grid—to collapse.

So, at 1:25 a.m. on 16 February, about two days after the storm spread across the state, operators began implementing rolling blackouts to assure power-system stability.  But they soon ran into problems, because the curtailment area was so large.  Some places, including Austin, the state capital, found that in order to reduce the load by the amount mandated by the state’s electrical authority, they had to shut down all electrical feeders except the ones feeding critical loads, such as water treatment plants and hospitals.  So, the “rolling” blackouts weren’t rolling at all; for nearly all residential customers in and around Austin, once the power was turned off, it stayed off.

Now to the second question: Why did the Texas grid crumble? The weather-triggered problems led to a tidal wave of instant pundits picking over the very limited data to support their preferred theory as to the root cause of the problem. Against renewables? Then obviously the whole sorry episode could be blamed on the iced-over wind turbines.  Anti-fossil fuels? In that case, the maximizing of profits by those plant operators was clearly the fundamental cause. Microgrid proponents said there would not have been a problem if Texas had more microgrids.

And there were twists here, too, related to a couple of unusual technical and economic aspects of the Texas electrical system. Most of the United States and Canada are covered by just three synchronous electrical grids. There’s one for the eastern part of the continent, one for the western part of the continent, and a relatively tiny one that covers most of Texas. That Texas grid is operated by an organization called the Electric Reliability Council of Texas (ERCOT). Not everyone thinks it’s a good idea for Texas to have its own grid, so for these folks, the lack of synchronous connections to the rest of the U.S. was the problem.

Also, since 1999, Texas has had a deregulated, energy-only market structure, which means that suppliers get paid only for the electricity they produce and sell, and the market is not regulated by the Federal Energy Regulatory Commission.  So there were also calls for a transition to a forward-capacity-market structure in which suppliers are paid not only for what they sell but also to maintain the capacity to produce more than they sell. A few observers claimed that a capacity market would have avoided the fiasco.

Focusing on the technical claims and counter-claims for the moment, it is obvious that engineers around the world know how to make wind turbines and fossil-fuel power plants that continue to work under prolonged winter stress.  So why were these tried-and-true engineering approaches not implemented? 

To understand the reason, you first have to consider a fundamental role of state utility commissions, which is to ensure that the people of the state get the lowest-cost electricity with acceptable reliability. It's always possible to invest more money and get a more reliable electrical system. So it's a mostly nontechnical judgment call to properly balance the cost of enhanced reliability against the risk of an unusual calamity. It is this logic that leads to, for example, Buffalo, New York, having considerably more snowplows per kilometer of paved road than San Antonio, Texas.

Not wanting a crisis to go to waste, some are proposing significant structural changes.  For example, the grid covering much of Texas is connected to the rest of the US power grid and the Mexican power grid via five direct-current links.  Some observers saw an opportunity to renew calls for Texas to merge its grid with one or both of the other major continental grids. This could be accomplished by building new high-voltage transmission lines, either AC or DC, tapping into other parts of the country. These would expand the existing electricity import-export market for Texas and better integrate Texas’s grid with the other two, adjacent grid systems.

This won’t be a near-term solution. The time required to build transmission lines is measured in years and the cost will likely exceed US $1 million per mile ($620,000 per km). And this transmission-expansion idea competes with alternatives: distributed generators fueled by propane or natural gas; and storage facilities based on batteries or fuel cells capable of powering a single house or a retail, industrial, or commercial facility.

There are some intriguing transportation-related options for enhanced grid resilience now becoming available, too. These are linked to emerging technologies for the electrification of transportation. The U.S. Department of Transportation, for example, unveiled a fuel-cell-powered electric transit bus last year that could provide emergency power to a drugstore, a supermarket, or some other critical establishment. Compared with leasing a generator, it was cost-effective for periods of up to two weeks. Ford made news on 18 February when it asked its dealers to loan out stocks of its new F-150 hybrid truck, versions of which are equipped with generators capable of putting out 7.2 kilowatts. In October 2019, the U.S. Departments of Energy and Defense offered up to $1 million to develop a military vehicle with a similar purpose.

A vital fact made very visible by the Texas situation is that population centers increasingly rely on interacting systems.  In Texas, the weather disrupted both transportation and electricity.  These disruptions in turn affected the water supply, telecommunications, emergency response, the food supply, the availability of gasoline, and healthcare—including COVID-19 vaccinations.  For years, to aid in planning and event management, academics, companies, cities and states have been developing models to predict the interconnected effects of disasters in specific locations.  Recently, the Department of Energy, via its laboratories, has addressed this issue.  Better models could help officials prevent major fiascoes in some cases, or, when that’s not possible, react better during crises by giving managers the tools needed for real-time management of complex, interdependent systems. 

Now, in Texas, given the high levels of publicity, political involvement, and consumer anger, it's a pretty safe bet that the needle will very soon be moved toward higher cost and greater reliability. In fact, Texas governor Greg Abbott has proposed requiring the implementation of established winterizing technology.

Once the immediate crisis has passed, there will be exhaustive, detailed after-action analyses, which will probably uncover crucial new details. For now, though, it seems pretty clear that what happened in Texas was likely preventable with readily accessible and longstanding engineering practices. But a collective, and likely implicit, judgment was made that the risk to be mitigated was so small that mitigation would not be worth the cost. And nature "messed" with that judgment.

Robert Hebner is Director of the Center for Electromechanics at the University of Texas at Austin. A Fellow of the IEEE, Hebner has served on the IEEE Board of Directors and is also a former member of the IEEE Spectrum editorial board.

The Uneconomics of Coal, Fracking, and Developing ANWR

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/energy/environment/the-uneconomics-of-coal-fracking-and-developing-anwr

Steven Cherry Hi this is Steven Cherry for Radio Spectrum.

Many things have changed in 2020, and it's an open question which changes are permanent and which are transitory. Work-from-home may be here to stay, as might the shift from movie theaters and cable TV networks to streaming services; pet-adoption rates are so high that some animal shelters are empty; and global greenhouse gas emissions declined by a record amount.

That last fact has several causes—the lockdowns and voluntary confinements of the pandemic; an oil glut that preceded the pandemic and continued through it; and the ways renewable energy—especially solar energy—is successfully competing with fossil fuels. According to the Institute for Energy Economics and Financial Analysis, an Ohio-based nonprofit that studies the energy economy, more than 100 banks and insurers have divested or are divesting from coal mining and coal power plants. Their analysis also shows that natural gas power plant projects—for example, one that's been proposed for central Virginia—are a poor investment, due to a combination of clean-energy regulations and the difficulty of amortizing big power-plant construction in the face of a clean-energy pipeline expected to grow dramatically over the next four years.

Such continued growth in clean-energy projects is particularly notable, as it comes despite high job losses for the renewable energy industry, slowing construction activity, and difficulty in finding capital financing. Those same headwinds brought about a record number of bankruptcies in the fracking industry.

My guest today is eminently qualified to answer the question, are the changes we’re seeing in the U.S. energy-generation profile temporary or permanent? And what are the consequences for climate change? Kathy Hipple was formerly an analyst at the aforementioned Institute for Energy Economics and Financial Analysis and is a professor in Bard College’s Managing for Sustainability MBA program.

Kathy, welcome to the podcast.

Kathy Hipple Thank you, Steven. It’s great to be here.

Steven Cherry Kathy, your background is broader than most. You did a long stint on Wall Street at Merrill Lynch, but you’re also on the board of Meals on Wheels in Bennington, Vermont. There are issues of environmental justice in our decisions about what kind of energy generation to finance and where, and we’ll get to that. But first, it seems like the economics behind our energy sources are shifting almost faster than we can keep up. Where are we at currently with the economics of fossil fuels—coal, petroleum, natural gas?

Kathy Hipple Well, you're right. It has seemed that 2020 saw an acceleration of trends. But this is not new; fossil fuels have been in financial decline for at least a decade. And the energy sector—which currently includes only oil and gas companies, not renewable energy—finished in last place in the market for the decade between 2010 and 2020. It also finished in last place in 2020, 2019, and 2018. So this is a sector in long-term financial decline. And as we know, because I'm a finance professor, finance is all about the future. So the market is telling us that the future is not fossil fuels. That is why the energy sector is now only a little over 2 percent of the S&P 500, whereas in the 1980s it was more than 28 percent. So we now have a world economy that is much less dependent on fossil fuels financially than it has ever been.

Steven Cherry Wall Street firms have promised to lead the charge toward sustainable energy use, but the world’s largest asset manager, BlackRock, a year after it said it would divest its portfolio from fossil fuels, still has something like $85 billion invested in coal companies, the worst of the fossil fuels in terms of pollution and greenhouse gases.

Kathy Hipple Yes, BlackRock has been a disappointment in many respects. They are not walking their talk. Their talk is impressive, but their follow-through, as you say, they’re still heavily invested in coal, still heavily invested in financing gas and oil projects around the world. And they are also moving into clean energy. But they have not yet done the divestment that many activists have called on them to do and that the Larry Fink letter suggests that they will do.

They have not been as transparent as they probably should be about how they are working with company managements to see whether those companies are actually promoting the energy transition, or whether they are reporting under the Task Force on Climate-related Financial Disclosures (TCFD). They grew their asset base tremendously in 2020, but they have a long way to go before they become a climate leader on the investment side.

Steven Cherry It’s impossible to talk about new drilling without talking about fracking. A 2019 study of 40 dedicated U.S. shale oil companies found that only four of them had a positive cash flow balance. Much of the easiest drilling has already been done. Investors haven’t been getting good returns even on them. And the price of oil generally is pretty low. The thing that has puzzled some observers is that besides the economic damage wrought by fracking financially, it seems to be driven more by enthusiasm than results. Does fracking make sense financially?

Kathy Hipple Fracking does not make sense financially and it never has. That is the big dirty secret—even when oil prices were well above $100 per barrel and natural gas prices were much higher than they are now. These companies, year in and year out since 2010, have been cash flow negative in aggregate. Occasionally you'll get one or two companies that outperform their peers. But in aggregate, the frackers going after oil, largely in the Permian Basin in Texas and New Mexico, have been cash flow negative each and every year. And in even worse shape than the oil-focused frackers are the fossil gas (sometimes called natural gas) producers, largely in the Marcellus and Utica basins in Appalachia.

They have been in extremis and they have produced negative cash flows again, even when gas prices were much higher than they are now. So the business case for fracking has never been proved; it’s a poor business model—as you mentioned, the decline rate is very high, which means you have to continue to put money into drill new wells. And the industry has never found a way to be profitable and to be cash flow positive.

In fact, one of the former CEOs of the largest gas frackers, EQT, said he had never seen an industry, in a sense, commit suicide the way the fracking industry has done. So you’re right, it’s been a terrible investment. It’s been driven by enthusiasm and a lot of investors saying wait until next year. But largely the investor base has moved away from this sector. The sector has no access to the public markets for either equity or for debt. Many banks have walked away from them. They’ve closed their loan portfolios. One prominent bank sold their entire energy portfolio for roughly 50, 60 cents on the dollar. So the sector probably can only go forward if it has access to higher-risk capital or higher-cost capital. And these will be investors who are willing to gamble on a sector that has never yet shown a financial success.

Steven Cherry There’s a lot of political momentum behind fracking, especially in western Pennsylvania and places like that, North Dakota. What is one to do when there’s such a disconnect between the politics and the finances?

Kathy Hipple That's a great question, Steven. The industry has lost a tremendous amount of its economic and financial power, but it retains a lot of political power. That is particularly true in places like Texas and, as you mentioned, Pennsylvania. However, I think that the public view of fracking has started to change. In fact, there was an interesting study showing that the counties in Pennsylvania with more fracking did not, in fact, vote for Trump at the same level they had four years earlier. The public is starting to really question whether they want pipelines under their land, whether they want orphan wells, meaning gas and oil wells that have simply been abandoned, and whether the number of jobs the industry promises will ever materialize.

Often the industry comes to a state and says it will produce this many jobs. In fact, most of the jobs are in construction, and they're short-term jobs. They are reasonably high-paying, but often the construction workers are brought in from outside the state. And once these wells are drilled, they don't require people to operate them. So these are not good long-term sources of revenue for local counties, communities, or states.

Some of my students, interestingly enough, did a study on a wind farm in a small county, Paulding County, in Ohio, and they showed that the long-term revenue produced by the wind farm was actually a very stable income stream. The county could use these payments, called payment-in-lieu-of-taxes (PILOT) funds, to finance its school district, special education, and DARE (Drug Abuse Resistance Education) officers. A lot of counties throughout Texas, for example, are now very dependent on revenue streams coming from wind. So I think as more municipalities look at the long-term, stable income that comes from a wind farm versus the boom-bust cycle of the oil and gas industry, clean energy will begin to be much, much more appealing—even more so than it is now.

Steven Cherry Historically, a lot of that revenue has gone to communities, and there's really no better example of that than Alaska. In fact, in mid-November 2020, in other words in the lame-duck period between election and inauguration, the Trump administration opened up ANWR, the Arctic National Wildlife Refuge in Alaska. Indeed, this was our impetus for first contacting you for this show. It's now mid-January as we record this. Where are we at with ANWR?

Kathy Hipple Well, it's a beautiful, pristine part of the world, and producing oil there is very high cost. And since there's a glut of oil and a glut of gas on the market worldwide, one questions whether there's any rational reason for drilling there. But it was one of the final moves by the Trump administration to rush through the process of allowing bidding on these lands.

And it will be interesting to see what happens. Very few bids came in, and it doesn't mean anybody will go forward, because this is not economically producible oil at current prices. Any firm that puts money into this is likely, at the end of the day, to lose money.

Steven Cherry You know, back in the mid-2010s, Shell ended up abandoning a $7 billion drilling project in the Arctic. Are the oil companies really enthusiastic about drilling there?

Kathy Hipple No, it doesn't appear that they are. In fact, throughout most of 2020 there were massive, historic write-downs among the big oil companies around the world. The large oil companies did not participate in the bidding; a couple of smaller companies did, but the larger companies have largely stayed away.

Steven Cherry So is ANWR more of a symbol of the conflict between business and environmentalism?

Kathy Hipple I wouldn’t have put it in those terms, but I think that’s an excellent way to put that.

Steven Cherry The Biden administration promised an enormous infrastructure program oriented toward environmental concerns and shifting to a clean energy economy. Leaving aside the political difficulties in getting any such legislation through Congress, how big a program could we have and still remain within the bounds of good economic sense?

Kathy Hipple I don't know the exact dollar amount to answer that question, but there's still a tremendous amount of low-hanging fruit in infrastructure spending and energy-efficiency spending. We always talk about moving to clean energy and renewable energy, which is fantastic; there's an enormous need to build that out in this country. But there's also a lot of low-hanging fruit in plain energy efficiency, which tends to get short shrift when we talk about the energy transition. That could mean billions and billions for building out an electric-vehicle charging network around the country. We need to move very quickly to decarbonize. Many countries' plans target 2030, 2040, or 2050, but the urgency is to act immediately, to act now. And I'm extraordinarily happy that the Biden administration is moving as quickly as it is—just a few days into the administration.

Steven Cherry I was going to ask you about electric vehicles. It looks like Detroit is finally getting serious about them. How does that change the energy generation situation and the grid distribution system five years from now, 10 years from now?

Kathy Hipple Well, it’s essential to decarbonize the economy and much of the use of oil is for vehicle travel. The more vehicles can be electrified, the less need there will be for oil in this country. The United States has fallen behind Europe in terms of EVs and China is coming along very, very quickly and very aggressively. So the United States has a long way to go.

And part of it is that people do have a concern about range anxiety. There are not enough high-speed chargers. Many people live in apartments, and if they live in apartments, they can’t charge their vehicle overnight. They may not be going to an office, which you alluded to in your opening statement. So they can’t charge there. So if you live, for example, in New York City, where I split my time between Vermont and New York City, if you live in an apartment building, it’s very difficult in New York City to reliably have an EV. And that has to change and it has to change very, very quickly.

Steven Cherry Perhaps we could say a word about nuclear power. We've had three really bad accidents in almost three-quarters of a century (four, if you count Three Mile Island). That's either a lot or a little, depending on how you look at these things. France still gets a steady 63 percent of its electricity from nuclear; in fact, it gets only 10 percent from fossil fuels. Now, there are a number of new designs, including one that puts small nuclear plants on barges in the ocean. Is there a future for new nuclear construction outside of China, which has been continuing to move that way?

Kathy Hipple I am not the world's expert on nuclear power, but what I see is the cost of solar dropping 90 percent, wind dropping 70 percent, and battery storage dropping quickly. Meanwhile, the cost estimates I keep seeing for new nuclear power are, surprisingly, continuing to increase. So it is very difficult for a new energy plant, whether gas or nuclear, to compete with the declining costs of solar, wind, and battery storage.

So I don't see a future for new nuclear in the United States, certainly not for large plants. The question is how long the existing nuclear plants continue to operate. Most of the energy forecasts for getting to net zero by 2030, 2040, or 2050 do assume that the currently existing nuclear plants continue to operate, but they generally do not call for new nuclear.

Steven Cherry Finally, there are issues of environmental justice that are economic, for example, the air pollution caused by fossil fuel extraction and consumption falls disproportionately on minorities and the poor. This is something that you’ve studied as well.

Kathy Hipple I think that the issue of environmental justice has always been there, but it has gained a tremendous amount of traction in the past couple of years, especially in 2020, when it became increasingly clear how disproportionately poor communities were being affected by fossil fuels, which include petrochemical plants as well.

If you look at Cancer Alley in Louisiana and the number of refineries and petrochemical plants concentrated in a very small area of that state, it's very difficult not to be very, very concerned about environmental justice issues and the concept of a just transition, which really needs to be top of mind as we think about accelerating the energy transition. Simply as a matter of basic decency and fairness, we cannot allow the pollution caused by fossil fuels to fall disproportionately on poor communities, and especially on Black communities and communities of color. It's terribly unfair.

Steven Cherry In some ways this is a part of a broader question about externalities and how they get paid for either financially or in terms of things like cancer that have tilted our economy toward fossil fuel consumption for a century now. Is there anything that can be done about that?

Kathy Hipple Well, it depends on who you ask. Bob Litterman, for example, chaired the Climate Leadership Council, and he has pushed hard for what is essentially a carbon tax. In his view, and that of his fellow council members, if carbon were taxed and the revenues were treated like a dividend, that would go a long way toward addressing some of the social costs of carbon pollution. That's one possible solution. Other countries are figuring out how to do it with cap-and-trade. But I think it's only a question of time before this country has some kind of a reckoning. And one of the things the Biden administration is doing is trying to actually calculate the social cost of carbon pollution.

Steven Cherry Kathy, we’ve been speaking about oil companies as a sort of hegemony, but are there distinctions you want to make among them?

Kathy Hipple I think that's a very interesting question, Steven. In the last few years, some of the large oil companies—we call them oil majors, or the integrated oil companies—have started to diverge. The European oil companies, Shell, BP, and Total in particular, have taken a more forward-looking view of the energy transition than have their American counterparts, Exxon and Chevron. Exxon and Chevron have largely continued along the path of doubling down on oil and gas production and petrochemicals, whereas Total, for example, has been very forward-thinking for about a decade. Now, are they doing enough? No. Still, only a very small percentage of their capital expenditures is directed toward clean energy, but they are at least moving in the right direction. And Shell and BP are moving in that direction as well, though not quickly enough or aggressively enough to be aligned with the Paris Agreement. At least we're seeing that they are aware of the energy transition and are not staking their entire future on oil and gas, but trying to move beyond that.

Steven Cherry Companies like BP have even set a date, 2040 or 2050, to be out of fossil fuels. How painful is that going to be for them? Are there loopholes that make this more of a PR commitment than a serious one?

Kathy Hipple That's a great question. BP did actually say it would reduce its fossil fuel production, though the loophole is that some of its joint ventures have been carved out of that commitment. Still, it was one of the most significant pledges, because BP, along with Repsol, another European oil company, did say it would reduce production. And we need more of that. This industry is mature. It's declining. We need a managed decline for that industry, and that will not happen if companies just make empty statements.

Steven Cherry Well, Kathy, it seems like we’re not really going to get to where we need to on climate change until we restructure the economy around it. So thank you for your work toward that and for joining us today to talk about it.

Kathy Hipple Thank you very much for having me, Steven. And congratulations on the work that you’re doing with your students at NYU.

Steven Cherry We’ve been speaking with Kathy Hipple, of Bard College’s Managing for Sustainability MBA program, about the clean-energy economy.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers, a professional organization dedicated to advancing technology for the benefit of humanity.

This interview was recorded January 25, 2021 via Zoom. Our theme music is by Chad Crouch.

You can subscribe to Radio Spectrum on Spotify, Apple Podcast, and wherever else you get your podcasts, or listen on the Spectrum website, where you can also sign up for alerts of new episodes. We welcome your feedback on the web or in social media.

For Radio Spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.

We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

The First Battery-Powered Tanker Is Coming to Tokyo

Post Syndicated from Maria Gallucci original https://spectrum.ieee.org/energywise/energy/batteries-storage/first-battery-powered-tanker-coming-to-tokyo

A new ship powered only by lithium-ion batteries is coming to Japan’s coastline. The 60-meter-long tanker will be the first all-electric vessel of its kind when it launches in Tokyo Bay next year, its developers say.

The “e5” tanker is the latest in a small but growing fleet of vessels that use batteries for propulsion or onboard electricity use. As the global shipping industry works to curb carbon dioxide emissions and eliminate air pollution, shipbuilders and cargo owners are increasingly moving to electrify the freighters, tankers, and other vessels that move goods across the water.

Tokyo-based Asahi Tanker will own and operate the e5 vessel—which, ironically, will carry marine diesel fuels to refill the tanks of other cargo ships in the Bay. The 3.5-megawatt-hour (MWh) energy storage system is about the size of 40 Tesla Model S battery packs. That’s enough capacity to propel the ship for “many hours” before needing to plug into a shoreside charging station, said Sean Puchalski of Corvus Energy, the company supplying the batteries.
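That 40-pack comparison is easy to sanity-check. A minimal sketch in Python, assuming a Model S pack of roughly 87.5 kilowatt-hours (a figure not in the article; actual pack capacities have ranged from about 60 to 100 kWh across model years):

```python
# Rough sanity check: how many Tesla Model S packs equal the e5's battery?
ship_battery_kwh = 3_500     # the tanker's 3.5-MWh energy storage system
model_s_pack_kwh = 87.5      # assumed pack capacity (varies by model year)

packs = ship_battery_kwh / model_s_pack_kwh
print(f"about {packs:.0f} Model S packs")  # about 40 Model S packs
```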

Corvus, which has offices in Norway and Canada, has put batteries in nearly 400 ships, roughly a quarter of which are fully electric, he said. Most of these are passenger and car ferries plying the Norwegian fjords, where ship operators face tight restrictions on emissions of CO2 and toxic air pollutants, such as sulfur dioxide and nitrogen oxides.

The Japanese tanker is Corvus's first fully electric coastal freighter project; the company hopes the e5 will be the first of hundreds more just like it. "We see it as a beachhead for the coastal shipping market globally," Puchalski said. "There are many other coastal freighter types that are similar in size and energy demand."

The number of battery-powered ships has ballooned from virtually zero a decade ago to hundreds worldwide. The e5 tanker’s battery is relatively big for today’s electric ships, though several larger projects are also in development. The Yara Birkeland, an 80-meter-long container ship, will use a 9-MWh system for all of its propulsion when it launches in late 2021. Corvus is supplying 10 MWh worth of batteries for AIDAPerla, a 3,330-passenger cruise ship.

Two main factors are giving momentum to maritime batteries. First, lithium-ion technology has become significantly cheaper thanks to the electric car boom on land. Average battery pack prices were about $140 per kilowatt-hour in 2020, down from about $670 in 2013. Prices are expected to drop to about $100 per kilowatt-hour by 2023, BloombergNEF, a research consulting firm, said in a report.
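The BloombergNEF figures imply a steep compound annual decline, which can be checked directly from the two prices and the seven-year span quoted above:

```python
# Implied compound annual change in average lithium-ion pack prices,
# from ~$670/kWh in 2013 to ~$140/kWh in 2020.
price_2013 = 670.0
price_2020 = 140.0
years = 2020 - 2013

cagr = (price_2020 / price_2013) ** (1 / years) - 1
print(f"average change: {cagr:.1%} per year")  # average change: -20.0% per year
```

A sustained decline of roughly 20 percent per year is what makes the projected drop to about $100 per kilowatt-hour by 2023 plausible.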

Second, shipping companies are now required to tackle their carbon footprints. Cargo ships account for nearly 3 percent of annual greenhouse gas emissions, according to the International Maritime Organization, the United Nations body that regulates the industry. In 2018, the IMO agreed to reduce shipping emissions by 50 percent from 2008 levels by 2050—a target that is spurring investment not only in batteries but also in cleaner-burning fuels like hydrogen and ammonia.

First-mover projects like the e5 tanker are needed to develop technologies and infrastructure that can eventually scale for larger, longer-distance vessels, said Narve Mjøs, director of the Green Shipping Programme for DNV GL, an international consultancy in Oslo.

“Here in Norway, most of the green technologies and fuels have first been used between our islands and in our fjords,” he said. “But it’s important that these technologies can take the steps toward short-sea and deep-sea shipping,” he added, referring to two sectors with much higher energy requirements.

Mjøs said he believes eventually every ship will have some type of battery system—either to propel the vessel while at sea, or to keep the ship’s lights and equipment running while at berth. But ocean-crossing cargo ships will probably never be only powered by batteries. To sail for days or weeks without recharging, a ship would have to carry so many batteries there’d be no room left for cargo, he said.
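Mjøs's point about cargo space can be made concrete with a back-of-the-envelope estimate. Every number below (average propulsion power, voyage length, pack-level energy density) is an illustrative assumption, not a value from the article:

```python
# Why batteries alone don't work for ocean crossings: a rough mass estimate.
# Assumed values: a cargo ship averaging 10 MW over a two-week crossing,
# with battery packs at ~150 Wh/kg. All three figures are illustrative only.
avg_power_mw = 10
voyage_days = 14
pack_wh_per_kg = 150

energy_mwh = avg_power_mw * voyage_days * 24                      # MWh needed
battery_kg = energy_mwh * 1_000_000 / pack_wh_per_kg              # pack mass
battery_tonnes = battery_kg / 1000
print(f"{energy_mwh:,.0f} MWh -> roughly {battery_tonnes:,.0f} tonnes of batteries")
```

Under these assumptions the crossing needs 3,360 MWh, nearly a thousand times the e5 tanker's 3.5-MWh system, and the pack mass alone would rival the ship's cargo capacity.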

That’s why companies like Corvus are expanding their focus. On 1 February, Corvus announced it would begin developing “large scale” hydrogen fuel cell systems for ships, which it will pair with its lithium-ion batteries. (Put simply, fuel cell modules convert chemical energy into electrical energy without burning the fuel.) The company plans to showcase its first combined system by 2023.

“Corvus is definitely interested in pushing the boundary on how applicable we can make battery technology,” Puchalski said. “But where the range of the ship is too far, or is not practical for battery-only, we’ll add the fuel cell.”

Increasing Energy Inefficiency

Post Syndicated from Vaclav Smil original https://spectrum.ieee.org/energy/policy/increasing-energy-inefficiency

Perhaps the most celebrated graphic image of all time was published in 1869 by Charles Joseph Minard, a French civil engineer. He traced the advance of Napoleon's army into Russia and its retreat from 1812 to 1813 by a sequence of thinning bands representing the total number of men. Four hundred twenty-two thousand soldiers crossed eastward into Russia, 100,000 reached Moscow, and 10,000 crossed the Neman River westward to Prussia, at which point the Grande Armée had lost 97.6 percent of its initial force.

A similar graphic technique was employed by a committee of Britain’s Institution of Civil Engineers in its 1897–98 report on thermal efficiency in steam engines. The graphic illustrated the operation of the Louisville-Leavitt pumping steam engine by beginning with the combustion of coal on the boiler’s grate, producing 193,708 kilojoules (183,600 British thermal units) per minute, continuing with the 149,976 kJ per minute that actually reached the engine, and ending with the effective work (brake horsepower) of 26,788 kJ per minute, for an overall efficiency of only 13.8 percent. Soon this representation became known as a Sankey diagram, after Matthew Henry Phineas Riall Sankey, the honorary chairman of the committee.
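The committee's figures make the bookkeeping easy to reproduce (all values in kilojoules per minute, taken from the report as quoted above):

```python
# Reproducing the 1897-98 efficiency figures for the pumping engine.
heat_from_coal = 193_708   # kJ/min released by combustion on the boiler's grate
heat_to_engine = 149_976   # kJ/min actually reaching the engine
effective_work = 26_788    # kJ/min of effective (brake) work

boiler_eff = heat_to_engine / heat_from_coal
overall_eff = effective_work / heat_from_coal
print(f"boiler: {boiler_eff:.1%}, overall: {overall_eff:.1%}")
# boiler: 77.4%, overall: 13.8%
```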

One of the most revealing uses of Sankey diagrams is to trace national energy flows, starting on the left with all primary energy inputs—all fossil fuels and biofuels, together with electricity generated from hydro, nuclear, wind, solar, and geothermal sources—and ending on the right with actual energy services (industrial heat, kinetic and chemical energies, residential and commercial heating and air-conditioning, all forms of transportation). A set of these graphs is available for the United States for 1950, 1960, 1970, and then for every year from 1978 through 2019; they can be downloaded from two Lawrence Livermore National Laboratory websites. The latest Sankey diagram, for 2019, shows that the nation's useful energies (energy services) added up to 32.6 percent of the total primary energy input, a considerably poorer performance than in 1950, when the overall mean was 50.8 percent!

Two realities explain this retrogression. First, transportation has taken a larger share of the energy budget. The average efficiency of car engines has been improving since the mid-1970s, and flying has seen even more impressive efficiency gains per passenger-kilometer. However, rising car ownership, heavier vehicles, much more frequent flying, and longer distances traveled per year per capita explain the sector’s higher share of final energy use (37 percent in 2019, 30 percent in 1950) and its slight efficiency drop from 26 percent to 21 percent during the past 70 years.

The second reality is the decline in the average conversion efficiency of residential and commercial energy use, from about 70 percent to 65 percent, as the gains from more efficient heating have been more than erased by the mass adoption of air-conditioning. Electricity for air-conditioning comes mostly from fossil-fuel-powered plants with their considerable inherent conversion losses: In 2019, the average efficiency of U.S. coal-fired electricity-generating plants was about 32 percent, and that of the now-dominant gas-fired stations 44 percent.

The decline of average conversion efficiency has been much more pronounced in the industrial sector, from 70 percent to 49 percent, which is explained largely by the sector's ongoing electrification (which displaced former direct fuel uses) and by the expansion of electricity-intensive manufacturing. This is a common paradox that has accompanied the improved design and higher efficiency of individual energy converters: Even as their specific performance gets better, the overall performance gets worse. The United States is now wasting significantly more energy than it did a lifetime ago. About two-thirds of the total primary input goes directly into heating the universe without first performing any useful work, and only a third provides desired energy services, whereas in 1950 it was a 50/50 split. Another example of progressing by regressing.
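The 1950-versus-2019 comparison from the Livermore diagrams amounts to a single subtraction per year:

```python
# Useful-energy shares taken from the Lawrence Livermore Sankey diagrams.
useful_share = {1950: 0.508, 2019: 0.326}

for year, useful in useful_share.items():
    print(f"{year}: {useful:.1%} useful, {1 - useful:.1%} rejected as waste heat")
# 2019's rejected share (~67 percent) is the "two-thirds heating the
# universe"; 1950's (~49 percent) is the roughly 50/50 split.
```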

This article appears in the February 2021 print issue as “Energy-Conversion Efficiency Is Falling.”

Bright X-Rays, AI, and Robotic Labs—A Roadmap for Better Batteries

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/energy/batteries-storage/bright-xrays-ai-and-robotic-labsa-roadmap-for-better-batteries

Steven Cherry Hi, this is Steven Cherry for Radio Spectrum.

Batteries have come a long way. The batteries that used to power flashlights and toys, Timex watches and Sony Walkmans, are now found in everything from phones and laptops to cars and planes.

Batteries all work the same way: Chemical energy is converted to electrical energy by driving a flow of electrons from one material to another; that flow of electrons is the electrical current.

Yet batteries are also wildly different, both because the light bulb in a flashlight and the engine in a Tesla have different needs, and because battery technology keeps improving as researchers fiddle with every part of the system: the two chemistries that make up the anode and the cathode, and the electrolyte and how the ions pass through it from one to the other.

A Chinese proverb says, “Give a man a fish, and you feed him for a day. Teach a man to fish, and you feed him for a lifetime.” The Christian Bible says, “follow me and I will make you fishers of men.”

In other words, a more engineering-oriented proverb would say, “let’s create a lab and develop techniques for measuring the efficacy of different fishing rods, which will help us develop different rods for different bodies of water and different species of fish.”

The Argonne National Laboratory is one such lab. There, under the leadership of Venkat Srinivasan, director of its Collaborative Center for Energy Storage Science, a team of scientists has developed a quiver of techniques for precisely measuring the velocity and behavior of ions and comparing it to mathematical models of battery designs.

Venkat Srinivasan is also deputy director of Argonne's Joint Center for Energy Storage Research, a national program that looks beyond the current generation of lithium–ion batteries. He was previously a staff scientist at Lawrence Berkeley National Laboratory, wrote a popular blog, "This Week in Batteries," and is my guest today via Teams.

Venkat, welcome to the podcast.

Venkat Srinivasan Thank you so much. I appreciate the time. I always love talking about batteries, so it’d be great to have this conversation.

Steven Cherry I think I gave about as simplistic a description of batteries as one could give; maybe we could start with what are the main battery types today and why is one better than another for a given application?

Venkat Srinivasan So, Steve, there are two kinds of batteries that I think all of us use in our daily lives. One of them is a primary battery. The ones that you don’t recharge. So a common one is something that you might be putting in your children’s toys or something like that.

The second kind—which I think is the one that is powering everything that we think of, things like electric cars and grid storage—is rechargeable batteries. So these are the ones where we have to go back and charge them again. So let's talk a little bit more about rechargeable batteries; there are a number of them sitting somewhere in the world. You have lead–acid batteries that are sitting in your car today. They've been sitting there for the last 30, 40 years, where they're used to start the car and for lighting the car up when the engine is not on. This is something that will continue to be in our cars for quite some time.


You're also seeing lithium–ion batteries that are now powering the car itself. Instead of having an internal combustion engine and gasoline, you're seeing more pure electric vehicles coming out that have lithium–ion batteries. And then the third kind, which we sort of don't see, but we have some in different places, are nickel–cadmium and nickel–metal-hydride batteries. These are kind of going away slowly. But the Toyota Prius is a great example of a nickel–metal-hydride vehicle. Many people still drive Priuses—I have one—that still have nickel–metal-hydride batteries in them. These are some of the classes of materials that are more common. But there are others, like flow batteries, that people probably haven't really thought about and haven't seen, which are being researched quite a bit; there are companies that are trying to install flow batteries for grid storage, which are also rechargeable batteries, of a different type.

The most prevalent of these is lithium–ion; that's the chemistry that has completely changed electric vehicle transportation. It's changed the way we speak on our phones. The iPhone would not be possible if not for the lithium–ion battery. It's the battery that has pretty much revolutionized all of transportation. And it's the reason why the Nobel Prize two years ago went to the inventors of the lithium–ion battery, for the discovery and ultimately the commercialization of the technology—it's because it had such a wide impact.

Steven Cherry I gather that remarkably, we’ve designed all these different batteries and can power a cell phone for a full day and power a car from New York to Boston without fully understanding the chemistry involved. I’m going to offer a comparison and I’d like you to say whether it’s accurate or not.

We developed vaccines for smallpox beginning in 1798; we ended smallpox as a threat to humanity—all without understanding the actual mechanisms at the genetic level or even the cellular level by which the vaccine confers immunity. But the coronavirus vaccines we’re now deploying were developed in record time because we were able to study the virus and how it interacts with human organs at those deeper levels. And the comparison here is that with these new techniques developed at Argonne and elsewhere, we can finally understand battery chemistry at the most fundamental level.

Venkat Srinivasan That is absolutely correct. If you go back in time and ask yourself, what about batteries like the lead–acid batteries and the nickel–cadmium batteries—did we invent them in some systematic fashion? Well, I guess not, right?

Certainly once the materials were discovered, there was a lot of innovation that went into them, using what were state-of-the-art techniques at that time to make them better and better and better. But to a large extent, the story that you just told about the vaccines and smallpox is probably very similar to the kinds of things that happened with the older battery chemistries.

The world has changed now. If you look at the kinds of things we are doing today, like you said, there are a variety of techniques, both experimental but also mathematical—meaning computer simulations have now come to our aid—and we're able to gain a deeper understanding of how batteries behave and then use that to discover new materials: first, maybe on a computer, but certainly in the lab at some point. So this is something that is also happening in the battery world. The kinds of innovations you are seeing now with COVID vaccines are the kinds of things we are seeing happen in the battery world in terms of discovering the next big breakthrough.

Steven Cherry So I gather the main technology you're using now is ultrabright X-rays, and you're using it to measure for the first time the share of the electrical current carried by the ions, something known as the transport number. Let's start with the X-rays.

Venkat Srinivasan We used to cycle the battery, and things would happen to it. We then had to open up the battery and see what happened on the inside. And as you can imagine, when you open up a battery, you hope that nothing changes by the time you take it to your experimental technique of choice to look at what's happening on the inside. But oftentimes things change. So what you have inside the battery during its operation may not be the same as what you're probing when you open up the cell. So a trend that's been going on for some time now is to say, well, maybe we should be thinking about in situ or operando methods—meaning, inside the battery's environment, during operation—trying to find more information in the cell.

Typically, all battery people will do is send a current into the battery and then measure the potential, or vice versa. That's a common thing that's done. So what we are trying to do now is one more thing on top of that: Can we probe something on the inside without opening up the cell? X-rays come into play because they are extremely powerful light; they can go through the battery casing, go into the cell, and you can actually start seeing things inside the battery itself during operando operation—meaning you can pass current, keep the battery in the environment you want it to be in, and send in the X-ray beam and see what's happening on the inside.

So this is a trend that we've been slowly exploring, going back a decade. And a decade ago, we probably did not have the resolution to be able to see things at a very minute scale. So we were seeing maybe a few tens of microns of what was happening in these batteries. Maybe we were measuring things once every minute or so. But we're slowly getting better and better; we're making the resolution tighter, meaning we can see smaller features, and we are trying to get the time resolution such that we can see things faster and faster. So that trend is something that is helping us, and will continue to help us, make batteries better.

Steven Cherry So if I could push my comparison a little further, we developed the COVID vaccines in record time and with stunning efficiency. I mean, 95 percent effective right out of the gate. Will this new ability to look inside the battery while it’s in operation, will this create new generations of better batteries in record time?

Venkat Srinivasan That will be the hope. And I do want to bring in two aspects that I think work complementarily with each other. One is the X-ray techniques—and we should not forget that there are non-X-ray techniques also that give us information that can be crucially important. But along with that, there has been this revolution in computing that has really come to the forefront in the last five to 10 years. What this computing revolution means is that, basically, because computers are getting more and more powerful and computing resources are getting cheaper, we are now able to start to calculate all sorts of things on computers. For example, we can calculate how much lithium a material can hold—without actually having to go into the lab. And we can do this in a high-throughput fashion: screen a variety of materials and start to see which of these looks the most promising. Similarly, we can do the same thing to ask: Can we find ion conductors—say, solid-state battery materials—using the same techniques?

Now, once you have these kinds of materials in play and you do them very, very fast using computers, you can start to think about how do you combine them with these X-ray techniques. So you could imagine that you’re finding a material on the computer. You’re trying to synthesize them and during the synthesis you try to watch and see, are you making the material you were predicting or did something happen during synthesis where you were not able to make the particular material?

And using this complementary way of looking at things, I think in the next five to 10 years you're going to see this amazing acceleration of materials discovery between the computing and the X-ray sources and other experimental methods. We're going to see this incredible acceleration in terms of finding new things. You know, the big trick in materials—and this is certainly true for battery materials—is that if you screen, say, a thousand materials, maybe one of them looks interesting. So the job here is to cycle through those thousand as quickly as possible to find that one nugget that can be exciting. And so what we're seeing now with computing and with these X-rays is the ability to cycle through many materials very quickly, so that we can start to pin down which one among those thousand looks the most promising and spend a lot more resources and time on it.
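The screen-then-narrow workflow Srinivasan describes—rank a thousand candidates cheaply on a computer, then spend lab time only on the survivors—can be sketched as a two-stage filter. Everything here (the material names, the scoring function) is a hypothetical placeholder, not Argonne's actual pipeline:

```python
import random

random.seed(0)  # reproducible demo

def computed_score(material: str) -> float:
    """Stand-in for a cheap computed proxy, e.g. predicted lithium capacity."""
    return random.random()

# Stage 1: screen ~1,000 hypothetical candidates entirely in silico.
candidates = [f"material-{i:04d}" for i in range(1000)]
ranked = sorted(candidates, key=computed_score, reverse=True)

# Stage 2: only the top handful go on to slow, expensive synthesis.
shortlist = ranked[:10]
print(f"{len(candidates)} screened -> {len(shortlist)} sent to the lab")
```

The point of the sketch is the asymmetry: the cheap scoring pass touches every candidate, while the expensive lab step sees only a few.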

Steven Cherry We’ve been relying on lithium–ion for quite a while. It was first developed in 1985 and first used commercially by Sony in 1991. These batteries are somewhat infamous for occasionally exploding in phones and laptops and living rooms and on airplanes and even in the airplanes themselves in the case of the Boeing 787. Do you think this research will lead to safer batteries?

Venkat Srinivasan Absolutely. The first thing I should clarify is that the lithium–ion from the 1990s is not the same lithium–ion we use today. There have been many generations of materials that have changed over time; they've gotten better; the energy density has actually gone up by a factor of three in those twenty-five years, and there's a chance that it's going to continue to go up by another factor of two in the next decade or so. The reality is that when we use the word lithium–ion, we're actually talking about a variety of material classes that go into the anodes, the cathodes, and the electrolytes that make up lithium–ion batteries. So the first thing to notice is that these materials are changing continuously. What the new techniques are bringing is a way for us to push the boundaries of lithium–ion—meaning there is still a lot of room left for lithium–ion to get better, and these new techniques are allowing us to invent the next generation of cathode materials, anode materials, and electrolytes that could be used in the system to continue to push on things like energy density, fast-charge capability, and cycle life. These are the kinds of big problems we're worried about. So these techniques are certainly going to allow us to get there.

There is another important thing to think about for lithium–ion, which is recyclability. I think it’s been pretty clear that as the market for batteries starts to go up, they’re going to have a lot of batteries that are going to reach end-of-life at some stage and we do not want to throw them away. We want to take out the precious metals in them, the ones that we think are going to be useful for the next generation of batteries. And we want to make sure we dispose of them in a very sort of a safe and efficient manner for the environment. So I think that is also an area of R&D that’s going to be enabled by these kinds of techniques.

The last thing I'd say is that we're thinking hard about systems that go beyond lithium–ion—things like solid-state batteries, things like magnesium-based batteries. And for those kinds of chemistries, we really feel like taking these modern techniques and putting them in play is going to accelerate the development time frame. So you mentioned 1985 and 1991; lithium–ion battery research started in the 1950s and 60s, and it took that many decades before we could get to a stage where Sony could actually go and commercialize it. And we think we can accelerate the timeline pretty significantly for things like solid-state batteries or magnesium-based batteries because of all the modern techniques.

Steven Cherry Charging time is also a big area for potential improvement, especially in electric cars, which still only have a driving range that maybe gets to 400 kilometers, in practical terms. Will we be getting to the point where we can recharge in the time it takes to get a White Chocolate Gingerbread Frappuccino at Starbucks?

Venkat Srinivasan That's the dream. So Argonne actually leads a project for the Department of Energy, working with multiple other national labs, on enabling 10-minute charging of batteries. I will say that in the last two or three years, there's been tremendous progress in this area. Instead of a forty-five-minute charge or a one-hour charge, which was considered to be a fast charge, we now feel like there is a possibility of getting under 30 minutes of charging. These ideas still have to be proven out. They have to be implemented at large scale. But more and more, as we learn using these same techniques—and I should say a little bit more about that: There is a lot of work happening at the Advanced Photon Source looking at fast charging of batteries, trying to understand the phenomena that are stopping us from charging very fast—these same techniques are allowing us to think about how to solve the problem.

And I'll take a bet that in the next five years, we'll start to look at 10-minute charging as something that is going to be possible. Three or four years ago, I would not have said that. But in the next five years, I think we are going to start saying, hey, you know, I think there are ways in which you can start to get to this kind of charging time. Certainly it's a big challenge. It's not just a challenge on the battery side; it's a challenge in how we are going to get the electricity to reach the electric car. I mean, there's going to be a problem there. There's a lot of heat generation that happens in these systems. We've got to find a way to pull it out. So there's a lot of challenges that we have to solve. But I think these techniques are slowly giving us answers to why it is a problem to begin with, and allowing us to start to test various hypotheses to find ways to solve the problem.

Steven Cherry The last area where I think people are looking for dramatic improvement is weight and bulk. It’s important in our cell phones and it’s also important in electric cars.

Venkat Srinivasan Yeah, absolutely. So frankly, it’s not just in electric cars. At Argonne they’re starting to think about light-duty vehicles, which is our passenger cars, but also heavy-duty vehicles. Right. I mean, what happens when you start to think about trucking across the country carrying a heavy payload? We are trying to think hard about aviation, about marine, and rail. As you start to get to these kinds of applications, the energy density requirement goes up dramatically.

I'll give you some numbers. If you look at today's lithium–ion batteries at the pack level, the energy density is approximately 180 watt-hours per kilogram, give or take. Depending on the company, that could be a little bit higher or lower, but approximately 180 Wh/kg. If we look at a 737 going across the country, or a significant distance, carrying a number of passengers, the kind of energy density you would need is upwards of 800 Wh/kg. So just to give you a sense for that: We said it's 180 for today's lithium–ion; we're talking about four to five times the energy density of today's lithium–ion before we can start to think about electric aviation. So energy density—both gravimetric and volumetric—is going to be extremely important in the future. Much of the R&D that we are doing is trying to discover materials that allow us to increase energy density. The hope is that you can increase energy density, make the battery charge very fast, and get it to last very long, all simultaneously. That tends to be a big deal, because it is all about compromising between these different competing metrics—cycle life, calendar life, cost, safety, performance—all of them tend to play against each other. But the big hope is that we are able to improve the energy density without compromising on these other metrics. That's kind of the big focus of the R&D that's going on worldwide, but certainly at Argonne.
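The gap Srinivasan describes is a single division (the 180 Wh/kg pack figure and the 800 Wh/kg aviation threshold are his approximate numbers, as quoted above):

```python
todays_pack = 180     # Wh/kg, today's lithium-ion packs at pack level, approximate
aviation_need = 800   # Wh/kg, rough threshold quoted for a 737-class flight

print(f"required improvement: about {aviation_need / todays_pack:.1f}x")
# about 4.4x, i.e., the "four to five times" quoted above
```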

Steven Cherry I gather there's also a new business model for conducting this research, a nonprofit organization that brings corporate, government, and academic research all under one aegis. Tell us about CalCharge.

Venkat Srinivasan Yeah. If you kind of think about the battery world—and this is true for many of these hard technologies, the cleantech or greentech, as people have come to call them—there is a lot of innovation that is needed, which means that in our lab R&D, the kinds of techniques and models that we're talking about are crucially important. But it's also important for us to find a way to bring them to market, meaning you have to be able to take that lab innovation; you've got to be able to manufacture it; you've got to get it into the hands of, say, a car company that's going to test it and ultimately qualify it and then integrate it into the vehicle.

So this is a long road from lab to market. And the traditional way you've thought about this is that you throw it across the fence, right? So, say, we at Argonne National Lab invent something, and then we throw it across the fence to industry, and then you hope that industry takes it from there and they run with it and they solve the problems. That tends to be an extremely inefficient process. That's because oftentimes where a national lab might stop is not enough for an industry to run with it—there are multiple gaps that show up. When you integrate these devices into a company's existing components, there are problems that show up; when you get up to manufacturing, when you start to get up to a larger scale, there are problems that show up; and there are problems that show up when you make a pack with it. And oftentimes the solution to these problems goes back to the material. So the fundamental principle that I and many others have started thinking about is that you do not want to keep the R&D, the manufacturing, and the market separate. You have to find a way to connect them up.

And if you connect them up very closely, then the market starts to drive the R&D, the R&D innovation starts to get the people in the manufacturing world excited, and there is this close connection among all three of these things that makes things go faster and faster. We've seen this in other industries, and it certainly will be true in the battery world. So we've been trying very, very hard to enable these kinds of what I would call public–private partnerships—ways in which we, the public, meaning the national lab system, can start to interact with private companies and find ways to move this along. So this is a concept that a few others and I have been thinking about for quite some time. Before I moved to Argonne, I was at Lawrence Berkeley. And at Lawrence Berkeley—the Bay Area has a very rich ecosystem of battery companies, especially startup companies.

So I created this entity called CalCharge, which was a way to connect up the local ecosystem in the San Francisco Bay Area to the national labs in the area—Lawrence Berkeley, SLAC, and Sandia National Labs in Livermore. So those are the three that were connected. And the idea behind this is: How do we take the national lab facilities, the people, and the amazing brains that they have and use them to start to solve some of the problems that industry is facing? And how do we take the IP that is sitting in the labs and move it to market using these startups, so that we can continuously work with each other, make sure that we don't have these valleys of death, as we've come to call them, when we move from lab to market, and try to accelerate that? I've been doing very similar things at Argonne in the last four years, thinking hard about how you do this, but on a national scale.

So we've been working closely with the Department of Energy, working with various entities both in the Chicagoland area and in the wider U.S. community, to start to think about enabling these kinds of ecosystems where national labs like ours and others across the country—there are 17 Department of Energy national labs, and maybe a dozen of them have expertise that can be used for the battery world—can work together. How do we connect them up? And the local universities in different parts of the country with amazing expertise—how do you connect them up to the startups, the big companies, the manufacturers, the car companies that are coming in, but also the material companies, the companies that are providing lithium, from a supply-chain perspective? So my dream is that we would have this big ecosystem of everybody talking to each other, finding ways to leverage each other, and ultimately making this technology something that can reach the market as quickly as possible.

Steven Cherry And right now, who is waiting on whom? Is there enough new research that it's up to the corporations to do something with it? Or are they looking for specific improvements that they need to wait for you to make?

Venkat Srinivasan All of the above. There is probably quite a bit of R&D going on that industry is not aware of, and that tends to be a big problem—there's a visibility problem when it comes to the kinds of things that are going on in the national labs and the academic world. And there are things where we are not aware of the problems that industry is facing. I think these kinds of disconnects, where sometimes the lack of awareness keeps things from happening fast, are what we need to solve. And the more connections we have, the more interactions we have, the more conversations we have with each other, the more the exposure increases. And when the exposure increases, we have a better chance of being able to solve these kinds of problems, where the lack of information stops us from getting the kinds of innovation that we could get.

Steven Cherry And at your end, at the research end, I gather one immediate improvement you’re looking to make is the brightness of the X-rays. Is there anything else that we should look forward to?

Venkat Srinivasan Yeah, there are a couple of things that I think are very important. The first one is the brightness of the X-rays. There's an upgrade coming for the Advanced Photon Source that's going to change the time resolution at which we can see these batteries. So, for example, when you're charging the batteries very fast, you can get data very quickly. So that's going to be super important. Connected to that, you can also start to see features that are even smaller than the kinds of features we see today. So that's the first big thing.

The second thing is that artificial intelligence and machine learning are permeating all forms of research, including battery research; we use AI and ML for all sorts of things. But one thing we've been thinking about is how do we connect AI and ML to the kinds of X-ray techniques we've been using. So, for example, instead of looking all over the battery to see if there is a problem, can we use signatures of where the problems could be occurring? So that these machine learning tools can quickly go in and identify the spot where things could be going wrong, so that you can spend all your time and energy taking data at that particular spot. So that, again, we're being very efficient with the time that we have, to ensure that we're catching the problems we have to catch. So I think the next big thing that is going on is this whole artificial-intelligence-and-machine-learning effort that is going to be integral for us in the battery discovery world.

The last thing, which is an emerging trend, is what are called automated labs, or self-driving labs. The idea behind this is that instead of a human being going in and synthesizing a material starting in the morning and finishing in the evening, then characterizing it the next day, finding out what happened to it, and then going back and trying the next material, could we start to do this using robotics? This is something that's been a trend for a while now. But where things are heading is that more and more, robots can start to do things that a human being could do. So you could imagine robots going in and synthesizing electrolyte molecules, mixing them up, testing for the conductivity, and trying to see if the conductivity is higher than the one that you had before; and if it's not, going back and iterating on finding a new molecule based on the previous results, so that you can efficiently find an electrolyte with higher conductivity than your baseline. Robots work 24/7, so it is very, very useful for us to think about these ways of innovating. Robots generate a lot of data, which we now know how to handle because of all the machine learning tools we've been developing in the last three, four, five years. So all of a sudden, the synergy, the intersection between machine learning, the ability to analyze a lot of data, and robotics is starting to come into play. And I think we're going to see that that's going to open up new ways to discover materials in a rapid fashion.
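The propose-measure-iterate loop behind a self-driving lab reduces to a simple hill climb. This sketch simulates the robot's conductivity measurement with a toy function; the peak location, noise level, and step size are invented for illustration and stand in for real hardware:

```python
import random

random.seed(1)  # reproducible demo

def measure_conductivity(fraction: float) -> float:
    """Stand-in for a robotic measurement of an electrolyte mix;
    toy model with a peak near a 0.6 solvent fraction, plus noise."""
    return 1.0 - (fraction - 0.6) ** 2 + random.gauss(0, 0.01)

best_mix = 0.5
best_sigma = measure_conductivity(best_mix)
for _ in range(50):  # robots iterate around the clock
    # Propose a small perturbation of the current best recipe, clipped to [0, 1].
    trial = min(1.0, max(0.0, best_mix + random.gauss(0, 0.05)))
    sigma = measure_conductivity(trial)
    if sigma > best_sigma:  # keep any measured improvement
        best_mix, best_sigma = trial, sigma

print(f"best mix found: {best_mix:.2f}")
```

Real systems replace the random perturbation with a model (e.g., Bayesian optimization) that uses all previous results, which is where the machine-learning tools Srinivasan mentions come in.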

Steven Cherry Well, Venkat, if you will forgive a rather obvious pun, the future of battery technology seems bright. And I wish you and your colleagues at Argonne and CalCharge every success. Thank you for your role in this research and for being here today.

Venkat Srinivasan Thank you so much. I appreciate the time you've taken to ask me these questions.

We’ve been speaking with Venkat Srinivasan of Argonne National Lab about a newfound ability to study batteries at the molecular level and about improvements that might result from it.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers, a professional organization dedicated to advancing technology for the benefit of humanity.

This interview was recorded January 6, 2021 using Adobe Audition and edited in Audacity. Our theme music is by Chad Crouch.

You can subscribe to Radio Spectrum on the Spectrum website, where you can also sign up for alerts, or on Spotify, Apple, Google—wherever you get your podcasts. We welcome your feedback on the web or in social media.

For Radio Spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.

We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

See Also:

Battery of tests: Scientists figure out how to track what happens inside batteries

Concentration and velocity profiles in a polymeric lithium-ion battery electrolyte

Carbon Engineering’s Tech Will Suck Carbon From the Sky

Post Syndicated from Maria Gallucci original https://spectrum.ieee.org/energy/fossil-fuels/carbon-engineerings-tech-will-suck-carbon-from-the-sky


West Texas is a hydrocarbon hot spot, with thousands of wells pumping millions of barrels of oil and billions of cubic feet of natural gas from the Permian Basin. When burned, all that oil and gas will release vast amounts of greenhouse gases into the atmosphere.

A new facility there aims to do the opposite. Rows of giant fans spread across a flat, arid field will pull carbon dioxide from the air and then pump it deep underground. When completed, the project could capture 1 million metric tons of carbon dioxide per year, doing the air-scrubbing work of some 40 million trees.

Canadian firm Carbon Engineering is designing and building this “direct-air capture” facility with 1PointFive, a joint venture between a subsidiary of Occidental Petroleum Corp. and the private equity firm Rusheen Capital Management. Carbon Engineering will devote much of 2021 to front-end engineering and design work in Texas, with construction slated to start the following year and operations by 2024, the partners say. The project is the biggest of its kind in the world and will likely cost hundreds of millions of dollars to develop.

Carbon Engineering is among a handful of companies with major direct-air capture developments underway this year. Zurich-based Climeworks is expanding across Europe, while Dublin’s Silicon Kingdom Holdings plans to install its first CO2-breathing “mechanical tree” in Arizona. Global Thermostat, headquartered in New York City, has three new projects in the works. All the companies say they intend to curb the high cost of capturing carbon by optimizing technology, reducing energy use, and scaling up operations.

The projects arrive as many climate experts warn that current measures to reduce emissions—such as adopting renewable energy and electrifying transportation—are no longer sufficient to avert catastrophe. To limit global warming to 1.5 °C, the world must also use “negative-emission technologies,” according to the United Nations Intergovernmental Panel on Climate Change’s 2018 report.

Global CO2 emissions from fossil fuels reached 33 billion metric tons in 2019. Existing direct-air capture projects would eliminate a tiny fraction of that total, and not all of the captured CO2 is expected to be permanently sequestered. Some of it will likely return to the atmosphere when used in synthetic fuels or other products. Companies say the goal is to continuously capture and “recycle” the greenhouse gas to avoid creating new emissions, while also generating revenue that can fund the technology.

Carbon removal can help compensate for sectors that are difficult to decarbonize, such as agriculture, cement making, and aviation, says Jennifer Wilcox, a chemical engineer and senior fellow at the World Resources Institute. “The climate models are saying clearly that if we don’t do carbon removal in addition to avoiding emissions, we will not reach our climate goals.”

Carbon Engineering’s plant in Texas will use banks of fans, each about 8.5 meters in diameter, to draw air into a large structure called a contactor. The air is pushed through a plastic mesh coated with a potassium hydroxide solution, which binds with the carbon dioxide. A series of chemical processes concentrate and compress the CO2 into tiny white pellets, which are then heated to 900 °C to release the carbon dioxide as a gas. Steve Oldham, CEO of Carbon Engineering, likens the plant to a refinery that produces chemicals at an industrial scale. “That’s the type of capability we’re going to need, to make a material impact on climate change,” he says.

At its pilot plant in British Columbia, Carbon Engineering combines the pure CO2 with hydrogen to produce synthetic crude oil. The facility can capture 1 metric ton of carbon dioxide per day; by comparison, the Texas operation is expected to capture over 2,700 metric tons daily. At the larger site, the captured gas will be injected into older oil wells, both sequestering the CO2 underground and forcing up any remaining oil. In addition to the work in Texas, the company is scaling up its Canadian operations, Oldham says. In 2021, it will open a new business and advanced-development center and expand research operations; the new facility will capture up to 4 metric tons of CO2 per day from the air.
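The scale-up factor is easy to verify from the stated annual target for the Texas plant:

```python
annual_tonnes = 1_000_000           # Texas plant target, t CO2 per year
daily_tonnes = annual_tonnes / 365
print(f"{daily_tonnes:.0f} t/day")  # consistent with "over 2,700"

pilot_daily = 1                     # British Columbia pilot, t CO2 per day
print(f"{daily_tonnes / pilot_daily:,.0f}x the pilot plant")
```

In other words, the jump from pilot to commercial plant is nearly three orders of magnitude, which is why the front-end engineering phase runs a full year.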

Other direct-air capture firms are opting for a modular approach. Climeworks’ carbon collectors can be stacked to build facilities of any size. The system also uses fans, but the air runs over a solid filter material. Once saturated with CO2, the filter is heated to between 80 and 100 °C, releasing highly concentrated CO2 gas, which can be used in various ways.

For example, at Climeworks’ pilot site in Iceland—which is powered by geothermal energy—the company’s partner Carbfix reacts the concentrated CO2 with basaltic rock to lock it below ground. The site is now being expanded to capture 4,000 metric tons of carbon dioxide a year; it should be operational in the first half of 2021, says Daniel Egger, head of marketing and sales for Climeworks. The CO2 could also be used to make a more sustainable form of jet fuel; Climeworks is seeking financing for two CO2-to-fuel projects in Norway and the Netherlands.

Meanwhile, the company will continue working with the payments firm Stripe and the e-commerce platform Shopify. To cancel their carbon footprints, the two companies have committed to purchasing carbon credits from Climeworks, reflecting the amount of CO2 that Climeworks has removed from the air. Major tech firms in general are investing in carbon-reducing schemes to help meet their corporate environmental goals. Microsoft has pledged to be carbon negative by 2030 and to spend $1 billion to accelerate the development of technology for carbon reduction and removal.

“For all these companies that have targets to bring their emissions to ‘net zero,’ technologies like ours are absolutely needed,” Egger says.

Global energy giants are also backing direct-air capture to undo some of the damage caused by their products and operations. In September, for instance, ExxonMobil expanded an agreement with Global Thermostat to help scale the startup’s technology. Global Thermostat’s machines are the size of a shipping container and capture CO2 using amine-based adsorbents on honeycombed ceramic cubes, akin to a car’s catalytic converter.

Cofounder Peter Eisenberger, a professor of Earth and environmental science at Columbia University, says Global Thermostat’s goal is to remove billions of tons of carbon dioxide every year by licensing its technology to other firms. He believes the world will have to remove 50 billion metric tons of carbon dioxide over the next two decades to avoid catastrophic climate shifts. In 2021, the company will add three pilot projects, including a 2,000-metric-ton plant in Chile to produce synthetic fuels, as well as facilities in Latin America and the Middle East that will provide CO2 for bubbly beverages and water desalination, respectively.

Unlike its peers, Silicon Kingdom Holdings uses a passive system to draw in air. Klaus Lackner, a professor at Arizona State University, developed the company’s mechanical-tree technology. Each tree will have stacks of 150 disks coated in a carbon-adsorbing material; as wind blows over the disks, they trap carbon on their surfaces. The disks are then lowered into a bottom chamber, where an “energy-efficient process” releases the CO2 from the sorbent, says Pól Ó Móráin, CEO of Silicon Kingdom Holdings. The high-purity gas could be sequestered or reused in beverages, cement, fertilizer, or other industrial products. The startup plans to build and operate the first commercial-scale 2.5-meter-tall tree near the ASU campus in Tempe in 2021.

Ó Móráin says a dozen trees can capture 1 metric ton of carbon dioxide daily. The goal is to install carbon farms worldwide, each with up to 120,000 mechanical trees.
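Those two figures pin down the scale of a full carbon farm; this is just arithmetic on the numbers cited above:

```python
TREES_PER_TONNE_PER_DAY = 12   # "a dozen trees can capture 1 metric ton... daily"
FARM_TREES = 120_000           # maximum farm size cited above

farm_daily_t = FARM_TREES / TREES_PER_TONNE_PER_DAY
farm_annual_mt = farm_daily_t * 365 / 1e6
print(f"{farm_daily_t:,.0f} t CO2/day, ~{farm_annual_mt:.2f} Mt CO2/yr")
```

A single maxed-out farm would therefore capture several times what Carbon Engineering's Texas plant targets in a year.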

Wilcox of the World Resources Institute says there’s “no clear winner” among these emerging technologies for capturing carbon. They’re distinct from one another, she notes. “I think we need them all.”

An abridged version of this article appears in the January 2021 print issue as “The Carbon-Sucking Fans of West Texas.”

Gravity Energy Storage Will Show Its Potential in 2021

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/energy/batteries-storage/gravity-energy-storage-will-show-its-potential-in-2021


Cranes are a familiar fixture of practically any city skyline, but one in the Swiss canton of Ticino, near the Italian border, would stand out anywhere: It has six arms. This 110-meter-high starfish of the skyline isn’t intended for construction. It’s meant to prove that renewable energy can be stored by hefting heavy loads and dispatched by releasing them.

Energy Vault, the Swiss company that built the structure, has already begun a test program that will lead to its first commercial deployments in 2021. At least one competitor, Gravitricity, in Scotland, is nearing the same point. And there are at least two companies with similar ideas, New Energy Let’s Go and Gravity Power, that are searching for the funding to push forward.

To be sure, nearly all the world’s currently operational energy-storage facilities, which can generate a total of 174 gigawatts, rely on gravity. Pumped hydro storage, where water is pumped to a higher elevation and then run back through a turbine to generate electricity, has long dominated the energy-storage landscape. But pumped hydro requires some very specific geography—two big reservoirs of water at elevations with a vertical separation that’s large, but not too large. So building new sites is difficult.

Energy Vault, Gravity Power, and their competitors seek to use the same basic principle—lifting a mass and letting it drop—while making an energy-storage facility that can fit almost anywhere. At the same time they hope to best batteries—the new darling of renewable-energy storage—by offering lower long-term costs and fewer environmental issues.

In action, Energy Vault’s towers are constantly stacking and unstacking 35-metric-ton bricks arrayed in concentric rings. Bricks in an inner ring, for example, might be stacked up to store 35 megawatt-hours of energy. Then the system’s six arms would systematically disassemble it, lowering the bricks to build an outer ring and discharging energy in the process.
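The underlying physics is simple gravitational potential energy, E = mgh. A back-of-the-envelope sketch (the 100-meter average lift is our assumption, not a company figure) shows why banking a full ring takes thousands of brick movements:

```python
G = 9.81              # gravitational acceleration, m/s^2
BRICK_MASS = 35_000   # kg: the 35-metric-ton bricks described above
LIFT_HEIGHT = 100     # m: assumed average lift inside the ~110 m tower

# Energy stored by raising one brick: E = m * g * h
energy_kwh = BRICK_MASS * G * LIFT_HEIGHT / 3.6e6  # joules -> kWh
print(f"One brick: ~{energy_kwh:.1f} kWh")

# Brick-lifts needed to bank 35 MWh in a ring
lifts = 35_000 / energy_kwh
print(f"About {lifts:.0f} lifts for 35 MWh")
```

At roughly 9.5 kWh per brick, a 35-MWh ring implies on the order of 3,700 lift-and-stack operations, which is why continuous automated stacking is the heart of the design.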

This joule-storing Jenga game can be complicated. To maintain a constant output, one block needs to be accelerating while another is decelerating. “That’s why we use six arms,” explains Robert Piconi, the company’s CEO and cofounder.

What’s more, the control system has to compensate for gusts of wind, the deflection of the crane as it picks up and sets down bricks, the elongation of the cable, pendulum effects, and more, he says.

Piconi sees several advantages over batteries. Advantage No. 1 is environmental. Instead of chemically reactive and difficult-to-recycle lithium-ion batteries, Energy Vault’s main expenditure is the bricks themselves, which can be made on-site using available dirt and waste material mixed with a new polymer from the Mexico-based cement giant Cemex.

Another advantage, according to Piconi, is the lower operating expense, which the company calculates to be about half that of a battery installation with equivalent storage capacity. Battery-storage facilities must continually replace cells as they degrade. But that’s not the case for Energy Vault’s infrastructure.

The startup is confident enough in its numbers to claim that 2021 will see the start of multiple commercial installations. Energy Vault raised US $110 million in 2019 to build the demonstration unit in Ticino and prepare for a “multicontinent build-out,” says Piconi.

Compared with Energy Vault’s effort, Gravitricity’s energy-storage scheme seems simple. Instead of a six-armed crane shuttling blocks, Gravitricity plans to pull one or just a few much heavier weights up and down abandoned, kilometer-deep mine shafts.

These great masses, each one between 500 and 5,000 metric tons, need only move at mere centimeters per second to produce megawatt-level outputs. Using a single weight lends itself to applications that need high power quickly and for a short duration, such as dealing with second-by-second fluctuations in the grid and maintaining grid frequency, explains Chris Yendell, Gravitricity’s project development manager. Multiple-weight systems would be more suited to storing more energy and generating for longer periods, he says. 
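The megawatt figures follow directly from P = mgv, the power delivered by a mass descending at constant speed. A minimal sketch, with an illustrative descent speed:

```python
G = 9.81  # m/s^2

def gravity_power_mw(mass_tonnes: float, speed_m_s: float) -> float:
    """Mechanical power from a mass descending at constant speed: P = m*g*v."""
    return mass_tonnes * 1_000 * G * speed_m_s / 1e6

# A 5,000-tonne weight creeping down at just 3 cm/s:
print(f"{gravity_power_mw(5_000, 0.03):.2f} MW")
```

The same formula shows the trade-off Yendell describes: a single weight delivers high power the instant it starts moving, but the length of the shaft bounds how long that power can be sustained.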

Proving the second-to-second response is a primary goal of a 250-kilowatt concept demonstrator that Gravitricity is building in Scotland. Its 50-metric-ton weight will be suspended 7 meters up on a lattice tower. Testing should start during the first quarter of 2021. “We expect to be able to achieve full generation within less than one second of receiving a signal,” says Yendell.

The company will also be developing sites for a full-scale prototype during 2021. “We are currently liaising with mine owners in Europe and in South Africa, [and we’re] certainly interested in the United States as well,” says Yendell. Such a full-scale system would then come on line in 2023.

Gravity Power and its competitor New Energy Let’s Go, which acquired its technology from the now bankrupt Heindl Energy, are also looking underground for energy storage, but they are more closely inspired by pumped hydro. Instead of storing energy using reservoirs at different elevations, they pump water underground to lift an extremely heavy piston. Allowing the piston to fall pushes water through a turbine to generate electricity.

“Reservoirs are the Achilles’ heel of pumped hydro,” says Jim Fiske, Gravity Power’s founder. “The whole purpose of a Gravity Power plant is to remove the need for reservoirs. [Our plants] allow us to put pumped-hydro-scale power and storage capacity in 3 to 5 acres [1 to 2 hectares] of flat land.”

Fiske estimates that a 400-megawatt plant with 16 hours of storage (or 6.4 gigawatt-hours of energy) would have a piston with a mass of more than 8 million metric tons. That might sound ludicrous, but it’s well within the lifting abilities of today’s pumps and the constraints of construction processes, he says.
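Under the same E = mgh accounting (taking the stated figures at face value and ignoring the water column and machinery losses), the storage capacity implies how far such a piston must travel:

```python
G = 9.81
energy_j = 6.4e9 * 3600   # 6.4 GWh expressed in joules
piston_kg = 8e9           # 8 million metric tons

# h = E / (m * g): piston travel needed to store that energy
stroke_m = energy_j / (piston_kg * G)
print(f"Implied piston travel: ~{stroke_m:.0f} m")
```

A stroke on the order of 300 meters explains why these designs go underground rather than up.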

While these companies expect such underground storage sites to be more economical than battery installations, they will still be expensive. But nations concerned about the changing climate may be willing to pay for storage options like these when they recognize the gravity of the crisis.

This article appears in the January 2021 print issue as “The Ups and Downs of Gravity Energy Storage.”

Lithium-Ion Battery Recycling Finally Takes Off in North America and Europe

Post Syndicated from Jean Kumagai original https://spectrum.ieee.org/energy/batteries-storage/lithiumion-battery-recycling-finally-takes-off-in-north-america-and-europe


Later this year, the Canadian firm Li-Cycle will begin constructing a US $175 million plant in Rochester, N.Y., on the grounds of what used to be the  Eastman Kodak complex. When completed, it will be the largest lithium-ion battery-recycling plant in North America.

The plant will have an eventual capacity of 25 metric kilotons of input material, recovering 95 percent or more of the cobalt, nickel, lithium, and other valuable elements through the company’s zero-wastewater, zero-emissions process. “We’ll be one of the largest domestic sources of nickel and lithium, as well as the only source of cobalt in the United States,” says Ajay Kochhar, Li-Cycle’s cofounder and CEO.

Founded in late 2016, the company is part of a booming industry focused on preventing tens of thousands of tons of lithium-ion batteries from entering landfills. Of the 180,000 metric tons of Li-ion batteries available for recycling worldwide in 2019, just a little over half were recycled. As lithium-ion battery production soars, so does interest in recycling. 

According to London-based Circular Energy Storage, a consultancy that tracks the lithium-ion battery-recycling market, about a hundred companies worldwide recycle lithium-ion batteries or plan to do so soon. The industry is concentrated in China and South Korea, where the vast majority of the batteries are also made, but there are several dozen recycling startups in North America and Europe. In addition to Li-Cycle, that list includes Stockholm-based Northvolt, which is jointly building an EV-battery-recycling plant with Norway’s Hydro, and Tesla alum J.B. Straubel’s Redwood Materials, which has a broader scope of recycling electronic waste. [See sidebar, “14 Li-ion Battery-Recycling Projects to Watch.”]

These startups aim to automate, streamline, and clean up what has been a labor-intensive, inefficient, and dirty process. Traditionally, recycling lithium-ion batteries has meant either burning them to recover some of the metals, or else grinding them up and treating the resulting “black mass” with solvents.

Battery recycling doesn’t just need to be cleaner—it also needs to be reliably profitable, says Jeff Spangenberger, director of the ReCell Center, a battery-recycling research collaboration supported by the U.S. Department of Energy. “Recycling batteries is better than if we mine new materials and throw the batteries away,” Spangenberger says. “But recycling companies have trouble making profits. We need to make it cost effective, so that people have an incentive to bring their batteries back.”

Li-Cycle will operate on a “spoke and hub” model, with the spokes handling the preliminary processing of old batteries and battery scrap, and the black mass feeding into a centrally located hub for final processing into battery-grade materials. The company’s first spoke is near Toronto, where Li-Cycle is headquartered; a second spoke just opened in Rochester, where the future hub is slated to open in 2022.

Li-Cycle engineers iteratively improved on traditional hydrometallurgical recycling, Kochhar says. For instance, rather than dismantling an EV battery pack into cells and discharging them, they separate the pack into larger modules and process them without discharging.

When it comes to battery chemistries, Li-Cycle is agnostic. Mainstream nickel manganese cobalt oxide batteries are just as easily recycled as ones based on lithium iron phosphate. “There is no uniformity in the industry,” Kochhar notes. “We don’t know the exact chemistry of the batteries, and we don’t need to know.” 

Just how many batteries will need to be recycled? In presentations, Kochhar refers to an “incoming tsunami” of spent lithium-ion batteries. With global sales of EVs expected to climb from 1.7 million in 2020 to 26 million in 2030, it’s easy to imagine we’ll soon be awash in spent batteries.

But lithium-ion batteries have long lives, says Hans Eric Melin, director of Circular Energy Storage. “Thirty percent of used EVs from the U.S. market are now in Russia, Ukraine, and Jordan, and the battery came along as a passenger on that journey,” Melin says. EV batteries can also be repurposed as stationary storage. “There’s still a lot of value in these [used] products,” he says. 

Melin estimates that the United States will have about 80 metric kilotons of Li-ion batteries to recycle in 2030, while Europe will have 132 metric kilotons. “Every [recycling] company is setting up a plant with thousands of tons of capacity, but you can’t recycle more material than you have,” he notes.

ReCell’s Spangenberger agrees that the need for increased battery-recycling capacity won’t be pressing for a while. That’s why his group’s research is focused on longer-term projects, including direct cathode recycling. Traditional recycling breaks the cathode down into a metal salt, and reforming the salt back into cathodes is expensive. ReCell plans to demonstrate a cost-effective method for recycling cathode powders this year, but it will be another five years before those processes will be ready for high-volume application.

Even if the battery tsunami hasn’t yet arrived, Kochhar says consumer electronics and EV manufacturers are interested in Li-Cycle’s services now. “Often, they’re pushing their suppliers to work with us, which has been great for us and really interesting to see,” Kochhar says.

“The researchers involved in recycling are very passionate about what they do—it’s a big technical challenge and they want to figure it out because it’s the right thing to do,” says Spangenberger. “But there’s also money to be made, and that’s the attraction.”

This article appears in the January 2021 print issue as “Momentum Builds for Lithium-Ion Battery Recycling.”

Perovskite Solar Out-Benches Rivals in 2021

Post Syndicated from Maria Gallucci original https://spectrum.ieee.org/tech-talk/energy/renewables/oxford-pv-sets-new-record-for-perovskite-solar-cells

In October, when the International Energy Agency pronounced solar energy the “cheapest electricity in history,” the claim rested largely on panel technologies with efficiencies in the 15 to 25 percent range. Imagine, say perovskite PV advocates, what new standards solar power could set with efficiencies of 30 percent or more. 

As of December, photovoltaic panel efficiency ratings north of 30 percent no longer seemed quite so theoretical either. 

December was the month the U.S. National Renewable Energy Laboratory certified U.K. startup Oxford PV’s new record: A silicon solar cell coated with a thin film of perovskite, NREL confirmed, can now convert 29.52 percent of incident solar energy into electricity. According to NREL’s own benchmarking, conventional silicon cells appear to have maxed out at 27.6 percent. 

Oxford PV, a ten-year-old spinoff from the University of Oxford, in England, says it expects to cross over into the 30s soon, too.

At its current pace, the company says it expects to be manufacturing cells with 33 percent efficiency within four years. One competitor in the perovskite-silicon PV race, the German research institute Helmholtz-Zentrum Berlin, has already achieved 29.15 percent efficiency with their perovskite cell and expects to be able to push its rating up to 32.4 percent. 

“We hope [this technology] will change the face of photovoltaics and accelerate the adoption of solar to address climate change,” Chris Case, Oxford PV’s chief technology officer, says of his company’s tandem perovskite-silicon cells. 

The tandem approach—coating an ordinary silicon wafer with a thin-film layer of perovskite material—enables Oxford PV to capture more available solar radiation. The perovskite layer absorbs shorter wavelengths, while the silicon layer absorbs longer wavelengths. To improve efficiency, the company expects to refine the cells’ coatings and antireflection layers and remove defects and impurities, Case says. 

Companies and universities worldwide are looking to perovskites as a potential future replacement for silicon, in the hopes of making renewable energy more affordable and accessible.

Early prototypes of perovskite solar cells were unstable and degraded quickly. But over the past decade, researchers have steadily improved the stability and durability of perovskite materials for both indoor and outdoor applications.

Oxford PV expects to start selling its perovskite-silicon cells to the public in early 2022, says CEO Frank Averdung. That would make it the first company to bring such a product to the global solar market. 

The startup is expanding its pilot plant in Germany into a 100-megawatt-capacity solar cell factory. Oxford PV began producing small volumes of cells there in 2017. The company has field-tested the technology for more than a year, Case says.

So far, he adds, data suggest the cells perform about the same as commercial silicon panels. “We see no degradation that’s any different than reference commercial panels that we’re comparing to,” Case says. 

He says it typically takes a couple of years for efficiency achievements in the lab to appear in factory-produced cells. Thus the first devices off Oxford PV’s manufacturing line will have an efficiency of 26 percent—higher than any other commercially available solar cell, Averdung says. The company expects residential rooftop solar projects using its technology will generate 20 percent more power using the same number of cells as existing installations.

Some solar researchers remain skeptical of perovskites, pointing to the material’s potential to degrade when exposed to moisture, harsh temperatures, salt spray, oxygen, and other elements. Case says Oxford PV’s cells have passed a battery of accelerated stress tests, both internally and by third parties.

“They will definitely be expected to last as long or longer than any of the best silicon modules that are out there,” he says of the cells.

Germany’s Energiewende, 20 Years Later

Post Syndicated from Vaclav Smil original https://spectrum.ieee.org/energy/renewables/germanys-energiewende-20-years-later

In 2000, Germany launched a deliberately targeted program to decarbonize its primary energy supply, a plan more ambitious than anything seen anywhere else. The policy, called the Energiewende, is rooted in Germany’s naturalistic and romantic tradition, reflected in the rise of the Green Party and, more recently, in public opposition to nuclear electricity generation. These attitudes are not shared by the country’s two large neighbors: France built the world’s leading nuclear industrial complex with hardly any opposition, and Poland is content burning its coal.

The policy worked through the government subsidization of renewable electricity generated with photovoltaic cells and wind turbines and by burning fuels produced by the fermentation of crops and agricultural waste. It was accelerated in 2011 when Japan’s nuclear disaster in Fukushima led the German government to order that all its nuclear power plants be shut down by 2022.

During the past two decades, the Energiewende has been praised as an innovative miracle that will inexorably lead to a completely green Germany and criticized as an expensive, poorly coordinated overreach. I will merely present the facts.

The initiative has been expensive, and it has made a major difference. In 2000, 6.6 percent of Germany’s electricity came from renewable sources; in 2019, the share reached 41.1 percent. In 2000, Germany had an installed capacity of 121 gigawatts and it generated 577 terawatt-hours, which is 54 percent as much as it theoretically could have done (that is, 54 percent was its capacity factor). In 2019, the country produced just 5 percent more (607 TWh), but its installed capacity was 80 percent higher (218.1 GW) because it now had two generating systems.

The new system, using intermittent power from wind and solar, accounted for 110 GW, nearly 50 percent of all installed capacity in 2019, but operated with a capacity factor of just 20 percent. (That included a mere 10 percent for solar, which is hardly surprising, given that large parts of the country are as cloudy as Seattle.) The old system stood alongside it, almost intact, retaining nearly 85 percent of net generating capacity in 2019. Germany needs to keep the old system in order to meet demand on cloudy and calm days and to produce nearly half of total demand. In consequence, the capacity factor of this sector is also low.
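Smil's capacity factors can be checked directly from the generation and capacity figures he cites:

```python
def capacity_factor(generation_twh: float, capacity_gw: float) -> float:
    """Actual annual output divided by theoretical maximum (8,760 hours)."""
    return generation_twh * 1_000 / (capacity_gw * 8_760)

print(f"2000: {capacity_factor(577, 121.0):.0%}")   # whole system
print(f"2019: {capacity_factor(607, 218.1):.0%}")   # whole system
```

The system-wide factor falling from 54 percent to about 32 percent restates the duplication problem: nearly twice the capacity, barely more output.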

It costs Germany a great deal to maintain such an excess of installed power. The average cost of electricity for German households has doubled since 2000. By 2019, households had to pay 34 U.S. cents per kilowatt-hour, compared to 22 cents per kilowatt-hour in France and 13 cents in the United States.

We can measure just how far the Energiewende has pushed Germany toward the ultimate goal of decarbonization. In 2000, the country derived nearly 84 percent of its total primary energy from fossil fuels; this share fell to about 78 percent in 2019. If continued, this rate of decline would leave fossil fuels still providing nearly 70 percent of the country’s primary energy supply in 2050.

Meanwhile, during the same 20-year period, the United States reduced the share of fossil fuels in its primary energy consumption from 85.7 percent to 80 percent, cutting almost exactly as much as Germany did. The conclusion is as surprising as it is indisputable. Without anything like the expensive, target-mandated Energiewende, the United States has decarbonized at least as fast as Germany, the supposed poster child of emerging greenness.

This article appears in the December 2020 print issue as “Energiewende, 20 Years Later.”

Goodbye, Centralized Power Grid. Hello, Autonomous Energy Grids

Post Syndicated from Benjamin Kroposki original https://spectrum.ieee.org/energy/the-smarter-grid/goodbye-centralized-power-grid-hello-autonomous-energy-grids

It’s great to have neighbors you can depend on, whether you’re borrowing a cup of sugar or you need someone to walk your dog while you’re out of town. In the western Colorado neighborhood of Basalt Vista, the residents are even closer than most: They share their electricity. But unlike your neighbor with the sugar, the residents of Basalt Vista may not even know when they’re being generous. The energy exchanges happen automatically, behind the scenes. What residents do know is how inexpensive, reliable, and renewable their electricity is.

The 27 smart homes in Basalt Vista, located about 290 kilometers west of Denver, are part of a pilot for an altogether new approach to the power grid. The entire neighborhood is interconnected through a microgrid that in turn connects to the main grid. Within each home, every smart appliance and energy resource—such as a storage battery, water heater, or solar photovoltaic (PV) system—is controlled to maximize energy efficiency.

On a larger scale, houses within the neighborhood can rapidly share power, creating reliable electricity for everyone—solar energy generated at one house can be used to charge the electric car next door. If a wildfire were to knock out power lines in the area, residents would still have electricity generated and stored within the neighborhood. From the spring through the fall, the PV systems can provide enough electricity and recharge the batteries for days at a time. In the dead of winter, with the heat running and snow on the solar panels, the backup power will last for about 2 hours.

In theory, power systems of any size could be covered in a patchwork of Basalt Vistas, layering regions and even an entire country in smart grids to automatically manage energy production and use across millions of controllable distributed energy resources. That concept underlies the autonomous energy grid (AEG), a vision for how the future of energy can be defined by resilience and efficiency.

The concept and core technology for the autonomous energy grid are being developed by our team at the National Renewable Energy Laboratory, in Golden, Colo. Since 2018, NREL and local utility Holy Cross Energy have been putting the concept into practice, starting with the construction of the first four houses in Basalt Vista. Each home has an 8-kilowatt rooftop PV system with lithium iron phosphate storage batteries, as well as energy-efficient, ­all-electric heating, cooling, water heaters, and appliances. All of those assets are monitored and can be controlled by the AEG. So far, average utility bills have been about 85 percent lower than typical electric bills for Colorado.

AEGs will create at least as many benefits for utilities as they do for customers. With AEGs monitoring distributed energy resources like rooftop solar and household storage batteries, a utility’s control room will become more like a highly automated air traffic control center. The result is that energy generated within an AEG is used more efficiently—it’s either consumed immediately or stored. Over time, the operator will have to invest less in building, operating, and maintaining larger generators—including costly “peaker” plants that are used only when demand is unusually high.

But can a network as large and complicated as a national power grid really operate in a decentralized, automated way? Our research says definitely yes. Projects like the one at Basalt Vista are helping us refine our ideas about AEGs and demonstrate them in real-world settings, and thus are playing a crucial role in defining the future of the power grid. Here’s how.

Today, grid operators must overcome two big problems. First, an ever-growing number of distributed energy resources are being connected to the grid. In the United States, for instance, residential solar installations are expected to grow approximately 8 percent per year through 2050, while household battery systems are estimated to hit almost 1.8 gigawatts by 2025, and around 18.7 million EVs could be on U.S. roads by 2030. With such anticipated growth, it’s possible that a decade from now, most U.S. electricity customers could have a handful of controllable distributed energy resources in their homes. By that math, Pacific Gas & Electric Co.’s 4 million customers in the San Francisco Bay Area could have a total of some 20 million grid-tied systems that the utility would need to manage in order to reliably and economically operate its grid. That’s in addition to maintaining the poles, wires, transformers, switches, and centralized power plants in its network.
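The arithmetic behind that estimate is straightforward; here it is as a quick sketch, where the five-devices-per-customer figure is simply our reading of "a handful":

```python
# Back-of-the-envelope device count for a utility the size of PG&E's
# Bay Area operation. The per-customer figure is an illustrative
# interpretation of "a handful", not a measured value.
customers = 4_000_000        # PG&E customers in the San Francisco Bay Area
devices_per_customer = 5     # e.g., rooftop PV, battery, EV, water heater, HVAC
total_devices = customers * devices_per_customer
print(total_devices)         # 20,000,000 grid-tied systems to manage
```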

Because of the soaring number of grid-tied devices, operators will no longer be able to use centralized control in the not-so-distant future. Over a geographically dispersed network, the communication latencies alone make a centralized system impractical. Instead, operators will have to move to a system of distributed optimization and control.
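To see why distributed optimization sidesteps the centralized bottleneck, consider dual ascent, one classic building block of the field: a single "price" signal is broadcast, each device responds using only its own local cost, and the coordinator needs only the aggregate mismatch, never per-device details. This toy sketch is our own illustration, with made-up cost coefficients, not NREL's actual algorithm:

```python
# Minimal sketch of decentralized coordination via dual ascent. Each
# device i minimizes cost_i * p_i^2 - price * p_i locally; the shared
# price nudges total output toward a demand target.

def dual_ascent(costs, demand, step=0.05, iters=2000):
    price = 0.0
    for _ in range(iters):
        # Local, independent responses to the broadcast price.
        powers = [price / (2 * a) for a in costs]
        # The price update needs only the aggregate mismatch.
        price += step * (demand - sum(powers))
    return powers

powers = dual_ascent(costs=[1.0, 0.5, 2.0], demand=7.0)
print(round(sum(powers), 3))  # total output converges to the 7.0 kW target
```

The key property is that no central computer ever solves for all devices at once, which is what makes the approach scale to millions of endpoints.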

The other problem operators face is that the grid is functioning under increasingly uncertain conditions, including fluctuating wind speeds, cloud cover, and unpredictable supply and demand. Therefore, the grid’s optimal state varies every second and must be robustly determined in real time.

A centrally controlled grid can’t handle this amount of coordination. That’s where AEGs come in. The idea of an autonomous energy grid grew out of NREL’s participation in a program called NODES (Network Optimized Distributed Energy Systems) sponsored by the U.S. Department of Energy’s vanguard energy agency, ARPA-E. Our lab’s contribution to NODES was to create algorithms for a model power grid made up entirely of distributed energy resources. Our algorithms had to factor in the limited computational capabilities of many customer devices (including rooftop solar, electric vehicles, batteries, smart-home appliances, and other loads) and yet still allow those devices to communicate and self-optimize. NODES, which wrapped up last year, was successful, but only as a framework for one “cell”—that is, one community controlled by one AEG.

Our group decided to carry the NODES idea further: to extend the model to an entire grid and its many component cells, allowing the cells to communicate with one another in a hierarchical system. The generation, storage, and loads are controlled using cellular building blocks in a distributed hierarchy that optimizes both local operation and operation of the cell when it’s interconnected to a larger grid.
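The layered structure described above can be sketched in a few lines: each cell nets out its own generation and load, and only the residual crosses the boundary to the parent cell. The class and field names here are our own illustration, not NREL's software:

```python
# A toy cell hierarchy: each cell balances locally, and only its net
# surplus or deficit is visible to the cell above it.

class Cell:
    def __init__(self, name, generation_kw=0.0, load_kw=0.0, children=None):
        self.name = name
        self.generation_kw = generation_kw
        self.load_kw = load_kw
        self.children = children or []

    def net_export_kw(self):
        """Local surplus (+) or deficit (-), including all child cells."""
        net = self.generation_kw - self.load_kw
        return net + sum(c.net_export_kw() for c in self.children)

homes = [Cell("home-1", generation_kw=8, load_kw=5),
         Cell("home-2", generation_kw=8, load_kw=10)]
neighborhood = Cell("neighborhood-cell", children=homes)
print(neighborhood.net_export_kw())  # 1.0 kW surplus offered to the larger grid
```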

In our model, each AEG consists of a network of energy generation, storage, and end-use technologies. In that sense, AEGs are very similar to microgrids, which are increasingly being deployed in the United States and elsewhere in the world. But an AEG is computationally more advanced, which allows its assets to cooperate in real time to match supply to demand on second-­by-second ­timescales. Similar to an autonomous vehicle, in which the vehicle makes local decisions about how to move around, an AEG acts as a self-driving power system, one that decides how and when to move energy. The result is that an AEG runs at high efficiency and can quickly bounce back from outages, or even avoid an outage altogether. A power grid that consists entirely of AEGs could deftly address challenges at every level, from individual customers up to the transmission system.

To develop the idea, we had to start somewhere. Basalt Vista presented an excellent opportunity to bring the AEG concept out of the lab and onto the grid. The neighborhood is designed to be net-zero energy, and it’s relatively close to NREL’s Energy Systems Integration Facility, where our group is based.

What’s more, Holy Cross Energy had been searching for a solution to manage the customer-owned energy resources and bulk generation in its system. In recent years, grid-connected, customer-owned resources have become much more affordable; Holy Cross’s grid has been seeing 10 to 15 new rooftop solar installations per week. By 2030, the utility plans to install a 150-megawatt solar-powered summer peaking system. Meanwhile, though, the utility had to deal with nonstandardized devices causing instabilities on its grid, occasional outages from severe weather and wildfires, variable generation from solar and wind energy, and an uncertain market for rooftop solar and other energy generated by its customers.

In short, what Holy Cross was facing looked very much like what other grid operators are confronting throughout the country and much of the world.

To develop the AEG concept, our group is working at the intersection of two fields: optimization theory and control theory. Optimization theory finds the best possible operating point, but classical formulations can ignore real-world conditions. Control algorithms, by contrast, work to keep a system stable under less-than-ideal conditions. Together, these two fields form the theoretical scaffolding for an AEG.

Of course, this theoretical scaffolding has to conform to the messy constraints of the real world. For example, the controllers that run the AEG algorithms aren’t supercomputers; they’re common computer platforms or embedded controllers at the grid edge, and they have to complete their calculations in well under 1 second. That translates to simpler code, and in this case, simpler is better. Meanwhile, though, the calculations must factor in latency in communications; in a distributed network, there will still be time delays as signals travel from one node to the next. Our algorithms must also be able to operate with sparse or missing data, and contend with variations created by equipment from different vendors.
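One simple way to tolerate a dropped or delayed measurement, for example, is to fall back on the last known value rather than stall the control loop. This is a hypothetical sketch of that pattern; the gain and variable names are our own:

```python
# Sketch of a grid-edge control step that tolerates missing data: when a
# reading is lost, the controller reuses the last cached value.

def control_step(setpoint, measurement, last_known, gain=0.5):
    """Return (output adjustment, value to cache for the next step)."""
    value = measurement if measurement is not None else last_known
    adjustment = gain * (setpoint - value)
    return adjustment, value

last = 1.0
for reading in [0.9, None, 1.2]:   # None models a dropped packet
    adj, last = control_step(setpoint=1.0, measurement=reading, last_known=last)
print(round(adj, 2))  # -0.1: the loop still reacts once the 1.2 reading arrives
```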

Even if we produce beautiful algorithms, their success still depends on the physics of the topology of power lines and the accuracy of the models of the devices. For a large commercial building, where you want to choose what to turn on and off, you need an accurate model of that building at the right timescales. If such a model doesn’t exist, you have to build one. Doing that becomes an order of magnitude more difficult when the optimizations include many buildings and many models.

We’ve discovered that defining an abstract model is often harder than optimizing the behavior of the real thing. So we’re “cutting out the middleman”: instead of building models, we use data and measurements to learn the optimal behavior directly. Using advanced data analytics and machine-learning techniques, we have dramatically reduced the time it takes to find optimal solutions.
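In its simplest form, "learning the behavior directly" can mean fitting a measured response curve instead of deriving it from physics. This toy example, with entirely made-up data, fits a building's measured load against its thermostat setpoint by ordinary least squares:

```python
# Sketch of learning a building's response from measurements alone,
# skipping a physics-based model. Data points are fabricated for
# illustration.

def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum(x * y for x, y in zip(xs, ys)) - n * mean_x * mean_y) / \
            (sum(x * x for x in xs) - n * mean_x ** 2)
    return slope, mean_y - slope * mean_x

setpoints_c = [20, 21, 22, 23, 24]          # thermostat settings tried
power_kw    = [12.0, 11.0, 10.0, 9.0, 8.0]  # measured building load
slope, intercept = fit_line(setpoints_c, power_kw)
print(slope)  # each degree of setback saves about 1 kW in this toy data
```

A real deployment would use far richer models, but the principle is the same: the data stands in for the model.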

To date, we’ve managed to overcome these hurdles at the small scale. NREL’s Energy Systems Integration Facility is an advanced test bed for vetting new models of energy integration and power-grid modernization. We’ve been able to test how practical our algorithms are before deploying them in the field; they may look good on paper, but if you’re trying to decide the fate of, say, a million devices in 1 second, you’d better be sure they really work. In our initial experiments with real power equipment—over 100 distributed resources at a time, totaling about half a megawatt—we were able to validate the AEG concepts by operating the systems across a range of scenarios.

Moving outside the laboratory, we first conducted a small demonstration in 2018 with the microgrid at the Stone Edge Farm Estate Vineyards and Winery in Sonoma, Calif., in partnership with the controller manufacturer Heila Technologies, in Somerville, Mass. The 785-kilowatt microgrid powers the 6.5-hectare farm through a combination of solar panels, fuel cells, and a microturbine that runs on natural gas and hydrogen, as well as storage in the form of batteries and hydrogen. An on-site electrolyzer feeds a hydrogen filling station for the farm’s three fuel-cell electric cars.

The microgrid is connected to the main grid but can also operate independently in “island” mode when needed. During wildfires in October 2017, for example, the main grid in and around Sonoma went down, and the farm was evacuated for 10 days, but the microgrid continued to run smoothly throughout. Our AEG demonstration at Stone Edge Farm connected 20 of the microgrid’s power assets, and we showed how those assets could function collectively as a virtual power plant in a resilient and efficient way. This experiment served as another proof of concept for the AEG.

Basalt Vista is taking the AEG concept even further. A net-zero-energy affordable housing district developed by Habitat for Humanity for schoolteachers and other local workers, it already had a lot going for it. The final results of this real-world experiment aren’t yet available, but seeing the first residents happily embrace this new frontier in energy has brought us another level of excitement about the future of AEGs.

We engineered our early demonstrations so that other utilities could safely and easily run trials of the AEG approach using standard interoperability protocols. Now our group is considering the additional challenges that AEGs will face when we scale up and when we transition from Holy Cross Energy’s rural deployment to the grid of a dense city. We’re now studying what this idea will look like throughout an energy system—within a wind farm, inside an office building, on a factory complex—and what effects it will have on power transmission and distribution. We’re also exploring the market mechanisms that would favor AEGs. It’s clear that broad collaboration across disciplines will be needed to push the concept forward.

Our group at NREL isn’t the only one looking at AEGs. Researchers at a number of leading universities have joined NREL in an effort to build the foundational science behind AEGs. Emiliano Dall’Anese of the University of Colorado, Boulder; Florian Dörfler of ETH Zurich; Ian A. Hiskens of the University of Michigan; Steven H. Low at Caltech’s Netlab; and Sean Meyn of the University of Florida are early contributors to the AEG vision and have participated in a series of workshops on the topic. These collaborations are already producing dozens of technical papers each year that continue to build out the foundations for AEGs.

Within NREL, the circle of AEG contributors is also expanding, and we’re looking at how the concept can apply to other forms of generation. One example is wind energy, where an AEG-enabled future means that control techniques similar to the ones deployed at Stone Edge Farm and Basalt Vista will autonomously manage large wind farms. By taking a large problem and breaking it into smaller cells, the AEG algorithms drastically reduce the time needed for all the turbines to come to a consensus on the wind’s direction and respond by turning to face into the wind, which can boost the total energy production. Over the course of a year, that could mean millions of dollars of added revenue for the operator.
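The cell-level agreement described for wind direction is, at its core, a distributed average-consensus problem: each turbine repeatedly nudges its estimate toward its neighbors', and the whole farm converges on the average without any central collector. The ring topology and mixing weight below are illustrative choices, not NREL's actual controller:

```python
# Sketch of gossip-style average consensus among four turbines in a
# ring, each starting from a noisy local wind-direction estimate.

def consensus_step(values, neighbors, weight=0.3):
    """Each node moves toward the average of its neighbors' estimates."""
    return [v + weight * sum(values[j] - v for j in neighbors[i])
            for i, v in enumerate(values)]

estimates = [268.0, 272.0, 269.0, 271.0]          # degrees
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
for _ in range(50):
    estimates = consensus_step(estimates, ring)
print(round(estimates[0], 1))  # all nodes converge near the 270-degree average
```

Because each step uses only neighbor-to-neighbor messages, splitting a large farm into cells shrinks the communication radius and, with it, the time to agreement.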

In our research, we’re also considering how to optimally integrate the variable supply of wind energy into a bigger cell that includes other energy domains. For example, if a building’s energy management system has access to wind forecasts, it could shift its load in real time to match the available wind power. During an afternoon lull in wind speed, the building’s air-conditioning could be automatically adjusted upward a few degrees to reduce demand, with additional power drawn from battery storage.
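A rule-of-thumb version of that forecast-driven adjustment might look like the following sketch. All of the thresholds, savings figures, and function names here are hypothetical, chosen only to make the logic concrete:

```python
# Sketch of forecast-driven load shifting: when forecast wind falls short
# of the building load, raise the cooling setpoint slightly and cover the
# rest from battery storage. Numbers are illustrative.

def plan_hour(forecast_wind_kw, base_load_kw, setpoint_c, battery_kwh):
    shortfall = base_load_kw - forecast_wind_kw
    if shortfall <= 0:
        return setpoint_c, 0.0               # wind covers everything
    new_setpoint = setpoint_c + 2            # modest comfort adjustment
    remaining = max(shortfall - 1.5, 0.0)    # assume ~1.5 kW saved by setback
    from_battery = min(remaining, battery_kwh)
    return new_setpoint, from_battery

setpoint, battery_draw = plan_hour(forecast_wind_kw=4.0, base_load_kw=7.0,
                                   setpoint_c=23, battery_kwh=10.0)
print(setpoint, battery_draw)  # 25 1.5
```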

We’re also looking at communications infrastructure. To achieve the fast response required by an AEG cell, communications can’t be clogged by simultaneous connections to millions of devices. In a new NREL partnership with the wireless company Anterix, of Woodland Park, N.J., we’re demonstrating how a dedicated LTE network for device communications would operate.

Reliable operation, of course, assumes that communication channels are protected from cyberthreats and physical threats. The possibility of such attacks is guiding the conversation in power systems toward resilience and reliability. We believe that AEGs should minimize the impact of both deliberate attacks and natural disasters and make the grid more resilient. That’s because the status of every grid-connected asset in every AEG cell will be checked on a second-by-second basis. Any sudden and unexpected change in status would trigger an appropriate response. In most cases, no drastic action would be required because the change is within the normal variability of operations. But if a major fault is the cause, the cell could automatically isolate itself, partially or entirely, from the rest of the network until the problem is resolved. Exploring the effects of AEGs on grid resilience is an ongoing priority at NREL.
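The second-by-second status check described above reduces to a simple pattern: small deviations are treated as normal operating variability, while a large, sudden change triggers isolation of the cell. This sketch uses an invented threshold and asset names purely for illustration:

```python
# Sketch of a per-cell status check: deviations inside the normal band
# are ignored; a large jump flags a possible fault and isolates the cell.

def check_cell(readings_kw, expected_kw, normal_band_kw=2.0):
    """Return 'isolate' if any asset deviates far outside its normal range."""
    for asset, reading in readings_kw.items():
        if abs(reading - expected_kw[asset]) > normal_band_kw:
            return "isolate"          # open the cell's tie to the wider grid
    return "normal"

expected = {"pv": 6.0, "battery": -1.0, "feeder": 5.0}
status = check_cell({"pv": 6.3, "battery": -0.8, "feeder": 12.0}, expected)
print(status)  # the feeder's 7-kW jump looks like a fault, so: isolate
```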

For now, AEGs will show up first in neighborhoods like Basalt Vista and in other small-scale settings, such as hospitals and college campuses. Eventually, though, larger deployments should take place. In Hawaii, for instance, 350,000 customers have installed rooftop solar. With the state’s mandate for 100 percent renewable power by 2045, the amount of distributed solar could triple. The utility, Hawaiian Electric Company, anticipates having to connect about 750,000 solar inverters, as well as battery systems, electric vehicles, and other distributed energy resources. Accordingly, HECO is looking to push autonomous control down to the local level as much as possible, to minimize the need for communication between the control center and each device. A completely autonomous grid will take some time to implement. In particular, we’ll need to conduct extensive testing and demonstrations to show its feasibility with HECO’s current communications and control infrastructures. But eventually the AEG concept will allow the utility to prioritize controls and focus on critical operations rather than trying to manage individual devices.

We think it will be another decade before AEG rollouts become commonplace, but an AEG market may arrive sooner. This past year we’ve made progress in commercializing the AEG algorithms, and with support from DOE’s Solar Energy Technologies Office, NREL is now collaborating with Siemens on distributed control techniques. Likewise, NREL and the power management company Eaton Corp. have partnered to use the AEG work for autonomous, electrified transportation.

NREL has meanwhile explored how to sustain a distributed energy market using blockchain-based transactions—an option for so-called transactive energy markets. That project, in partnership with BlockCypher, successfully showed that a neighborhood like Basalt Vista could seamlessly monetize its energy sharing.

As we progress to a future of 100 percent clean energy, with a high concentration of inverter-based energy technologies, we will need a solution like AEGs to continue to operate the grid in a reliable, economic, and resilient way. Rather than looking to central power plants to meet their electricity needs, individual customers will increasingly be able to rely on one another. In a grid built on AEGs, being neighborly will be automatic.

This article appears in the December 2020 print issue as “Good Grids Make Good Neighbors.”

About the Author

Benjamin Kroposki is an IEEE Fellow and director of the Power Systems Engineering Center at the National Renewable Energy Laboratory, in Golden, Colo. Andrey Bernstein is NREL’s group manager of Energy Systems Control and Optimization, Jennifer King is a research engineer at NREL’s National Wind Technology Center, and Fei Ding is a senior research engineer at NREL.


We think it will be another decade before AEG rollouts become commonplace, but an AEG market may arrive sooner. This past year we’ve made progress in commercializing the AEG algorithms, and with support from DOE’s Solar Energy Technologies Office, NREL is now collaborating with Siemens on distributed control techniques. Likewise, NREL and the power management company Eaton Corp. have partnered to use the AEG work for autonomous, electrified transportation.

NREL has meanwhile explored how to sustain a distributed energy market using blockchain-based transactions—an option for so-called transactive energy markets. That project, in partnership with BlockCypher, successfully showed that a neighborhood like Basalt Vista could seamlessly monetize its energy sharing.

As we progress to a future of 100 percent clean energy, with a high concentration of inverter-based energy technologies, we will need a solution like AEGs to continue to operate the grid in a reliable, economic, and resilient way. Rather than looking to central power plants to meet their electricity needs, individual customers will increasingly be able to rely on one another. In a grid built on AEGs, being neighborly will be automatic.

This article appears in the December 2020 print issue as “Good Grids Make Good Neighbors.”

About the Author

Benjamin Kroposki is an IEEE Fellow and director of the Power Systems Engineering Center at the National Renewable Energy Laboratory, in Golden, Colo. Andrey Bernstein is NREL’s group manager of Energy Systems Control and Optimization, Jennifer King is a research engineer at NREL’s National Wind Technology Center, and Fei Ding is a senior research engineer at NREL.

Iron Powder Passes First Industrial Test as Renewable, Carbon Dioxide-Free Fuel

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/energywise/energy/renewables/iron-powder-passes-first-industrial-test-as-renewable-co2free-fuel

Simple question: What if we could curb this whole fossil fuel-fed climate change nightmare and burn something else as an energy source instead? As a bonus, what if that something else is one of the most common elements on Earth?

Simple answer: Let’s burn iron.

While setting fire to an iron ingot is probably more trouble than it’s worth, fine iron powder mixed with air is highly combustible. When you burn this mixture, you’re oxidizing the iron. Whereas a carbon fuel oxidizes into CO2, an iron fuel oxidizes into Fe2O3, which is just rust. The nice thing about rust is that it’s a solid that can be captured post-combustion. And that’s the only byproduct of the entire business: in goes the iron powder, and out comes energy in the form of heat and rust powder. Iron has an energy density of about 11.3 kWh/L, which is better than gasoline’s, although its specific energy is a relatively poor 1.4 kWh/kg. That means that for a given amount of energy, iron powder will take up a little bit less space than gasoline but will be almost ten times heavier.
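Those two numbers are easy to check with a back-of-the-envelope comparison. The iron figures are from the article; the gasoline figures (roughly 9.5 kWh/L and 12.9 kWh/kg) are approximate reference values, not from the article.

```python
# Compare volume and mass of fuel needed to store a fixed amount of energy.
ENERGY_KWH = 100.0

fuels = {
    "iron powder": {"kwh_per_l": 11.3, "kwh_per_kg": 1.4},   # figures from the article
    "gasoline":    {"kwh_per_l": 9.5,  "kwh_per_kg": 12.9},  # approximate reference values
}

for name, f in fuels.items():
    liters = ENERGY_KWH / f["kwh_per_l"]
    kilograms = ENERGY_KWH / f["kwh_per_kg"]
    print(f"{name}: {liters:.1f} L, {kilograms:.1f} kg")
```

For 100 kWh, iron needs slightly less volume (about 8.9 L versus 10.5 L) but roughly nine times the mass (about 71 kg versus 8 kg), which is why the article rules it out for cars but not for ships and factories.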

It might not be suitable for powering your car, in other words. It probably won’t heat your house either. But it could be ideal for industry, which is where it’s being tested right now.

Researchers from TU Eindhoven have been developing iron powder as a practical fuel for the past several years, and last month they installed an iron powder heating system at a brewery in the Netherlands, which is turning all that stored up energy into beer. Since electricity can’t efficiently produce the kind of heat required for many industrial applications (brewing included), iron powder is a viable zero-carbon option, with only rust left over.

So what happens to all that rust? This is where things get clever, because the iron isn’t just a fuel that’s consumed—it’s energy storage that can be recharged. And to recharge it, you take all that Fe2O3, strip out the oxygen, and turn it back into Fe, ready to be burned again. It’s not easy to do this, but much of the energy and work that it takes to pry those Os away from the Fes gets returned to you when you burn the Fe the next time. The idea is that you can use the same iron over and over again, discharging it and recharging it just like you would a battery.

To maintain the zero-carbon nature of the iron fuel, the recharging process has to be zero-carbon as well. There are a variety of different ways of using electricity to turn rust back into iron, and the TU/e researchers are exploring three different technologies based on hot hydrogen reduction (which turns iron oxide and hydrogen into iron and water), as they described to us in an email:

Mesh Belt Furnace: In the mesh belt furnace the iron oxide is transported by a conveyor belt through a furnace in which hydrogen is added at 800-1000°C. The iron oxide is reduced to iron, which sticks together because of the heat, resulting in a layer of iron. This can then be ground up to obtain iron powder.
Fluidized Bed Reactor: This is a conventional reactor type, but its use in hydrogen reduction of iron oxide is new. In the fluidized bed reactor the reaction is carried out at lower temperatures around 600°C, avoiding sticking, but taking longer.
Entrained Flow Reactor: The entrained flow reactor is an attempt to implement flash ironmaking technology. This method performs the reaction at high temperatures, 1100-1400°C, by blowing the iron oxide through a reaction chamber together with the hydrogen flow to avoid sticking. This might be a good solution, but it is a new technology and has yet to be proven.
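All three reactor types run the same overall reaction, so a quick stoichiometry sketch shows roughly how much hydrogen the recharge step consumes. This is an illustrative back-of-the-envelope calculation, not a figure from the TU/e researchers.

```python
# Overall reduction reaction in all three reactor types:
#   Fe2O3 + 3 H2 -> 2 Fe + 3 H2O
M_FE = 55.845   # molar mass of iron, g/mol
M_H2 = 2.016    # molar mass of hydrogen gas, g/mol

# 3 mol of H2 regenerates 2 mol of Fe, so per unit mass of iron:
h2_per_kg_fe = (3 * M_H2) / (2 * M_FE)
print(f"{h2_per_kg_fe:.3f} kg H2 per kg of iron regenerated")
print(f"{1000 * h2_per_kg_fe:.0f} kg H2 per tonne of iron")
```

Roughly 54 kg of hydrogen per tonne of iron, which is why the carbon footprint of the cycle hinges on where the hydrogen and the furnace heat come from.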

Both production of the hydrogen and the heat necessary to run the furnace or the reactors require energy, of course, but it’s grid energy that can come from renewable sources. 

If renewing the iron fuel requires hydrogen, an obvious question is why not just use hydrogen as a zero-carbon fuel in the first place? The problem with hydrogen is that as an energy storage medium, it’s super annoying to deal with, since storing useful amounts of it generally involves high pressure and extreme cold. In a localized industrial setting (like you’d have in your rust reduction plant) this isn’t as big of a deal, but once you start trying to distribute it, it becomes a real headache. Iron powder, on the other hand, is safe to handle, stores indefinitely, and can be easily moved with existing bulk carriers like rail.

Which is why its future looks to be in applications where weight is not a primary concern and collection of the rust is feasible. In addition to industrial heat generation (which will eventually include retrofitting coal-fired power plants to burn iron powder instead), the TU/e researchers are exploring whether iron powder could be used as fuel for large cargo ships, which are extraordinarily dirty carbon emitters that are also designed to carry a lot of weight. 

Philip de Goey, a professor of combustion technology at TU/e, told us that he hopes to be able to deploy 10-MW iron powder high-temperature heat systems for industry within the next four years, with 10 years to the first coal power plant conversion. There are still challenges, de Goey tells us: “the technology needs refinement and development, the market for metal powders needs to be scaled up, and metal powders have to be part of the future energy system and regarded as a safe and clean alternative.” De Goey’s view is that iron powder has a significant but well-constrained role in energy storage, transport, and production that complements other zero-carbon sources like hydrogen. For a zero-carbon energy future, de Goey says, “there is no winner or loser—we need them all.”

Going Carbon-Negative—Starting with Vodka

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/energy/environment/going-carbonnegativestarting-with-vodka

Steven Cherry Hi this is Steven Cherry for Radio Spectrum.

In 2014, two Google engineers, writing in the pages of IEEE Spectrum, noted that “if all power plants and industrial facilities switch over to zero-carbon energy sources right now, we’ll still be left with a ruinous amount of CO2 in the atmosphere. It would take centuries for atmospheric levels to return to normal, which means centuries of warming and instability.” Citing the work of climatologist James Hansen, they continued: “To bring levels down below the safety threshold, Hansen’s models show that we must not only cease emitting CO2 as soon as possible but also actively remove the gas from the air and store the carbon in a stable form.”

One alternative is to grab carbon dioxide as it’s produced and stuff it underground or elsewhere. People have been talking about CCS, which alternatively stands for carbon capture and storage, or carbon capture and sequestration, for well over a decade. But you can look around, for example at Exxon-Mobil’s website, and see how much progress hasn’t been made.

In fact, in 2015, a bunch of mostly Canadian energy producers decided on a different route. They went to the XPRIZE people and funded what came to be called the Carbon XPRIZE to, as a Spectrum article at the time said, turn “CO2 molecules into products with higher added value.”

In 2018, the XPRIZE announced 10 finalists, who divvied up a $5 million incremental prize. The prize timeline called for five teams each to begin an operational phase in two locations, one in Wyoming and the other in Alberta, culminating in a $20 million grand prize. And then the coronavirus hit, rebooting the prize timeline.

One of the more unlikely finalists emerged from the hipsterish Bushwick neighborhood of Brooklyn, N.Y. Their solution to climate change: vodka. Yes, vodka. The finalist, which calls itself the Air Company, takes carbon dioxide that has been liquefied and distills it into ethanol, and then fine-tunes it into vodka. The resulting product is, the company claims, not only carbon-neutral but carbon-negative.

The scientific half of the founding duo of the Air Company is Stafford Sheehan—Staff, as he’s known. He had two startups under his belt by the time he graduated from Boston College. He started his next venture while in graduate school at Yale. He’s a prolific researcher, but he’s determined to find commercially viable ways to reduce the carbon in the air, and he’s my guest today, via Skype.

Staff, welcome to the podcast.

Stafford Sheehan Thanks very much for having me, Steven.

Steven Cherry Staff, I’m sure people have been teasing you that maybe vodka doesn’t solve the problem of climate change entirely, but it can make us forget it for a while. But in serious engineering terms, the Air Company process seems a remarkable advance. Talk us through it. It starts with liquefied carbon dioxide.

Stafford Sheehan Yeah, happy to. So, we use liquefied carbon dioxide because we source it offsite in in Bushwick. But really, we can just feed any sort of carbon dioxide into our system. We combine the carbon dioxide with water by first splitting the water into hydrogen and oxygen. Water is H2O, so we use what’s called an electrolyzer to split water into hydrogen gas and oxygen gas and then combine the hydrogen together with carbon dioxide in a reactor over proprietary catalysts that I and my coworkers developed over the course of the last several years. And that produces a mixture of ethanol and water that we then distill to make a very, very clean and very, very pure vodka.
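The overall chemistry Sheehan describes (the catalyst itself is proprietary) reduces to a simple mass balance, sketched here for reference; the reaction shown is the standard overall equation for CO2 hydrogenation to ethanol, not a company-supplied figure.

```python
# Overall mass balance for CO2 hydrogenation to ethanol:
#   2 CO2 + 6 H2 -> C2H5OH + 3 H2O
M_CO2 = 44.01    # molar mass of CO2, g/mol
M_H2 = 2.016     # molar mass of H2, g/mol
M_ETOH = 46.07   # molar mass of ethanol, g/mol

co2_per_kg_etoh = 2 * M_CO2 / M_ETOH  # kg CO2 consumed per kg ethanol
h2_per_kg_etoh = 6 * M_H2 / M_ETOH    # kg H2 consumed per kg ethanol
print(f"{co2_per_kg_etoh:.2f} kg CO2 and {h2_per_kg_etoh:.2f} kg H2 per kg of ethanol")
```

So each kilogram of ethanol locks up close to 1.9 kilograms of CO2, before any lifecycle debits are counted.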

Steven Cherry Your claim that the product is carbon-negative is based on a life-cycle analysis. The calculation starts with an initial minus of the amount of carbon you take out of the atmosphere. And then we start adding back the carbon and carbon equivalents needed to get it into a bottle and onto the shelf of a hipster bar. That first step, where your supplier takes carbon out of the atmosphere, puts it into liquefied form, and then delivers it to your distillery, puts about 10 percent of that carbon back into the atmosphere.

Stafford Sheehan Yeah, 10 to 20 percent. When a tonne of carbon dioxide arrives in liquid form at our Bushwick facility, we assume that it took 200 kilograms of CO2 emitted—not only for the capture of the carbon dioxide; most of the carbon dioxide that we get actually comes from fuel ethanol fermentation. So we take the carbon dioxide emissions of the existing ethanol industry and we’re turning that into a higher-purity ethanol. But it’s captured from those facilities and then it’s liquefied and transported to our Bushwick facility. And if you integrate the lifecycle carbon emissions of all of the equipment, all the steel, all of the transportation, every part of that process, then you get a maximum life-cycle CO2 emissions for the carbon dioxide of 200 kilograms per tonne. So we still have eight hundred kilograms to play with at our facility.

Steven Cherry So another 10 percent gets eaten up by that electrolysis process.

Stafford Sheehan Yeah. The electrolysis process is highly dependent on what sort of electricity you use to power it with. We use a company called Clean Choice. And we work very closely with a number of solar and wind deployers in New York State to make sure that all the electricity that’s used at our facility is solar or wind. And if you use wind energy, that’s the most carbon-friendly energy source that we have available there. Right now, the mix that we have, which is certified through Con Edison, is actually very heavily wind and a little bit of solar. But that was the lowest lifecycle-intensity electricity that we could get. So we get … it’s actually a little bit less than 10 percent of that is consumed by electrolysis. So the electrolysis is actually quite green as long as you power it with a very low-carbon source of electricity.

Steven Cherry And the distilling process, even though it’s solar-based, takes maybe another 13 percent or so?

Stafford Sheehan It’s in that ballpark. The distilling process is powered by an electric steam boiler. So we use the same electricity that we use to split water, to heat our water for the distillation system. So we have a fully electric distillery process. You could say that we’ve electrified vodka distilling.

Steven Cherry There’s presumably a bit more by way of carbon equivalents when it comes to the bottles the vodka comes in, shipping it to customers, and so on, but that’s true of any vodka that ends up on that shelf of any bar, and those also have a carbon-emitting farming process—whether it’s potatoes or sugar beets or wheat or whatever—that your process sidesteps.

Stafford Sheehan Yes. And I think one thing that’s really important is this electrification aspect, electrifying all of our distillery processes. For example, if you’re boiling water using a natural gas boiler, your carbon emissions are going to be much, much higher as compared to boiling water using an electric steam boiler that’s powered with wind energy.
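For reference, the rough carbon budget discussed in this conversation can be tallied per tonne of CO2 delivered. The percentage shares below are the approximate figures mentioned above, not exact company numbers.

```python
# Rough per-tonne carbon budget, using the approximate shares from the conversation.
captured_kg = 1000  # one tonne of CO2 delivered to the distillery

debits = {
    "capture, liquefaction, transport": 0.20,  # upper end of the 10-20% figure
    "electrolysis (wind/solar power)":  0.10,  # "a little bit less than 10 percent"
    "electric distillation":            0.13,  # "in that ballpark"
}

emitted_kg = sum(share * captured_kg for share in debits.values())
net_removed_kg = captured_kg - emitted_kg
print(f"re-emitted: {emitted_kg:.0f} kg, net removed: {net_removed_kg:.0f} kg per tonne")
```

Even at the pessimistic end of each range, well over half the captured tonne stays out of the atmosphere, which is the basis of the carbon-negative claim.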

Steven Cherry It seems like if you just poured the vodka down the drain or into the East River, you would be benefiting the environment. I mean, would it be possible to do that on an industrial scale as a form of carbon capture and storage that really works?

Stafford Sheehan Yeah. I don’t think you’d want to pour good alcohol down the drain in any capacity just because the alcohol that we make can offset the use of fossil fuel alcohol.

So by putting the alcohol that we make—this carbon-negative alcohol that we make—into the market, that means you have to make less fossil alcohol. And I’m including corn ethanol in that because so many fossil fuels go into its production. But that makes it so that our indirect CO2 utilization is very, very high, because we’re offsetting a very carbon-intensive product.

Steven Cherry That’s interesting. I was thinking that maybe you could earn carbon credits and sell them for more than you might make with having a, you know, another pricey competitor to Grey Goose and Ketel One.

Stafford Sheehan The carbon credit system is still very young, especially in the US.

We also … our technology still has a ways to scale between our Bushwick facility—which is, I would say, a micro distillery—and a real bona fide industrial process, which … we’re working on that right now.

Steven Cherry Speaking of which, though, it is rather pricey stuff at this point, isn’t it? Did I read $65 or $70 a bottle?

Stafford Sheehan Yeah, it’s pricey not only because you pay a premium for our electricity, for renewable electricity, but we also pay a premium for carbon dioxide that only emits 10 to 20 percent of the carbon intensity of its actual weight, so we pay a lot more for the inputs than is typical—sustainability costs money—and also we’re building these systems, they’re R&D systems, and so they’re more costly to operate on an R&D scale, on kind of our pilot-plant scale. As we scale up, the cost will go down. But at the scales we’re at right now, we need to be able to sell a premium product to be able to have a viable business. Now, on top of that, the product has also won a lot of awards that put it in that price category. It’s won three gold medals in the three most prestigious blind taste test competitions. And it’s won a lot of other spirits and design industry awards that enable us to get that sort of cost for it.

Steven Cherry I’m eager to do my own blind taste testing. Vodka is typically 80 proof, meaning it’s 60 percent water. You and your co-founder went on an epic search for just the right water.

Stafford Sheehan That we did. We tested over … probably over one hundred and thirty different types of water. We tried to find which one was best to make vodka with using the very, very highly pure ethanol that comes out of our process. And it’s a very nuanced thing. Water, by changing things like the mineral content, the pH, by changing the very, very small trace impurities in the water—that in many cases are good for you—can really change the way the water feels in your mouth and the way that it tastes. And adding alcohol to water just really amplifies that. It lowers the boiling point and it makes it more volatile so that it feels different in your mouth. And so different types of water have a different mouth feel; they have a different taste. We did a lot of research on water to be able to find the right one to mix with our vodka.

Steven Cherry Did you end up where you started with New York water?

Stafford Sheehan Yes. In a sense, we’re very, very close to where we started.

Steven Cherry I guess we have to add your vodka to the list that New Yorkers would claim includes New York’s bagels and New York’s pizza as uniquely good, because of their water.

Stafford Sheehan Bagels, pizza, vodka … hand sanitizer …

Steven Cherry It’s a well-balanced diet. So where do things stand with the XPRIZE? I gather you finally made it to Canada for this operational round, but take us through the journey getting there.

Stafford Sheehan So I initially entered the XPRIZE when it was soliciting for very first submissions—I believe it was 2016—and going through the different stages, at the end of 2017 we had very rigorous due diligence on our prototype scale. And we passed through that and got good marks and continuously progressed through to the finals where we are now. Now, of course, coronavirus kind of threw both our team and many other teams for a loop, delaying deployment, especially for us: We’re the only American team deploying in Canada. The other four teams that are deploying at the ACCTC [Alberta Carbon Conversion Technology Centre] are all Canadian teams. So being the only international team in a time of a global pandemic that, you know, essentially halted all international travel—and a lot of international commerce—put some substantial barriers in our way. But over the course of the last seven months or so, we’ve been able to get back on our feet. And I’m currently sitting in quarantine in Mississauga, Ontario, getting ready for a factory-acceptance test. That’s scheduled to happen right at the same time as quarantine ends. So we’re gonna be at the end of this month landing our skid in Alberta for the finals and then in November, going through diligence and everything else to prove out its operation and then operating it through the rest of the year.

Steven Cherry I understand that you weren’t one of the original 10 finalists named in 2018.

Stafford Sheehan No, we were not. We were the runner-up. There was a runner-up for each track—the Wyoming track and the Alberta track. And ultimately, there were teams that dropped out or merged for reasons within their own businesses. We were given the opportunity to rejoin the competition. We decided to take it because it was a good proving ground for our next step of scale, and it provided a lot of infrastructure that allowed us to do that at a reasonable cost—at a reasonable cost for us and at a reasonable cost in terms of our time.

Steven Cherry Staff, you were previously a co-founder of a startup called Catalytic Innovations. In fact, you made Forbes magazine’s 30-under-30 list in 2016 because of it. What was it? And is it? And how did it lead to Air Company and vodka?

Stafford Sheehan For sure. That was a company that I spun out of Yale University, along with a professor at Yale, Paul Anastas. We initially targeted making new catalysts for the fuel cell and electrolysis industries, focusing around the water oxidation reaction. So to turn carbon dioxide—or to produce fuel in general using renewable electricity—there are three major things that need to happen. You need to have a very efficient renewable energy source. Trees, for example, use the sun. That’s photosynthesis. You have to be able to oxidize water into oxygen gas. And that’s why trees breathe out oxygen. And you have to be able to use the protons and electrons that come out of water oxidation to either reduce carbon dioxide or, through some other method, produce a fuel. So I studied all three of those when I was in graduate school, and upon graduating, I spun out Catalytic Innovations, which focused on the water oxidation reaction and commercializing materials that more efficiently produced oxygen for all of the man-made processes, such as metal refining, that do that chemistry. And that company found its niche in corrosion—anti-corrosion and corrosion protection—because whenever you’re producing oxygen, be it for renewable fuels or be it to produce zinc or to do a handful of different electrorefining and electrowinning processes in the metal industry, you always have a very serious corrosion problem. We did a lot of work in that industry at Catalytic Innovations, and they still continue to do work there to this day.

Steven Cherry You and your current co-founder, Greg Constantine, are a classic match—a technologist, in this case an electrochemist and a marketer. If this were a movie, you would have met in a bar drinking vodka. And I understand you actually did meet at a bar. Were you drinking vodka?

Stafford Sheehan No, we were actually drinking whiskey. I actually wasn’t a big fan of vodka pre-Air Company, but it was the product that really gave us the best value proposition, where really, really clean, highly pure ethanol is most important. So I’ve always been more of a whiskey man myself, and Greg and I met over whiskey in Israel when we were on a trip that was for Forbes. You know, they sent us out there because we were both part of their 30-Under-30 list, and we became really good friends out there. And then several months later, fast-forward, we started Air Company.

Steven Cherry Air Company’s charter makes it look like you would like to go far beyond vodka when it comes to finding useful things to do with CO2. In the very near term, you turned to using your alcohol in a way that contributes to our safety.

Stafford Sheehan Yeah. So we had always planned the Air Company, not the Air Vodka Company. We had always planned to go into several different verticals with the ultra-high-purity ethanol that we create. And spirits is one of the places where you can realize the value proposition of a very clean and highly pure alcohol very readily—spirits; fragrance is another one. But down the list a little bit is sanitizer, specifically hand sanitizer. And when coronavirus hit, we actually pivoted all of our technology because there was a really, really major shortage of sanitizer in New York City. A lot of my friends from graduate school that had kind of gone more on the medical track were telling me that the hospitals that they worked in, in New York, didn’t have any hand sanitizer. And when the hospitals—for the nurses and doctors—ran out of hand sanitizer, that means you really have a shortage. And so we pivoted all of our technology to produce sanitizer in March. And for three months after that, we gave it away. We donated it to these hospitals, to the fire department, to NYPD, and to other organizations in the city that needed it most.

Yeah, the hand sanitizer, I like to think, is also a very premium product. You can’t realize the benefits of the very, very clean and pure ethanol that we use for it as readily as you can with the vodka, since you’re not tasting it. But we did have to go through all of the facility registrations and that sort of thing to make the sanitizer, because it is classified as a drug. So at our pilot plant in Bushwick, which was a converted warehouse, I used to tell people in March that I always knew my future was going to be sitting in a dark warehouse in Bushwick making drugs. But, you know, I never thought that it was actually going to become a reality.

Steven Cherry That was in the short term. By now, you can get sanitizer in every supermarket and Home Depot. What are the longer-term prospects for going beyond vodka?

Stafford Sheehan Longer term, we’re looking at commodity chemicals, even going on to fuel. So longer term, we’re looking at the other verticals where we can take advantage of the high-purity value proposition of our ethanol—like pharmaceuticals, as a chemical feedstock, things like that. But then as we scale, we want to be able to make renewable fuel as well from this, and renewable chemicals. Ultimately, we want to get to world scale with this technology, but we need to take the appropriate steps to get there. And what we’re doing now are the stepping-stones to scaling it.

Steven Cherry It seems like if you could locate the distilling operation right at the ethanol plant, you would just be making more ethanol for them with their waste product and avoid a lot of shipping and so forth. You would just become a value add to their industry.

Stafford Sheehan That is something that we hope to do in the long term. Our current skids are fairly small scale, so we couldn’t take a massive amount of CO2 with them. But as we scale, we do hope to get there gradually, when we get to larger scales—talking about several barrels per day rather than liters per hour, which is the scale we’re at now.

There’s a lot of stuff you can turn CO2 into. One of the prime examples is calcium carbonate: the carbonate in CaCO3 is essentially mineralized CO2. You can very easily convert carbon dioxide into things like that for building materials: poured concrete, different parts of bricks, and things like that. There are a lot of different ways to mineralize CO2 as well. You can inject it into the ground; that will also turn it into carbon-based minerals. Beyond that, as far as more complex chemical conversion goes, the list is almost endless. You can make plastics. You can make pharmaceutical materials. You can make all sorts of crazy stuff from CO2. Almost any of the base chemicals that have carbon in them can come from CO2. And in a way, they do come from CO2, because all the petrochemicals that we mine from the ground, they’re from photosynthesis that happened over the course of the last two billion years.

Have you ever seen the movie Forrest Gump? There’s a part in that where Bubba, Gump’s buddy in the Vietnam War, talks about all the things you can do with shrimp. And it kind of goes on and on and on. But I could say the same about CO2. You can make plastic. You can make clothes. You can make sneakers. You can make alcohol. You can make any sort of carbon-based chemical: ethylene, carbon monoxide, formic acid, methanol, ethanol. The list goes on. Just about any carbon-based chemical you can think of, you can make from CO2.

Steven Cherry Would it be possible to pull carbon dioxide out of a plastic itself and thereby solve two problems at once?

Stafford Sheehan Yeah, you could take plastic and capture the CO2 that’s emitted when you either incinerate it or gasify it. That is a strategy that’s used in certain places: gasification of waste, municipal waste. It doesn’t give you CO2, but it actually gives you something that you can do chemistry with a little more easily. It gives you a syngas—a mixture of carbon monoxide and hydrogen. So there are a lot of different strategies that you can use to convert CO2 into things better for the planet than global warming.

Steven Cherry If hydrogen is a byproduct of that, you have a ready use for it.

Stafford Sheehan Yeah, exactly, that is one of the many places where we could source feedstock materials for our process. Our process is versatile and that’s one of the big advantages to it.

If we get hydrogen, as a byproduct of chloralkali production, for example, we can use that instead of having to source the electrolyzer. If our CO2 comes from direct air capture, we can use that. And that means we can place our plants pretty much wherever there’s literally air, water and sunlight. As far as the products that come out, liquid products that are made from CO2 have a big advantage in that they can be transported and they’re not as volatile, obviously, as the gases.

Steven Cherry Well, Staff, it’s a remarkable story, one that certainly earns you that XPRIZE finalist berth. We wish you great luck with it, though it seems your good fortune is self-made and assured, in any event, to the benefit of the planet. Thank you for joining us today.

Stafford Sheehan Thanks very much for having me, Steven.

Steven Cherry We’ve been speaking with Staff Sheehan, co-founder of the Air Company, a Brooklyn startup working to actively undo the toxic effects of global warming.

This interview was recorded October 2, 2020. Our thanks to Miles of Gotham Podcast Studio for our audio engineering; our music is by Chad Crouch.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers.

For Radio Spectrum, I’m Steven Cherry.

 

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.

We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

 

The Lithium-Ion Battery With Built-In Fire Suppression

Post Syndicated from Dexter Johnson original https://spectrum.ieee.org/tech-talk/energy/batteries-storage/liion-batteries-more-efficient-fireproof

If there are superstars in battery research, you would be safe in identifying at least one of them as Yi Cui, a scientist at Stanford University, whose research group over the years has introduced some key breakthroughs in battery technology.

Now Cui and his research team, in collaboration with SLAC National Accelerator Laboratory, have given lithium-ion batteries some exciting new capabilities, based on a new polymer material used in the batteries’ current collectors. The researchers claim this new current-collector design increases the efficiency of Li-ion batteries and reduces the risk of fires associated with them.

Current collectors are thin metal foils that distribute current to and from electrodes in batteries. Typically these metal foils are made from copper. Cui and his team redesigned these current collectors so that they are still largely made from copper but are now surrounded by a polymer.

The Stanford team claim in their research, published in the journal Nature Energy, that the polymer makes the current collector 80 percent lighter, leading to an increase in energy density of 16 to 26 percent. This is a significant boost over the average yearly increase in energy density for Li-ion batteries, which has been stuck at about 5 percent a year seemingly forever.

This method of lightening the batteries is a novel approach to boosting energy density. Over the years we have seen many attempts to increase energy density by enlarging the surface area of electrodes through the use of new electrode materials, such as nanostructured silicon in place of graphite. But while increased surface area may increase charge capacity, energy density is calculated as the total energy divided by the total weight of the battery.

The Stanford team calculated the 16 to 26 percent increase in the gravimetric energy density of their batteries by replacing the commercial copper and aluminum current collectors (8.06 mg/cm² for copper and 5.0 mg/cm² for aluminum) with their polymer-based current collectors (1.54 mg/cm² for the polymer-copper material and 1.05 mg/cm² for the polymer-aluminum).

“Current collectors don’t contribute to the total energy but contribute to the total weight of battery,” explained Yusheng Ye, a researcher at Stanford and co-author of this research. “That’s why we call current collectors ‘dead weight’ in batteries, in contrast to ‘active weight’ of electrode materials.”
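To see how shedding that “dead weight” translates into energy density, here is a minimal back-of-the-envelope sketch using the areal masses above; the figure for the rest of the cell stack (electrodes, separator, electrolyte) is an assumed round number, not one from the paper:

```python
# Areal masses in mg/cm^2, from the Nature Energy paper
cu_foil, al_foil = 8.06, 5.0    # conventional copper and aluminum foils
cu_poly, al_poly = 1.54, 1.05   # polymer-based replacements

# Assumed areal mass of everything else in the cell stack
# (electrodes, separator, electrolyte) -- a hypothetical round number.
rest_of_cell = 40.0

old_mass = rest_of_cell + cu_foil + al_foil
new_mass = rest_of_cell + cu_poly + al_poly

# The total energy is unchanged, so gravimetric energy density (Wh/kg)
# scales inversely with total mass.
gain = old_mass / new_mass - 1
print(f"Energy-density gain: {gain:.1%}")  # ~25% with these assumptions
```

With this assumed stack mass the gain lands inside the paper’s 16 to 26 percent range; heavier stacks give smaller relative gains, which is where the spread comes from.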

By reducing the weight of the current collector, the energy density can be increased even when the total energy of the battery is almost unchanged. Despite the increased energy density offered by this research, it may not entirely alleviate the so-called “range anxiety” associated with electric vehicles, in which people fear running out of power before reaching the next charging location. While the press release claims that this work will extend the range of electric vehicles, Ye noted that the specific-energy improvement in this latest development applies to the battery itself. As a result, it is only likely to yield around a 10 percent improvement in the range of an electric vehicle.

“In order to improve the range from 400 miles to 600 miles, for example, more engineering work would need to be done; the active parts of the batteries will need to be addressed together with our ultra-light current collectors,” said Ye.

Beyond improved energy density, the polymer-based current collectors are expected to help reduce the fires associated with Li-ion batteries. Of course, traditional copper current collectors don’t contribute to battery combustion on their own. The combustion issues in Li-ion batteries arise when the electrolyte and separator are operated outside their recommended temperature and voltage windows.

“One of the key innovations in our novel current collector is that we are able to embed fire retardant inside without sacrificing the energy density and mechanical strength of the current collector,” said Ye. “Whenever the battery has combustion issues, our current collector will instantaneously release the fire retardant and extinguish the fire. Such function cannot be achieved with traditional copper or aluminum current collector.”

The researchers have patented the technology and are in discussions with battery manufacturers about commercialization. Cui and his team have already worked out some of the costs associated with adopting the polymer, and they appear attractive. According to Ye, the cost of the polymer-composite current collector is around $1.30 per square meter, a bit lower than the cost of copper foil, which is around $1.40 per square meter. With these encouraging numbers, Ye added: “We are expecting industry to adopt this technology within the next few years.”

Why Does the U.S. Have Three Electrical Grids?

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/energy/renewables/why-does-the-us-have-three-electrical-grids

Steven Cherry Hi, this is Steven Cherry for Radio Spectrum.

If you look at lists of the 100 greatest inventions of all time, electricity figures prominently. Once you get past some key enablers that can’t really be called inventions—fire, money, the wheel, calendars, the alphabet—you find things like light bulbs, the automobile, refrigeration, radios, the telegraph and telephone, airplanes, computers and the Internet. Antibiotics and the modern hospital would be impossible without refrigeration. The vaccines we’re all waiting for depend on electricity in a hundred different ways.

It’s the key to modern life as we know it, and yet, universal, reliable service remains an unsolved problem. By one estimate, a billion people still do without it. Even in a modern city like Mumbai, generators are commonplace, because of an uncertain electrical grid. This year, California once again saw rolling blackouts, and with our contemporary climate producing heat waves that can stretch from the Pacific Coast to the Rocky Mountains, they won’t be the last.

Electricity is hard to store and hard to move, and electrical grids are complex, creaky, and expensive to change. In the early 2010s, Europe began merging its distinct grids into a continent-wide supergrid, an algorithm-based project that IEEE Spectrum wrote about in 2014. The need for a continent-wide supergrid in the U.S. has been almost as great, and by 2018 the planning of one was pretty far along—until it hit a roadblock that, two years later, still stymies any progress. The problem is not the technology, and not even the cost. The problem is political. That’s the conclusion of an extensively reported investigation jointly conducted by The Atlantic magazine and InvestigateWest, a watchdog nonprofit that was founded in 2009 after one of Seattle’s daily newspapers stopped publishing. The resulting article, with the headline “Who Killed the Supergrid?,” was written by Peter Fairley, who has been a longtime contributing editor for IEEE Spectrum and is my guest today. He joins us via Skype.

Peter, welcome to the podcast.

Peter Fairley It’s great to be here, Steven.

Steven Cherry Peter, you wrote that 2014 article in Spectrum about the Pan-European Hybrid Electricity Market Integration Algorithm, which you say was needed to tie together separate fiefdoms. Maybe you can tell us what was bad about the separate fiefdoms that had served Europe nobly for a century.

Peter Fairley Thanks for the question, Steven. That story was about a pretty wonky development that nevertheless was very significant. Europe, over the last century, has amalgamated its power systems to the point where the European grid now exchanges electricity, literally across the continent, north, south, east, west. But until fairly recently, there have been sort of different power markets operating within it. So even though the different regions are all physically interconnected, there’s a limit to how much power can actually flow all the way from Spain up to Central Europe. And so there are these individual regional markets that handle keeping the power supply and demand in balance, and putting prices on electricity. And that algorithm basically made a big step toward integrating them all, so that you’d have one big, more competitive, open market and the ability, for example, if you have spare wind power in one area, to then make use of that someplace a thousand kilometers away.

Steven Cherry The U.S. also has separate fiefdoms. Specifically, there are three that barely interact at all. What are they? And why can’t they share power?

Peter Fairley Now, in this case, when we’re talking about the U.S. fiefdoms, we’re talking about big zones that are physically divided. You have the Eastern—what’s called the Eastern Interconnection—which is a huge zone of synchronous AC power that’s basically most of North America east of the Rockies. You have the Western Interconnection, which is most of North America west of the Rockies. And then you have Texas, which has its own separate grid.

Steven Cherry And why can’t they share power?

Peter Fairley Everything within those separate zones is synched up. So you’ve got your 60 hertz AC wave; 60 times a second the AC power flow is changing direction. And all of the generators, all of the power consumption within each zone is doing that synchronously. But the east is doing it on its own. The West is on a different phase. Same for Texas.

Now you can trickle some power across those divides, across what are called “seams” that separate those, using DC power converters—basically, sort of giant substations with the world’s largest electronic devices—which are taking some AC power from one zone, turning it into DC power, and then producing a synthetic AC wave, to put that power into another zone. So to give you a sense of just what the scale of the transfers is and how small it is, the East and the West interconnects have a total of about 950 gigawatts of power-generating capacity together. And they can share a little over one gigawatt of electricity.

Steven Cherry So barely one-tenth of one percent. There are enormous financial benefits and reliability benefits to uniting the three. Let’s start with reliability.

Peter Fairley Historically, when grids started out, you would have literally a power system for one neighborhood and a separate power system for another. And then ultimately, over the last century, they have amalgamated. Cities connected with each other and then states connected with each other. Now we have these huge interconnections. And reliability has been one of the big drivers for that because you can imagine a situation where if you if you’re in city X and your biggest power generator goes offline, you know, burn out or whatever. If you’re interconnected with your neighbor, they probably have some spare generating capacity and they can help you out. They can keep the system from going down.

So similarly, if you could interconnect the three big power systems in North America, they could support each other. So, for example, if you have a major blackout or a major weather event like we saw last month—there was this massive heatwave in the West, and much of the West was struggling to keep the lights on. It wasn’t just California. If they were more strongly interconnected with Texas or the Eastern Interconnect, they could have leaned on those neighbors for extra power supply.

Steven Cherry Yeah, your article imagines, for example, the sun rising in the West during a heatwave sending power east; the sun setting in the Midwest, wind farms could send power westward. What about the financial benefits of tying together these three interconnects? Are they substantial? And are they enough to pay for the work that would be needed to unify them into a supergrid?

Peter Fairley The financial benefits are substantial and they would pay for themselves. And there’s really two reasons for that. One is as old as our systems, and that is, if you interconnect your power grids, then all of the generators in the amalgamated system can, in theory, they can all serve that total load. And what that means is they’re all competing against each other. And power plants that are inefficient are more likely to be driven out of the market or to operate less frequently. And so that the whole system becomes more efficient, more cost-effective, and prices tend to go down. You see that kind of savings when you look at interconnecting the big grids in North America. Consumers benefit—not necessarily all the power generators, right? There you get more winners and losers. And so that’s the old part of transmission economics.

What’s new is the increasing reliance on renewable energy and particularly variable renewable energy supplies like wind and solar. Their production tends to be more kind of bunchy, where you have days when there’s no wind and you have days when you’ve got so much wind that the local system can barely handle it. So there are a number of reasons why renewable energy really benefits economically when it’s in a larger system. You just get better utilization of the same installations.

Steven Cherry And that’s all true, even though sending power 1000 miles or 3000 miles? You lose a fair amount of that generation, don’t you?

Peter Fairley It’s less than people imagine, especially if you’re using the latest high-voltage direct-current power transmission equipment. DC power lines transmit power more efficiently than AC lines do, and the physics are actually pretty straightforward: an AC current will ride on the outside of a power cable, whereas a DC current will use the entire cross-section of the metal. So you get less resistance overall, less heating, and less loss. And the power electronics that you need on either side of a long power line like that are also becoming much more efficient. So you’re talking about losses of a couple of percent on lines that, for example in China, span over 3,000 kilometers.
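The “current rides on the outside” effect Fairley describes is the skin effect, and its scale at grid frequency is easy to estimate with the standard textbook formula; the copper resistivity here is a typical handbook value, not a figure from the interview:

```python
import math

# Skin depth: delta = sqrt(2 * rho / (omega * mu)), the depth at which
# AC current density falls to 1/e of its value at the conductor surface.
rho = 1.68e-8             # resistivity of copper, ohm-meters (handbook value)
mu = 4 * math.pi * 1e-7   # permeability of copper, ~mu_0, in H/m
omega = 2 * math.pi * 60  # angular frequency at 60 Hz

delta = math.sqrt(2 * rho / (omega * mu))
print(f"Skin depth in copper at 60 Hz: {delta * 1000:.1f} mm")  # ~8.4 mm
```

Conductor strands much thicker than this carry little current in their cores under AC, which is why a DC line gets more use out of the same cross-section of metal.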

Steven Cherry The reliability benefits, the financial benefits, the way a supergrid would be an important step for helping us move off of our largely carbon-based sources of power—we know all this in part because in the mid-2010s a study was made of the feasibility—including the financial feasibility—of unifying the U.S. in one single supergrid. Tell us about the Interconnections Seams Study.

Peter Fairley So the Interconnection Seams Study [Seams] was one of a suite of studies that got started in 2016 at the National Renewable Energy Laboratory in Colorado, which is one of the national labs operated by the U.S. Department of Energy. And the premise of the Seams study was that the electronic converters sitting between the east and the west grids were getting old; they were built largely in the 70s; they are going to start to fail and need to be replaced.

And the people at NREL were saying, this is an opportunity. Let’s think—and the power operators along the seam were thinking the same thing—we’re gonna have to replace these things. Let’s study our strategic options rather than have them go out of service and just automatically replace them with similar equipment. So what they posited was, let’s look at some longer DC connections to tie the East and the West together—and maybe some bigger ones. And let’s see if they pay for themselves. Let’s see if they have the kind of transformative effects that one would imagine that they would, just based on the theory. So they set up a big simulation modeling effort and they started running the numbers…

Now, of course, this got started in 2016 under President Obama. And it continued to 2017 and 2018 under a very different president. And basically, they affirmed that tying these grids, together with long DC lines, was a great idea, that it would pay for itself, that it would make much better use of renewable energy. But it also showed that it would accelerate the shutdown of coal-fired power. And that got them in some hot water with the new masters at the Department of Energy.

Steven Cherry By 2018 the study was largely completed, and researchers began to share its conclusions with other energy experts and policymakers. There was a meeting in Iowa you describe, for example, where there was a lot of excitement over the Seams study. You write that things took a dramatic turn at one such gathering in Lawrence, Kansas.

Peter Fairley Yes. So the study was complete as far as the researchers were concerned. And they were working on their final task under their contract from the Department of Energy, which was to write and submit a journal article in this case. They were targeting an IEEE journal. And they, as you say, had started making some presentations. The second one was in August, in Kansas, and there’s a DOE official—a political appointee—who’s sitting in the audience and she does not like what she’s hearing. She, while the talk is going on, pulls out her cell phone, writes an email to DOE headquarters, and throws a red flag in the air.

Steven Cherry The drama moved up the political chain to a pretty high perch.

Peter Fairley According to an email from one of the researchers that I obtained and is presented in the InvestigateWest version of this article, it went all the way to the current secretary of energy, Daniel Brouillette, and perhaps to the then-Secretary of Energy, former Texas Governor [Rick] Perry.

Steven Cherry And the problem you say in that article was essentially the U.S. administration’s connections to—devotion to—the coal industry.

Peter Fairley Right. You’ve got a president who has made a lot of noise, both during his election campaign and since then, about clean, beautiful coal. He is committed to trying to stop the bleeding in the U.S. coal industry, to slow down or stop the ongoing mothballing of coal-fired power plants. His Secretary of Energy, Rick Perry, is doing everything he can to deliver on Trump’s promises. And along comes this study that says we can have a cleaner, more efficient power system with less coal. So it just ran completely counter to the political narrative of the day.

Steven Cherry You said earlier the financial benefits to consumers are unequivocal. But in the case of the energy providers, there would be winners and losers, and the losers would largely come from the coal industry.

Peter Fairley I would just add one thing to that, and that is and this depends on really the different systems. You’re looking at the different conditions and scenarios and assumptions. But, you know, in a scenario where you have more renewable energy, there are also going to be impacts on natural gas. And the oil and gas industry is definitely also a major political backer of the Trump administration.

Steven Cherry The irony is that the grid is moving off of coal anyway, and to some extent, oil and even natural gas, isn’t it?

Peter Fairley Definitely oil. It’s just a very expensive and inefficient way to produce power. So we’ve been shutting that down for a long time. There’s very little left. We are shutting down coal at a rapid rate in spite of every effort to save it. Natural gas is growing. So natural gas has really been—even more so than renewables—the beneficiary of the coal shutdown. Natural gas is very cheap in the U.S. thanks to widespread fracking. And so it’s coming on strong and it’s still growing.

Steven Cherry Where is the Seams study now?

Peter Fairley The Seams study is sitting at the National Renewable Energy Lab. Its leaders, under pressure from the political appointees at DOE, have kept it under wraps. It appears that there may have been some additional work done on the study since it got held up in 2018, but we don’t know what the nature of that work was. So it’s just kind of missing in action at this point.

My sources tell me that there is an effort underway at the lab to get it out. And I think the reason for that is that they’ve taken a real hit in terms of the morale of their staff. The NREL Seams study is not the only one that’s being held up; in fact, it’s one of dozens, according to my follow-up reporting. And, you know, NREL researchers are feeling pretty hard done by, and I think the management is trying to show its staff that it has some scientific integrity.

But I think it’s important to note that there are other political barriers to building a supergrid. It might be a no brainer on paper, but in addition to the pushback from the fossil-fuel industry that we’re seeing with Seams, there are other political crosscurrents that have long stood in the way of long-distance transmission in the U.S. For example—and this is a huge one—that, in the U.S., most states have their own public utility commission that has to approve new power lines. And when you’re looking at the kind of lines that Seams contemplated, or that would be part of a supergrid, you’re talking about long lines that have to span, in some cases, a dozen states. And so you need to get approval from each of those states to transit— to send power from point A to point X. And that is a huge challenge. There’s a wonderful book that really explores that side of things called Superpower [Simon & Schuster, 2019] by the Wall Street Journal’s Russell Gold.

Steven Cherry The politics that led to the suppression of the publication of the Seams study go beyond Seams itself, don’t they? There are consequences, for example, at the Office of Energy Efficiency and Renewable Energy.

Peter Fairley Absolutely. Seams is one of several dozen studies that I know of right now that are held up and they go way beyond transmission. They get into energy efficiency upgrades to low-income housing, prices for solar power… So, for example—and I believe this hasn’t been reported yet; I’m working on it—the Department of Energy has hitherto published annual reports on renewable energy technologies like wind and solar. And, in those, they provide the latest update on how much it costs to build a solar power plant, for example. And they also update their goals for the technology. Those annual reports have now been canceled. They will be every other year, if not less frequent. That’s an example of politics getting in the way because the cost savings from delaying those reports are not great, but the potential impact on the market is. There are many studies, not just those performed by the Department of Energy that will use those official price numbers in their simulations. And so if you delay updating those prices for something like solar, where the prices are coming down rapidly, you are making renewable energy look less competitive.

Steven Cherry And even beyond the Department of Energy, the EPA, for example, has censored itself on the topic of climate change, removing information and databases from its own Web sites.

Peter Fairley That’s right. The way I think of it is, when you tell a lie, it begets other lies. You have to tell more lies to cover your initial lie and to maintain the fiction. And I see the same thing at work here with the Trump administration. When the president says that climate change is a hoax, when the president says that coal is a clean source of power, it then falls to the people below him on the political food chain to somehow make the world fit his fantastical and anti-science vision. And so you just get this proliferation of information control in a hopeless bid to try and bend the facts to somehow make the great leader look reasonable and rational.

Steven Cherry You say even e-mails related to the Seams study have disappeared, something you found in your Freedom of Information Act requests. What about the national labs themselves? Historically, they have been almost academic research organizations or at least a home for unfettered academic freedom style research.

Peter Fairley That’s the idea. There has been this presumption, or practice, under past administrations, that the national labs had some independence. And that’s not to say that there’s never been political oversight or influence on the labs. Certainly, the Department of Energy decides what research it’s going to fund at the labs, and so that in itself shapes the research landscape. But there was always this idea that you fund the study and then it’s up to the labs to do the best work they can and to publish the results. The idea that you are deep-sixing studies that are simply politically inconvenient, or altering the content of studies to fit the politics: that’s new. That’s what people at the lab say is new under the Trump administration. It violates DOE’s own scientific integrity policies, and in some cases, for example with the Lawrence Berkeley National Laboratory, it violates the lab’s scientific integrity policy and the contract language under which the University of California system operates that lab for the Department of Energy. So, yeah, the independence of the national labs is under threat today. And there are absolutely concerns among scientists that precedents are being set that could affect how the labs operate, even if, let’s say, President Trump is voted out of office in November.

Steven Cherry Along those lines, what do you think the future of grid unification is?

Peter Fairley Well, Steven, I’ve been writing about climate and energy for over 20 years now, and I would have lost my mind if I wasn’t a hopeful person. So I still feel optimistic about our ability to recognize the huge challenge that climate change poses and to change the way we live and to change our energy system. And so I do think that we will see longer power lines helping regions share energy in the future. I am hopeful about that. It’s just it makes too much sense to leave that on the shelf.

Steven Cherry Well, Peter, it’s an amazing investigation of the sort that reminds us why the press is important enough to democracy to be called the fourth estate. Thanks for publishing this work and for joining us today.

Peter Fairley Thank you so much. Steven. It’s been a pleasure.

Steven Cherry We’ve been speaking with Peter Fairley, a journalist who focuses on energy and the environment, about his researching and reporting on the suspension of work on a potential unification of the U.S. energy grid.

This interview was recorded September 11, 2020. Our audio engineering was by Gotham Podcast Studio; our music is by Chad Crouch.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers.

For Radio Spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.

We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

Airbus Plans Hydrogen-Powered Carbon-Neutral Planes by 2035. Can They Work?

Post Syndicated from Ned Potter original https://spectrum.ieee.org/energywise/energy/environment/airbus-plans-hydrogenpowered-carbonneutral-planes-by-2035-can-they-work

Imagine that it is December 2035 – about 15 years from now – and you are taking an international flight in order to be at home with family for the holidays. Airports and planes have not changed much since your childhood: Your flight is late as usual. But the Airbus jet at your gate is different. It is a giant V-shaped blended-wing aircraft, vaguely reminiscent of a boomerang. The taper of the wings is so gentle that one cannot really say where the fuselage ends and the wings begin. The plane is a big lifting body, with room for you and 200 fellow passengers.

One other important thing you notice before you board: The plane is venting vapor, a lot of it, even on a crisp morning. That, you know, is because the plane is fueled by liquid hydrogen, cooled to -253 degrees C, which boils off despite the plane’s extensive insulation. This is part of the vision Airbus, the French-based aviation giant, presents as part of its effort against global climate change.

Airbus is now betting heavily on hydrogen as a fuel of the future. It has just unveiled early plans for three “ZEROe” airliners, each using liquid hydrogen to take the place of today’s hydrocarbon-based jet-fuel compounds.

“It is really our intent in 15 years to have an entry into service of a hydrogen-powered airliner,” says Amanda Simpson, vice president for research and technology at Airbus Americas. Hydrogen, she says, “has the most energy per unit mass of…well, anything. And because it burns with oxygen to [yield] water, it is entirely environmentally friendly.”

But is a hydrogen future realistic for commercial aviation? Is it practical from an engineering, environmental, or economic standpoint? Certainly, people at Airbus say they need to decarbonize, and research on battery technology for electric planes has been disappointing. Meanwhile, China, currently the world’s largest producer of carbon dioxide, pledged last month to become carbon neutral by 2060. And 175 countries have signed on to the 2015 Paris agreement to fight global warming.

According to the European Commission, aviation alone accounts for between 2 and 3 percent of the world’s greenhouse gas emissions – about as much as entire countries like Japan or Germany.

Two of the planes Airbus has shown in artist renditions would barely get a second glance at today’s airports. One—with a capacity of 120-200 passengers, a cruising speed of about 830 kilometers per hour (kph), and a range of more than 3,500 km—looks like a conventional twin-engine jet. The second looks like almost any other turboprop you’ve ever seen; it’s a short-haul plane that can carry up to 100 passengers with a range of at least 1,800 km and a cruising speed of 612 kph. Each plane would get electric power from fuel cells. The company said it won’t have most other specifications for several years; it said to think of the images as “concepts,” meant to generate ideas for future planes.

The third rendering, a blended-wing aircraft, showed some of the potential—and potential challenges—of hydrogen as a fuel. Airbus said the plane might have a cruising speed of 830 kph and a range of 3,500 km, without releasing carbon into the air. Liquid hydrogen contains nearly three times as much energy in each kilogram as today’s jet fuel. On the other hand, it is far less dense, so a tank holding the same amount of energy takes up roughly four times the volume. So, a plane would need either to give up cabin space or have more inside volume. A blended wing, with its bulbous shape, Airbus says, may solve the problem. And as a bonus, blended wings have shown they can be 20 percent more fuel-efficient than today’s tube-and-wing aircraft.
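The mass-versus-volume trade-off is easy to check with a back-of-envelope calculation. A minimal sketch, assuming commonly cited textbook property values (these are approximations, not Airbus’s numbers):

```python
# Back-of-envelope check of the hydrogen vs. jet fuel trade-off,
# using commonly cited approximate property values (not Airbus figures).
LH2_ENERGY_MJ_PER_KG = 120.0     # lower heating value of hydrogen
JET_A_ENERGY_MJ_PER_KG = 43.0    # lower heating value of Jet A
LH2_DENSITY_KG_PER_M3 = 71.0     # liquid hydrogen at -253 degrees C
JET_A_DENSITY_KG_PER_M3 = 800.0  # typical jet fuel density

# Energy per unit mass: hydrogen's big advantage.
mass_ratio = LH2_ENERGY_MJ_PER_KG / JET_A_ENERGY_MJ_PER_KG

# Tank volume needed to hold one megajoule of energy, for each fuel.
lh2_m3_per_mj = 1.0 / (LH2_DENSITY_KG_PER_M3 * LH2_ENERGY_MJ_PER_KG)
jet_m3_per_mj = 1.0 / (JET_A_DENSITY_KG_PER_M3 * JET_A_ENERGY_MJ_PER_KG)
volume_ratio = lh2_m3_per_mj / jet_m3_per_mj

print(f"Energy per kilogram: liquid hydrogen ~{mass_ratio:.1f}x jet fuel")
print(f"Tank volume per unit energy: liquid hydrogen ~{volume_ratio:.1f}x jet fuel")
```

With these values, hydrogen wins on mass by nearly a factor of three but loses on tank volume by roughly a factor of four—which is why the airframe has to change shape.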

“My first reaction is: Let’s do it. Let’s make it happen,” says Daniel Esposito, a chemical engineer at Columbia University whose research covers hydrogen production. He says hydrogen can be handled safely and has a minimal carbon footprint if it’s made by electrolysis (splitting water into hydrogen and oxygen) using renewable electricity. Most industrial hydrogen today is extracted from natural gas, which negates some of the carbon benefit, but the International Energy Agency says that with renewable electricity capacity quickly growing (it passed coal as a power source in 2019), the cost of carbon-free hydrogen could drop.

“It can be done,” he says. “It’s just a matter of the political will and the will of companies like Airbus and Boeing to take the lead on this.”
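Esposito’s cost argument can be roughed out. A minimal sketch, assuming a typical electrolyzer efficiency and a sample renewable power price (both are illustrative assumptions, not figures from the article or the IEA):

```python
# Rough cost of electrolytic ("green") hydrogen per kilogram.
# Efficiency and electricity price are illustrative assumptions, not IEA data.
H2_HHV_KWH_PER_KG = 39.4        # thermodynamic minimum energy to split water
ELECTROLYZER_EFFICIENCY = 0.70  # assumed system efficiency
POWER_PRICE_USD_PER_KWH = 0.03  # assumed cheap renewable electricity

kwh_per_kg = H2_HHV_KWH_PER_KG / ELECTROLYZER_EFFICIENCY
cost_per_kg = kwh_per_kg * POWER_PRICE_USD_PER_KWH
print(f"~{kwh_per_kg:.0f} kWh of electricity and ~${cost_per_kg:.2f} per kg of H2")
```

The electricity price dominates the result, which is why falling renewable costs matter so much to the economics of green hydrogen.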

Others have their doubts. “A lot of these things, you can; the question is, should you?” says Richard Pat Anderson, a professor of aerospace engineering at Embry-Riddle Aeronautical University. “When we say, ‘Should you?’ and you get into economics, then it becomes a much more difficult conversation.” Anderson says battery-powered aircraft are likely to become practical later in this century, and it is a dubious proposition to build the massive – and costly – infrastructure for hydrogen power in the meantime.

But in a warming world, Airbus says, the aviation sector needs to get going. McKinsey & Company, the consulting firm, surveyed airline customers last year and found that 62 percent of younger fliers (under age 35) were “really worried about climate change” and agreed that “aviation should definitely become carbon neutral.”

So, you’re on that jetway 15 years from now, on the way home. What will power the plane you’re boarding?

“Hydrogen is coming,” says Simpson at Airbus. “It’s already here.”

Exclusive: Airborne Wind Energy Company Closes Shop, Opens Patents

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/energywise/energy/renewables/exclusive-airborne-wind-energy-company-closes-shop-opens-patents

This week, a 13-year experiment in harnessing wind power using kites and modified gliders finally closes down. But the technology behind it is being open-sourced and passed on to others in the field.

As of 10 September, the airborne wind energy (AWE) company Makani Technologies has officially announced its closure. A key investor, the energy company Shell, also released a statement to the press indicating that “given the current economic environment” it would not be developing any of Makani’s intellectual property either. Meanwhile, Makani’s parent company, X, Alphabet’s moonshot factory, has made a non-assertion pledge on Makani’s patent portfolio. That means anyone who wants to use Makani patents, designs, software, and research results can do so without fear of legal reprisal.

Makani’s story, recounted last year on this site, is now the subject of a 110-minute documentary called Pulling Power from the Sky—also free to view.

Paula Echeverri, once Makani’s chief engineer, said the company was a compelling team to join when she was emerging from graduate studies at MIT in 2009, especially for a former aerospace engineering student.

“Energy kite design is not quite aircraft design and not quite wind turbine design,” she said.

The idea behind the company’s technology is to raise the wind energy harvesting to hundreds of meters in the sky—where the winds are typically both stronger and steadier. Because a traditional wind turbine tower anywhere approaching these heights would be impractical, Makani looked to kites or gliders that could first ascend to altitude, fastened to the ground by a tether. Only then would the flyer begin harvesting energy from the wind.
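The appeal of flying higher follows from basic physics: the power carried by wind through a given area scales with the cube of wind speed. A quick sketch, with illustrative wind speeds (assumptions for the example, not Makani data):

```python
# Why altitude helps: the kinetic power flowing through a swept area A
# scales with the cube of wind speed, P = 0.5 * rho * A * v**3.
# Wind speeds below are illustrative assumptions; rho is sea-level air density.
RHO_KG_PER_M3 = 1.225

def wind_power_w(area_m2: float, speed_m_s: float) -> float:
    """Kinetic power (watts) carried by wind through a given area."""
    return 0.5 * RHO_KG_PER_M3 * area_m2 * speed_m_s ** 3

near_ground = wind_power_w(area_m2=100.0, speed_m_s=7.0)  # typical surface wind
aloft = wind_power_w(area_m2=100.0, speed_m_s=10.0)       # assumed wind at altitude
print(f"Power available aloft vs. near the ground: {aloft / near_ground:.1f}x")
```

A roughly 40 percent increase in wind speed nearly triples the available power, which is why reaching a few hundred meters up is worth so much engineering effort.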

Pulling Power recounts Makani’s story from its very earliest days, circa 2006, when kites like the ones kite surfers use were the wind energy harvester of choice. However, kites harvest power from the tug on the tether, and the company’s early experiments revealed that this approach couldn’t compete with propellers on a glider plane.

What became the Makani basic flyer, the M600 Energy Kite, looked like an oversized hobbyist’s glider but with a bank of propellers across the wing. These props would first be used to loft the glider to its energy-harvesting altitude. Then the motors would shut off, and the glider would ride the air currents—using the props as mini wind turbines.

According to The Energy Kite, a free 1,180-page ebook that Makani is also releasing online in three parts, the company soon found a potentially profitable niche in operating offshore.

Just in terms of tonnage, AWE had a big advantage over traditional offshore wind farms. Wind turbines in shallow water, fixed to the seabed, might require 200 to 400 tons of metal for every megawatt of power the turbine generated. And floating deep-water turbines, anchored to the seabed by cables, typically involve 800 tons or more per megawatt. Meanwhile, a Makani AWE platform—which can be anchored in even deeper water—weighed only 70 tons per rated megawatt of generating capacity.
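A quick tabulation of the figures above shows the scale of the advantage (the fixed-bottom entry assumes the midpoint of the quoted 200-to-400-ton range):

```python
# Rough mass-per-megawatt comparison using the figures quoted above.
# The fixed-bottom entry assumes the midpoint of the 200-to-400-ton range.
tons_per_mw = {
    "fixed-bottom offshore turbine": 300.0,
    "floating deep-water turbine": 800.0,
    "Makani airborne wind platform": 70.0,
}

makani = tons_per_mw["Makani airborne wind platform"]
for system, tons in tons_per_mw.items():
    print(f"{system}: {tons:.0f} tons/MW ({tons / makani:.1f}x the Makani figure)")
```

By this measure, a floating deep-water turbine carries more than 11 times the structural mass per megawatt of the Makani platform.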

Yet, according to the ebook, in real-world tests, Makani’s M600 proved difficult to fly at optimum speed. In high winds, it couldn’t fly fast enough to pull as much power out of the wind as the designers had hoped. In low winds, it often flew too fast. In all cases, the report says, the rotors just couldn’t operate at peak capacity through much of the flyer’s maneuvers. The upshot: The company had a photogenic oversized model airplane, but not the technology that’d give regular wind turbines a run for their money.

Don’t take Makani’s word for it, though, says Echeverri. Not only is the company releasing its patents into the wild, it’s also giving away its code base, flight logs, and a Makani flyer simulation tool called KiteFAST.

“I think that the physics and the technical aspects are still such that, in floating offshore wind, there’s a ton of opportunity for innovation,” says Echeverri.

One of the factors the Makani team didn’t anticipate in the company’s early years, she said, was how precipitously electricity prices would continue to drop, leaving precious little room at the margins for new technologies like AWE to blossom and grow.

“We’re thinking about the existing airborne wind industry,” Echeverri said. “For people working on the particular problems we’d been working on, we don’t want to bury those lessons. We also found this to be a really inspiring journey for us as engineers—a joyful journey… It is worthwhile to work on hard problems.”