Tag Archives: energy

Bright X-Rays, AI, and Robotic Labs—A Roadmap for Better Batteries

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/energy/batteries-storage/bright-xrays-ai-and-robotic-labsa-roadmap-for-better-batteries

Steven Cherry Hi, this is Steven Cherry for Radio Spectrum.

Batteries have come a long way. The batteries that once powered flashlights and toys, Timex watches and Sony Walkmans, are now found in everything from phones and laptops to cars and planes.

Batteries all work the same way: Chemical energy is converted to electrical energy by creating a flow of electrons from one material to another; that flow generates an electrical current.

Yet batteries are also wildly different, both because the light bulb in a flashlight and the engine in a Tesla have different needs, and because battery technology keeps improving as researchers fiddle with every part of the system: the two chemistries that make up the anode and the cathode, and the electrolyte and how the ions pass through it from one to the other.

A Chinese proverb says, “Give a man a fish, and you feed him for a day. Teach a man to fish, and you feed him for a lifetime.” The Christian Bible says, “Follow me and I will make you fishers of men.”

In other words, a more engineering-oriented proverb would say, “let’s create a lab and develop techniques for measuring the efficacy of different fishing rods, which will help us develop different rods for different bodies of water and different species of fish.”

The Argonne National Laboratory is one such lab. There, under the leadership of Venkat Srinivasan, director of its Collaborative Center for Energy Storage Science, a team of scientists has developed a quiver of techniques for precisely measuring the velocity and behavior of ions and comparing the results to mathematical models of battery designs.

Venkat Srinivasan is also deputy director of Argonne’s Joint Center for Energy Storage Research, a national program that looks beyond the current generation of lithium–ion batteries. He was previously a staff scientist at Lawrence Berkeley National Laboratory, wrote a popular blog, “This Week in Batteries,” and is my guest today via Teams.

Venkat, welcome to the podcast.

Venkat Srinivasan Thank you so much. I appreciate the time. I always love talking about batteries, so it’d be great to have this conversation.

Steven Cherry I think I gave about as simplistic a description of batteries as one could give. Maybe we could start with: What are the main battery types today, and why is one better than another for a given application?

Venkat Srinivasan So, Steve, there are two kinds of batteries that I think all of us use in our daily lives. One of them is the primary battery—the ones that you don’t recharge. A common one is something that you might be putting in your children’s toys.

The second kind is rechargeable batteries, which I think are the ones powering everything we think of—things like electric cars and grid storage. These are the ones where we have to go back and charge them again. So let’s talk a little bit more about rechargeable batteries; there are a number of them sitting somewhere in the world. You have the lead–acid battery that’s sitting in your car today. It’s been there for the last 30, 40 years, where it’s used to start the car and for lighting the car up when the engine is not on. This is something that will continue to be in our cars for quite some time.

You’re also seeing lithium–ion batteries that are now powering the car itself. Instead of cars having an internal combustion engine and gasoline, you’re seeing more pure electric vehicles coming out that have lithium–ion batteries. And then the third class, which we sort of don’t see but have in different places, is nickel–cadmium and nickel–metal-hydride batteries. These are slowly going away. But the Toyota Prius is a great example of a nickel–metal-hydride application—many people still drive Priuses—I have one—that still have nickel–metal-hydride batteries in them. These are some of the classes of materials that are more common. But there are others, like flow batteries, that people probably haven’t thought about and haven’t seen, which are being researched quite a bit; there are companies trying to install flow batteries for grid storage, which are also rechargeable batteries of a different type.

The most prevalent of these is lithium–ion; that’s the chemistry that has completely changed electric vehicle transportation. It’s changed the way we speak on our phones. The iPhone would not be possible if not for the lithium–ion battery. It’s the battery that has pretty much revolutionized all of transportation. And it’s the reason the Nobel Prize two years ago went to the developers of the lithium–ion battery, for its discovery and ultimately the commercialization of the technology—it’s because it had such a wide impact.

Steven Cherry I gather that remarkably, we’ve designed all these different batteries and can power a cell phone for a full day and power a car from New York to Boston without fully understanding the chemistry involved. I’m going to offer a comparison and I’d like you to say whether it’s accurate or not.

We developed vaccines for smallpox beginning in 1798; we ended smallpox as a threat to humanity—all without understanding the actual mechanisms at the genetic level or even the cellular level by which the vaccine confers immunity. But the coronavirus vaccines we’re now deploying were developed in record time because we were able to study the virus and how it interacts with human organs at those deeper levels. And the comparison here is that with these new techniques developed at Argonne and elsewhere, we can finally understand battery chemistry at the most fundamental level.

Venkat Srinivasan That is absolutely correct. If you go back in time and ask yourself about batteries like the lead–acid batteries and the nickel–cadmium batteries—did we invent them in some systematic fashion? Well, I guess not, right?

Certainly, once the materials were discovered, there was a lot of innovation that went into them, using what were state-of-the-art techniques at the time to make them better and better. But to a large extent, the story you just told about the smallpox vaccine is probably very similar to what happened with the older battery chemistries.

The world has changed now. If you look at the kinds of things we are doing today, like you said, we have a variety of techniques, both experimental and mathematical—meaning computer simulations have now come to our aid—and we’re able to gain a deeper understanding of how batteries behave and then use that to discover new materials—first, maybe on a computer, but certainly in the lab at some point. So this is something that is happening in the battery world. The kinds of innovations you are seeing now with COVID vaccines are the kinds of things we are seeing happen in the battery world in terms of discovering the next big breakthrough.

Steven Cherry So I gather the main technology you’re using now is ultrabright X-rays, and you’re using them to measure for the first time how much of the electrical current the ions actually carry—something known as the transport number. Let’s start with the X-rays.

Venkat Srinivasan It used to be that we would cycle the battery, things would happen to it, and we then had to open up the battery and see what had happened on the inside. And as you can imagine, when you open up a battery, you hope that nothing changes by the time you take it to your experimental technique of choice to look at what’s happening on the inside. But oftentimes things change. So what you have inside the battery during its operation may not be the same as what you’re probing when you open up the cell. So a trend that’s been going on for some time now is to say, well, maybe we should be thinking about in situ and operando methods—meaning, inside the battery’s environment during operation, trying to find more information in the cell.

Typically, what battery people do is send a current into the battery and then measure the potential, or vice versa. That’s a common thing that’s done. So what we are trying to do now is one more thing on top of that: Can we probe something on the inside without opening up the cell? X-rays come into play because they are extremely powerful light; they can go through the battery casing, into the cell, and you can actually start seeing things inside the battery itself during operation—meaning you can pass current, keep the battery in the environment you want it to be in, send in the X-ray beam, and see what’s happening on the inside.

So this is a trend that we’ve been slowly exploring, going back a decade. A decade ago, we probably did not have the resolution to see things at a very minute scale; we were seeing maybe a few tens of microns of what was happening in these batteries, and measuring things maybe once every minute or so. But we’re slowly getting better and better: we’re making the spatial resolution tighter, meaning we can see smaller features, and we’re improving the time resolution so that we can see things faster and faster. So that trend is something that is helping us, and will continue to help us, make batteries better.

Steven Cherry So if I could push my comparison a little further: We developed the COVID vaccines in record time and with stunning efficiency—95 percent effective right out of the gate. Will this new ability to look inside the battery while it’s in operation create new generations of better batteries in record time?

Venkat Srinivasan That will be the hope. And I do want to bring in two aspects that I think work complementarily with each other. One is the X-ray techniques—and we should not forget that there are non-X-ray techniques, too, that give us information that can be crucially important. But along with that, there has been this revolution in computing that has really come to the forefront in the last five to 10 years. What this computing revolution means is that because computers are getting more and more powerful and computing resources are getting cheaper, we are now able to calculate all sorts of things on computers. For example, we can calculate how much lithium a material can hold—without actually having to go into the lab. And we can do this in a high-throughput fashion: screen a variety of materials and start to see which of them look the most promising. Similarly, we can do the same thing to ask: Can we find ion conductors—say, solid-state battery materials—using the same techniques?

Now, once you have these kinds of materials in play, and you can evaluate them very, very fast using computers, you can start to think about how to combine that with these X-ray techniques. So you could imagine finding a material on the computer, then trying to synthesize it, and during the synthesis you watch to see: Are you making the material you predicted, or did something happen during synthesis that kept you from making that particular material?

And using this complementary way of looking at things, I think in the next five to 10 years you’re going to see this amazing acceleration of materials discovery between the computing and the X-ray sources and other experimental methods. We’re going to see this incredible acceleration in terms of finding new things. You know, the big trick in materials—and this is certainly true for battery materials—is that if you screen a thousand materials, maybe one of them looks interesting. So the job is to cycle through those thousand as quickly as possible to find the one nugget that can be exciting. And what we’re seeing now with computing and with these X-rays is the ability to cycle through many materials very quickly, so that we can pin down which one among those thousand looks the most promising and spend a lot more resources and time on it.
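To make that “one nugget in a thousand” concrete, here is a minimal sketch of a high-throughput screening loop: generate predicted properties for a large candidate pool, filter against a baseline, and rank the survivors. The material names and capacity values are invented placeholders, and the 372 mAh/g threshold is simply graphite’s theoretical capacity used as an illustrative baseline—this is not Argonne’s actual screening code.

```python
# A toy high-throughput screen: rank many hypothetical candidate
# materials by a computed property and keep the few worth lab time.
import random

random.seed(0)

# Hypothetical candidate pool: (name, predicted Li capacity in mAh/g).
candidates = [(f"material-{i:04d}", random.gauss(200, 60)) for i in range(1000)]

# Screening criterion: beat a graphite-like baseline of ~372 mAh/g.
promising = [(name, cap) for name, cap in candidates if cap > 372]

# Rank the survivors so the "one nugget in a thousand" rises to the top.
promising.sort(key=lambda pair: pair[1], reverse=True)
print(f"{len(promising)} of {len(candidates)} candidates pass screening")
for name, cap in promising[:5]:
    print(f"{name}: {cap:.0f} mAh/g (predicted)")
```

With these made-up numbers, only a couple of candidates in a thousand survive the filter, which is the point: computation narrows the field before anyone spends beam time or lab time.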

Steven Cherry We’ve been relying on lithium–ion for quite a while. It was first developed in 1985 and first used commercially by Sony in 1991. These batteries are somewhat infamous for occasionally exploding in phones and laptops and living rooms and on airplanes and even in the airplanes themselves in the case of the Boeing 787. Do you think this research will lead to safer batteries?

Venkat Srinivasan Absolutely. The first thing I should clarify is that the lithium–ion from the 1990s is not the same lithium–ion we use today. There have been many generations of materials that have changed over time; they’ve gotten better; the energy density has actually gone up by a factor of three in those twenty-five years, and there’s a chance that it’s going to go up by another factor of two in the next decade or so. The reality is that when we use the word lithium–ion, we’re actually talking about a variety of material classes that go into the anodes, the cathodes, and the electrolytes that make up lithium–ion batteries. So the first thing to notice is that these materials are changing continuously. What the new techniques bring is a way for us to push the boundaries of lithium–ion—meaning there is still a lot of room left for lithium–ion to get better—and they are allowing us to invent the next generation of cathode materials, anode materials, and electrolytes that could be used in the system to continue to push on things like energy density, fast-charge capability, and cycle life. These are the kinds of big problems we’re worried about. So these techniques are certainly going to allow us to get there.

There is another important thing to think about for lithium–ion, which is recyclability. I think it’s been pretty clear that as the market for batteries goes up, we’re going to have a lot of batteries reaching end-of-life at some stage, and we do not want to throw them away. We want to take out the precious metals in them, the ones we think are going to be useful for the next generation of batteries. And we want to make sure we dispose of them in a safe and efficient manner for the environment. So I think that is also an area of R&D that’s going to be enabled by these kinds of techniques.

The last thing I’d say is that we’re thinking hard about systems that go beyond lithium–ion—things like solid-state batteries, things like magnesium-based batteries. For those kinds of chemistries, we really feel that taking these modern techniques and putting them in play is going to accelerate the development time frame. You mentioned 1985 and 1991; lithium–ion battery research started in the 1950s and ’60s, and it took that many decades before we could get to a stage where Sony could actually commercialize it. And we think we can accelerate the timeline pretty significantly for things like solid-state batteries or magnesium-based batteries because of all the modern techniques.

Steven Cherry Charging time is also a big area for potential improvement, especially in electric cars, which still only have a driving range that maybe gets to 400 kilometers, in practical terms. Will we be getting to the point where we can recharge in the time it takes to get a White Chocolate Gingerbread Frappuccino at Starbucks?

Venkat Srinivasan That’s the dream. So Argonne actually leads a project for the Department of Energy, working with multiple other national labs, on enabling 10-minute charging of batteries. I will say that in the last two or three years, there’s been tremendous progress in this area. Instead of the forty-five-minute or one-hour charge that used to be considered fast, we now feel there is a possibility of getting under 30 minutes of charging. These approaches still have to be proven out and implemented at large scale. But more and more, as we learn—and I can say a little more about that: there is a lot of work happening at the Advanced Photon Source looking at fast charging of batteries, trying to understand the phenomena that stop us from charging very fast—these same techniques are allowing us to think about how to solve the problem.

And I’ll take a bet: in the next five years, we’ll start to look at 10-minute charging as something that is possible. Three or four years ago, I would not have said that. But in the next five years, I think we are going to start saying, hey, you know, there are ways in which you can start to get to this kind of charging time. Certainly it’s a big challenge. It’s not just a challenge on the battery side; it’s a challenge in how we are going to get the electricity to the electric car. I mean, there’s going to be a problem there. There’s a lot of heat generation that happens in these systems, and we’ve got to find a way to pull it out. So there are a lot of challenges that we have to solve. But I think these techniques are slowly giving us answers to why it’s a problem to begin with, and allowing us to test various hypotheses to find ways to solve the problem.
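A rough sense of why getting electricity to the car is itself a challenge: refilling a typical pack in 10 minutes implies charging power of hundreds of kilowatts. The 60-kWh pack size below is an assumed figure for illustration, not one from the interview.

```python
# Hypothetical example: average power needed to refill an EV pack
# in 10 minutes. The 60-kWh pack size is an assumption.
pack_kwh = 60
minutes = 10

power_kw = pack_kwh / (minutes / 60)
print(f"~{power_kw:.0f} kW per car")  # 360 kW, versus a few kW for a typical home
```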

Steven Cherry The last area where I think people are looking for dramatic improvement is weight and bulk. It’s important in our cell phones and it’s also important in electric cars.

Venkat Srinivasan Yeah, absolutely. And frankly, it’s not just electric cars. At Argonne we’re thinking about light-duty vehicles, which are passenger cars, but also heavy-duty vehicles. I mean, what happens when you start to think about trucking across the country carrying a heavy payload? We are trying to think hard about aviation, about marine, and about rail. As you start to get to these kinds of applications, the energy density requirement goes up dramatically.

I’ll give you some numbers. If you look at today’s lithium–ion batteries at the pack level, the energy density is approximately 180 watt-hours per kilogram, give or take. Depending on the company, that could be a little higher or lower, but approximately 180 Wh/kg. If we look at a 737 going across the country, or a significant distance carrying a number of passengers, the kind of energy density you would need is upwards of 800 Wh/kg. So just to give you a sense of that: we said it’s 180 for today’s lithium–ion. We’re talking about four to five times the energy density of today’s lithium–ion before we can start to think about electric aviation. So energy density—both gravimetric and volumetric—is going to be extremely important in the future. Much of the R&D we are doing is trying to discover materials that allow us to increase energy density. The hope is that you increase energy density, make the battery charge very fast, and get it to last very long, all simultaneously. That tends to be a big deal, because these different competing metrics—cycle life, calendar life, cost, safety, performance—all tend to play against each other. But the big hope is that we are able to improve the energy density without compromising on these other metrics. That’s the big focus of the R&D that’s going on worldwide, but certainly at Argonne.
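The gap Srinivasan describes can be checked with one line of arithmetic, using the two figures he quotes:

```python
# Ratio of the aviation requirement to today's pack-level energy density.
today_pack_wh_per_kg = 180
aviation_wh_per_kg = 800

factor = aviation_wh_per_kg / today_pack_wh_per_kg
print(f"{factor:.1f}x")  # ~4.4, i.e. the "four to five times" quoted above
```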

Steven Cherry I gather there’s also a new business model for conducting this research, a nonprofit organization that brings corporate, government, and academic research all under one aegis. Tell us about CalCharge.

Venkat Srinivasan Yeah. If you think about the battery world—and this is true for many of these hard technologies, the cleantech or greentech, as people have come to call them—there is a lot of innovation needed, which means that in our lab R&D, the kinds of techniques and models we’re talking about are crucially important. But it’s also important for us to find a way to get these innovations to market, meaning you have to be able to take that lab innovation, manufacture it, and get it into the hands of, say, a car company that’s going to test it, ultimately qualify it, and then integrate it into the vehicle.

So this is a long road from lab to market. And the traditional way we’ve thought about this is to throw it across the fence, right? Say we at Argonne National Lab invent something, then we throw it across the fence to industry, and you hope that industry takes it from there, runs with it, and solves the problems. That tends to be an extremely inefficient process. That’s because oftentimes where a national lab might stop is not enough for industry to run with it—multiple gaps show up. Problems show up when you integrate these devices into a company’s existing components; problems show up when you get up to manufacturing at larger scale; problems show up when you make a pack with it. And oftentimes the solution to these problems goes back to the material. So the fundamental principle that I and many others have started thinking about is that you do not want to keep the R&D, the manufacturing, and the market separate. You have to find a way to connect them up.

And if you connect them up very closely, then the market starts to drive the R&D, the R&D innovation starts to get the people in the manufacturing world excited, and there is this close connection among all three of these things that makes everything go faster and faster. We’ve seen this in other industries, and it certainly will be true in the battery world. So we’ve been trying very, very hard to enable what I would call public–private partnerships—ways in which we, the public, meaning the national lab system, can interact with private companies and find ways to move this along. This is a concept that a few others and I have been thinking about for quite some time. Before I moved to Argonne, I was at Lawrence Berkeley. And the Bay Area has a very rich ecosystem of battery companies, especially startup companies.

So I created this entity called CalCharge, which was a way to connect up the local ecosystem in the San Francisco Bay Area to the national labs in the area—Lawrence Berkeley, SLAC, and Sandia National Labs in Livermore. So those are the three that were connected. And the idea behind this is: How do we take the national lab facilities, the people, and the amazing brains they have, and use them to start to solve some of the problems that industry is facing? And how do we take the IP that is sitting in the labs and move it to market using these startups, so that we can continuously work with each other, make sure we don’t have these valleys of death, as we’ve come to call them, when we move from lab to market, and try to accelerate that? I’ve been doing very similar things at Argonne over the last four years, thinking hard about how to do this, but on a national scale.

So we’ve been working closely with the Department of Energy, and with various entities both in the Chicagoland area and in the wider U.S. community, to start to think about enabling these kinds of ecosystems. There are 17 Department of Energy national labs across the country, and maybe a dozen of them have expertise that can be used in this field. How do we connect them up? And the local universities in different parts of the country with amazing expertise—how do we connect them up to the startups, the big companies, the manufacturers, the car companies that are coming in, but also the material companies, the companies providing lithium, from a supply-chain perspective? So my dream is that we would have this big ecosystem of everybody talking to each other, finding ways to leverage each other, and ultimately making this technology something that can reach the market as quickly as possible.

Steven Cherry And right now, who is waiting on whom? Is there enough new research that it’s up to the corporations to do something with it? Or are they looking for specific improvements that they need to wait for you to make?

Venkat Srinivasan All of the above. There is probably quite a bit of R&D going on that industry is not aware of, and that tends to be a big problem—there’s a visibility problem when it comes to the kinds of things going on in the national labs and the academic world. And there are things where we are not aware of the problems that industry is facing. I think these kinds of disconnects, where a lack of awareness keeps things from happening fast, are what we need to solve. The more connections we have, the more interactions we have, the more conversations we have with each other, the more the exposure increases. And when the exposure increases, we have a better chance of solving these kinds of problems, where a lack of information stops us from getting the kinds of innovation that we could get.

Steven Cherry And at your end, at the research end, I gather one immediate improvement you’re looking to make is the brightness of the X-rays. Is there anything else that we should look forward to?

Venkat Srinivasan Yeah, there are a couple of things that I think are very important. The first one is the brightness of the X-rays. There’s an upgrade coming for the Advanced Photon Source that’s going to change the time resolution with which we can see these batteries. So, for example, when you’re charging the batteries very fast, you can get data very quickly. That’s going to be super important. You can also start to think about seeing features that are even smaller than the ones we see today. So that’s the first big thing.

The second thing, connected to that, is artificial intelligence and machine learning, which are permeating all forms of research, including battery research; we use AI and ML for all sorts of things. But one thing we’ve been thinking about is how to connect AI and ML to the kinds of X-ray techniques we’ve been using. So, for example, instead of looking all over the battery to see if there is a problem, can we use signatures of where the problems could be occurring, so that these machine learning tools can quickly go in and identify the spot where things could be going wrong, and you can spend all your time and energy taking data at that particular spot? That way, again, we’re being very efficient with the time that we have, to ensure that we’re catching the problems we have to catch. So I think the next big thing is this whole artificial intelligence and machine learning effort that is going to be integral for us in the battery discovery world.
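Here is a minimal sketch of the triage idea Srinivasan describes—scan a coarse map of the cell, flag statistical outliers, and spend detailed beam time only there. The data are synthetic and the 4-sigma threshold is an arbitrary illustrative choice; a production workflow would use trained models on real detector output.

```python
# Toy "where should the beam look next?" screen over a synthetic map.
import numpy as np

rng = np.random.default_rng(1)
coarse_map = rng.normal(0.0, 1.0, size=(64, 64))  # coarse scan of the cell
coarse_map[40:43, 10:13] += 6.0                   # a buried defect signature

# Flag pixels more than 4 sigma from the map's typical signal.
z = (coarse_map - coarse_map.mean()) / coarse_map.std()
hotspots = np.argwhere(np.abs(z) > 4)

print(f"{len(hotspots)} of {coarse_map.size} locations flagged for fine scans")
for row, col in hotspots[:5]:
    print(f"prioritize detailed scan at ({row}, {col})")
```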

The last thing, which is an emerging trend, is what are called automated labs or self-driving labs. The idea behind this is that instead of a human being synthesizing a material starting in the morning and finishing in the evening, characterizing it the next day, finding out what happened, and then going back and trying the next material, could we start to do this using robotics? This is something that’s been a trend for a while now. But where things are heading is that more and more, robots can do things a human being could do. So you could imagine robots synthesizing electrolyte molecules, mixing them up, testing for conductivity, and seeing if the conductivity is higher than what you had before; if it’s not, going back and iterating on a new molecule based on the previous results, so that you can efficiently search for an electrolyte more conductive than your baseline. Robots work 24/7, which makes this a very useful way to think about innovating. Robots generate a lot of data, which we now know how to handle because of all the machine learning tools we’ve been developing over the last three, four, five years. So all of a sudden, the synergy—the intersection between machine learning, the ability to analyze a lot of data, and robotics—is starting to come into play. And I think that’s going to open up new ways to discover materials in a rapid fashion.
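A toy version of that closed loop, with a made-up conductivity response (peaking at 15 percent salt content) standing in for the robotic synthesis-and-test station—none of the numbers are from a real experiment:

```python
# Toy closed-loop search: propose a mixture, "measure" it, let the
# result steer the next proposal. A stand-in for a self-driving lab.
import random

random.seed(2)

def measure_conductivity(salt_fraction):
    """Stand-in for a robotic measurement; peaks near 15% salt."""
    return 10.0 - 200 * (salt_fraction - 0.15) ** 2 + random.gauss(0, 0.1)

best_x, best_y = 0.05, measure_conductivity(0.05)  # starting baseline
for _ in range(50):                                # robots work 24/7
    x = min(max(best_x + random.gauss(0, 0.02), 0.0), 1.0)
    y = measure_conductivity(x)
    if y > best_y:                                 # keep improvements, iterate
        best_x, best_y = x, y

print(f"best salt fraction ~{best_x:.3f}, conductivity ~{best_y:.2f} mS/cm")
```

Real self-driving labs use far more sophisticated proposal strategies (Bayesian optimization, for example), but the loop structure—propose, measure, update, repeat—is the same.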

Steven Cherry Well, Venkat, if you will forgive a rather obvious pun, the future of battery technology seems bright. And I wish you and your colleagues at Argonne and CalCharge every success. Thank you for your role in this research and for being here today.

Venkat Srinivasan Thank you so much. I appreciate the time you’ve taken to ask me these questions.

Steven Cherry We’ve been speaking with Venkat Srinivasan of Argonne National Laboratory about a newfound ability to study batteries at the molecular level and about the improvements that might result from it.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers, a professional organization dedicated to advancing technology for the benefit of humanity.

This interview was recorded January 6, 2021 using Adobe Audition and edited in Audacity. Our theme music is by Chad Crouch.

You can subscribe to Radio Spectrum on the Spectrum website, where you can also sign up for alerts, or on Spotify, Apple, Google—wherever you get your podcasts. We welcome your feedback on the web or in social media.

For Radio Spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.

We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

See Also:

Battery of tests: Scientists figure out how to track what happens inside batteries

Concentration and velocity profiles in a polymeric lithium-ion battery electrolyte

Carbon Engineering’s Tech Will Suck Carbon From the Sky

Post Syndicated from Maria Gallucci original https://spectrum.ieee.org/energy/fossil-fuels/carbon-engineerings-tech-will-suck-carbon-from-the-sky

West Texas is a hydrocarbon hot spot, with thousands of wells pumping millions of barrels of oil and billions of cubic feet of natural gas from the Permian Basin. When burned, all that oil and gas will release vast amounts of greenhouse gases into the atmosphere.

A new facility there aims to do the opposite. Rows of giant fans spread across a flat, arid field will pull carbon dioxide from the air and then pump it deep underground. When completed, the project could capture 1 million metric tons of carbon dioxide per year, doing the air-scrubbing work of some 40 million trees.
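The tree comparison implies a per-tree uptake figure that is easy to back out, and it lands near the commonly cited 20 to 25 kilograms of CO2 per tree per year:

```python
# Sanity check on the "40 million trees" comparison.
plant_capture_t = 1_000_000    # metric tons of CO2 per year
tree_equivalent = 40_000_000   # trees

kg_per_tree = plant_capture_t * 1000 / tree_equivalent
print(f"Implied uptake: {kg_per_tree:.0f} kg CO2 per tree per year")  # 25 kg
```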

Canadian firm Carbon Engineering is designing and building this “direct-air capture” facility with 1PointFive, a joint venture between a subsidiary of Occidental Petroleum Corp. and the private equity firm Rusheen Capital Management. Carbon Engineering will devote much of 2021 to front-end engineering and design work in Texas, with construction slated to start the following year and operations by 2024, the partners say. The project is the biggest of its kind in the world and will likely cost hundreds of millions of dollars to develop.

Carbon Engineering is among a handful of companies with major direct-air capture developments underway this year. Zurich-based Climeworks is expanding across Europe, while Dublin’s Silicon Kingdom Holdings plans to install its first CO2-breathing “mechanical tree” in Arizona. Global Thermostat, headquartered in New York City, has three new projects in the works. All the companies say they intend to curb the high cost of capturing carbon by optimizing technology, reducing energy use, and scaling up operations.

The projects arrive as many climate experts warn that current measures to reduce emissions—such as adopting renewable energy and electrifying transportation—are no longer sufficient to avert catastrophe. To limit global warming to 1.5 °C, the world must also use “negative-emission technologies,” according to the United Nations Intergovernmental Panel on Climate Change’s 2018 report.

Global CO2 emissions from fossil fuels reached 33 billion metric tons in 2019. Existing direct-air capture projects would eliminate a tiny fraction of that total, and not all of the captured CO2 is expected to be permanently sequestered. Some of it will likely return to the atmosphere when used in synthetic fuels or other products. Companies say the goal is to continuously capture and “recycle” the greenhouse gas to avoid creating new emissions, while also generating revenue that can fund the technology.

Carbon removal can help compensate for sectors that are difficult to decarbonize, such as agriculture, cement making, and aviation, says Jennifer Wilcox, a chemical engineer and senior fellow at the World Resources Institute. “The climate models are saying clearly that if we don’t do carbon removal in addition to avoiding emissions, we will not reach our climate goals.”

Carbon Engineering’s plant in Texas will use banks of fans, each about 8.5 meters in diameter, to draw air into a large structure called a contactor. The air is pushed through a plastic mesh coated with a potassium hydroxide solution, which binds with the carbon dioxide. A series of chemical processes concentrate and compress the CO2 into tiny white pellets, which are then heated to 900 °C to release the carbon dioxide as a gas. Steve Oldham, CEO of Carbon Engineering, likens the plant to a refinery that produces chemicals at an industrial scale. “That’s the type of capability we’re going to need, to make a material impact on climate change,” he says.

At its pilot plant in British Columbia, Carbon Engineering combines the pure CO2 with hydrogen to produce synthetic crude oil. The facility can capture 1 metric ton of carbon dioxide per day; by comparison, the Texas operation is expected to capture over 2,700 metric tons daily. At the larger site, the captured gas will be injected into older oil wells, both sequestering the CO2 underground and forcing up any remaining oil. In addition to the work in Texas, the company is scaling up its Canadian operations, Oldham says. In 2021, it will open a new business and advanced-development center and expand research operations; the new facility will capture up to 4 metric tons of CO2 per day from the air.
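The two capture rates quoted here differ by more than three orders of magnitude, which a quick conversion makes plain:

```python
# Converting the Texas plant's annual target to the daily figure cited above.
annual_t = 1_000_000
daily_t = annual_t / 365
print(f"~{daily_t:.0f} metric tons/day")  # ~2,740, versus 1 t/day at the pilot
```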

Other direct-air capture firms are opting for a modular approach. Climeworks’ carbon collectors can be stacked to build facilities of any size. The system also uses fans, but the air runs over a solid filter material. Once saturated with CO2, the filter is heated to between 80 and 100 °C, releasing highly concentrated CO2 gas, which can be used in various ways.

For example, at Climeworks’ pilot site in Iceland—which is powered by geothermal energy—the company’s partner Carbfix reacts the concentrated CO2 with basaltic rock to lock it below ground. The site is now being expanded to capture 4,000 metric tons of carbon dioxide a year; it should be operational in the first half of 2021, says Daniel Egger, head of marketing and sales for Climeworks. The CO2 could also be used to make a more sustainable form of jet fuel; Climeworks is seeking financing for two CO2-to-fuel projects in Norway and the Netherlands.

Meanwhile, the company will continue working with the e-commerce platforms Stripe and Shopify. To cancel their carbon footprints, the two companies have committed to purchasing carbon credits from Climeworks, reflecting the amount of CO2 that Climeworks has removed from the air. Major tech firms in general are investing in carbon-reducing schemes to help meet their corporate environmental goals. Microsoft has pledged to be carbon negative by 2030 and to spend $1 billion to accelerate the development of technology for carbon reduction and removal.

“For all these companies that have targets to bring their emissions to ‘net zero,’ technologies like ours are absolutely needed,” Egger says.

Global energy giants are also backing direct-air capture to undo some of the damage caused by their products and operations. In September, for instance, ExxonMobil expanded an agreement with Global Thermostat to help scale the startup’s technology. Global Thermostat’s machines are the size of a shipping container and capture CO2 using amine-based adsorbents on honeycombed ceramic cubes, akin to a car’s catalytic converter.

Cofounder Peter Eisenberger, a professor of Earth and environmental science at Columbia University, says Global Thermostat’s goal is to remove billions of tons of carbon dioxide every year by licensing its technology to other firms. He believes the world will have to remove 50 billion metric tons of carbon dioxide over the next two decades to avoid catastrophic climate shifts. In 2021, the company will add three pilot projects, including a 2,000-metric-ton plant in Chile to produce synthetic fuels, as well as facilities in Latin America and the Middle East that will provide CO2 for bubbly beverages and water desalination, respectively.

Unlike its peers, Silicon Kingdom Holdings uses a passive system to draw in air. Klaus Lackner, a professor at Arizona State University, developed the company’s mechanical-tree technology. Each tree will have stacks of 150 disks coated in a carbon-adsorbing material; as wind blows over the disks, they trap carbon on their surfaces. The disks are then lowered into a bottom chamber, where an “energy-efficient process” releases the CO2 from the sorbent, says Pól Ó Móráin, CEO of Silicon Kingdom Holdings. The high-purity gas could be sequestered or reused in beverages, cement, fertilizer, or other industrial products. The startup plans to build and operate the first commercial-scale 2.5-meter-tall tree near the ASU campus in Tempe in 2021.

Ó Móráin says a dozen trees can capture 1 metric ton of carbon dioxide daily. The goal is to install carbon farms worldwide, each with up to 120,000 mechanical trees.

Wilcox of the World Resources Institute says there’s “no clear winner” among these emerging technologies for capturing carbon. They’re distinct from one another, she notes. “I think we need them all.”

An abridged version of this article appears in the January 2021 print issue as “The Carbon-Sucking Fans of West Texas.”

Gravity Energy Storage Will Show Its Potential in 2021

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/energy/batteries-storage/gravity-energy-storage-will-show-its-potential-in-2021

Cranes are a familiar fixture of practically any city skyline, but one in the Swiss canton of Ticino, near the Italian border, would stand out anywhere: It has six arms. This 110-meter-high starfish of the skyline isn’t intended for construction. It’s meant to prove that renewable energy can be stored by hefting heavy loads and dispatched by releasing them.

Energy Vault, the Swiss company that built the structure, has already begun a test program that will lead to its first commercial deployments in 2021. At least one competitor, Gravitricity, in Scotland, is nearing the same point. And there are at least two companies with similar ideas, New Energy Let’s Go and Gravity Power, that are searching for the funding to push forward.

To be sure, nearly all the world’s currently operational energy-storage facilities, which can generate a total of 174 gigawatts, rely on gravity. Pumped hydro storage, where water is pumped to a higher elevation and then run back through a turbine to generate electricity, has long dominated the energy-storage landscape. But pumped hydro requires some very specific geography—two big reservoirs of water at elevations with a vertical separation that’s large, but not too large. So building new sites is difficult.

Energy Vault, Gravity Power, and their competitors seek to use the same basic principle—lifting a mass and letting it drop—while making an energy-storage facility that can fit almost anywhere. At the same time they hope to best batteries—the new darling of renewable-energy storage—by offering lower long-term costs and fewer environmental issues.

In action, Energy Vault’s towers are constantly stacking and unstacking 35-metric-ton bricks arrayed in concentric rings. Bricks in an inner ring, for example, might be stacked up to store 35 megawatt-hours of energy. Then the system’s six arms would systematically disassemble it, lowering the bricks to build an outer ring and discharging energy in the process.
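The physics behind those numbers is just gravitational potential energy, E = mgh. A rough sketch, assuming a hypothetical average height gain of 50 meters per stacked brick within the 110-meter tower (Energy Vault has not published that figure):

```python
# Gravitational arithmetic for Energy Vault's bricks. The 50-m average
# lift is an assumption for illustration, not company data.
g = 9.81            # m/s^2
brick_kg = 35_000   # one 35-metric-ton brick
lift_m = 50         # assumed average height gain per stacked brick

per_brick_kwh = brick_kg * g * lift_m / 3.6e6   # joules -> kWh
moves = 35_000 / per_brick_kwh                  # to store 35 MWh
print(f"{per_brick_kwh:.1f} kWh per brick; ~{moves:,.0f} brick moves for 35 MWh")
```

At roughly 5 kWh per lift, storing a 35-MWh ring means thousands of brick movements, which is why the choreography Piconi describes next is so demanding.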

This joule-storing Jenga game can be complicated. To maintain a constant output, one block needs to be accelerating while another is decelerating. “That’s why we use six arms,” explains Robert Piconi, the company’s CEO and cofounder.

What’s more, the control system has to compensate for gusts of wind, the deflection of the crane as it picks up and sets down bricks, the elongation of the cable, pendulum effects, and more, he says.

Piconi sees several advantages over batteries. Advantage No. 1 is environmental. Instead of chemically reactive and difficult-to-recycle lithium-ion batteries, Energy Vault’s main expenditure is the bricks themselves, which can be made on-site using available dirt and waste material mixed with a new polymer from the Mexico-based cement giant Cemex.

Another advantage, according to Piconi, is the lower operating expense, which the company calculates to be about half that of a battery installation with equivalent storage capacity. Battery-storage facilities must continually replace cells as they degrade. But that’s not the case for Energy Vault’s infrastructure.

The startup is confident enough in its numbers to claim that 2021 will see the start of multiple commercial installations. Energy Vault raised US $110 million in 2019 to build the demonstration unit in Ticino and prepare for a “multicontinent build-out,” says Piconi.

Compared with Energy Vault’s effort, Gravitricity’s energy-storage scheme seems simple. Instead of a six-armed crane shuttling blocks, Gravitricity plans to pull one or just a few much heavier weights up and down abandoned, kilometer-deep mine shafts.

These great masses, each one between 500 and 5,000 metric tons, need only move at mere centimeters per second to produce megawatt-level outputs. Using a single weight lends itself to applications that need high power quickly and for a short duration, such as dealing with second-by-second fluctuations in the grid and maintaining grid frequency, explains Chris Yendell, Gravitricity’s project development manager. Multiple-weight systems would be more suited to storing more energy and generating for longer periods, he says. 

Proving the second-to-second response is a primary goal of a 250-kilowatt concept demonstrator that Gravitricity is building in Scotland. Its 50-metric-ton weight will be suspended 7 meters up on a lattice tower. Testing should start during the first quarter of 2021. “We expect to be able to achieve full generation within less than one second of receiving a signal,” says Yendell.

The company will also be developing sites for a full-scale prototype during 2021. “We are currently liaising with mine owners in Europe and in South Africa, [and we’re] certainly interested in the United States as well,” says Yendell. Such a full-scale system would then come on line in 2023.

Gravity Power and its competitor New Energy Let’s Go, which acquired its technology from the now bankrupt Heindl Energy, are also looking underground for energy storage, but they are more closely inspired by pumped hydro. Instead of storing energy using reservoirs at different elevations, they pump water underground to lift an extremely heavy piston. Allowing the piston to fall pushes water through a turbine to generate electricity.

“Reservoirs are the Achilles’ heel of pumped hydro,” says Jim Fiske, the company’s founder. “The whole purpose of a Gravity Power plant is to remove the need for reservoirs. [Our plants] allow us to put pumped-hydro-scale power and storage capacity in 3 to 5 acres [1 to 2 hectares] of flat land.”

Fiske estimates that a 400-megawatt plant with 16 hours of storage (or 6.4 gigawatt-hours of energy) would have a piston that’s more than 8 million metric tons. That might sound ludicrous, but it’s well within the lifting abilities of today’s pumps and the constraints of construction processes, he says. 
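Fiske’s figures are straightforward to sanity-check with E = mgh. The 300-meter piston stroke below is an assumed value for illustration, not company data; with it, the implied piston mass comes out in the ballpark of the 8 million metric tons he cites (round-trip losses would push the real figure higher):

```python
# Checking the plant figures: 400 MW for 16 hours is 6.4 GWh, and
# E = m*g*h hints at why the piston must be so massive.
g = 9.81
energy_j = 400e6 * 16 * 3600   # 400 MW x 16 h, in joules
stroke_m = 300                 # hypothetical piston travel

piston_kg = energy_j / (g * stroke_m)
print(f"{energy_j / 3.6e12:.1f} GWh -> piston of "
      f"~{piston_kg / 1e9:.1f} million metric tons")
```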

While these companies expect such underground storage sites to be more economical than battery installations, they will still be expensive. But nations concerned about the changing climate may be willing to pay for storage options like these when they recognize the gravity of the crisis.

This article appears in the January 2021 print issue as “The Ups and Downs of Gravity Energy Storage.”

Lithium-Ion Battery Recycling Finally Takes Off in North America and Europe

Post Syndicated from Jean Kumagai original https://spectrum.ieee.org/energy/batteries-storage/lithiumion-battery-recycling-finally-takes-off-in-north-america-and-europe

Later this year, the Canadian firm Li-Cycle will begin constructing a US $175 million plant in Rochester, N.Y., on the grounds of what used to be the Eastman Kodak complex. When completed, it will be the largest lithium-ion battery-recycling plant in North America.

The plant will have an eventual capacity of 25 metric kilotons of input material, recovering 95 percent or more of the cobalt, nickel, lithium, and other valuable elements through the company’s zero-wastewater, zero-emissions process. “We’ll be one of the largest domestic sources of nickel and lithium, as well as the only source of cobalt in the United States,” says Ajay Kochhar, Li-Cycle’s cofounder and CEO.

Founded in late 2016, the company is part of a booming industry focused on preventing tens of thousands of tons of lithium-ion batteries from entering landfills. Of the 180,000 metric tons of Li-ion batteries available for recycling worldwide in 2019, just a little over half were recycled. As lithium-ion battery production soars, so does interest in recycling. 

According to London-based Circular Energy Storage, a consultancy that tracks the lithium-ion battery-recycling market, about a hundred companies worldwide recycle lithium-ion batteries or plan to do so soon. The industry is concentrated in China and South Korea, where the vast majority of the batteries are also made, but there are several dozen recycling startups in North America and Europe. In addition to Li-Cycle, that list includes Stockholm-based Northvolt, which is jointly building an EV-battery-recycling plant with Norway’s Hydro, and Tesla alum J.B. Straubel’s Redwood Materials, which has a broader scope of recycling electronic waste. [See sidebar, “14 Li-ion Battery-Recycling Projects to Watch.”]

These startups aim to automate, streamline, and clean up what has been a labor-intensive, inefficient, and dirty process. Traditionally, battery recycling involves either burning batteries to recover some of the metals, or else grinding them up and treating the resulting “black mass” with solvents.

Battery recycling doesn’t just need to be cleaner—it also needs to be reliably profitable, says Jeff Spangenberger, director of the ReCell Center, a battery-recycling research collaboration supported by the U.S. Department of Energy. “Recycling batteries is better than if we mine new materials and throw the batteries away,” Spangenberger says. “But recycling companies have trouble making profits. We need to make it cost effective, so that people have an incentive to bring their batteries back.”

Li-Cycle will operate on a “spoke and hub” model, with the spokes handling the preliminary processing of old batteries and battery scrap, and the black mass feeding into a centrally located hub for final processing into battery-grade materials. The company’s first spoke is near Toronto, where Li-Cycle is headquartered; a second spoke just opened in Rochester, where the future hub is slated to open in 2022.

Li-Cycle engineers iteratively improved on traditional hydrometallurgical recycling, Kochhar says. For instance, rather than dismantling an EV battery pack into cells and discharging them, they separate the pack into larger modules and process them without discharging.

When it comes to battery chemistries, Li-Cycle is agnostic. Mainstream nickel manganese cobalt oxide batteries are just as easily recycled as ones based on lithium iron phosphate. “There is no uniformity in the industry,” Kochhar notes. “We don’t know the exact chemistry of the batteries, and we don’t need to know.” 

Just how many batteries will need to be recycled? In presentations, Kochhar refers to an “incoming tsunami” of spent lithium-ion batteries. With global sales of EVs expected to climb from 1.7 million in 2020 to 26 million in 2030, it’s easy to imagine we’ll soon be awash in spent batteries.

But lithium-ion batteries have long lives, says Hans Eric Melin, director of Circular Energy Storage. “Thirty percent of used EVs from the U.S. market are now in Russia, Ukraine, and Jordan, and the battery came along as a passenger on that journey,” Melin says. EV batteries can also be repurposed as stationary storage. “There’s still a lot of value in these [used] products,” he says. 

Melin estimates that the United States will have about 80 metric kilotons of Li-ion batteries to recycle in 2030, while Europe will have 132 metric kilotons. “Every [recycling] company is setting up a plant with thousands of tons of capacity, but you can’t recycle more material than you have,” he notes.

ReCell’s Spangenberger agrees that the need for increased battery-recycling capacity won’t be pressing for a while. That’s why his group’s research is focused on longer-term projects, including direct cathode recycling. Traditional recycling breaks the cathode down into a metal salt, and reforming the salt back into cathodes is expensive. ReCell plans to demonstrate a cost-effective method for recycling cathode powders this year, but it will be another five years before those processes will be ready for high-volume application.

Even if the battery tsunami hasn’t yet arrived, Kochhar says consumer electronics and EV manufacturers are interested in Li-Cycle’s services now. “Often, they’re pushing their suppliers to work with us, which has been great for us and really interesting to see,” Kochhar says.

“The researchers involved in recycling are very passionate about what they do—it’s a big technical challenge and they want to figure it out because it’s the right thing to do,” says Spangenberger. “But there’s also money to be made, and that’s the attraction.”

This article appears in the January 2021 print issue as “Momentum Builds for Lithium-Ion Battery Recycling.”

Perovskite Solar Out-Benches Rivals in 2021

Post Syndicated from Maria Gallucci original https://spectrum.ieee.org/tech-talk/energy/renewables/oxford-pv-sets-new-record-for-perovskite-solar-cells

In October, when the International Energy Agency pronounced solar energy the “cheapest electricity in history,” its claim rested largely on panel technologies with efficiencies in the 15 to 25 percent range. Imagine, say perovskite PV advocates, what new standards solar power could set with efficiencies of 30 percent or more. 

As of December, photovoltaic panel efficiency ratings north of 30 percent no longer seemed quite so theoretical either. 

December was the month the U.S. National Renewable Energy Laboratory certified U.K. startup Oxford PV’s new record: A single solar cell coated with the mineral perovskite, NREL confirmed, can now convert 29.52 percent of incident solar energy into electricity. According to NREL’s own benchmarking, conventional silicon cells appear to have maxed out at 27.6 percent. 

Oxford PV, a ten-year-old spinoff from the University of Oxford, in England, says it expects to cross over into the 30s soon, too.

At its current pace, the company says it expects to be manufacturing cells with 33 percent efficiency within four years. One competitor in the perovskite-silicon PV race, the German research institute Helmholtz-Zentrum Berlin, has already achieved 29.15 percent efficiency with its perovskite cell and expects to be able to push its rating up to 32.4 percent. 

“We hope [this technology] will change the face of photovoltaics and accelerate the adoption of solar to address climate change,” Chris Case, Oxford PV’s chief technology officer, says of his company’s tandem perovskite-silicon cells. 

The tandem approach—coating an ordinary silicon wafer with a thin-film layer of perovskite material—enables Oxford PV to capture more available solar radiation. The perovskite layer absorbs shorter wavelengths, while the silicon layer absorbs longer wavelengths. To improve efficiency, the company expects to refine the cells’ coatings and antireflection layers and remove defects and impurities, Case says. 
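The division of labor between the two layers follows from their bandgaps: a photon can only be absorbed if its energy exceeds the gap, so each material has a cutoff wavelength of lambda = hc/E. The bandgap values below are typical literature figures for these material classes, not Oxford PV’s proprietary numbers:

```python
# Cutoff wavelengths for the two layers of a tandem cell.
HC_EV_NM = 1239.8   # h*c expressed in eV*nm

for layer, gap_ev in [("perovskite (top)", 1.7), ("silicon (bottom)", 1.12)]:
    cutoff_nm = HC_EV_NM / gap_ev
    print(f"{layer}: absorbs up to ~{cutoff_nm:.0f} nm")
# Photons shorter than ~730 nm are caught by the perovskite layer;
# the silicon underneath absorbs the longer wavelengths it transmits.
```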

Companies and universities worldwide are looking to perovskites as a potential future replacement for silicon, in the hopes of making renewable energy more affordable and accessible.

Early prototypes of perovskite solar cells were unstable and degraded quickly. But over the past decade, researchers have steadily improved the stability and durability of perovskite materials for both indoor and outdoor applications.

Oxford PV expects to start selling its perovskite-silicon cells to the public in early 2022, says CEO Frank Averdung. That would make it the first company to bring such a product to the global solar market. 

The startup is expanding its pilot plant in Germany into a 100-megawatt-capacity solar cell factory. Oxford PV began producing small volumes of cells there in 2017. The company has field-tested the technology for more than a year, Case says.

So far, he adds, data suggest the cells perform about the same as commercial silicon panels. “We see no degradation that’s any different than reference commercial panels that we’re comparing to,” Case says. 

He says it typically takes a couple of years for efficiency achievements in the lab to appear in factory-produced cells. Thus the first devices off Oxford PV’s manufacturing line will have an efficiency of 26 percent—higher than any other commercially available solar cell, Averdung says. The company expects residential rooftop solar projects using its technology will generate 20 percent more power using the same number of cells as existing installations.

Some solar researchers remain skeptical of perovskites, pointing to the material’s potential to degrade when exposed to moisture, harsh temperatures, salt spray, oxygen, and other elements. Case says Oxford PV’s cells have passed a battery of accelerated stress tests, both internally and by third parties.

“They will definitely be expected to last as long or longer than any of the best silicon modules that are out there,” he says of the cells.

Germany’s Energiewende, 20 Years Later

Post Syndicated from Vaclav Smil original https://spectrum.ieee.org/energy/renewables/germanys-energiewende-20-years-later

In 2000, Germany launched a deliberately targeted program to decarbonize its primary energy supply, a plan more ambitious than anything seen anywhere else. The policy, called the Energiewende, is rooted in Germany’s naturalistic and romantic tradition, reflected in the rise of the Green Party and, more recently, in public opposition to nuclear electricity generation. These attitudes are not shared by the country’s two large neighbors: France built the world’s leading nuclear industrial complex with hardly any opposition, and Poland is content burning its coal.

The policy worked through the government subsidization of renewable electricity generated with photovoltaic cells and wind turbines and by burning fuels produced by the fermentation of crops and agricultural waste. It was accelerated in 2011 when Japan’s nuclear disaster in Fukushima led the German government to order that all its nuclear power plants be shut down by 2022.

During the past two decades, the Energiewende has been praised as an innovative miracle that will inexorably lead to a completely green Germany and criticized as an expensive, poorly coordinated overreach. I will merely present the facts.

The initiative has been expensive, and it has made a major difference. In 2000, 6.6 percent of Germany’s electricity came from renewable sources; in 2019, the share reached 41.1 percent. In 2000, Germany had an installed capacity of 121 gigawatts and it generated 577 terawatt-hours, which is 54 percent as much as it theoretically could have done (that is, 54 percent was its capacity factor). In 2019, the country produced just 5 percent more (607 TWh), but its installed capacity was 80 percent higher (218.1 GW) because it now had two generating systems.

The new system, using intermittent power from wind and solar, accounted for 110 GW, nearly 50 percent of all installed capacity in 2019, but operated with a capacity factor of just 20 percent. (That included a mere 10 percent for solar, which is hardly surprising, given that large parts of the country are as cloudy as Seattle.) The old system stood alongside it, almost intact, retaining nearly 85 percent of net generating capacity in 2019. Germany needs to keep the old system in order to meet demand on cloudy and calm days and to supply nearly half of total demand. In consequence, the capacity factor of this sector is also low.

It costs Germany a great deal to maintain such an excess of installed power. The average cost of electricity for German households has doubled since 2000. By 2019, households had to pay 34 U.S. cents per kilowatt-hour, compared to 22 cents per kilowatt-hour in France and 13 cents in the United States.

We can measure just how far the Energiewende has pushed Germany toward the ultimate goal of decarbonization. In 2000, the country derived nearly 84 percent of its total primary energy from fossil fuels; this share fell to about 78 percent in 2019. If continued, this rate of decline would leave fossil fuels still providing nearly 70 percent of the country’s primary energy supply in 2050.
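
That 2050 figure follows from a straight linear extrapolation of the 2000–2019 decline; the sketch below reproduces it from the approximate shares quoted above:

```python
# Linear extrapolation of the fossil-fuel share of Germany's primary energy,
# using the approximate figures cited in the text.
share_2000, share_2019 = 84.0, 78.0               # percent
rate = (share_2000 - share_2019) / (2019 - 2000)  # ~0.32 points per year
share_2050 = share_2019 - rate * (2050 - 2019)
print(f"Projected 2050 fossil share: {share_2050:.0f}%")  # ~68%, i.e. nearly 70%
```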

Meanwhile, during the same 20-year period, the United States reduced the share of fossil fuels in its primary energy consumption from 85.7 percent to 80 percent, cutting almost exactly as much as Germany did. The conclusion is as surprising as it is indisputable. Without anything like the expensive, target-mandated Energiewende, the United States has decarbonized at least as fast as Germany, the supposed poster child of emerging greenness.

This article appears in the December 2020 print issue as “Energiewende, 20 Years Later.”

Goodbye, Centralized Power Grid. Hello, Autonomous Energy Grids

Post Syndicated from Benjamin Kroposki original https://spectrum.ieee.org/energy/the-smarter-grid/goodbye-centralized-power-grid-hello-autonomous-energy-grids

It’s great to have neighbors you can depend on, whether you’re borrowing a cup of sugar or you need someone to walk your dog while you’re out of town. In the western Colorado neighborhood of Basalt Vista, the residents are even closer than most: They share their electricity. But unlike your neighbor with the sugar, the residents of Basalt Vista may not even know when they’re being generous. The energy exchanges happen automatically, behind the scenes. What residents do know is how inexpensive, reliable, and renewable their electricity is.

The 27 smart homes in Basalt Vista, located about 290 kilometers west of Denver, are part of a pilot for an altogether new approach to the power grid. The entire neighborhood is interconnected through a microgrid that in turn connects to the main grid. Within each home, every smart appliance and energy resource—such as a storage battery, water heater, or solar photovoltaic (PV) system—is controlled to maximize energy efficiency.

On a larger scale, houses within the neighborhood can rapidly share power, creating reliable electricity for everyone—solar energy generated at one house can be used to charge the electric car next door. If a wildfire were to knock out power lines in the area, residents would still have electricity generated and stored within the neighborhood. From the spring through the fall, the PV systems can provide enough electricity and recharge the batteries for days at a time. In the dead of winter, with the heat running and snow on the solar panels, the backup power will last for about 2 hours.

In theory, power systems of any size could be covered in a patchwork of Basalt Vistas, layering regions and even an entire country in smart grids to automatically manage energy production and use across millions of controllable distributed energy resources. That concept underlies the autonomous energy grid (AEG), a vision for how the future of energy can be defined by resilience and efficiency.

The concept and core technology for the autonomous energy grid are being developed by our team at the National Renewable Energy Laboratory, in Golden, Colo. Since 2018, NREL and local utility Holy Cross Energy have been putting the concept into practice, starting with the construction of the first four houses in Basalt Vista. Each home has an 8-kilowatt rooftop PV system with lithium iron phosphate storage batteries, as well as energy-efficient, ­all-electric heating, cooling, water heaters, and appliances. All of those assets are monitored and can be controlled by the AEG. So far, average utility bills have been about 85 percent lower than typical electric bills for Colorado.

AEGs will create at least as many benefits for utilities as they do for customers. With AEGs monitoring distributed energy resources like rooftop solar and household storage batteries, a utility’s control room will become more like a highly automated air traffic control center. The result is that energy generated within an AEG is used more efficiently—it’s either consumed immediately or stored. Over time, the operator will have to invest less in building, operating, and maintaining larger generators—including costly “peaker” plants that are used only when demand is unusually high.

But can a network as large and complicated as a national power grid really operate in a decentralized, automated way? Our research says definitely yes. Projects like the one at Basalt Vista are helping us refine our ideas about AEGs and demonstrate them in real-world settings, and thus are playing a crucial role in defining the future of the power grid. Here’s how.

Today, grid operators must overcome two big problems. First, an ever-growing number of distributed energy resources are being connected to the grid. In the United States, for instance, residential solar installations are expected to grow approximately 8 percent per year through 2050, while household battery systems are estimated to hit almost 1.8 gigawatts by 2025, and around 18.7 million EVs could be on U.S. roads by 2030. With such anticipated growth, it’s possible that a decade from now, most U.S. electricity customers could have a handful of controllable distributed energy resources in their homes. By that math, Pacific Gas & Electric Co.’s 4 million customers in the San Francisco Bay Area could have a total of some 20 million grid-tied systems that the utility would need to manage in order to reliably and economically operate its grid. That’s in addition to maintaining the poles, wires, transformers, switches, and centralized power plants in its network.

Because of the soaring number of grid-tied devices, operators will no longer be able to use centralized control in the not-so-distant future. Over a geographically dispersed network, the communication latencies alone make a centralized system impractical. Instead, operators will have to move to a system of distributed optimization and control.

The other problem operators face is that the grid is functioning under increasingly uncertain conditions, including fluctuating wind speeds, cloud cover, and unpredictable supply and demand. Therefore, the grid’s optimal state varies every second and must be robustly determined in real time.

A centrally controlled grid can’t handle this amount of coordination. That’s where AEGs come in. The idea of an autonomous energy grid grew out of NREL’s participation in a program called NODES (Network Optimized Distributed Energy Systems) sponsored by the U.S. Department of Energy’s vanguard energy agency, ARPA-E. Our lab’s contribution to NODES was to create algorithms for a model power grid made up entirely of distributed energy resources. Our algorithms had to factor in the limited computational capabilities of many customer devices (including rooftop solar, electric vehicles, batteries, smart-home appliances, and other loads) and yet still allow those devices to communicate and self-optimize. NODES, which wrapped up last year, was successful, but only as a framework for one “cell”—that is, one community controlled by one AEG.

Our group decided to carry the NODES idea further: to extend the model to an entire grid and its many component cells, allowing the cells to communicate with one another in a hierarchical system. The generation, storage, and loads are controlled using cellular building blocks in a distributed hierarchy that optimizes both local operation and operation of the cell when it’s interconnected to a larger grid.
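
NREL’s actual algorithms aren’t published here, but the cellular hierarchy can be sketched abstractly: each cell balances its own devices first and reports only its net surplus or deficit upward, so no parent ever needs device-level detail. A toy illustration (the class, names, and numbers below are hypothetical, not NREL code):

```python
from dataclasses import dataclass, field

@dataclass
class Cell:
    """One AEG cell: local generation/load plus child cells."""
    name: str
    generation_kw: float = 0.0
    load_kw: float = 0.0
    children: list["Cell"] = field(default_factory=list)

    def net_power(self) -> float:
        # Balance locally first, then fold in each child's *net* position.
        # The parent never sees a child's individual devices.
        net = self.generation_kw - self.load_kw
        for child in self.children:
            net += child.net_power()
        return net

home_a = Cell("home A", generation_kw=8.0, load_kw=3.0)  # PV surplus
home_b = Cell("home B", generation_kw=0.0, load_kw=4.0)  # charging an EV
neighborhood = Cell("neighborhood cell", children=[home_a, home_b])
print(f"Net export to the main grid: {neighborhood.net_power():+.1f} kW")  # +1.0 kW
```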

In our model, each AEG consists of a network of energy generation, storage, and end-use technologies. In that sense, AEGs are very similar to microgrids, which are increasingly being deployed in the United States and elsewhere in the world. But an AEG is computationally more advanced, which allows its assets to cooperate in real time to match supply to demand on second-­by-second ­timescales. Similar to an autonomous vehicle, in which the vehicle makes local decisions about how to move around, an AEG acts as a self-driving power system, one that decides how and when to move energy. The result is that an AEG runs at high efficiency and can quickly bounce back from outages, or even avoid an outage altogether. A power grid that consists entirely of AEGs could deftly address challenges at every level, from individual customers up to the transmission system.

To develop the idea, we had to start somewhere. Basalt Vista presented an excellent opportunity to bring the AEG concept out of the lab and onto the grid. The neighborhood is designed to be net-zero energy, and it’s relatively close to NREL’s Energy Systems Integration Facility, where our group is based.

What’s more, Holy Cross Energy had been searching for a solution to manage the customer-owned energy resources and bulk generation in its system. In recent years, grid-connected, customer-owned resources have become much more affordable; Holy Cross’s grid has been seeing 10 to 15 new rooftop solar installations per week. By 2030, the utility plans to install a 150-megawatt solar-powered summer peaking system. Meanwhile, though, the utility had to deal with nonstandardized devices causing instabilities on its grid, occasional outages from severe weather and wildfires, variable generation from solar and wind energy, and an uncertain market for rooftop solar and other energy generated by its customers.

In short, what Holy Cross was facing looked very much like what other grid operators are confronting throughout the country and much of the world.

To develop the AEG concept, our group is working at the union of two fields: optimization theory and control theory. Optimization theory finds the best possible operating point for a system, but classical methods can ignore messy real-world conditions. Control algorithms work to keep a system stable under precisely those less-than-ideal conditions. Together these two fields form the theoretical scaffolding for an AEG.

Of course, this theoretical scaffolding has to conform to the messy constraints of the real world. For example, the controllers that run the AEG algorithms aren’t supercomputers; they’re common computer platforms or embedded controllers at the grid edge, and they have to complete their calculations in well under 1 second. That translates to simpler code, and in this case, simpler is better. Meanwhile, though, the calculations must factor in latency in communications; in a distributed network, there will still be time delays as signals travel from one node to the next. Our algorithms must also be able to operate with sparse or missing data, and contend with variations created by equipment from different vendors.

Even if we produce beautiful algorithms, their success still depends on the physical topology of the power lines and the accuracy of the models of the devices. For a large commercial building, where you want to choose what to turn on and off, you need an accurate model of that building at the right timescales. If such a model doesn’t exist, you have to build one. Doing that becomes an order of magnitude more difficult when the optimizations include many buildings and many models.

We’ve discovered that defining an accurate abstract model is often harder than optimizing the behavior of the real thing. So we’re “cutting out the middleman”: instead of modeling first, we use data and measurements to learn the optimal behavior directly. Using advanced data analytics and machine-learning techniques, we have dramatically sped up the time it takes to find optimal solutions.

To date, we’ve managed to overcome these hurdles at the small scale. NREL’s Energy Systems Integration Facility is an advanced test bed for vetting new models of energy integration and power-grid modernization. We’ve been able to test how practical our algorithms are before deploying them in the field; they may look good on paper, but if you’re trying to decide the fate of, say, a million devices in 1 second, you’d better be sure they really work. In our initial experiments with real power equipment—over 100 distributed resources at a time, totaling about half a megawatt—we were able to validate the AEG concepts by operating the systems across a range of scenarios.

Moving outside the laboratory, we first conducted a small demonstration in 2018 with the microgrid at the Stone Edge Farm Estate Vineyards and Winery in Sonoma, Calif., in partnership with the controller manufacturer Heila Technologies, in Somerville, Mass. The 785-kilowatt microgrid powers the 6.5-hectare farm through a combination of solar panels, fuel cells, and a microturbine that runs on natural gas and hydrogen, as well as storage in the form of batteries and hydrogen. An on-site electrolyzer feeds a hydrogen filling station for the farm’s three fuel-cell electric cars.

The microgrid is connected to the main grid but can also operate independently in “island” mode when needed. During wildfires in October 2017, for example, the main grid in and around Sonoma went down, and the farm was evacuated for 10 days, but the microgrid continued to run smoothly throughout. Our AEG demonstration at Stone Edge Farm connected 20 of the microgrid’s power assets, and we showed how those assets could function collectively as a virtual power plant in a resilient and efficient way. This experiment served as another proof of concept for the AEG.

Basalt Vista is taking the AEG concept even further. A net-zero-energy affordable housing district developed by Habitat for Humanity for schoolteachers and other local workers, it already had a lot going for it. The final results of this real-world experiment aren’t yet available, but seeing the first residents happily embrace this new frontier in energy has brought us another level of excitement about the future of AEGs.

We engineered our early demonstrations so that other utilities could safely and easily run trials of the AEG approach using standard interoperability protocols. Now our group is considering the additional challenges that AEGs will face when we scale up and when we transition from Holy Cross Energy’s rural deployment to the grid of a dense city. We’re now studying what this idea will look like throughout an energy system—within a wind farm, inside an office building, on a factory complex—and what effects it will have on power transmission and distribution. We’re also exploring the market mechanisms that would favor AEGs. It’s clear that broad collaboration across disciplines will be needed to push the concept forward.

Our group at NREL isn’t the only one looking at AEGs. Researchers at a number of leading universities have joined NREL in an effort to build the foundational science behind AEGs. Emiliano Dall’Anese of the University of Colorado, Boulder; Florian Dörfler of ETH Zurich; Ian A. Hiskens of the University of Michigan; Steven H. Low at Caltech’s Netlab; and Sean Meyn of the University of Florida are early contributors to the AEG vision and have participated in a series of workshops on the topic. These collaborations are already producing dozens of technical papers each year that continue to build out the foundations for AEGs.

Within NREL, the circle of AEG contributors is also expanding, and we’re looking at how the concept can apply to other forms of generation. One example is wind energy, where an AEG-enabled future means that control techniques similar to the ones deployed at Stone Edge Farm and Basalt Vista will autonomously manage large wind farms. By taking a large problem and breaking it into smaller cells, the AEG algorithms drastically reduce the time needed for all the turbines to come to a consensus on the wind’s direction and respond by turning to face into the wind, which can boost the total energy production. Over the course of a year, that could mean millions of dollars of added revenue for the operator.
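
The article doesn’t specify the consensus routine, but a textbook distributed-averaging scheme conveys the idea: each turbine repeatedly blends its own noisy wind-direction reading with its neighbors’, and every estimate converges to the fleet-wide average with no central collector. A toy sketch (the ring topology, gain, and noise model are assumptions):

```python
import math, random

def consensus_wind_direction(measurements_deg, neighbors, rounds=50, gain=0.3):
    """Each turbine nudges its estimate toward its neighbors' estimates.
    Angles are averaged as unit vectors to handle wraparound at 360 deg."""
    x = [(math.cos(math.radians(d)), math.sin(math.radians(d))) for d in measurements_deg]
    for _ in range(rounds):
        nxt = []
        for i, (ci, si) in enumerate(x):
            dc = sum(x[j][0] - ci for j in neighbors[i])
            ds = sum(x[j][1] - si for j in neighbors[i])
            nxt.append((ci + gain * dc / len(neighbors[i]),
                        si + gain * ds / len(neighbors[i])))
        x = nxt
    return [math.degrees(math.atan2(s, c)) % 360 for c, s in x]

random.seed(0)
readings = [270.0 + random.gauss(0, 8) for _ in range(6)]  # noisy local sensors
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}   # ring topology
print([round(d, 1) for d in consensus_wind_direction(readings, ring)])
```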

In our research, we’re also considering how to optimally integrate the variable supply of wind energy into a bigger cell that includes other energy domains. For example, if a building’s energy management system has access to wind forecasts, it could shift its load in real time to match the available wind power. During an afternoon lull in wind speed, the building’s air-conditioning could be automatically adjusted upward a few degrees to reduce demand, with additional power drawn from battery storage.

We’re also looking at communications infrastructure. To achieve the fast response required by an AEG cell, communications can’t be clogged by simultaneous connections to millions of devices. In a new NREL partnership with the wireless company Anterix, of Woodland Park, N.J., we’re demonstrating how a dedicated LTE network for device communications would operate.

Reliable operation, of course, assumes that communication channels are protected from cyberthreats and physical threats. The possibility of such attacks is guiding the conversation in power systems toward resilience and reliability. We believe that AEGs should minimize the impact of both deliberate attacks and natural disasters and make the grid more resilient. That’s because the status of every grid-connected asset in every AEG cell will be checked on a second-by-second basis. Any sudden and unexpected change in status would trigger an appropriate response. In most cases, no drastic action would be required because the change is within the normal variability of operations. But if a major fault is the cause, the cell could automatically isolate itself, partially or entirely, from the rest of the network until the problem is resolved. Exploring the effects of AEGs on grid resilience is an ongoing priority at NREL.

For now, AEGs will show up first in neighborhoods like Basalt Vista and in other small-scale settings, such as hospitals and college campuses. Eventually, though, larger deployments should take place. In Hawaii, for instance, 350,000 customers have installed rooftop solar. With the state’s mandate for 100 percent renewable power by 2045, the amount of distributed solar could triple. The utility, Hawaiian Electric Company, anticipates having to connect about 750,000 solar inverters, as well as battery systems, electric vehicles, and other distributed energy resources. Accordingly, HECO is looking to push autonomous control down to the local level as much as possible, to minimize the need for communication between the control center and each device. A completely autonomous grid will take some time to implement. In particular, we’ll need to conduct extensive testing and demonstrations to show its feasibility with HECO’s current communications and control infrastructures. But eventually the AEG concept will allow the utility to prioritize controls and focus on critical operations rather than trying to manage individual devices.

We think it will be another decade before AEG rollouts become commonplace, but an AEG market may arrive sooner. This past year we’ve made progress in commercializing the AEG algorithms, and with support from DOE’s Solar Energy Technologies Office, NREL is now collaborating with Siemens on distributed control techniques. Likewise, NREL and the power management company Eaton Corp. have partnered to use the AEG work for autonomous, electrified transportation.

NREL has meanwhile explored how to sustain a distributed energy market using blockchain-based transactions—an option for so-called transactive energy markets. That project, in partnership with BlockCypher, successfully showed that a neighborhood like Basalt Vista could seamlessly monetize its energy sharing.

As we progress to a future of 100 percent clean energy, with a high concentration of inverter-based energy technologies, we will need a solution like AEGs to continue to operate the grid in a reliable, economic, and resilient way. Rather than looking to central power plants to meet their electricity needs, individual customers will increasingly be able to rely on one another. In a grid built on AEGs, being neighborly will be automatic.

This article appears in the December 2020 print issue as “Good Grids Make Good Neighbors.”

About the Author

Benjamin Kroposki is an IEEE Fellow and director of the Power Systems Engineering Center at the National Renewable Energy Laboratory, in Golden, Colo. Andrey Bernstein is NREL’s group manager of Energy Systems Control and Optimization, Jennifer King is a research engineer at NREL’s National Wind Technology Center, and Fei Ding is a senior research engineer at NREL.

Iron Powder Passes First Industrial Test as Renewable, Carbon Dioxide-Free Fuel

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/energywise/energy/renewables/iron-powder-passes-first-industrial-test-as-renewable-co2free-fuel

Simple question: What if we could curb this whole fossil-fuel-fed climate change nightmare and burn something else as an energy source instead? As a bonus, what if that something else is one of the most common elements on Earth?

Simple answer: Let’s burn iron.

While setting fire to an iron ingot is probably more trouble than it’s worth, fine iron powder mixed with air is highly combustible. When you burn this mixture, you’re oxidizing the iron. Whereas a carbon fuel oxidizes into CO2, an iron fuel oxidizes into Fe2O3, which is just rust. The nice thing about rust is that it’s a solid that can be captured post-combustion. And that’s the only byproduct of the entire business—in goes the iron powder, and out comes energy in the form of heat and rust powder. Iron has an energy density of about 11.3 kWh/L, which is better than gasoline’s. Its specific energy, however, is a relatively poor 1.4 kWh/kg: for a given amount of energy, iron powder takes up a little less space than gasoline but weighs almost ten times as much.
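
To make that volume-versus-weight tradeoff concrete, here is the arithmetic for storing 100 kWh. The iron figures are from the text; the gasoline figures (roughly 9.5 kWh/L and 12.9 kWh/kg) are typical published values, not from the article:

```python
# Volume and mass needed to store a fixed amount of energy.
# Iron figures are from the article; gasoline figures are standard
# reference values supplied for comparison.
energy_kwh = 100.0

fuels = {               # (kWh per liter, kWh per kg)
    "iron powder": (11.3, 1.4),
    "gasoline":    (9.5, 12.9),
}
for name, (per_l, per_kg) in fuels.items():
    print(f"{name:12s}: {energy_kwh/per_l:5.1f} L, {energy_kwh/per_kg:6.1f} kg")
# iron: ~8.8 L, ~71 kg; gasoline: ~10.5 L, ~7.8 kg -> roughly 9x heavier
```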

It might not be suitable for powering your car, in other words. It probably won’t heat your house either. But it could be ideal for industry, which is where it’s being tested right now.

Researchers from TU Eindhoven have been developing iron powder as a practical fuel for the past several years, and last month they installed an iron powder heating system at a brewery in the Netherlands, which is turning all that stored up energy into beer. Since electricity can’t efficiently produce the kind of heat required for many industrial applications (brewing included), iron powder is a viable zero-carbon option, with only rust left over.

So what happens to all that rust? This is where things get clever, because the iron isn’t just a fuel that’s consumed—it’s energy storage that can be recharged. And to recharge it, you take all that Fe2O3, strip out the oxygen, and turn it back into Fe, ready to be burned again. It’s not easy to do this, but much of the energy and work that it takes to pry those Os away from the Fes gets returned to you when you burn the Fe the next time. The idea is that you can use the same iron over and over again, discharging it and recharging it just like you would a battery.

To maintain the zero-carbon nature of the iron fuel, the recharging process has to be zero-carbon as well. There are a variety of different ways of using electricity to turn rust back into iron, and the TU/e researchers are exploring three different technologies based on hot hydrogen reduction (which turns iron oxide and hydrogen into iron and water), as they described to us in an email:

Mesh Belt Furnace: In the mesh belt furnace the iron oxide is transported by a conveyor belt through a furnace in which hydrogen is added at 800–1000°C. The iron oxide is reduced to iron, which sticks together because of the heat, resulting in a layer of iron. This can then be ground up to obtain iron powder.
Fluidized Bed Reactor: This is a conventional reactor type, but its use in hydrogen reduction of iron oxide is new. In the fluidized bed reactor the reaction is carried out at lower temperatures, around 600°C, avoiding sticking but taking longer.
Entrained Flow Reactor: The entrained flow reactor is an attempt to implement flash ironmaking technology. This method performs the reaction at high temperatures, 1100–1400°C, by blowing the iron oxide through a reaction chamber together with the hydrogen flow to avoid sticking. This might be a good solution, but it is a new technology and has yet to be proven.

Both production of the hydrogen and the heat necessary to run the furnace or the reactors require energy, of course, but it’s grid energy that can come from renewable sources. 
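
All three reactor options above run the same overall reaction, Fe2O3 + 3H2 → 2Fe + 3H2O, so the minimum hydrogen demand is fixed by stoichiometry whichever design wins out. A back-of-envelope sketch (standard molar masses; real processes would use some excess hydrogen):

```python
# Stoichiometric hydrogen needed to regenerate iron fuel:
#   Fe2O3 + 3 H2 -> 2 Fe + 3 H2O
M_FE, M_H2 = 55.845, 2.016  # g/mol, standard molar masses

def h2_per_kg_iron() -> float:
    mol_fe = 1000 / M_FE         # moles of Fe in 1 kg
    mol_h2 = 1.5 * mol_fe        # 3 H2 per 2 Fe
    return mol_h2 * M_H2 / 1000  # kg of H2

print(f"{h2_per_kg_iron():.3f} kg H2 per kg of regenerated iron")  # ~0.054
```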

If renewing the iron fuel requires hydrogen, an obvious question is why not just use hydrogen as a zero-carbon fuel in the first place? The problem with hydrogen is that as an energy storage medium, it’s super annoying to deal with, since storing useful amounts of it generally involves high pressure and extreme cold. In a localized industrial setting (like you’d have in your rust reduction plant) this isn’t as big a deal, but once you start trying to distribute it, it becomes a real headache. Iron powder, on the other hand, is safe to handle, stores indefinitely, and can be easily moved with existing bulk carriers like rail.

Which is why its future looks to be in applications where weight is not a primary concern and collection of the rust is feasible. In addition to industrial heat generation (which will eventually include retrofitting coal-fired power plants to burn iron powder instead), the TU/e researchers are exploring whether iron powder could be used as fuel for large cargo ships, which are extraordinarily dirty carbon emitters that are also designed to carry a lot of weight. 

Philip de Goey, a professor of combustion technology at TU/e, told us that he hopes to be able to deploy 10-MW iron-powder high-temperature heat systems for industry within the next four years, with 10 years to the first coal power plant conversion. There are still challenges, de Goey tells us: “the technology needs refinement and development, the market for metal powders needs to be scaled up, and metal powders have to be part of the future energy system and regarded as [a] safe and clean alternative.” De Goey’s view is that iron powder has a significant but well-constrained role in energy storage, transport, and production that complements other zero-carbon sources like hydrogen. For a zero-carbon energy future, de Goey says, “there is no winner or loser—we need them all.”

Going Carbon-Negative—Starting with Vodka

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/energy/environment/going-carbonnegativestarting-with-vodka

Steven Cherry Hi this is Steven Cherry for Radio Spectrum.

In 2014, two Google engineers, writing in the pages of IEEE Spectrum, noted that “if all power plants and industrial facilities switch over to zero-carbon energy sources right now, we’ll still be left with a ruinous amount of CO2 in the atmosphere. It would take centuries for atmospheric levels to return to normal, which means centuries of warming and instability.” Citing the work of climatologist James Hansen, they continued: “To bring levels down below the safety threshold, Hansen’s models show that we must not only cease emitting CO2 as soon as possible but also actively remove the gas from the air and store the carbon in a stable form.”

One alternative is to grab carbon dioxide as it’s produced, and stuff it underground or elsewhere. People have been talking about CCS—which stands for carbon capture and storage or, alternatively, carbon capture and sequestration—for well over a decade. But you can look around, for example at Exxon-Mobil’s website, and see how much progress hasn’t been made.

In fact, in 2015, a bunch of mostly Canadian energy producers decided on a different route. They went to the XPRIZE people and funded what came to be called the Carbon XPRIZE to, as a Spectrum article at the time said, turn “CO2 molecules into products with higher added value.”

In 2018, the XPRIZE announced 10 finalists, who divvied up a $5 million incremental prize. The prize timeline called for five teams each to begin an operational phase in two locations, one in Wyoming and the other in Alberta, culminating in a $20 million grand prize. And then the coronavirus hit, rebooting the prize timeline.

One of the more unlikely finalists emerged from the hipsterish Bushwick neighborhood of Brooklyn, N.Y. Their solution to climate change: vodka. Yes, vodka. The finalist, which calls itself the Air Company, takes carbon dioxide that has been liquefied and distills it into ethanol, and then fine-tunes it into vodka. The resulting product is, the company claims, not only carbon-neutral but carbon-negative.

The scientific half of the Air Company’s founding duo is Stafford Sheehan—Staff, as he’s known. He had two startups under his belt by the time he graduated from Boston College. He started his next venture while in graduate school at Yale. He’s a prolific researcher, but he’s determined to find commercially viable ways to reduce the carbon in the air, and he’s my guest today, via Skype.

Staff, welcome to the podcast.

Stafford Sheehan Thanks very much for having me. Steven.

Steven Cherry Staff, I’m sure people have been teasing you that maybe vodka doesn’t solve the problem of climate change entirely, but it can make us forget it for a while. But in serious engineering terms, the Air Company process seems a remarkable advance. Talk us through it. It starts with liquefied carbon dioxide.

Stafford Sheehan Yeah, happy to. So, we use liquefied carbon dioxide because we source it offsite in in Bushwick. But really, we can just feed any sort of carbon dioxide into our system. We combine the carbon dioxide with water by first splitting the water into hydrogen and oxygen. Water is H2O, so we use what’s called an electrolyzer to split water into hydrogen gas and oxygen gas and then combine the hydrogen together with carbon dioxide in a reactor over proprietary catalysts that I and my coworkers developed over the course of the last several years. And that produces a mixture of ethanol and water that we then distill to make a very, very clean and very, very pure vodka.
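
The catalyst itself is proprietary, but the overall mass balance of hydrogenating CO2 to ethanol is set by the textbook net reaction 2CO2 + 6H2 → C2H5OH + 3H2O. A rough sketch of what that implies per kilogram of ethanol, using standard molar masses (this is the generic net equation, not a disclosed company process):

```python
# Overall CO2-to-ethanol hydrogenation: 2 CO2 + 6 H2 -> C2H5OH + 3 H2O
M_CO2, M_H2, M_ETOH = 44.01, 2.016, 46.07  # g/mol, standard molar masses

mol_etoh = 1000 / M_ETOH              # moles of ethanol in 1 kg
co2_kg = 2 * mol_etoh * M_CO2 / 1000  # CO2 consumed
h2_kg = 6 * mol_etoh * M_H2 / 1000    # H2 required
print(f"Per kg ethanol: {co2_kg:.2f} kg CO2 in, {h2_kg:.2f} kg H2 in")
# ~1.91 kg CO2 and ~0.26 kg H2 per kg of ethanol
```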

Steven Cherry Your claim that the product is carbon-negative is based on a life-cycle analysis. The calculation starts with an initial minus of the amount of carbon you take out of the atmosphere. And then we start adding back the carbon and carbon equivalents needed to get it into a bottle and onto the shelf of a hipster bar. That first step where your supplier takes carbon out of the atmosphere, puts it into liquefied form and then delivers it to your distillery. That puts about 10 percent of that carbon back into the atmosphere.

Stafford Sheehan Yeah, 10 to 20 percent. When a tonne of carbon dioxide arrives in liquid form at our Bushwick facility, we assume that getting it there emitted 200 kilograms of CO2—and not only for the capture of the carbon dioxide; most of the carbon dioxide that we get actually comes from fuel ethanol fermentation. So we take the carbon dioxide emissions of the existing ethanol industry and we’re turning that into a higher-purity ethanol. But it’s captured from those facilities and then it’s liquefied and transported to our Bushwick facility. And if you integrate the life-cycle carbon emissions of all of the equipment, all the steel, all of the transportation, every part of that process, then you get a maximum life-cycle CO2 emission for the carbon dioxide of 200 kilograms per tonne. So we still have eight hundred kilograms to play with at our facility.

Steven Cherry So another 10 percent gets eaten up by that electrolysis process.

Stafford Sheehan Yeah. The electrolysis process is highly dependent on what sort of electricity you use to power it. We use a company called Clean Choice, and we work very closely with a number of solar and wind deployers in New York State to make sure that all the electricity that’s used at our facility is solar or wind. And if you use wind energy, that’s the most carbon-friendly energy source that we have available there. Right now, the mix that we have, which is certified through Con Edison, is actually very heavily wind and a little bit of solar. But that was the lowest lifecycle-intensity electricity that we could get. So it’s actually a little bit less than 10 percent of that goes to electrolysis. The electrolysis is actually quite green as long as you power it with a very low-carbon source of electricity.

Steven Cherry And the distilling process, even though it’s solar-based, takes maybe another 13 percent or so?

Stafford Sheehan It’s in that ballpark. The distilling process is powered by an electric steam boiler. So we use the same electricity that we use to split water, to heat our water for the distillation system. So we have a fully electric distillery process. You could say that we’ve electrified vodka distilling.

Steven Cherry There’s presumably a bit more by way of carbon equivalents when it comes to the bottles the vodka comes in, shipping it to customers, and so on, but that’s true of any vodka that ends up on that shelf of any bar, and those also have a carbon-emitting farming process—whether it’s potatoes or sugar beets or wheat or whatever—that your process sidesteps.

Stafford Sheehan Yes. And I think one thing that’s really important is this electrification aspect of all of our distillery processes. For example, if you’re boiling water using a natural gas boiler, your carbon emissions are going to be much, much higher as compared to boiling water using an electric steam boiler that’s powered with wind energy.
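
Pulling the percentages quoted in this conversation into a single ledger gives a feel for the claimed carbon math. The figures below are the approximate numbers from the interview, so the bottom line is illustrative rather than company data:

```python
# Rough carbon ledger per tonne of CO2 delivered, using the approximate
# percentages quoted in the interview (illustrative, not company data).
captured = 1000.0  # kg of CO2 removed from the cycle per tonne delivered

emissions = {
    "capture, liquefaction, transport": 200.0,  # "10 to 20 percent" (maximum)
    "electrolysis (wind/solar power)":  100.0,  # "a little bit less than 10 percent"
    "electric distillation":            130.0,  # "maybe another 13 percent or so"
}
net = captured - sum(emissions.values())
print(f"Net CO2 removed, before bottling and shipping: {net:.0f} kg")  # ~570 kg
```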

Steven Cherry It seems like if you just poured the vodka down the drain or into the East River, you would be benefiting the environment. I mean, would it be possible to do that on an industrial scale as a form of carbon capture and storage that really works?

Stafford Sheehan Yeah. I don’t think you’d want to pour good alcohol down the drain in any capacity just because the alcohol that we make can offset the use of fossil fuel alcohol.

So by putting the alcohol that we make—this carbon-negative alcohol—into the market, that means you have to make less fossil alcohol. And I’m including corn ethanol in that because so many fossil fuels go into its production. But that makes it so that our indirect CO2 utilization is very, very high, because we’re offsetting a very carbon-intensive product.

Steven Cherry That’s interesting. I was thinking that maybe you could earn carbon credits and sell them for more than you might make with having a, you know, another pricey competitor to Grey Goose and Ketel One.

Stafford Sheehan The carbon credit system is still very young, especially in the US.

We also … our technology still has a ways to scale between our Bushwick facility—which is, I would say, a micro distillery—and a real bona fide industrial process, which we’re working on right now.

Steven Cherry Speaking of which, though, it is rather pricey stuff at this point, isn’t it? Did I read $65 or $70 a bottle?

Stafford Sheehan Yeah, it’s pricey not only because we pay a premium for our electricity—for renewable electricity—but we also pay a premium for carbon dioxide that, you know, carries only 10 to 20 percent of its actual weight in life-cycle carbon emissions. So we pay a lot more for the inputs than is typical—sustainability costs money. Also, we’re building these systems as R&D systems, so they’re more costly to operate at an R&D scale, at kind of our pilot-plant scale. As we scale up, the cost will go down. But at the scale we’re at right now, we need to be able to sell a premium product to have a viable business. Now, on top of that, the product has also won a lot of awards that put it in that price category. It’s won three gold medals in the three most prestigious blind taste-test competitions. And it’s won a lot of other spirits and design industry awards that enable us to get that sort of price for it.

Steven Cherry I’m eager to do my own blind taste testing. Vodka is typically 80 proof, meaning it’s 60 percent water. You and your co-founder went on an epic search for just the right water.

Stafford Sheehan That we did. We tested over … probably over one hundred and thirty different types of water. We tried to find which one was best to make vodka with using the very, very highly pure ethanol that comes out of our process. And it’s a very nuanced thing. Water, by changing things like the mineral content, the pH, by changing the very, very small trace impurities in the water—that in many cases are good for you—can really change the way the water feels in your mouth and the way that it tastes. And adding alcohol to water just really amplifies that. It lowers the boiling point and it makes it more volatile so that it feels different in your mouth. And so different types of water have a different mouth feel; they have a different taste. We did a lot of research on water to be able to find the right one to mix with our vodka.

Steven Cherry Did you end up where you started with New York water?

Stafford Sheehan Yes. In a sense, we are; we're very, very close to where we started.

Steven Cherry I guess we have to add your vodka to the list that New Yorkers would claim includes New York's bagels and New York's pizza as uniquely good, because of their water.

Stafford Sheehan Bagels, pizza, vodka … hand sanitizer …

Steven Cherry It’s a well-balanced diet. So where do things stand with the XPRIZE? I gather you finally made it to Canada for this operational round, but take us through the journey getting there.

Stafford Sheehan So I initially entered the XPRIZE when it was soliciting its very first submissions—I believe it was 2016—and we went through the different stages; at the end of 2017 we had very rigorous due diligence on our prototype scale. And we passed through that and got good marks and continuously progressed through to the finals, where we are now. Now, of course, coronavirus kind of threw both our team and many other teams for a loop, delaying deployment, especially for us: We're the only American team deploying in Canada. The other four teams that are deploying at the ACCTC [Alberta Carbon Conversion Technology Centre] are all Canadian teams. So being the only international team in a time of a global pandemic that, you know, essentially halted all international travel—and a lot of international commerce—put some substantial barriers in our way. But over the course of the last seven months or so, we've been able to get back on our feet. And I'm currently sitting in quarantine in Mississauga, Ontario, getting ready for a factory-acceptance test that's scheduled to happen right as quarantine ends. So at the end of this month we're going to be landing our skid in Alberta for the finals, and then in November going through diligence and everything else to prove out its operation, and then operating it through the rest of the year.

Steven Cherry I understand that you weren’t one of the original 10 finalists named in 2018.

Stafford Sheehan No, we were not. We were the runner-up. There was a runner-up for each track—the Wyoming track and the Alberta track. And ultimately, there were teams that dropped out or merged for reasons within their own businesses. We were given the opportunity to rejoin the competition. We decided to take it because it was a good proving ground for our next step of scale, and it provided a lot of infrastructure that allowed us to do that at a reasonable cost—at a reasonable cost for us and at a reasonable cost in terms of our time.

Steven Cherry Staff, you were previously a co-founder of a startup called Catalytic Innovations. In fact, you made Forbes magazine's 2016 30-under-30 list because of it. What was it? And is it? And how did it lead to Air Company and vodka?

Stafford Sheehan For sure. That was a company that I spun out of Yale University, along with a professor at Yale, Paul Anastas. We initially targeted making new catalysts for the fuel cell and electrolysis industries, focusing on the water oxidation reaction. So to turn carbon dioxide—or to produce fuel in general using renewable electricity—there are three major things that need to happen. You need to have a very efficient renewable energy source. Trees, for example, use the sun. That's photosynthesis. You have to be able to oxidize water into oxygen gas. And that's why trees breathe out oxygen. And you have to be able to use the protons and electrons that come out of water oxidation to either reduce carbon dioxide or, through some other method, produce a fuel. So I studied all three of those when I was in graduate school, and upon graduating, I spun out Catalytic Innovations, which focused on the water oxidation reaction and on commercializing materials that more efficiently produce oxygen for all of the man-made processes, such as metal refining, that do that chemistry. And that company found its niche in corrosion—anti-corrosion and corrosion protection—because one of the big challenges whenever you're producing oxygen, be it for renewable fuels or to produce zinc or to do a handful of different electrorefining and electrowinning processes in the metal industry, is that you always have a very serious corrosion problem. We did a lot of work in that industry at Catalytic Innovations, and they still continue to do work there to this day.

Steven Cherry You and your current co-founder, Greg Constantine, are a classic match—a technologist, in this case an electrochemist and a marketer. If this were a movie, you would have met in a bar drinking vodka. And I understand you actually did meet at a bar. Were you drinking vodka?

Stafford Sheehan No, we were actually drinking whiskey. So I didn't … actually, I wasn't a big fan of vodka pre-Air Company, but it was the product that really gave us the best value proposition, where really, really clean, highly pure ethanol is most important. So I've always been more of a whiskey man myself, and Greg and I met over whiskey in Israel when we were on a trip that was for Forbes. You know, they sent us out there because we were both part of their 30-Under-30 list, and we became really good friends out there. And then, several months later, fast forward: we started Air Company.

Steven Cherry Air Company’s charter makes it look like you would like to go far beyond vodka when it comes to finding useful things to do with CO2. In the very near term, you turned to using your alcohol in a way that contributes to our safety.

Stafford Sheehan Yeah. So we had always planned Air Company, not an air vodka company. We had always planned to go into several different verticals with the ultra-high-purity ethanol that we create. And spirits is one of the places where you can realize the value proposition of a very clean and highly pure alcohol very readily—spirits; fragrance is another one. But down the list a little bit is sanitizer, specifically hand sanitizer. And when coronavirus hit, we actually pivoted all of our technology, because there was a really, really major shortage of sanitizer in New York City. A lot of my friends from graduate school who had gone more on the medical track were telling me that the hospitals they worked in, in New York, didn't have any hand sanitizer. And when the hospitals—for the nurses and doctors—run out of hand sanitizer, that means you really have a shortage. And so we pivoted all of our technology to produce sanitizer in March. And for three months after that, we gave it away. We donated it to these hospitals, to the fire department, to the NYPD, and to other organizations in the city that needed it most.

Yeah, the hand sanitizer, I like to think, is also a very premium product. You can't realize the benefits of the very, very clean and pure ethanol that we use for it as readily as you can with the beverages, since you're not tasting it. But we did have to go through all of the facility registrations and that sort of thing to make the sanitizer, because it is classified as a drug. Our pilot plant in Bushwick was a converted warehouse, and I used to tell people in March that I always knew my future was going to be sitting in a dark warehouse in Bushwick making drugs. But, you know, I never thought that it was actually going to become a reality.

Steven Cherry That was in the short term. By now, you can get sanitizer in every supermarket and Home Depot. What are the longer-term prospects for going beyond vodka?

Stafford Sheehan Longer term, we're looking at commodity chemicals, even going on to fuel. So longer term, we're looking at the other verticals where we can take advantage of the high-purity value proposition of our ethanol—like pharmaceuticals, as a chemical feedstock, things like that. But then as we scale, we want to be able to make renewable fuel as well from this, and renewable chemicals. Ultimately, we want to get to world scale with this technology, but we need to take the appropriate steps to get there. And what we're doing now are the stepping-stones to scaling it.

Steven Cherry It seems like if you could locate the distilling operation right at the ethanol plant, you would just be making more ethanol for them with their waste product, avoiding a lot of shipping and so forth. You would just become a value-add to their industry.

Stafford Sheehan That is something that we hope to do in the long term. You know, our current skids are fairly small scale, so we couldn't take a massive amount of CO2 with them. But as we scale, we do hope to get there gradually—when we get to larger scales, we're talking about several barrels per day rather than liters per hour, which is the scale we're at now.

There's a lot of stuff you can turn CO2 into. One of the prime examples is calcium carbonate—CaCO3. You can very easily convert carbon dioxide into things like that for building materials: poured concrete, different kinds of bricks, and things like that. There are a lot of different ways to mineralize CO2 as well. You can inject it into the ground, for example; that will also turn it into carbon-based minerals. Beyond that, as far as more complex chemical conversion goes, the list is almost endless. You can make plastics. You can make pharmaceutical materials. You can make all sorts of crazy stuff from CO2. Almost any of the base chemicals that have carbon in them can come from CO2. And in a way, they do come from CO2, because all the petrochemicals that we mine from the ground come from photosynthesis that happened over the course of the last two billion years.

Have you ever seen the movie Forrest Gump? There's a part in that where Bubba, Gump's buddy in the Vietnam War, talks about all the things you can do with shrimp. And it kind of goes on and on and on. But I could say the same about CO2. You can make plastic. You can make clothes. You can make sneakers. You can make alcohol. You can make any sort of carbon-based chemical—ethylene, carbon monoxide, formic acid, methanol, ethanol … the list goes on. Just about any carbon-based chemical you can think of, you can make from CO2.

Steven Cherry Would it be possible to pull carbon dioxide out of a plastic itself and thereby solve two problems at once?

Stafford Sheehan Yeah, you could take plastic and capture the CO2 that's emitted when you either incinerate it or gasify it. That is a strategy that's used in certain places: gasification of waste, municipal waste. It doesn't give you CO2, but it actually gives you something that you can do chemistry with a little more easily. It gives you a syngas—a mixture of carbon monoxide and hydrogen. So, there are a lot of different strategies that you can use to convert CO2 into things better for the planet than global warming.

Steven Cherry If hydrogen is a byproduct of that, you have a ready use for it.

Stafford Sheehan Yeah, exactly, that is one of the many places where we could source feedstock materials for our process. Our process is versatile and that’s one of the big advantages to it.

If we get hydrogen as a byproduct of chlor-alkali production, for example, we can use that instead of having to source it from an electrolyzer. If our CO2 comes from direct air capture, we can use that. And that means we can place our plants pretty much wherever there's literally air, water, and sunlight. As far as the products that come out: liquid products made from CO2 have a big advantage in that they can be transported, and they're not as volatile, obviously, as the gases.

Steven Cherry Well, Staff, it's a remarkable story, one that certainly earns you that XPRIZE finalist berth. We wish you great luck with it, but it seems like your good fortune is self-made and assured, in any event, to the benefit of the planet. Thank you for joining us today.

Stafford Sheehan Thanks very much for having me, Steven.

Steven Cherry We’ve been speaking with Staff Sheehan, co-founder of the Air Company, a Brooklyn startup working to actively undo the toxic effects of global warming.

This interview was recorded October 2, 2020. Our thanks to Miles of Gotham Podcast Studio for our audio engineering; our music is by Chad Crouch.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers.

For Radio Spectrum, I’m Steven Cherry.


Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.

We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.


The Lithium-Ion Battery With Built-In Fire Suppression

Post Syndicated from Dexter Johnson original https://spectrum.ieee.org/tech-talk/energy/batteries-storage/liion-batteries-more-efficient-fireproof

If there are superstars in battery research, you would be safe in identifying at least one of them as Yi Cui, a scientist at Stanford University, whose research group over the years has introduced some key breakthroughs in battery technology.

Now Cui and his research team, in collaboration with SLAC National Accelerator Laboratory, have demonstrated some exciting new capabilities for lithium-ion batteries, based around a new polymer material they are using in the batteries' current collectors. The researchers claim this new current-collector design increases the efficiency of Li-ion batteries and reduces the fire risks associated with them.

Current collectors are thin metal foils that distribute current to and from electrodes in batteries. Typically these metal foils are made from copper. Cui and his team redesigned these current collectors so that they are still largely made from copper but are now surrounded by a polymer.

The Stanford team claim in their research, published in the journal Nature Energy, that the polymer makes the current collector 80 percent lighter, leading to an increase in energy density of 16 to 26 percent. This is a significant boost over the average yearly increase in energy density for Li-ion batteries, which has been stuck at around 5 percent a year seemingly forever.

This method of lightening the batteries is a somewhat novel approach to boosting energy density. Over the years we have seen many attempts to increase energy density by enlarging the surface area of electrodes through the use of new electrode materials—such as nanostructured silicon in place of graphite. While increased surface area may increase charge capacity, energy density is calculated as the total energy divided by the total weight of the battery, so gains at the electrodes can be offset by weight elsewhere in the cell.

The Stanford team have calculated the increase of 16 to 26 percent in the gravimetric energy density of their batteries by replacing the commercial copper/aluminum current collectors (8.06 mg/cm2 for copper and 5.0 mg/cm2 for aluminum) with their polymer-based current collectors (1.54 mg/cm2 for the polymer-copper material and 1.05 mg/cm2 for the polymer-aluminum).

“Current collectors don’t contribute to the total energy but contribute to the total weight of battery,” explained Yusheng Ye, a researcher at Stanford and co-author of this research. “That’s why we call current collectors ‘dead weight’ in batteries, in contrast to ‘active weight’ of electrode materials.”

By reducing the weight of the current collector, the energy density can be increased even when the total energy of the battery is almost unchanged. Despite the increased energy density offered by this research, it may not entirely alleviate the so-called “range anxiety” associated with electric vehicles, in which people fear running out of power before reaching the next charging location. While the press release claims that this work will extend the range of electric vehicles, Ye noted that the specific-energy improvement in this latest development applies to the battery itself. As a result, it is likely to yield only around a 10 percent improvement in the range of an electric vehicle.
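As a sanity check on those figures, here is a rough back-of-envelope sketch. The collector areal masses are the ones quoted above; the remaining cell-stack mass and the stored energy per unit area are illustrative assumptions, not numbers from the Nature Energy paper.

```python
# Rough estimate of the energy-density gain from lighter current
# collectors. Collector areal masses (mg/cm^2) are quoted in the text;
# the other two constants are illustrative assumptions.

CU_FOIL, AL_FOIL = 8.06, 5.0     # conventional collectors, mg/cm^2
POLY_CU, POLY_AL = 1.54, 1.05    # polymer-based collectors, mg/cm^2

OTHER_STACK = 40.0    # assumed electrodes + separator + electrolyte, mg/cm^2
AREAL_ENERGY = 15.0   # assumed stored energy per layer, mWh/cm^2

def energy_density(collectors_mg_cm2: float) -> float:
    """Gravimetric energy density, in Wh/kg, for one repeating cell layer."""
    total = collectors_mg_cm2 + OTHER_STACK   # mg/cm^2
    return AREAL_ENERGY / total * 1000.0      # mWh/mg equals Wh/g; x1000 -> Wh/kg

before = energy_density(CU_FOIL + AL_FOIL)
after = energy_density(POLY_CU + POLY_AL)
print(f"{before:.0f} -> {after:.0f} Wh/kg ({100 * (after / before - 1):.0f} percent gain)")
```

With these assumed stack numbers, the gain comes out near the top of the quoted 16-to-26-percent range; heavier electrode loadings push it toward the bottom.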

“In order to improve the range from 400 miles to 600 miles, for example, more engineering work would need to be done; the active parts of the batteries will need to be addressed together with our ultra-light current collectors,” said Ye.

Beyond improved energy density, the polymer-based current collectors are expected to help reduce the fires associated with Li-ion batteries. Of course, traditional copper current collectors don't contribute to battery combustion on their own. The combustion issues in Li-ion batteries arise when the electrolyte and separator are used outside the recommended temperature and voltage windows.

“One of the key innovations in our novel current collector is that we are able to embed fire retardant inside without sacrificing the energy density and mechanical strength of the current collector,” said Ye. “Whenever the battery has combustion issues, our current collector will instantaneously release the fire retardant and extinguish the fire. Such function cannot be achieved with traditional copper or aluminum current collector.”

The researchers have patented the technology and are in discussions with battery manufacturers about commercialization. Cui and his team have already worked out some of the costs associated with adopting the polymer, and they appear attractive. According to Ye, the cost of the polymer-composite current collector is around $1.30 per square meter, a bit lower than the cost of copper foil, which is around $1.40 per square meter. With these encouraging numbers, Ye added: “We are expecting industry to adopt this technology within the next few years.”

Why Does the U.S. Have Three Electrical Grids?

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/energy/renewables/why-does-the-us-have-three-electrical-grids

Steven Cherry Hi, this is Steven Cherry for Radio Spectrum.

If you look at lists of the 100 greatest inventions of all time, electricity figures prominently. Once you get past some key enablers that can’t really be called inventions—fire, money, the wheel, calendars, the alphabet—you find things like light bulbs, the automobile, refrigeration, radios, the telegraph and telephone, airplanes, computers and the Internet. Antibiotics and the modern hospital would be impossible without refrigeration. The vaccines we’re all waiting for depend on electricity in a hundred different ways.

It’s the key to modern life as we know it, and yet, universal, reliable service remains an unsolved problem. By one estimate, a billion people still do without it. Even in a modern city like Mumbai, generators are commonplace, because of an uncertain electrical grid. This year, California once again saw rolling blackouts, and with our contemporary climate producing heat waves that can stretch from the Pacific Coast to the Rocky Mountains, they won’t be the last.

Electricity is hard to store and hard to move, and electrical grids are complex, creaky, and expensive to change. In the early 2010s, Europe began merging its distinct grids into a continent-wide supergrid, an algorithm-based project that IEEE Spectrum wrote about in 2014. The need for a continent-wide supergrid in the U.S. has been almost as great, and by 2018 the planning of one was pretty far along—until it hit a roadblock that, two years later, still stymies any progress. The problem is not the technology, and not even the cost. The problem is political. That's the conclusion of an extensively reported investigation jointly conducted by The Atlantic magazine and InvestigateWest, a watchdog nonprofit that was founded in 2009 after one of Seattle's daily newspapers stopped publishing. The resulting article, with the heading “Who Killed the Supergrid?,” was written by Peter Fairley, who has been a longtime contributing editor for IEEE Spectrum and is my guest today. He joins us via Skype.

Peter, welcome to the podcast.

Peter Fairley It’s great to be here, Steven.

Steven Cherry Peter, you wrote that 2014 article in Spectrum about the Pan-European Hybrid Electricity Market Integration Algorithm, which you say was needed to tie together separate fiefdoms. Maybe you can tell us what was bad about the separate fiefdoms that had served Europe nobly for a century.

Peter Fairley Thanks for the question, Steven. That story was about a pretty wonky development that nevertheless was very significant. Europe, over the last century, has amalgamated its power systems to the point where the European grid now exchanges electricity literally across the continent: north, south, east, west. But until fairly recently, there have been different power markets operating within it. So even though the different regions are all physically interconnected, there's a limit to how much power can actually flow all the way from Spain up to Central Europe. And so there are these individual regional markets that handle keeping the power supply and demand in balance, and putting prices on electricity. And that algorithm basically made a big step towards integrating them all, so that you'd have one big, more competitive, open market and the ability, for example, if you have spare wind power in one area, to then make use of it someplace a thousand kilometers away.

Steven Cherry The U.S. also has separate fiefdoms. Specifically, there are three that barely interact at all. What are they? And why can’t they share power?

Peter Fairley Now, in this case, when we’re talking about the U.S. fiefdoms, we’re talking about big zones that are physically divided. You have the Eastern—what’s called the Eastern Interconnection—which is a huge zone of synchronous AC power that’s basically most of North America east of the Rockies. You have the Western Interconnection, which is most of North America west of the Rockies. And then you have Texas, which has its own separate grid.

Steven Cherry And why can’t they share power?

Peter Fairley Everything within those separate zones is synched up. So you've got your 60 hertz AC wave: 60 times a second, the AC power completes a full cycle. And all of the generators, all of the power consumption within each zone is doing that synchronously. But the East is doing it on its own. The West is on a different phase. Same for Texas.

Now you can trickle some power across those divides, across what are called “seams” that separate those, using DC power converters—basically, sort of giant substations with the world’s largest electronic devices—which are taking some AC power from one zone, turning it into DC power, and then producing a synthetic AC wave, to put that power into another zone. So to give you a sense of just what the scale of the transfers is and how small it is, the East and the West interconnects have a total of about 950 gigawatts of power-generating capacity together. And they can share a little over one gigawatt of electricity.

Steven Cherry So barely one-tenth of one percent. There are enormous financial benefits and reliability benefits to uniting the three. Let’s start with reliability.

Peter Fairley Historically, when grids started out, you would have literally a power system for one neighborhood and a separate power system for another. And then ultimately, over the last century, they have amalgamated. Cities connected with each other and then states connected with each other. Now we have these huge interconnections. And reliability has been one of the big drivers for that, because you can imagine a situation where, if you're in city X and your biggest power generator goes offline—you know, a burnout or whatever—if you're interconnected with your neighbor, they probably have some spare generating capacity and they can help you out. They can keep the system from going down.

So similarly, if you could interconnect the three big power systems in North America, they could support each other. So, for example, if you have a major blackout or a major weather event like we saw last month—there was this massive heatwave in the West, and much of the West was struggling to keep the lights on. It wasn’t just California. If they were more strongly interconnected with Texas or the Eastern Interconnect, they could have leaned on those neighbors for extra power supply.

Steven Cherry Yeah, your article imagines, for example, the sun rising in the West during a heatwave sending power east; or, as the sun sets in the Midwest, wind farms sending power westward. What about the financial benefits of tying together these three interconnects? Are they substantial? And are they enough to pay for the work that would be needed to unify them into a supergrid?

Peter Fairley The financial benefits are substantial and they would pay for themselves. And there's really two reasons for that. One is as old as our systems, and that is, if you interconnect your power grids, then all of the generators in the amalgamated system can, in theory, serve that total load. And what that means is they're all competing against each other. And power plants that are inefficient are more likely to be driven out of the market or to operate less frequently. And so the whole system becomes more efficient, more cost-effective, and prices tend to go down. You see that kind of savings when you look at interconnecting the big grids in North America. Consumers benefit—not necessarily all the power generators, right? There you get more winners and losers. And so that's the old part of transmission economics.

What’s new is the increasing reliance on renewable energy and particularly variable renewable energy supplies like wind and solar. Their production tends to be more kind of bunchy, where you have days when there’s no wind and you have days when you’ve got so much wind that the local system can barely handle it. So there are a number of reasons why renewable energy really benefits economically when it’s in a larger system. You just get better utilization of the same installations.

Steven Cherry And that's all true, even when sending power 1000 miles or 3000 miles? You lose a fair amount of that generation, don't you?

Peter Fairley It's less than people imagine, especially if you're using the latest high-voltage direct-current power transmission equipment. DC power lines transmit power more efficiently than AC lines do, because the physics are actually pretty straightforward. An AC current will ride on the outside of a power cable, whereas a DC current will use the entire cross-section of the metal. And so you get less resistance overall, less heating, and less loss. And the power electronics that you need on either side of a long power line like that are also becoming much more efficient. So you're talking about losses of a couple of percent on lines that, for example in China, span over 3000 kilometers.
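To put rough numbers on those claims, here is a back-of-envelope sketch: it computes the 60-hertz skin depth in aluminum, the resulting AC-to-DC resistance ratio for one conductor size, and an I-squared-R loss estimate for a long HVDC link. The voltage, power level, conductor radius, and bundle count are illustrative assumptions, not figures from the interview.

```python
import math

# Skin depth and line-loss arithmetic for a hypothetical HVDC link.
# All line parameters below are illustrative assumptions.

RHO_AL = 2.82e-8           # resistivity of aluminum, ohm-meters
MU0 = 4 * math.pi * 1e-7   # permeability of free space

# At 60 Hz, AC current crowds into a shell about a centimeter thick.
f = 60.0
delta = math.sqrt(2 * RHO_AL / (2 * math.pi * f * MU0))
print(f"60 Hz skin depth in aluminum: {delta * 100:.1f} cm")

# DC uses the full cross-section; AC effectively uses only the shell.
r = 0.02                                   # conductor radius, meters
a_dc = math.pi * r**2
a_ac = math.pi * (r**2 - (r - delta)**2)   # crude annulus approximation
print(f"R_ac / R_dc for this conductor: {a_dc / a_ac:.2f}")

# I^2*R loss for an assumed +/-800 kV, 2 GW bipolar link over 3000 km,
# with a four-conductor bundle per pole.
power, v_pole, length, n_bundle = 2e9, 800e3, 3e6, 4
current = power / (2 * v_pole)             # amps per pole
r_pole = RHO_AL * length / (n_bundle * a_dc)
loss = 2 * current**2 * r_pole             # both poles
print(f"Line loss: {100 * loss / power:.1f} percent of transmitted power")
```

Under these assumptions, the loss over 3000 kilometers lands right around the couple-of-percent figure Fairley cites.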

Steven Cherry The reliability benefits, the financial benefits, the way a supergrid would be an important step for helping us move off of our largely carbon-based sources of power—we know all this in part because in the mid-2010s a study was made of the feasibility—including the financial feasibility—of unifying the U.S. in one single supergrid. Tell us about the Interconnections Seams Study.

Peter Fairley So the Interconnection Seams Study [Seams] was one of a suite of studies that got started in 2016 at the National Renewable Energy Laboratory in Colorado, which is one of the national labs operated by the U.S. Department of Energy. And the premise of the Seams study was that the electronic converters sitting between the east and the west grids were getting old; they were built largely in the 70s; they are going to start to fail and need to be replaced.

And the people at NREL were saying, this is an opportunity. Let’s think—and the power operators along the seam were thinking the same thing—we’re gonna have to replace these things. Let’s study our strategic options rather than have them go out of service and just automatically replace them with similar equipment. So what they posited was, let’s look at some longer DC connections to tie the East and the West together—and maybe some bigger ones. And let’s see if they pay for themselves. Let’s see if they have the kind of transformative effects that one would imagine that they would, just based on the theory. So they set up a big simulation modeling effort and they started running the numbers…

Now, of course, this got started in 2016 under President Obama. And it continued to 2017 and 2018 under a very different president. And basically, they affirmed that tying these grids, together with long DC lines, was a great idea, that it would pay for itself, that it would make much better use of renewable energy. But it also showed that it would accelerate the shutdown of coal-fired power. And that got them in some hot water with the new masters at the Department of Energy.

Steven Cherry By 2018 the study was largely completed and researchers began to share its conclusions with other energy experts and policymakers. For example, you describe a meeting in Iowa where there was a lot of excitement over the Seams study. You write that things took a dramatic turn at one such gathering in Lawrence, Kansas.

Peter Fairley Yes. So the study was complete as far as the researchers were concerned. And they were working on their final task under their contract from the Department of Energy, which was to write and submit a journal article in this case. They were targeting an IEEE journal. And they, as you say, had started making some presentations. The second one was in August, in Kansas, and there’s a DOE official—a political appointee—who’s sitting in the audience and she does not like what she’s hearing. She, while the talk is going on, pulls out her cell phone, writes an email to DOE headquarters, and throws a red flag in the air.

Steven Cherry The drama moved up the political chain to a pretty high perch.

Peter Fairley According to an email from one of the researchers that I obtained and is presented in the InvestigateWest version of this article, it went all the way to the current secretary of energy, Daniel Brouillette, and perhaps to the then-Secretary of Energy, former Texas Governor [Rick] Perry.

Steven Cherry And the problem you say in that article was essentially the U.S. administration’s connections to—devotion to—the coal industry.

Peter Fairley Right. You've got a president who has made a lot of noise, both during his election campaign and since then, about clean, beautiful coal. He is committed to trying to stop the bleeding in the U.S. coal industry, to slow down or stop the ongoing mothballing of coal-fired power plants. His Secretary of Energy, Rick Perry, is doing everything he can to deliver on Trump's promises. And along comes this study that says we can have a cleaner, more efficient power system with less coal. And yes, so it just ran completely counter to the political narrative of the day.

Steven Cherry You said earlier the financial benefits to consumers are unequivocal. But in the case of the energy providers, there would be winners and losers, and the losers would largely come from the coal industry.

Peter Fairley I would just add one thing to that, and that is and this depends on really the different systems. You’re looking at the different conditions and scenarios and assumptions. But, you know, in a scenario where you have more renewable energy, there are also going to be impacts on natural gas. And the oil and gas industry is definitely also a major political backer of the Trump administration.

Steven Cherry The irony is that the grid is moving off of coal anyway, and to some extent, oil and even natural gas, isn’t it?

Peter Fairley Definitely oil. It’s just a very expensive and inefficient way to produce power. So we’ve been shutting that down for a long time. There’s very little left. We are shutting down coal at a rapid rate in spite of every effort to save it. Natural gas is growing. So natural gas has really been—even more so than renewables—the beneficiary of the coal shutdown. Natural gas is very cheap in the U.S. thanks to widespread fracking. And so it’s coming on strong and it’s still growing.

Steven Cherry Where is the Seams study now?

Peter Fairley The Seams study is sitting at the National Renewable Energy Lab. Its leaders, under pressure from the political appointees at DOE, have kept it under wraps. It appears that there may have been some additional work done on the study since it got held up in 2018. But we don't know what the nature of that work was. Yeah, so it's just kind of missing in action at this point.

My sources tell me that there is an effort underway at the lab to get it out. And I think the reason for that is that they've taken a real hit in terms of the morale of their staff. The NREL Seams study is not the only one being held up. In fact, it's one of dozens, according to my follow-up reporting. And, you know, NREL researchers are feeling pretty hard done by, and I think the management is trying to show its staff that it has some scientific integrity.

But I think it’s important to note that there are other political barriers to building a supergrid. It might be a no brainer on paper, but in addition to the pushback from the fossil-fuel industry that we’re seeing with Seams, there are other political crosscurrents that have long stood in the way of long-distance transmission in the U.S. For example—and this is a huge one—that, in the U.S., most states have their own public utility commission that has to approve new power lines. And when you’re looking at the kind of lines that Seams contemplated, or that would be part of a supergrid, you’re talking about long lines that have to span, in some cases, a dozen states. And so you need to get approval from each of those states to transit— to send power from point A to point X. And that is a huge challenge. There’s a wonderful book that really explores that side of things called Superpower [Simon & Schuster, 2019] by the Wall Street Journal’s Russell Gold.

Steven Cherry The politics that led to the suppression of the publication of the Seams study go beyond Seams itself don’t they? There are consequences, for example, at the Office of Energy Efficiency and Renewable Energy.

Peter Fairley Absolutely. Seams is one of several dozen studies that I know of right now that are held up and they go way beyond transmission. They get into energy efficiency upgrades to low-income housing, prices for solar power… So, for example—and I believe this hasn’t been reported yet; I’m working on it—the Department of Energy has hitherto published annual reports on renewable energy technologies like wind and solar. And, in those, they provide the latest update on how much it costs to build a solar power plant, for example. And they also update their goals for the technology. Those annual reports have now been canceled. They will be every other year, if not less frequent. That’s an example of politics getting in the way because the cost savings from delaying those reports are not great, but the potential impact on the market is. There are many studies, not just those performed by the Department of Energy that will use those official price numbers in their simulations. And so if you delay updating those prices for something like solar, where the prices are coming down rapidly, you are making renewable energy look less competitive.

Steven Cherry And even beyond the Department of Energy, the EPA, for example, has censored itself on the topic of climate change, removing information and databases from its own Web sites.

Peter Fairley That's right. The way I think of it is, when you tell a lie, it begets other lies. And you have to tell more lies to cover your initial lie and to maintain the fiction. And I see the same thing at work here with the Trump administration. When the president says that climate change is a hoax, when the president says that coal is a clean source of power, it then falls to the people below him on the political food chain to somehow make the world fit his fantastical and anti-science vision. And so, you just get this proliferation of information control in a hopeless bid to try and bend the facts to somehow make the great leader look reasonable and rational.

Steven Cherry You say even e-mails related to the Seams study have disappeared, something you found in your Freedom of Information Act requests. What about the national labs themselves? Historically, they have been almost academic research organizations or at least a home for unfettered academic freedom style research.

Peter Fairley That's the idea. There has been this presumption or practice in the past, under past administrations, that the national labs had some independence. And that's not to say that there's never been political oversight or influence on the labs. Certainly, the Department of Energy decides what research it's going to fund at the labs. And so that in itself shapes the research landscape. But there was always this idea that the labs would then be—you fund the study, and then it's up to the labs to do the best work they can and to publish the results. And the idea that you are deep-sixing studies that are simply politically inconvenient, or altering the content of the studies to fit the politics: that's new. That's what people at the lab say is new under the Trump administration. It violates DOE's own scientific integrity policies in some cases. For example, with the Lawrence Berkeley National Laboratory, it violates the lab's scientific integrity policy and the contract language under which the University of California system operates that lab for the Department of Energy. So, yeah, the independence of the national labs is under threat today. And there are absolutely concerns among scientists that precedents are being set that could affect how the labs operate, even if, let's say, President Trump is voted out of office in November.

Steven Cherry Along those lines, what do you think the future of grid unification is?

Peter Fairley Well, Steven, I've been writing about climate and energy for over 20 years now, and I would have lost my mind if I wasn't a hopeful person. So I still feel optimistic about our ability to recognize the huge challenge that climate change poses, to change the way we live, and to change our energy system. And so I do think that we will see longer power lines helping regions share energy in the future. I am hopeful about that. It just makes too much sense to leave it on the shelf.

Steven Cherry Well, Peter, it’s an amazing investigation of the sort that reminds us why the press is important enough to democracy to be called the fourth estate. Thanks for publishing this work and for joining us today.

Peter Fairley Thank you so much. Steven. It’s been a pleasure.

Steven Cherry We’ve been speaking with Peter Fairley, a journalist who focuses on energy and the environment, about his researching and reporting on the suspension of work on a potential unification of the U.S. energy grid.

This interview was recorded September 11, 2020. Our audio engineering was by Gotham Podcast Studio; our music is by Chad Crouch.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers.

For Radio Spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.

We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

Airbus Plans Hydrogen-Powered Carbon-Neutral Planes by 2035. Can They Work?

Post Syndicated from Ned Potter original https://spectrum.ieee.org/energywise/energy/environment/airbus-plans-hydrogenpowered-carbonneutral-planes-by-2035-can-they-work

Imagine that it is December 2035 – about 15 years from now – and you are taking an international flight in order to be at home with family for the holidays. Airports and planes have not changed much since your childhood: Your flight is late as usual. But the Airbus jet at your gate is different. It is a giant V-shaped blended-wing aircraft, vaguely reminiscent of a boomerang. The taper of the wings is so gentle that one cannot really say where the fuselage ends and the wings begin. The plane is a big lifting body, with room for you and 200 fellow passengers.

One other important thing you notice before you board: The plane is venting vapor, a lot of it, even on a crisp morning. That, you know, is because the plane is fueled by liquid hydrogen, cooled to -253 degrees C, which boils off despite the plane’s extensive insulation. This is part of the vision Airbus, the French-based aviation giant, presents as part of its effort against global climate change.

Airbus is now betting heavily on hydrogen as a fuel of the future. It has just unveiled early plans for three “ZEROe” airliners, each using liquid hydrogen to take the place of today’s hydrocarbon-based jet-fuel compounds.

“It is really our intent in 15 years to have an entry into service of a hydrogen-powered airliner,” says Amanda Simpson, vice president for research and technology at Airbus Americas. Hydrogen, she says, “has the most energy per unit mass of…well, anything. And because it burns with oxygen to [yield] water, it is entirely environmentally friendly.”

But is a hydrogen future realistic for commercial aviation? Is it practical from an engineering, environmental, or economic standpoint? Certainly, people at Airbus say they need to decarbonize, and research on battery technology for electric planes has been disappointing. Meanwhile, China, currently the world’s largest producer of carbon dioxide, pledged last month to become carbon neutral by 2060. And 175 countries have signed on to the 2015 Paris agreement to fight global warming.

According to the European Commission, aviation alone accounts for between 2 and 3 percent of the world’s greenhouse gas emissions – about as much as entire countries like Japan or Germany.

Two of the planes Airbus has shown in artist renditions would barely get a second glance at today’s airports. One—with a capacity of 120-200 passengers, a cruising speed of about 830 kilometers per hour (kph), and a range of more than 3,500 km—looks like a conventional twin-engine jet. The second looks like almost any other turboprop you’ve ever seen; it’s a short-haul plane that can carry up to 100 passengers with a range of at least 1,800 km and a cruising speed of 612 kph. Each plane would get electric power from fuel cells. The company said it won’t have most other specifications for several years; it said to think of the images as “concepts,” meant to generate ideas for future planes.

The third rendering, an illustration of that blended-wing aircraft, showed some of the potential—and potential challenges—of hydrogen as a fuel. Airbus said the plane might have a cruising speed of 830 kph and a range of 3,500 km, without releasing carbon into the air. Liquid hydrogen contains about three times as much energy in each kilogram as today's jet fuel. On the other hand, for the same amount of energy, liquid hydrogen takes up roughly four times the space. So, a plane would need either to give up cabin space or have more inside volume. A blended wing, with its bulbous shape, Airbus says, may solve the problem. And as a bonus, blended wings have shown they can be 20 percent more fuel-efficient than today's tube-and-wing aircraft.
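The arithmetic behind that trade-off is simple to sketch. The specific energies and densities below are standard textbook figures for Jet A and liquid hydrogen; the mission energy budget is an arbitrary illustrative value.

```python
# Fuel mass and tank volume needed for the same onboard energy.
# Property values are standard textbook figures; the mission energy
# is an arbitrary illustrative number.

FUELS = {
    "Jet A": (43.0, 800.0),   # specific energy (MJ/kg), density (kg/m^3)
    "LH2":   (120.0, 71.0),   # liquid hydrogen at -253 degrees C
}

MISSION_ENERGY_MJ = 300_000   # assumed medium-haul energy budget

for name, (mj_per_kg, kg_per_m3) in FUELS.items():
    mass_kg = MISSION_ENERGY_MJ / mj_per_kg
    volume_m3 = mass_kg / kg_per_m3
    print(f"{name:6s} {mass_kg / 1000:5.1f} t  {volume_m3:5.1f} m^3")
```

Roughly a third of the mass, but about four times the volume: exactly the packaging problem a bulbous blended wing is meant to solve.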

“My first reaction is: Let’s do it. Let’s make it happen,” says Daniel Esposito, a chemical engineer at Columbia University whose research covers hydrogen production. He says hydrogen can be handled safely and has a minimal carbon footprint if it’s made by electrolysis (splitting water into hydrogen and oxygen) using renewable electricity. Most industrial hydrogen today is extracted from natural gas, which negates some of the carbon benefit, but the International Energy Agency says that with renewable electricity capacity quickly growing (it passed coal as a power source in 2019), the cost of carbon-free hydrogen could drop.

“It can be done,” he says. “It’s just a matter of the political will and the will of companies like Airbus and Boeing to take the lead on this.”

Others have their doubts. “A lot of these things, you can; the question is, should you?” says Richard Pat Anderson, a professor of aerospace engineering at Embry-Riddle Aeronautical University. “When we say, ‘Should you?’ and you get into economics, then it becomes a much more difficult conversation.” Anderson says battery-powered aircraft are likely to become practical later in this century, and it is a dubious proposition to build the massive – and costly – infrastructure for hydrogen power in the meantime.

But in a warming world, Airbus says, the aviation sector needs to get going. McKinsey & Company, the consulting firm, surveyed airline customers last year and found that 62 percent of younger fliers (under age 35) were “really worried about climate change” and agreed that “aviation should definitely become carbon neutral.”

So, you’re on that jetway 15 years from now, on the way home. What will power the plane you’re boarding?

“Hydrogen is coming,” says Simpson at Airbus. “It’s already here.”

Exclusive: Airborne Wind Energy Company Closes Shop, Opens Patents

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/energywise/energy/renewables/exclusive-airborne-wind-energy-company-closes-shop-opens-patents

This week, a 13-year experiment in harnessing wind power using kites and modified gliders finally closes down for good. But the technology behind it is open-sourced and is being passed on to others in the field.

As of 10 September, the airborne wind energy (AWE) company Makani Technologies has officially announced its closure. A key investor, the energy company Shell, also released a statement to the press indicating that “given the current economic environment” it would not be developing any of Makani’s intellectual property either. Meanwhile, Makani’s parent company, X, Alphabet’s moonshot factory, has made a non-assertion pledge on Makani’s patent portfolio. That means anyone who wants to use Makani patents, designs, software, and research results can do so without fear of legal reprisal.

Makani’s story, recounted last year on this site, is now the subject of a 110-minute documentary called Pulling Power from the Sky—also free to view.

Paula Echeverri (once Makani's chief engineer) said that when she was emerging from graduate studies at MIT in 2009, the company was a compelling team to join, especially for a former aerospace engineering student.

“Energy kite design is not quite aircraft design and not quite wind turbine design,” she said.

The idea behind the company's technology is to raise the altitude of the wind-energy harvesting to hundreds of meters in the sky—where the winds are typically both stronger and steadier. Because a traditional windmill reaching anything approaching these heights would be impractical, Makani was looking into kites or gliders that could ascend to altitude first—fastened to the ground by a tether. Only then would the flyer begin harvesting energy from the wind.
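A standard power-law wind profile, combined with the cubic dependence of wind power on speed, shows why a few hundred meters matter. The shear exponent and reference wind below are typical assumed values, not Makani measurements.

```python
# Wind power density versus height, using an assumed power-law profile.

RHO_AIR = 1.225            # air density, kg/m^3, near sea level
ALPHA = 0.14               # assumed wind-shear exponent (open terrain)
V_REF, H_REF = 7.0, 80.0   # assumed 7 m/s at an 80 m hub height

def power_density(h_m: float) -> float:
    """Wind power per square meter of swept area, W/m^2."""
    v = V_REF * (h_m / H_REF) ** ALPHA
    return 0.5 * RHO_AIR * v**3

for h in (80, 300, 600):
    print(f"{h:4d} m: {power_density(h):6.1f} W/m^2")
```

Under these assumptions, the power available per square meter of swept area more than doubles between an 80-meter hub height and 600 meters.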

Pulling Power recounts Makani's story from its very earliest days, circa 2006, when kites like the ones kite surfers use were the wind-energy harvester of choice. However, using kites also means drawing power out of the tug on the kite's tether, which, as revealed by the company's early experiments, couldn't compete with propellers on a glider plane.

What became the Makani basic flyer, the M600 Energy Kite, looked like an oversized hobbyist's glider but with a bank of propellers across the wing. These props would first be used to loft the glider to its energy-harvesting altitude. Then the motors would shut off and the glider would ride the air currents—using the props as mini wind turbines.
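For a sense of the power such a flyer could harvest, Loyd's classic 1980 formula for a drag-mode ("fly-gen") crosswind kite is the usual starting point. The wing area and aerodynamic coefficients below are hypothetical, not Makani's published M600 specifications.

```python
# Loyd's 1980 ideal estimate for a drag-mode energy kite: power scales
# with wind speed cubed and lift-to-drag ratio squared.

RHO_AIR = 1.225   # air density, kg/m^3

def loyd_drag_power(v_wind: float, area: float, c_lift: float, l_over_d: float) -> float:
    """Ideal crosswind drag-mode power, in watts."""
    return (2.0 / 27.0) * RHO_AIR * area * v_wind**3 * c_lift * l_over_d**2

# For example: a 30 m^2 wing, C_L = 1.0, L/D = 10, in a 10 m/s wind.
print(f"{loyd_drag_power(10.0, 30.0, 1.0, 10.0) / 1e3:.0f} kW ideal")
```

The cubic wind-speed term explains both the promise of strong high-altitude winds and how quickly output collapses when a flyer can't hold its optimum speed, a sensitivity the company's own report, discussed below, bears out.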

According to a free 1,180-page ebook, The Energy Kite, which Makani is also releasing online in three parts, the company soon found a potentially profitable niche in operating offshore.

Just in terms of tonnage, AWE had a big advantage over traditional offshore wind farms. Wind turbines (in shallow water) fixed to the seabed might require 200 to 400 tons of metal for every megawatt of power the turbine generated. And floating deep-water turbines, anchored to the seabed by cables, typically involve 800 tons or more per megawatt. Meanwhile, a Makani AWE platform—which can be anchored in even deeper water—weighed only 70 tons per rated megawatt of generating capacity.

Yet, according to the ebook, in real-world tests, Makani’s M600 proved difficult to fly at optimum speed. In high winds, it couldn’t fly fast enough to pull as much power out of the wind as the designers had hoped. In low winds, it often flew too fast. In all cases, the report says, the rotors just couldn’t operate at peak capacity through much of the flyer’s maneuvers. The upshot: The company had a photogenic oversized model airplane, but not the technology that’d give regular wind turbines a run for their money.

Don't take Makani's word for it, though, says Echeverri. Not only is the company releasing its patents into the wild, it's also giving away its code base, flight logs, and a Makani flyer simulation tool called KiteFAST.

“I think that the physics and the technical aspects are still such that, in floating offshore wind, there’s a ton of opportunity for innovation,” says Echeverri.

One of the factors the Makani team didn't anticipate in the company's early years, she said, was how precipitously electricity prices would continue to drop—leaving precious little room at the margins for new technologies like AWE to blossom and grow.

“We’re thinking about the existing airborne wind industry,” Echeverri said. “For people working on the particular problems we’d been working on, we don’t want to bury those lessons. We also found this to be a really inspiring journey for us as engineers—a joyful journey… It is worthwhile to work on hard problems.”

Exclusive: GM Can Manage an EV’s Batteries Wirelessly—and Remotely

Post Syndicated from Lawrence Ulrich original https://spectrum.ieee.org/cars-that-think/energy/batteries-storage/ieee-spectrum-exclusive-gm-can-manage-an-evs-batteries-wirelesslyand-remotely

When the battery dies in your smartphone, what do you do? You complain bitterly about its too-short lifespan, even as you shell out big bucks for a new device. 

Electric vehicles can’t work that way: Cars need batteries that last as long as the vehicles do. One way of getting to that goal is by keeping close tabs on every battery in every EV, both to extend a battery’s life and to learn how to design longer-lived successors.

IEEE Spectrum got an exclusive look at General Motors’ wireless battery management system. It’s a first in any EV anywhere (not even Tesla has one). The wireless technology, created with Analog Devices, Inc., will be standard on a full range of GM EVs, with the company aiming for at least 1 million global sales by mid-decade. 

Those vehicles will be powered by GM’s proprietary Ultium batteries, produced at a new US $2.3 billion plant in Ohio, in partnership with South Korea’s LG Chem

Unlike today’s battery modules, which link up to an on-board management system through a tangle of orange wiring, GM’s system features RF antennas integrated on circuit boards. The antennas allow the transfer of data via a 2.4-gigahertz wireless protocol similar to Bluetooth but with lower power. Slave modules report back to an onboard master, sending measurements of cell voltages and other data.  That onboard master can also talk through the cloud to GM. 
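In software terms, the reporting pattern might look something like the toy sketch below: each module packs its cell-group voltages into a compact frame, and the master unpacks the frames and flags outliers. The frame layout, field names, and alert threshold are all hypothetical; GM and Analog Devices have not published their actual protocol.

```python
import statistics
import struct

# Toy master/slave cell-voltage reporting. The frame format here is
# hypothetical, invented for illustration only.

def pack_report(module_id: int, cell_mv: list[int]) -> bytes:
    """Serialize one module's cell voltages (millivolts) for the radio."""
    return struct.pack(f"<BB{len(cell_mv)}H", module_id, len(cell_mv), *cell_mv)

def unpack_report(frame: bytes):
    module_id, n = struct.unpack_from("<BB", frame)
    cells = struct.unpack_from(f"<{n}H", frame, 2)
    return module_id, cells

# Master side: gather frames, then flag cells drifting from the pack mean.
frames = [pack_report(1, [3650, 3648, 3652]), pack_report(2, [3655, 3590, 3651])]
mean_mv = statistics.mean(v for f in frames for v in unpack_report(f)[1])
for f in frames:
    module_id, cells = unpack_report(f)
    worst = max(cells, key=lambda v: abs(v - mean_mv))
    if abs(worst - mean_mv) > 25:   # assumed alert threshold, millivolts
        print(f"module {module_id}: cell at {worst} mV drifts from mean {mean_mv:.0f} mV")
```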

The upshot is cradle-to-grave monitoring of battery health and operation, including real-time data from drivers in wildly different climates or usage cases. That all-seeing capability includes vast inventories of batteries—even before workers install them in cars on assembly lines.

“You can have one central warehouse monitoring all these devices,” says Fiona Meyer-Teruel, GM’s lead engineer for battery system electronics.

GM can essentially plug-and-play battery modules for a vast range of EVs, including heavy-duty trucks and sleek performance cars, without having to redesign wiring harnesses or communications systems for each. That can help the company speed models to market and ensure the profitability that has eluded most EV makers. GM engineers and executives said they’ve driven the cost of Ultium batteries, with their nickel-cobalt-manganese-aluminum chemistry, below the $100 per kilowatt-hour mark—long a Holy Grail for battery development. And GM has vowed that it will turn a profit on every Ultium-powered car it makes.

The wireless management system will let those EVs balance the charge within individual battery cell groups for optimal performance. Software and battery nodes can be reprogrammed over-the-air. With that in mind, the system was designed with end-to-end encryption to prevent hacking.
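The balancing decision itself can be sketched just as simply: bleed any cell sitting too far above the lowest cell in its group. The tolerance is an assumed value; a production balancer would also weigh temperature, current, and state of charge.

```python
# Minimal passive-balancing rule: switch the highest cells onto their
# bleed resistors until the group converges. The threshold is assumed.

def cells_to_bleed(cell_mv: list[int], tolerance_mv: int = 10) -> list[int]:
    """Return indices of cells that should be bled down."""
    floor = min(cell_mv)
    return [i for i, v in enumerate(cell_mv) if v - floor > tolerance_mv]

print(cells_to_bleed([3650, 3648, 3672, 3651]))   # -> [2]
```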

Repurposing partially spent batteries also gets easier because there’s no need to overhaul the management system or fiddle with hard-to-recycle wiring. Wireless packs can go straight into their new roles, typically as load-balancing workhorses for  the grid.

“You can easily rescale the batteries for a second life, when they’re down to, say, 70-percent performance,” says Meyer-Teruel. 

The enormous GM and LG Chem factory, now under construction, will have the capacity to produce 30 gigawatt-hours of batteries each year, 50 percent more than Tesla’s Gigafactory in Nevada. The plant investment is a fraction of the $20 billion that GM is slated to pour into electric and autonomous cars by 2025, en route to the “all-electric future” touted by CEO Mary Barra. 

A reborn, electric GMC Hummer with up to 1,000 horsepower will be the first of about 20 GM Ultium-powered models, mainly for the U.S. and Chinese markets, when it reaches showrooms next year. It will be followed by a Cadillac Lyriq crossover SUV in 2022, and soon thereafter by an electric Chevrolet pickup.   

Andy Oury, GM’s lead architect for high-voltage batteries, said those customers will see benefits from the wireless system, without necessarily having to buy a new car. 

“Say, seven years from now, a customer needs an Ultium 1.0 battery, but we're already using 2.0,” Oury said. “As long as they're compatible, we can install the better one: Just broadcast the new chemistry, and incorporate new calibration tables to run it.”

Tim Grewe, GM’s director of global electrification, says consumers may soon expect batteries to last four to five times as long as today’s, and companies need to respond. To that end, the wireless system stores metadata from each cell. Real-time battery health checks will refocus the network of modules and sensors when needed, safeguarding battery health over the vehicle’s lifespan. Vehicle owners will be able to opt in or out of more extensive monitoring of driving patterns. Analyzing that  granular data, Grewe said, can tease out tiny differences between battery batches, suppliers, or performance in varying regions and climates. 

“It’s not that we’re getting bad batteries today, but there’s a lot of small variations,” says Grewe. “Now we can run the data: Was that electrolyte a little different, was the processing of that electrode coating a little different?
 
“Now, no matter where it is—in the factory, assembling the car, or down the line—we have a record of cloud-based data and machine learning to draw upon.” 

The eco-friendly approach eliminates about a kilogram per vehicle, as well as three meters of wiring. Jettisoning nearly 90 percent of pack wiring ekes out another advantage: Throughout the industry, wired battery connectors demand enough physical clearance for human techs to squeeze two fingers inside. Eliminating the wiring and touchpoints carves out room to stuff more batteries into a given space, with a lower-profile design. Which leaves plenty of room for a thumbs-up. 

Solar Closing in on “Practical” Hydrogen Production

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/energywise/energy/renewables/solar-closing-in-on-practical-hydrogen-production

Israeli and Italian scientists have developed a renewable energy technology that converts solar energy to hydrogen fuel — and it’s reportedly at the threshold of “practical” viability.

The new solar tech would offer a sustainable way to turn water and sunlight into storable energy for fuel cells, whether that stored power feeds into the electrical grid or goes to fuel-cell powered trucks, trains, cars, ships, planes or industrial processes.

Think of this research as a sort of artificial photosynthesis, said Lilac Amirav, associate professor of chemistry at the Technion — Israel Institute of Technology in Haifa. (If it could be scaled up, the technology could eventually be the basis of “solar factories” in which arrays of solar collectors split water into stores of hydrogen fuel, as well as, for reasons discussed below, one or more other industrial chemicals.)

“We [start with] a semiconductor that’s very similar to what we have in solar panels,” says Amirav. But rather than taking the photovoltaic route of using sunlight to liberate a current of electrons, the reaction they’re studying harnesses sunlight to efficiently and cost-effectively peel off hydrogen from water molecules.

The big hurdle to date has been that hydrogen and oxygen just as readily recombine once they’re split apart—that is, unless a catalyst can be introduced to the reaction that shunts water’s two component elements away from one another.

Enter the rod-shaped nanoparticles Amirav and co-researchers have developed. The wand-like rods (50–60 nanometers long and just 4.5 nm in diameter) are all tipped with platinum spheres 2–3 nm in diameter, like nano-size marbles fastened onto the ends of drinking straws.

Since 2010, when the team first began publishing papers about such specially tuned nanorods, they’ve been tweaking the design to maximize its ability to extract as much hydrogen and excess energy as possible from “solar-to-chemical energy conversion.”

Which brings us back to those “other” industrial chemicals. Because creating molecular hydrogen out of water also yields oxygen, they realized they had to figure out what to do with that byproduct. “When you’re thinking about artificial photosynthesis, you care about hydrogen—because hydrogen’s a fuel,” says Amirav. “Oxygen is not such an interesting product. But that is the bottleneck of the process.”

There’s no getting around the fact that oxygen liberated from split water molecules carries energy away from the reaction, too. So, unless it’s harnessed, it ultimately represents just wasted solar energy—which means lost efficiency in the overall reaction.

So, the researchers added another reaction to the process. Not only does their platinum-tipped nanorod catalyst use solar energy to turn water into hydrogen, it also uses the liberated oxygen to convert the organic molecule benzylamine into the industrial chemical benzaldehyde (commonly used in dyes, flavoring extracts, and perfumes).

All told, the nanorods convert 4.2 percent of the energy of incoming sunlight into chemical bonds. Considering the energy in the hydrogen fuel alone, they convert 3.6 percent of sunlight energy into stored fuel.

These might seem like minuscule figures. But 3.6 percent is still considerably better than the 1–2 percent range that previous technologies had achieved. And according to the U.S. Department of Energy, 5–10 percent efficiency is all that’s needed to reach what the researchers call the “practical feasibility threshold” for solar hydrogen generation.
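For the curious, here is a back-of-the-envelope sketch of the standard solar-to-hydrogen (STH) efficiency bookkeeping behind such figures. The production rate and collector area below are invented inputs; only the formula and the physical constants are standard.

```python
# Back-of-the-envelope solar-to-hydrogen (STH) efficiency, as conventionally
# defined. The hydrogen rate and collector area are invented illustrative
# inputs; the constants are standard.
DELTA_G_H2O = 237_000       # J/mol: Gibbs energy to split H2O -> H2 + 1/2 O2 at 25 C
SOLAR_IRRADIANCE = 1000.0   # W/m^2: standard AM1.5G reference intensity

def sth_efficiency(h2_mol_per_s: float, collector_area_m2: float) -> float:
    """Fraction of incident solar power stored as hydrogen free energy."""
    chemical_power = h2_mol_per_s * DELTA_G_H2O          # watts stored in H2
    solar_power = SOLAR_IRRADIANCE * collector_area_m2   # watts incident
    return chemical_power / solar_power

# A hypothetical collector producing 1.5e-4 mol of H2 per second per square
# meter works out to roughly the 3.6 percent reported for the nanorods.
print(f"{sth_efficiency(1.5e-4, 1.0):.1%}")  # -> 3.6%
```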

Between February and August of this year, Amirav and her colleagues published papers on these innovations in the journals Nano Energy and Chemistry Europe. They also recently presented their research at the fall virtual meeting of the American Chemical Society.

In their presentation, which hinted at future directions for their work, they teased further efficiency improvements courtesy of new work with AI data-mining experts.

“We are looking for alternative organic transformations,” says Amirav. This way, she and her collaborators hope, their solar factories can produce hydrogen fuel plus an array of other useful industrial byproducts. In the future, their artificial photosynthesis process could yield low-emission energy, plus some beneficial chemical extracts as a “practical” and “feasible” side-effect.

Emrod Chases The Dream Of Utility-Scale Wireless Power Transmission

Post Syndicated from David Wagman original https://spectrum.ieee.org/energywise/energy/the-smarter-grid/emrod-chases-the-dream-of-utilityscale-wireless-power-transmission

California wildfires knock out electric power to thousands of people; a hurricane destroys transmission lines that link electric power stations to cities and towns; an earthquake shatters homes and disrupts power service. The headlines are dramatic and seem to occur more and more often.

The fundamental vulnerability in each case is that the power grid relies on metal cables to carry electricity every meter along the way. Since the days of Nikola Tesla and his famous coil, inventors and engineers have dreamt of being able to send large amounts of electricity over long distances, and all without wires.

During the next several months, a startup company, a government-backed innovation institute and a major electric utility will aim to scale up a wireless electric power transmission system that they say will offer a commercially viable alternative to traditional wire transmission and distribution systems.

The underlying idea is nothing new: Energy is converted into electromagnetic radiation by a transmitting antenna, picked up by a receiving antenna, and then distributed locally by conventional means. This is the same thing that happens in any radio system, but in radio the amount of power that reaches the receiver can be minuscule; picking up a few picowatts is all that is needed to deliver an intelligible signal. Wireless power transfer, by contrast, is about delivering raw energy, so the fraction of transmitted energy that actually reaches the receiver becomes the key design parameter.
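The textbook Friis equation makes the contrast stark. In the sketch below, the transmit power, antenna gains, frequency, and distance are illustrative assumptions, not Emrod’s figures:

```python
# Friis transmission equation for an unfocused free-space link, to show why
# broadcast-style radio is hopeless for bulk power delivery. All numbers are
# illustrative assumptions, not Emrod's.
import math

def friis_received_power(p_tx_w, gain_tx, gain_rx, freq_hz, distance_m):
    """Received power (W) over a free-space line-of-sight link."""
    wavelength = 3e8 / freq_hz
    return p_tx_w * gain_tx * gain_rx * (wavelength / (4 * math.pi * distance_m)) ** 2

# A 1-kW transmitter with modest 20-dBi antennas (gain = 100) at 5.8 GHz, 1 km away:
p_rx = friis_received_power(1000.0, 100.0, 100.0, 5.8e9, 1000.0)
print(f"received: {p_rx * 1000:.2f} mW")  # ~0.17 mW -- virtually all power is lost

# A quasi-optical system like Emrod's must instead hold the radiation in a
# tight beam so that most of the transmitted energy lands on the receiver.
```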

What’s new here is how New Zealand startup Emrod has borrowed ideas from radar and optics and used metamaterials to focus the transmitted radiation even more tightly than previous microwave-based wireless power attempts.

The “quasi-optical” system shapes the electromagnetic pulse into a cylindrical beam, thus making it “completely different” from the way a cell phone tower or radio antenna works, said Dr. Ray Simpkin, chief science officer at Emrod, which has a Silicon Valley office in addition to its New Zealand base. Simpkin’s background is in radar technology and he is on loan from Callaghan Innovation, the New Zealand government-sponsored innovation institute that is backing the wireless power startup.

Emrod’s laboratory prototype currently operates indoors at a distance of just 2 meters. Work is under way to build a 40-meter demonstration system, but it, too, will be indoors, where conditions can be easily managed. Sometime next year, though, Emrod plans a field test at a still-to-be-determined grid-connected facility operated by Powerco, New Zealand’s second-largest utility, with around 1.1 million customers.

In an email, Powerco said that it is funding the test with an eye toward learning how much power the system can transmit and over what distance. The utility also is providing technical assistance to help Emrod connect the system to its distribution network. Before that can happen, however, the system must meet a number of safety, performance and environmental requirements.

One safety feature will be an array of lasers spaced along the edges of the flat-panel receivers that are designed to catch and then pass along the focused energy beam. These lasers are pointed at sensors at the transmitter array so that if a bird in flight, for example, interrupted one of the lasers, the transmitter would pause the corresponding portion of the energy beam long enough for the bird to fly through.
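A minimal sketch of that interlock logic, under the assumption that each guard laser maps to one sector of the transmitter array (the sector layout and interfaces here are hypothetical; Emrod has not published its control design):

```python
# Hypothetical sketch of the laser-curtain interlock described above. The
# sector mapping and hardware interfaces are invented for illustration.
from typing import Callable, Set

NUM_SECTORS = 16  # assumed: beam cross-section divided into guarded sectors

def blocked_sectors(laser_clear: Callable[[int], bool]) -> Set[int]:
    """Sectors whose guard laser no longer reaches its sensor."""
    return {s for s in range(NUM_SECTORS) if not laser_clear(s)}

def control_step(laser_clear: Callable[[int], bool],
                 set_sector_power: Callable[[int, float], None]) -> None:
    """One control-loop pass: pause any sector with an interrupted laser."""
    blocked = blocked_sectors(laser_clear)
    for sector in range(NUM_SECTORS):
        # Full power only where the guard laser path is unobstructed.
        set_sector_power(sector, 0.0 if sector in blocked else 1.0)
```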

Emrod’s electromagnetic beam operates at frequencies classified as industrial, scientific and medical (ISM). The company’s founder, Greg Kushnir, said in a recent interview that the power densities are roughly the equivalent of standing in the sun outside at noon, or around 1 kW per square meter.

Emrod sees an opportunity for utilities to deploy its technology to deliver electric service to remote areas and locations with difficult terrain. The company is looking at the feasibility of spanning a 30-km strait between the southern tip of New Zealand and Stewart Island. Emrod estimates that a 40-square-meter transmitter would do the job. And although he offered no detailed cost estimates, Simpkin said the system could cost around 60 percent as much as a subsea cable.

Another potential application would be in post-disaster recovery. In that scenario, mobile transmitters would be deployed to close a gap between damaged or destroyed transmission and distribution lines.

The company has a “reasonable handle” on costs, Simpkin said, with the main areas for improvement coming from commercially available transmitter components. Here, the company expects that advancements in 5G communications technology will spur efficiency improvements. At present, its least efficient point is at the transmitter where existing electronic components are no better than around 70 percent efficient.
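Because end-to-end efficiency is simply the product of the stage efficiencies, that 70 percent transmitter figure caps the whole chain. In the sketch below, only the transmitter number comes from the article; the other stages are assumptions for illustration:

```python
# End-to-end efficiency as a product of stage efficiencies. Only the 70
# percent transmitter figure is from the article; the rest are assumptions.
stages = {
    "transmitter electronics": 0.70,   # stated: "no better than around 70 percent"
    "beam capture at receiver": 0.95,  # assumption
    "rectification to DC": 0.90,       # assumption
    "grid-tie conversion": 0.95,       # assumption
}

total = 1.0
for name, efficiency in stages.items():
    total *= efficiency
print(f"end-to-end: {total:.1%}")  # ~56.9% under these assumptions
```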

“The rule book hasn’t really been written,” he said, for this effort to meld wireless power transfer with radar and optics. “We are taking a softly, softly approach.”

ITER Celebrates Milestone, Still at Least a Decade Away From Fusing Atoms

Post Syndicated from Payal Dhar original https://spectrum.ieee.org/tech-talk/energy/nuclear/iter-fusion-reactor

It was a twinkle in U.S. President Ronald Reagan’s eye, an enthusiasm he shared with General Secretary Mikhail Gorbachev of the Soviet Union: boundless stores of clean energy from nuclear fusion.

That was 35 years ago. 

On July 28, 2020, the product of these Cold Warriors’ mutual infatuation with fusion, the International Thermonuclear Experimental Reactor (ITER) in Saint-Paul-lès-Durance, France, inaugurated the start of the machine-assembly phase of this industrial-scale tokamak nuclear fusion reactor. 

An experiment to demonstrate the feasibility of nuclear fusion as a virtually inexhaustible, waste-free and non-polluting source of energy, ITER has already been 30-plus years in planning, with tens of billions of dollars invested. And if new fusion reactors are designed based on research conducted here, they won’t be powering anything until the latter half of this century.

Speaking from the Élysée Palace in Paris via an internet link during last month’s launch ceremony, President Emmanuel Macron said, “[ITER] is proof that what brings together people and nations is stronger than what pulls them apart. [It is] a promise of progress, and of confidence in science.” Indeed, as the COVID-19 pandemic continues to baffle modern science around the world, ITER is a welcome beacon of hope.

ITER comprises 35 collaborating countries, including members of the European Union, China, India, Japan, Russia, South Korea and the United States, which are directly contributing to the project either in cash or in kind with components and services. The EU has contributed about 45 percent, while the other members pitch in about 9 percent each. The total cost of the project could be anywhere between $22 billion and $65 billion, even though the latter figure has been disputed.

The idea for ITER was sparked back in 1985, at the Geneva Superpower Summit, where President Ronald Reagan of the United States and General Secretary Mikhail Gorbachev of the Soviet Union spoke of an international collaboration to develop fusion energy. A year later, at the US–USSR Summit in Reykjavik, an agreement was reached between the European Union’s Euratom, Japan, the Soviet Union and the United States to jointly start work on the design of a fusion reactor. At that time, controlled release of fusion power hadn’t even been demonstrated—that only happened in 1991, by the Joint European Torus (JET) in the UK.

The first big component to be installed at ITER was the 1,250-metric-ton cryostat base, which was lowered into the tokamak pit in late May 2020. The cryostat is India’s contribution to the reactor; placing components that weigh hundreds of tonnes to positioning tolerances of a few millimeters relies on specialized tools procured for ITER by the Korean Domestic Agency. Machine assembly is scheduled to finish by the end of 2024, and by mid-2025, we are likely to see first plasma production.

Anil Bhardwaj, group leader of the cryostat team, tells IEEE Spectrum, “First plasma will only verify [various] compliances for initial preparation of the plasma. That does not mean that we are achieving fusion.”

That will come another decade or so down the line.

If everything goes to plan, the first deuterium–tritium fusion experiments will begin by 2035, in essence replicating the fusion reactions that take place in the sun. ITER estimates that for 50 MW of power injected into the tokamak to heat the plasma (up to 150 million degrees Celsius), it will output 500 MW of thermal power for 400- to 600-second periods, a tenfold return (expressed as Q ≥ 10). The existing record is Q = 0.67, held by the JET tokamak.
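In fusion shorthand, that target is simply the ratio of fusion power produced to heating power injected, using the article’s own figures:

```latex
% Fusion gain Q: fusion power out over heating power in.
Q = \frac{P_{\text{fusion}}}{P_{\text{heating}}}
  = \frac{500~\text{MW}}{50~\text{MW}} = 10,
\qquad \text{compared with JET's record } Q = 0.67.
```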

Despite recent progress, there is still a lot of uncertainty around ITER. Critics decry the hyperbole around it, especially the notion that it is a magic-bullet solution to the world’s energy problems, in the words of Daniel Jassby, a former researcher at the Princeton Plasma Physics Lab. His 2017 article explains why “scaling down the sun” may not be the ideal fallback plan.

“In the most feasible terrestrial fusion reaction [using deuterium–tritium fuel], 80% of the fusion output is in the form of barrages of neutron bullets, whose conversion to electrical energy is a dubious endeavor,” he said in an interview. Switching to a different type of reactor based on much weaker fusion reactions might result in less neutron production, but such reactors are also unlikely to produce net energy of any kind.

Delays and mismanagement have also plagued ITER, something that Jassby contends was a result of poor leadership. “There are only a few people in the world who have the technological, administrative and political expertise that allow them to make continuous progress in directing and completing a multinational project,” he said. Bernard Bigot, who took over as director-general five years ago, possesses the requisite skillset, in Jassby’s opinion. At present, ITER is running about six years behind schedule.

Critics of ITER are also concerned about diverting resources from developing existing renewable energies. “The greatest energy issue of our time is not supply, but how to choose among the plethora of existing energy sources for wide-scale deployment,” Jassby said. ITER’s value, however, he said, lies in dispelling the fantasy of fusion-generated electricity, thus saving hundreds of billions of dollars in the long run.

Jassby thinks that, if successful, ITER will allow physicists to study long-lived, high-temperature fusioning plasmas and to develop neutron sources. There are practical applications for fusion neutrons, he says, such as isotope production, radiography and activation analysis. He adds that ITER can have significant benefits if new technologies it spawns, such as superconducting magnets, new materials and novel fabrication techniques, find application in other fields.

Philippa Browning, professor of astrophysics at the University of Manchester, believes that only something of the scale of ITER can test how things work in fusion reactors. “It may well be that in future alternative devices turn out to be better, but those advantages could be incorporated into the successor to ITER, which will be a demonstration fusion power station… The route to fusion power is slow, [so] we can hope that it will be ready when it is really needed in the second half of this century.” Meanwhile, she added, “it is important that other approaches to fusion are explored in parallel, smaller and more agile projects.”

One of the most impressive things about ITER, Browning said, is that it is a truly international collaboration pushing at the frontiers in many ways. “Understanding how plasmas interact with magnetic fields is a hugely challenging scientific problem… There are all sorts of scientific and technological spin-offs, as well as the direct contribution to achieving, hopefully, a fusion power station.”

A Battery That’s Tough Enough To Take Structural Loads

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/energywise/energy/batteries-storage/a-structural-battery-that-makes-up-the-machine-that-it-powers

Batteries can add considerable mass to any design, and they have to be supported using a sufficiently strong structure, which can add significant mass of its own. Now researchers at the University of Michigan have designed a structural zinc-air battery, one that integrates directly into the machine that it powers and serves as a load-bearing part. 

That feature saves weight and thus increases effective storage capacity, adding to the already hefty energy density of the zinc-air chemistry. And the very elements that make the battery physically strong help contain the chemistry’s longstanding tendency to degrade over many hundreds of charge-discharge cycles. 

The research is being published today in Science Robotics.

Nicholas Kotov, a professor of chemical engineering, is the leader of the project. He would not say how many watt-hours his prototype stores per gram, but he did note that zinc-air—because it draws on ambient air for its electricity-producing reactions—is inherently about three times as energy-dense as lithium-ion cells. And because using the battery as a structural part means dispensing with an interior battery pack, you could free up perhaps 20 percent of a machine’s interior. Along with other factors, the new battery could in principle provide as much as 72 times the energy per unit of volume (not of mass) as today’s lithium-ion workhorses.
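The arithmetic behind such claims is straightforward: when the battery is the structure, the dedicated casing and supports drop out of the denominator. A sketch with made-up numbers (only the bookkeeping, not the figures, comes from the article):

```python
# Why a load-bearing battery raises system-level energy density. The numbers
# are invented for illustration; only the bookkeeping is real.
def system_energy_density(pack_wh: float, pack_kg: float, structure_kg: float) -> float:
    """Usable watt-hours per kilogram once supporting structure is counted."""
    return pack_wh / (pack_kg + structure_kg)

PACK_WH, PACK_KG = 500.0, 2.0  # hypothetical pack

conventional = system_energy_density(PACK_WH, PACK_KG, structure_kg=1.0)
structural = system_energy_density(PACK_WH, PACK_KG, structure_kg=0.0)

print(f"conventional: {conventional:.0f} Wh/kg")  # 167 Wh/kg
print(f"structural:   {structural:.0f} Wh/kg")    # 250 Wh/kg, a 50 percent gain
```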

“It’s not as if we invented something that was there before us,” Kotov says. “I look in the mirror and I see my layer of fat—that’s for the storage of energy, but it also serves other purposes,” like keeping you warm in the wintertime. (A similar advance occurred in rocketry when designers learned how to make some liquid propellant tanks load bearing, eliminating the mass penalty of having separate external hull and internal tank walls.)

Others have spoken of putting batteries, including the lithium-ion kind, into load-bearing parts in vehicles. Ford, BMW, and Airbus, for instance, have expressed interest in the idea. The main problem to overcome is the tradeoff in load-bearing batteries between electrochemical performance and mechanical strength.

The Michigan group gets both qualities by using a solid electrolyte (which can’t leak under stress) and by covering the electrodes with a membrane whose nanostructure of fibers is derived from Kevlar. That makes the membrane tough enough to suppress the growth of dendrites—branching fibers of metal that tend to form on an electrode with every charge-discharge cycle and which degrade the battery.

The Kevlar need not be purchased new but can be salvaged from discarded body armor. Other manufacturing steps should be easy, too, Kotov says. He has only just begun to talk to potential commercial partners, but he says there’s no reason why his battery couldn’t hit the market in the next three or four years.

Drones and other autonomous robots might be the most logical first application because their range is so tightly tied to their battery capacity. Also, because such robots don’t carry people about, they face less of a hurdle from safety regulators leery of a fundamentally new battery type.

“And it’s not just about the big Amazon robots but also very small ones,” Kotov says. “Energy storage is a very significant issue for small and flexible soft robots.”

Here’s a video showing how Kotov’s lab has used batteries to form the “exoskeleton” of robots that scuttle like worms or scorpions.

The Electric Weed-Zapper Renaissance

Post Syndicated from David Schneider original https://spectrum.ieee.org/tech-talk/energy/environment/the-electric-weed-zapper-renaissance

In the 1890s, U.S. railroad companies struggled with what remains a problem for railroads across the world: weeds. The solution that 19th-century railroad engineers devised made use of a then-new technology—high-voltage electricity, which they discovered could zap troublesome vegetation overgrowing their tracks. Somewhat later, the people in charge of maintaining tracks turned to using fire instead. But the approach to weed control that they and countless others ultimately adopted was applying chemical herbicides, which were easier to manage and more effective.

The use of herbicides, whether on railroad rights of way, agricultural fields, or suburban gardens, later raised health concerns, though. More than 100,000 people in the United States, for example, have claimed that Monsanto’s Roundup weed killer caused them to get cancer—claims that Bayer, which now owns Monsanto, is trying hard of late to settle.

Meanwhile, more and more places are banning the use of Roundup and similar glyphosate herbicides. Currently, half of all U.S. states have legal restrictions in place that limit the use of such chemical weed killers. Such restrictions are also in place in 19 other countries, including Austria, which banned the chemical in 2019, and Germany, which will be phasing it out by 2023. So, it’s no wonder that the concept of using electricity to kill weeds is undergoing a renaissance.

Actually, the idea never really died. A U.S. company called Lasco has been selling electric weed-killing equipment for decades. More recently, another U.S. company has been marketing this technology under the name “The Weed Zapper.” But the most interesting developments along these lines are in Europe, where electric weed control seems to be gaining real traction.

One company trying to replace herbicides with electricity is RootWave, based in the U.K. Andrew Diprose, RootWave’s CEO, is the son of Michael Diprose, who spent much of his career as a researcher at the University of Sheffield studying ways to control weeds with electricity.

Electricity, the younger Diprose explains, boasts some key benefits over other non-chemical forms of weed control, which include using hot water, steam, and mechanical extraction. In particular, electric weed control doesn’t require any water. It’s also considerably more energy efficient than using steam, which requires an order of magnitude more fuel. And unlike mechanical means, electric weed killing is also consistent with modern “no till” agricultural practices. What’s more, Diprose asserts, the cost is now comparable with chemical herbicides.

Unlike the electric weed-killing gear that’s long been sold in the United States, RootWave’s equipment runs at tens of kilohertz—a much higher frequency than the power mains. This brings two advantages. For one, it makes the equipment lighter, because the transformers required to raise the voltage to weed-zapping levels (thousands of volts) can be much smaller. It also makes the equipment safer, because higher frequencies pose less of a threat of electrocution. Should you accidentally touch a live electrode, “you will get a burn,” says Diprose, but there is much less of a threat of causing cardiac arrest than there would be with a system that operated at 50 or 60 hertz.
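The transformer-size argument follows from the standard EMF equation: for a fixed voltage, turns count, and peak flux density, the required core cross-section shrinks in inverse proportion to frequency.

```latex
% Standard transformer EMF equation: for fixed V_rms, N, and B_max,
% the required core cross-section A_c scales as 1/f.
V_{\text{rms}} = 4.44 \, f \, N \, B_{\max} \, A_c
\quad \Longrightarrow \quad
A_c \propto \frac{1}{f}
```

Running at tens of kilohertz rather than 50 hertz thus cuts the core area, and with it the iron and copper, by orders of magnitude in principle, though core losses at high frequency claw some of that back in practice.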

RootWave has two systems, a hand-carried one operating at 5 kilowatts and a 20-kilowatt version carried by a tractor. The company is currently collaborating with various industrial partners, including another U.K. startup called Small Robot Company, which plans to outfit an agricultural robot for automated weed killing with electricity. 

And RootWave isn’t the only European company trying to revive this old idea. Netherlands-based CNH Industrial is also promoting electric weed control with a tractor-mounted system it has dubbed “XPower.” As with RootWave’s tractor-mounted system, the electrodes are swept over a field at a prescribed height, killing the weeds that poke up higher than the crop to be preserved.

Among the many advantages CNH touts for its weed-electrocution system (and which presumably apply to all such systems, ever since the 1890s) is “No specific resistance expectable.” I should certainly hope not. But I do think that a more apropos wording here, for something that destroys weeds by placing them in a high-voltage electrical circuit, might be a phrase that both Star Trek fans and electrical engineers could better appreciate: “Resistance is futile.”