
Germany’s Energiewende, 20 Years Later

Post Syndicated from Vaclav Smil original https://spectrum.ieee.org/energy/renewables/germanys-energiewende-20-years-later

In 2000, Germany launched a deliberately targeted program to decarbonize its primary energy supply, a plan more ambitious than anything seen anywhere else. The policy, called the Energiewende, is rooted in Germany’s naturalistic and romantic tradition, reflected in the rise of the Green Party and, more recently, in public opposition to nuclear electricity generation. These attitudes are not shared by the country’s two large neighbors: France built the world’s leading nuclear industrial complex with hardly any opposition, and Poland is content burning its coal.

The policy worked through government subsidies for renewable electricity, generated by photovoltaic cells and wind turbines and by burning fuels produced from the fermentation of crops and agricultural waste. It was accelerated in 2011, when Japan’s nuclear disaster in Fukushima led the German government to order that all of its nuclear power plants be shut down by 2022.

During the past two decades, the Energiewende has been praised as an innovative miracle that will inexorably lead to a completely green Germany and criticized as an expensive, poorly coordinated overreach. I will merely present the facts.

The initiative has been expensive, and it has made a major difference. In 2000, 6.6 percent of Germany’s electricity came from renewable sources; in 2019, the share reached 41.1 percent. In 2000, Germany had an installed capacity of 121 gigawatts and it generated 577 terawatt-hours, which is 54 percent as much as it theoretically could have done (that is, 54 percent was its capacity factor). In 2019, the country produced just 5 percent more (607 TWh), but its installed capacity was 80 percent higher (218.1 GW) because it now had two generating systems.
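
As a quick check of those capacity-factor figures, here is a minimal sketch in Python; the generation and capacity numbers are the ones cited above:

```python
# Capacity factor = actual generation / (installed capacity x hours in a year)
HOURS_PER_YEAR = 8760

def capacity_factor(generation_twh: float, capacity_gw: float) -> float:
    """Fleet-wide capacity factor as a fraction."""
    max_possible_twh = capacity_gw * HOURS_PER_YEAR / 1000  # GWh -> TWh
    return generation_twh / max_possible_twh

print(f"2000: {capacity_factor(577, 121):.0%}")    # ~54%
print(f"2019: {capacity_factor(607, 218.1):.0%}")  # ~32%, the cost of running two systems
```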

The new system, using intermittent power from wind and solar, accounted for 110 GW, nearly 50 percent of all installed capacity in 2019, but operated with a capacity factor of just 20 percent. (That included a mere 10 percent for solar, which is hardly surprising, given that large parts of the country are as cloudy as Seattle.) The old system stood alongside it, almost intact, retaining nearly 85 percent of net generating capacity in 2019. Germany needs to keep the old system in order to meet demand on cloudy and calm days; it still supplies nearly half of all the electricity the country uses. In consequence, the capacity factor of this sector is also low.

It costs Germany a great deal to maintain such an excess of installed power. The average cost of electricity for German households has doubled since 2000. By 2019, households had to pay 34 U.S. cents per kilowatt-hour, compared to 22 cents per kilowatt-hour in France and 13 cents in the United States.

We can measure just how far the Energiewende has pushed Germany toward the ultimate goal of decarbonization. In 2000, the country derived nearly 84 percent of its total primary energy from fossil fuels; this share fell to about 78 percent in 2019. If continued, this rate of decline would leave fossil fuels still providing nearly 70 percent of the country’s primary energy supply in 2050.
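
That 2050 figure is a straight-line extrapolation of the 2000–2019 trend; here is a sketch of the arithmetic, assuming the historical rate of decline simply continues:

```python
# Fossil-fuel share of Germany's primary energy, extrapolated linearly.
share_2000, share_2019 = 0.84, 0.78
rate_per_year = (share_2000 - share_2019) / (2019 - 2000)   # ~0.3 points/year
share_2050 = share_2019 - rate_per_year * (2050 - 2019)
print(f"Projected fossil share in 2050: {share_2050:.0%}")  # ~68%
```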

Meanwhile, during the same 20-year period, the United States reduced the share of fossil fuels in its primary energy consumption from 85.7 percent to 80 percent, cutting almost exactly as much as Germany did. The conclusion is as surprising as it is indisputable. Without anything like the expensive, target-mandated Energiewende, the United States has decarbonized at least as fast as Germany, the supposed poster child of emerging greenness.

This article appears in the December 2020 print issue as “Energiewende, 20 Years Later.”

Iron Powder Passes First Industrial Test as Renewable, Carbon Dioxide-Free Fuel

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/energywise/energy/renewables/iron-powder-passes-first-industrial-test-as-renewable-co2free-fuel

Simple question: What if we could curb this whole fossil fuel-fed climate change nightmare and burn something else as an energy source instead? As a bonus, what if that something else is one of the most common elements on Earth?

Simple answer: Let’s burn iron.

While setting fire to an iron ingot is probably more trouble than it’s worth, fine iron powder mixed with air is highly combustible. When you burn this mixture, you’re oxidizing the iron. Whereas a carbon fuel oxidizes into CO2, an iron fuel oxidizes into Fe2O3, which is just rust. The nice thing about rust is that it’s a solid that can be captured post-combustion. And that’s the only byproduct of the entire business: in goes the iron powder, and out comes energy in the form of heat and rust powder. Iron has an energy density of about 11.3 kWh/L, which is better than gasoline. Its specific energy, however, is a relatively poor 1.4 kWh/kg, meaning that for a given amount of energy, iron powder takes up a little less space than gasoline but weighs almost ten times as much.
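
To make that comparison concrete, here is a minimal sketch; the iron figures come from the numbers above, while the gasoline figures (roughly 9.5 kWh/L and 12.9 kWh/kg) are typical reference values supplied here as assumptions:

```python
# Volume and mass needed to store 100 kWh: iron powder vs. gasoline.
ENERGY_KWH = 100
fuels = {
    # name: (energy density kWh/L, specific energy kWh/kg)
    "iron powder": (11.3, 1.4),   # figures from the article
    "gasoline":    (9.5, 12.9),   # approximate reference values
}
for name, (kwh_per_l, kwh_per_kg) in fuels.items():
    print(f"{name}: {ENERGY_KWH / kwh_per_l:.1f} L, {ENERGY_KWH / kwh_per_kg:.1f} kg")
# iron powder: 8.8 L, 71.4 kg
# gasoline:   10.5 L,  7.8 kg   -> iron is ~9x heavier for the same energy
```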

It might not be suitable for powering your car, in other words. It probably won’t heat your house either. But it could be ideal for industry, which is where it’s being tested right now.

Researchers from TU Eindhoven have been developing iron powder as a practical fuel for the past several years, and last month they installed an iron powder heating system at a brewery in the Netherlands, which is turning all that stored-up energy into beer. Since electricity can’t efficiently produce the kind of heat required for many industrial applications (brewing included), iron powder is a viable zero-carbon option, with only rust left over.

So what happens to all that rust? This is where things get clever, because the iron isn’t just a fuel that’s consumed—it’s energy storage that can be recharged. And to recharge it, you take all that Fe2O3, strip out the oxygen, and turn it back into Fe, ready to be burned again. It’s not easy to do this, but much of the energy and work that it takes to pry those Os away from the Fes gets returned to you when you burn the Fe the next time. The idea is that you can use the same iron over and over again, discharging it and recharging it just like you would a battery.
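
In chemical terms, the charge-discharge cycle looks roughly like this (a simplified sketch; real iron combustion can also yield other oxides, such as Fe3O4):

```latex
% Discharge: burning iron powder releases heat.
4\,\mathrm{Fe} + 3\,\mathrm{O_2} \longrightarrow 2\,\mathrm{Fe_2O_3} + \text{heat}
% Recharge: hydrogen reduction turns the rust back into iron powder.
2\,\mathrm{Fe_2O_3} + 6\,\mathrm{H_2} \longrightarrow 4\,\mathrm{Fe} + 6\,\mathrm{H_2O}
```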

To maintain the zero-carbon nature of the iron fuel, the recharging process has to be zero-carbon as well. There are a variety of different ways of using electricity to turn rust back into iron, and the TU/e researchers are exploring three different technologies based on hot hydrogen reduction (which turns iron oxide and hydrogen into iron and water), as they described to us in an email:

Mesh Belt Furnace: In the mesh belt furnace the iron oxide is transported by a conveyor belt through a furnace in which hydrogen is added at 800-1000°C. The iron oxide is reduced to iron, which sticks together because of the heat, resulting in a layer of iron. This can then be ground up to obtain iron powder.
Fluidized Bed Reactor: This is a conventional reactor type, but its use in hydrogen reduction of iron oxide is new. In the fluidized bed reactor the reaction is carried out at lower temperatures around 600°C, avoiding sticking, but taking longer.
Entrained Flow Reactor: The entrained flow reactor is an attempt to implement flash ironmaking technology. This method performs the reaction at high temperatures, 1100-1400°C, by blowing the iron oxide through a reaction chamber together with the hydrogen flow to avoid sticking. This might be a good solution, but it is a new technology and has yet to be proven.

Producing both the hydrogen and the heat needed to run the furnace or the reactors requires energy, of course, but it’s grid energy that can come from renewable sources.

If renewing the iron fuel requires hydrogen, an obvious question is why not just use hydrogen as a zero-carbon fuel in the first place? The problem with hydrogen is that as an energy storage medium, it’s super annoying to deal with, since storing useful amounts of it generally involves high pressure and extreme cold. In a localized industrial setting (like you’d have in your rust reduction plant) this isn’t as big of a deal, but once you start trying to distribute it, it becomes a real headache. Iron powder, on the other hand, is safe to handle, stores indefinitely, and can be easily moved with existing bulk carriers like rail.

Which is why its future looks to be in applications where weight is not a primary concern and collection of the rust is feasible. In addition to industrial heat generation (which will eventually include retrofitting coal-fired power plants to burn iron powder instead), the TU/e researchers are exploring whether iron powder could be used as fuel for large cargo ships, which are extraordinarily dirty carbon emitters that are also designed to carry a lot of weight. 

Philip de Goey, a professor of combustion technology at TU/e, told us that he hopes to be able to deploy 10 MW iron powder high-temperature heat systems for industry within the next four years, with 10 years to the first coal power plant conversion. There are still challenges, de Goey tells us: “the technology needs refinement and development, the market for metal powders needs to be scaled up, and metal powders have to be part of the future energy system and regarded as a safe and clean alternative.” De Goey’s view is that iron powder has a significant but well-constrained role in energy storage, transport, and production that complements other zero-carbon sources like hydrogen. For a zero-carbon energy future, de Goey says, “there is no winner or loser—we need them all.”

Why Does the U.S. Have Three Electrical Grids?

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/energy/renewables/why-does-the-us-have-three-electrical-grids

Steven Cherry Hi, this is Steven Cherry for Radio Spectrum.

If you look at lists of the 100 greatest inventions of all time, electricity figures prominently. Once you get past some key enablers that can’t really be called inventions—fire, money, the wheel, calendars, the alphabet—you find things like light bulbs, the automobile, refrigeration, radios, the telegraph and telephone, airplanes, computers and the Internet. Antibiotics and the modern hospital would be impossible without refrigeration. The vaccines we’re all waiting for depend on electricity in a hundred different ways.

It’s the key to modern life as we know it, and yet, universal, reliable service remains an unsolved problem. By one estimate, a billion people still do without it. Even in a modern city like Mumbai, generators are commonplace, because of an uncertain electrical grid. This year, California once again saw rolling blackouts, and with our contemporary climate producing heat waves that can stretch from the Pacific Coast to the Rocky Mountains, they won’t be the last.

Electricity is hard to store and hard to move, and electrical grids are complex, creaky, and expensive to change. In the early 2010s, Europe began merging its distinct grids into a continent-wide supergrid, an algorithm-based project that IEEE Spectrum wrote about in 2014. The need for a continent-wide supergrid in the U.S. has been almost as great, and by 2018 the planning of one was pretty far along—until it hit a roadblock that, two years later, still stymies any progress. The problem is not the technology, and not even the cost. The problem is political. That’s the conclusion of an extensively reported investigation jointly conducted by The Atlantic magazine and InvestigateWest, a watchdog nonprofit that was founded in 2009 after one of Seattle’s daily newspapers stopped publishing. The resulting article, with the heading “Who Killed the Supergrid?”, was written by Peter Fairley, who has been a longtime contributing editor for IEEE Spectrum and is my guest today. He joins us via Skype.

Peter, welcome to the podcast.

Peter Fairley It’s great to be here, Steven.

Steven Cherry Peter, you wrote that 2014 article in Spectrum about the Pan-European Hybrid Electricity Market Integration Algorithm, which you say was needed to tie together separate fiefdoms. Maybe you can tell us what was bad about the separate fiefdoms that served Europe nobly for a century.

Peter Fairley Thanks for the question, Steven. That story was about a pretty wonky development that nevertheless was very significant. Europe, over the last century, has amalgamated its power systems to the point where the European grid now exchanges electricity, literally across the continent, north, south, east, west. But until fairly recently, there have been sort of different power markets operating within it. So even though the different regions are all physically interconnected, there’s a limit to how much power can actually flow all the way from Spain up to Central Europe. And so there are these individual regional markets that handle keeping the power supply and demand in balance, and putting prices on electricity. And that algorithm basically made a big step towards integrating them all, so that you’d have one big, more competitive, open market and the ability, for example, if you have spare wind power in one area, to make use of it someplace a thousand kilometers away.

Steven Cherry The U.S. also has separate fiefdoms. Specifically, there are three that barely interact at all. What are they? And why can’t they share power?

Peter Fairley Now, in this case, when we’re talking about the U.S. fiefdoms, we’re talking about big zones that are physically divided. You have the Eastern—what’s called the Eastern Interconnection—which is a huge zone of synchronous AC power that’s basically most of North America east of the Rockies. You have the Western Interconnection, which is most of North America west of the Rockies. And then you have Texas, which has its own separate grid.

Steven Cherry And why can’t they share power?

Peter Fairley Everything within those separate zones is synced up. So you’ve got your 60-hertz AC wave, cycling 60 times a second as the power flow changes direction. And all of the generators, all of the power consumption within each zone is doing that synchronously. But the East is doing it on its own. The West is on a different phase. Same for Texas.

Now you can trickle some power across those divides, across what are called “seams” that separate those, using DC power converters—basically, sort of giant substations with the world’s largest electronic devices—which are taking some AC power from one zone, turning it into DC power, and then producing a synthetic AC wave, to put that power into another zone. So to give you a sense of just what the scale of the transfers is and how small it is, the East and the West interconnects have a total of about 950 gigawatts of power-generating capacity together. And they can share a little over one gigawatt of electricity.

Steven Cherry So barely one-tenth of one percent. There are enormous financial benefits and reliability benefits to uniting the three. Let’s start with reliability.

Peter Fairley Historically, when grids started out, you would have literally a power system for one neighborhood and a separate power system for another. And then ultimately, over the last century, they have amalgamated. Cities connected with each other and then states connected with each other. Now we have these huge interconnections. And reliability has been one of the big drivers for that, because you can imagine a situation where, if you’re in city X and your biggest power generator goes offline, you know, a burnout or whatever, and you’re interconnected with your neighbor, they probably have some spare generating capacity and they can help you out. They can keep the system from going down.

So similarly, if you could interconnect the three big power systems in North America, they could support each other. So, for example, if you have a major blackout or a major weather event like we saw last month—there was this massive heatwave in the West, and much of the West was struggling to keep the lights on. It wasn’t just California. If they were more strongly interconnected with Texas or the Eastern Interconnect, they could have leaned on those neighbors for extra power supply.

Steven Cherry Yeah, your article imagines, for example, the sun rising in the West during a heatwave, sending power east; or, once the sun sets in the Midwest, wind farms there sending power westward. What about the financial benefits of tying together these three interconnects? Are they substantial? And are they enough to pay for the work that would be needed to unify them into a supergrid?

Peter Fairley The financial benefits are substantial and they would pay for themselves. And there’s really two reasons for that. One is as old as our systems, and that is, if you interconnect your power grids, then all of the generators in the amalgamated system can, in theory, serve that total load. And what that means is they’re all competing against each other. And power plants that are inefficient are more likely to be driven out of the market or to operate less frequently. And so the whole system becomes more efficient, more cost-effective, and prices tend to go down. You see that kind of savings when you look at interconnecting the big grids in North America. Consumers benefit, though not necessarily all the power generators, right? There you get more winners and losers. And so that’s the old part of transmission economics.

What’s new is the increasing reliance on renewable energy and particularly variable renewable energy supplies like wind and solar. Their production tends to be more kind of bunchy, where you have days when there’s no wind and you have days when you’ve got so much wind that the local system can barely handle it. So there are a number of reasons why renewable energy really benefits economically when it’s in a larger system. You just get better utilization of the same installations.

Steven Cherry And that’s all true even when sending power 1,000 miles or 3,000 miles? You lose a fair amount of that generation, don’t you?

Peter Fairley It’s less than people imagine, especially if you’re using the latest high-voltage direct-current power transmission equipment. DC power lines transmit power more efficiently than AC lines do, because the physics are actually pretty straightforward. An AC current will ride on the outside of a power cable, whereas a DC current will use the entire cross-section of the metal. And so you get less resistance overall, less heating, and less loss. The power electronics that you need on either side of a long power line like that are also becoming much more efficient. So you’re talking about losses of a couple of percent on lines that, for example in China, span over 3,000 kilometers.
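
(The current "riding on the outside" of the cable that Fairley describes is the skin effect. As a rough sketch, the characteristic depth to which an AC current penetrates a conductor is

```latex
\delta = \sqrt{\frac{2\rho}{\omega\mu}}
```

where ρ is the conductor’s resistivity, ω the angular frequency, and μ its magnetic permeability. For copper at 60 Hz this works out to roughly 8.5 millimeters, so in a thick cable much of the cross-section carries little current; for DC, ω goes to zero and the whole cross-section conducts.)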

Steven Cherry The reliability benefits, the financial benefits, the way a supergrid would be an important step for helping us move off of our largely carbon-based sources of power—we know all this in part because in the mid-2010s a study was made of the feasibility—including the financial feasibility—of unifying the U.S. grids into one single supergrid. Tell us about the Interconnection Seams Study.

Peter Fairley So the Interconnection Seams Study [Seams] was one of a suite of studies that got started in 2016 at the National Renewable Energy Laboratory in Colorado, which is one of the national labs operated by the U.S. Department of Energy. And the premise of the Seams study was that the electronic converters sitting between the east and the west grids were getting old; they were built largely in the ’70s; they were going to start to fail and need to be replaced.

And the people at NREL were saying, this is an opportunity. Let’s think—and the power operators along the seam were thinking the same thing—we’re gonna have to replace these things. Let’s study our strategic options rather than have them go out of service and just automatically replace them with similar equipment. So what they posited was, let’s look at some longer DC connections to tie the East and the West together—and maybe some bigger ones. And let’s see if they pay for themselves. Let’s see if they have the kind of transformative effects that one would imagine that they would, just based on the theory. So they set up a big simulation modeling effort and they started running the numbers…

Now, of course, this got started in 2016 under President Obama. And it continued into 2017 and 2018 under a very different president. And basically, they affirmed that tying these grids together with long DC lines was a great idea, that it would pay for itself, that it would make much better use of renewable energy. But it also showed that it would accelerate the shutdown of coal-fired power. And that got them in some hot water with the new masters at the Department of Energy.

Steven Cherry By 2018 the study was largely completed, and researchers began to share its conclusions with other energy experts and policymakers. You describe, for example, a meeting in Iowa where there was a lot of excitement over the Seams study. But you write that things took a dramatic turn at one such gathering in Lawrence, Kansas.

Peter Fairley Yes. So the study was complete as far as the researchers were concerned. And they were working on their final task under their contract from the Department of Energy, which was to write and submit a journal article in this case. They were targeting an IEEE journal. And they, as you say, had started making some presentations. The second one was in August, in Kansas, and there’s a DOE official—a political appointee—who’s sitting in the audience and she does not like what she’s hearing. She, while the talk is going on, pulls out her cell phone, writes an email to DOE headquarters, and throws a red flag in the air.

Steven Cherry The drama moved up the political chain to a pretty high perch.

Peter Fairley According to an email from one of the researchers that I obtained and is presented in the InvestigateWest version of this article, it went all the way to the current secretary of energy, Daniel Brouillette, and perhaps to the then-Secretary of Energy, former Texas Governor [Rick] Perry.

Steven Cherry And the problem you say in that article was essentially the U.S. administration’s connections to—devotion to—the coal industry.

Peter Fairley Right. You’ve got a president who has made a lot of noise, both during his election campaign and since then, about clean, beautiful coal. He is committed to trying to stop the bleeding in the U.S. coal industry, to slow down or stop the ongoing mothballing of coal-fired power plants. His Secretary of Energy, Rick Perry, is doing everything he can to deliver on Trump’s promises. And along comes this study that says we can have a cleaner, more efficient power system with less coal. And yes, so it just ran completely counter to the political narrative of the day.

Steven Cherry You said earlier the financial benefits to consumers are unequivocal. But in the case of the energy providers, there would be winners and losers, and the losers would largely come from the coal industry.

Peter Fairley I would just add one thing to that, and this depends really on the different systems you’re looking at, the different conditions and scenarios and assumptions. But, you know, in a scenario where you have more renewable energy, there are also going to be impacts on natural gas. And the oil and gas industry is definitely also a major political backer of the Trump administration.

Steven Cherry The irony is that the grid is moving off of coal anyway, and to some extent, oil and even natural gas, isn’t it?

Peter Fairley Definitely oil. It’s just a very expensive and inefficient way to produce power. So we’ve been shutting that down for a long time. There’s very little left. We are shutting down coal at a rapid rate in spite of every effort to save it. Natural gas is growing. So natural gas has really been—even more so than renewables—the beneficiary of the coal shutdown. Natural gas is very cheap in the U.S. thanks to widespread fracking. And so it’s coming on strong and it’s still growing.

Steven Cherry Where is the Seams study now?

Peter Fairley The Seams study is sitting at the National Renewable Energy Lab. Its leaders, under pressure from the political appointees at DOE, have kept it under wraps. It appears that there may have been some additional work done on the study since it got held up in 2018. But we don’t know what the nature of that work was. Yeah, so it’s just kind of missing in action at this point.

My sources tell me that there is an effort underway at the lab to get it out. And I think the reason for that is that they’ve taken a real hit in terms of the morale of their staff. The NREL Seams study is not the only one that’s been held up, that is being held up. In fact, it’s one of dozens, according to my follow-up reporting. And, you know, NREL researchers are feeling pretty hard done by, and I think the management is trying to show its staff that it has some scientific integrity.

But I think it’s important to note that there are other political barriers to building a supergrid. It might be a no-brainer on paper, but in addition to the pushback from the fossil-fuel industry that we’re seeing with Seams, there are other political crosscurrents that have long stood in the way of long-distance transmission in the U.S. For example—and this is a huge one—in the U.S., most states have their own public utility commission that has to approve new power lines. And when you’re looking at the kind of lines that Seams contemplated, or that would be part of a supergrid, you’re talking about long lines that have to span, in some cases, a dozen states. And so you need to get approval from each of those states to transit, to send power from point A to point X. And that is a huge challenge. There’s a wonderful book that really explores that side of things called Superpower [Simon & Schuster, 2019] by the Wall Street Journal’s Russell Gold.

Steven Cherry The politics that led to the suppression of the publication of the Seams study go beyond Seams itself, don’t they? There are consequences, for example, at the Office of Energy Efficiency and Renewable Energy.

Peter Fairley Absolutely. Seams is one of several dozen studies that I know of right now that are held up and they go way beyond transmission. They get into energy efficiency upgrades to low-income housing, prices for solar power… So, for example—and I believe this hasn’t been reported yet; I’m working on it—the Department of Energy has hitherto published annual reports on renewable energy technologies like wind and solar. And, in those, they provide the latest update on how much it costs to build a solar power plant, for example. And they also update their goals for the technology. Those annual reports have now been canceled. They will be every other year, if not less frequent. That’s an example of politics getting in the way because the cost savings from delaying those reports are not great, but the potential impact on the market is. There are many studies, not just those performed by the Department of Energy that will use those official price numbers in their simulations. And so if you delay updating those prices for something like solar, where the prices are coming down rapidly, you are making renewable energy look less competitive.

Steven Cherry And even beyond the Department of Energy, the EPA, for example, has censored itself on the topic of climate change, removing information and databases from its own Web sites.

Peter Fairley That’s right. The way I think of it is, when you tell a lie, it begets other lies. And you have to tell more lies to cover your initial lie and to maintain the fiction. And I see the same thing at work here with the Trump administration. When the president says that climate change is a hoax, when the president says that coal is a clean source of power, it then falls to the people below him on the political food chain to somehow make the world fit his fantastical and anti-science vision. And so, you just get this proliferation of information control in a hopeless bid to try and bend the facts to somehow make the great leader look reasonable and rational.

Steven Cherry You say even e-mails related to the Seams study have disappeared, something you found in your Freedom of Information Act requests. What about the national labs themselves? Historically, they have been almost academic research organizations or at least a home for unfettered academic freedom style research.

Peter Fairley That’s the idea. There has been this presumption or practice in the past, under past administrations, that the national labs had some independence. And that’s not to say that there’s never been political oversight or influence on the labs. Certainly, the Department of Energy decides what research it’s going to fund at the labs. And so that in itself shapes the research landscape. But there was always this idea that you fund the study and then it’s up to the labs to do the best work they can and to publish the results. The idea that you are deep-sixing studies that are simply politically inconvenient, or altering the content of the studies to fit the politics, that’s new. That’s what people at the lab say is new under the Trump administration. In some cases it violates DOE’s own scientific integrity policies. With the Lawrence Berkeley National Laboratory, for example, it violates the lab’s scientific integrity policy and the contract language under which the University of California system operates that lab for the Department of Energy. So, yeah, the independence of the national labs is under threat today. And there are absolutely concerns among scientists that precedents are being set that could affect how the labs operate, even if, let’s say, President Trump is voted out of office in November.

Steven Cherry Along those lines, what do you think the future of grid unification is?

Peter Fairley Well, Steven, I’ve been writing about climate and energy for over 20 years now, and I would have lost my mind if I wasn’t a hopeful person. So I still feel optimistic about our ability to recognize the huge challenge that climate change poses and to change the way we live and to change our energy system. And so I do think that we will see longer power lines helping regions share energy in the future. I am hopeful about that. It just makes too much sense to leave that on the shelf.

Steven Cherry Well, Peter, it’s an amazing investigation of the sort that reminds us why the press is important enough to democracy to be called the fourth estate. Thanks for publishing this work and for joining us today.

Peter Fairley Thank you so much, Steven. It’s been a pleasure.

Steven Cherry We’ve been speaking with Peter Fairley, a journalist who focuses on energy and the environment, about his researching and reporting on the suspension of work on a potential unification of the U.S. energy grid.

This interview was recorded September 11, 2020. Our audio engineering was by Gotham Podcast Studio; our music is by Chad Crouch.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers.

For Radio Spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.

We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

Exclusive: Airborne Wind Energy Company Closes Shop, Opens Patents

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/energywise/energy/renewables/exclusive-airborne-wind-energy-company-closes-shop-opens-patents

This week, a 13-year experiment in harnessing wind power using kites and modified gliders finally closes down for good. But the technology behind it is open-sourced and is being passed on to others in the field.

As of 10 September, the airborne wind energy (AWE) company Makani Technologies has officially announced its closure. A key investor, the energy company Shell, also released a statement to the press indicating that “given the current economic environment” it would not be developing any of Makani’s intellectual property either. Meanwhile, Makani’s parent company, X, Alphabet’s moonshot factory, has made a non-assertion pledge on Makani’s patent portfolio. That means anyone who wants to use Makani patents, designs, software, and research results can do so without fear of legal reprisal.

Makani’s story, recounted last year on this site, is now the subject of a 110-minute documentary called Pulling Power from the Sky—also free to view.

Paula Echeverri (once Makani’s chief engineer) said the company was a compelling team to join when she was emerging from graduate studies at MIT in 2009, especially for a former aerospace engineering student.

“Energy kite design is not quite aircraft design and not quite wind turbine design,” she said.

The idea behind the company’s technology is to raise the altitude of wind energy harvesting to hundreds of meters in the sky—where the winds are typically both stronger and steadier. Because a traditional wind turbine tower reaching anything like these heights would be impractical, Makani was looking into kites or gliders that could ascend to altitude first—fastened to the ground by a tether. Only then would the flyer begin harvesting energy from wind gusts.

Pulling Power recounts Makani’s story from its very earliest days, circa 2006, when kites like the ones kite surfers use were the wind energy harvester of choice. Using kites, however, means drawing power from the tug on the kite’s tether, which, as the company’s early experiments revealed, couldn’t compete with propellers on a glider plane.

What became the Makani basic flyer, the M600 Energy Kite, looked like an oversized hobbyist’s glider but with a bank of propellers across the wing. These props would first be used to loft the glider to its energy-harvesting altitude. Then the motors would shut off and the glider would ride the air currents—using the props as mini wind turbines.

According to The Energy Kite, a free 1,180-page ebook (published in three parts) that Makani is also releasing online, the company soon found a potentially profitable niche in operating offshore.

Just in terms of tonnage, AWE had a big advantage over traditional offshore wind farms. Wind turbines fixed to the seabed in shallow water might require 200 to 400 tons of metal for every megawatt of power the turbine generated. And floating deep-water turbines, anchored to the seabed by cables, typically involve 800 tons or more per megawatt. Meanwhile, a Makani AWE platform—which can be anchored in even deeper water—weighed only 70 tons per rated megawatt of generating capacity.
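
A quick sketch of what those per-megawatt tonnages imply at scale; the 600-MW farm size below is an invented example, not a figure from the ebook:

```python
# Structural tonnage implied by the per-megawatt figures above.
TONS_PER_MW = {
    "fixed-bottom turbines (shallow water)": 300,  # midpoint of 200-400 t/MW
    "floating turbines (deep water)": 800,
    "Makani AWE platforms": 70,
}
FARM_MW = 600  # hypothetical installation size, for illustration only
for tech, tons_per_mw in TONS_PER_MW.items():
    print(f"{tech}: {tons_per_mw * FARM_MW:,} tons")
# fixed-bottom: 180,000 t; floating: 480,000 t; Makani: 42,000 t
```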

Yet, according to the ebook, in real-world tests, Makani’s M600 proved difficult to fly at optimum speed. In high winds, it couldn’t fly fast enough to pull as much power out of the wind as the designers had hoped. In low winds, it often flew too fast. In all cases, the report says, the rotors just couldn’t operate at peak capacity through much of the flyer’s maneuvers. The upshot: The company had a photogenic oversized model airplane, but not the technology that’d give regular wind turbines a run for their money.

Don’t take Makani’s word for it, though, says Echeverri. Not only is the company releasing its patents into the wild, it’s also giving away its code base, flight logs, and a Makani flyer simulation tool called KiteFAST.

“I think that the physics and the technical aspects are still such that, in floating offshore wind, there’s a ton of opportunity for innovation,” says Echeverri.

One of the factors the Makani team didn’t anticipate in the company’s early years, she said, was how precipitously electricity prices would continue to drop, leaving precious little room at the margins for new technologies like AWE to blossom and grow.

“We’re thinking about the existing airborne wind industry,” Echeverri said. “For people working on the particular problems we’d been working on, we don’t want to bury those lessons. We also found this to be a really inspiring journey for us as engineers—a joyful journey… It is worthwhile to work on hard problems.”

Solar Closing in on “Practical” Hydrogen Production

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/energywise/energy/renewables/solar-closing-in-on-practical-hydrogen-production

Israeli and Italian scientists have developed a renewable energy technology that converts solar energy to hydrogen fuel — and it’s reportedly at the threshold of “practical” viability.

The new solar tech would offer a sustainable way to turn water and sunlight into storable energy for fuel cells, whether that stored power feeds into the electrical grid or goes to fuel-cell powered trucks, trains, cars, ships, planes or industrial processes.

Think of this research as a sort of artificial photosynthesis, said Lilac Amirav, associate professor of chemistry at the Technion — Israel Institute of Technology in Haifa. (If it could be scaled up, the technology could eventually be the basis of “solar factories” in which arrays of solar collectors split water into stores of hydrogen fuel—as well as, for reasons discussed below, one or more other industrial chemicals.)

“We [start with] a semiconductor that’s very similar to what we have in solar panels,” says Amirav. But rather than taking the photovoltaic route of using sunlight to liberate a current of electrons, the reaction they’re studying harnesses sunlight to efficiently and cost-effectively peel off hydrogen from water molecules.

The big hurdle to date has been that hydrogen and oxygen just as readily recombine once they’re split apart—that is, unless a catalyst can be introduced to the reaction that shunts water’s two component elements away from one another.

Enter the rod-shaped nanoparticles Amirav and co-researchers have developed. The wand-like rods (50-60 nanometers long and just 4.5 nm in diameter) are all tipped with platinum spheres 2–3 nm in diameter, like nano-size marbles fastened onto the ends of drinking straws.

Since 2010, when the team first began publishing papers about such specially tuned nanorods, they’ve been tweaking the design to maximize its ability to extract as much hydrogen and excess energy as possible from “solar-to-chemical energy conversion.”

Which brings us back to those “other” industrial chemicals. Because creating molecular hydrogen out of water also yields oxygen, they realized they had to figure out what to do with that byproduct. “When you’re thinking about artificial photosynthesis, you care about hydrogen—because hydrogen’s a fuel,” says Amirav. “Oxygen is not such an interesting product. But that is the bottleneck of the process.”

There’s no getting around the fact that oxygen liberated from split water molecules carries energy away from the reaction, too. So, unless it’s harnessed, it ultimately represents just wasted solar energy—which means lost efficiency in the overall reaction.

So, the researchers added another reaction to the process. Not only does their platinum-tipped nanorod catalyst use solar energy to turn water into hydrogen, it also uses the liberated oxygen to convert the organic molecule benzylamine into the industrial chemical benzaldehyde (commonly used in dyes, flavoring extracts, and perfumes).
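
Schematically, the two coupled reactions look like this (a simplified sketch; the benzylamine oxidation is shown without full stoichiometry or byproducts, and the [O] denotes the oxidizing equivalents supplied by the water-splitting side of the cycle):

```latex
2\,\mathrm{H_2O} \xrightarrow{\;h\nu,\ \text{catalyst}\;} 2\,\mathrm{H_2} + \mathrm{O_2}
\qquad\qquad
\mathrm{C_6H_5CH_2NH_2} \xrightarrow{\;[\mathrm{O}]\;} \mathrm{C_6H_5CHO}
```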

All told, the nanorods convert 4.2 percent of the energy of incoming sunlight into chemical bonds. Considering the energy in the hydrogen fuel alone, they convert 3.6 percent of sunlight energy into stored fuel.

These might seem like minuscule figures. But 3.6 percent is still considerably better than the 1-2 percent range that previous technologies had achieved. And according to the U.S. Department of Energy, 5-10 percent efficiency is all that’s needed to reach what the researchers call the “practical feasibility threshold” for solar hydrogen generation.

Between February and August of this year, Amirav and her colleagues published papers on the above innovations in the journals NanoEnergy and Chemistry Europe. They also recently presented their research at the fall virtual meeting of the American Chemical Society.

In their presentation, which hinted at future directions for their work, they teased further efficiency improvements courtesy of new work with AI data-mining experts.

“We are looking for alternative organic transformations,” says Amirav. This way, she and her collaborators hope, their solar factories can produce hydrogen fuel plus an array of other useful industrial byproducts. In the future, their artificial photosynthesis process could yield low-emission energy, plus some beneficial chemical extracts as a “practical” and “feasible” side-effect.

South Africa’s Lights Flicker as its Electric Utility Ponders a Future Without Carbon

Post Syndicated from David Wagman original https://spectrum.ieee.org/energywise/energy/renewables/south-africas-lights-flicker-as-its-electric-utility-ponders-a-future-without-carbon

Optimists look to the future of Eskom, South Africa’s electric utility, and see a glass half full. Pessimists see a glass nearly empty.

The optimists look to what they say is a rare opportunity for a national utility to fully embrace renewable energy resources. In so doing, the power provider could decarbonize its fleet of electric generating plants as well as Africa’s largest economy. Tens of thousands of green energy jobs could be created and gigawatts of renewable generating resources could be added each year for the next 10 or 20 years.

The pessimists—some might call them realists—point to a utility buckling under a massive debt load, singed by a legacy of government cronyism, disappointed by the performance problems of two of the world’s newest and biggest coal-fired power plants, and swamped beneath a backlog of deferred maintenance projects.

In particular, deferred maintenance across the utility’s 37,000 megawatts (MW) of installed capacity has led to forced outages and rolling blackouts in recent weeks during this, South Africa’s winter season.

In late July, the utility urged consumers to turn off lights after four of six generating units at the 3,600-MW Tutuka power station shut down following equipment failures. Only days earlier, the utility had restored power after four other power plants suffered unexpected outages.

Eskom’s problems have been decades in the making and are closely tied to the country’s transition away from Apartheid, the policy of racial segregation that ruled South Africa from the late 1940s until the early 1990s.

As a state-owned entity, Eskom made massive investment in coal-fired generating capacity during the 1970s and 1980s. That capacity led to low electricity tariffs driven still lower by policymakers’ decision not to set aside money for future investment.

But as economic growth and energy consumption accelerated during the 1990s, the government see-sawed over whether to pursue new generating capacity in a restructured energy sector, led by private-sector investment, or by government-led investment, through Eskom.

It ultimately opted for government investment, and in the early 2000s, accepted proposals to build two enormous coal-fired power stations, known as Medupi (meaning “a gentle rain”) and Kusile (meaning “the dawn has come”).

Those decisions to build came at a time when South African President Jacob Zuma was entering office, and utility oversight was weakened when Zuma installed senior managers who “ran amok,” says Mark Swilling, who directs the Sustainable Development Program at Stellenbosch University. Eskom then did “the worst possible thing,” says Swilling: It took on enormous debt loads at a time when fiscal and management oversight was being reduced.

At the same time, the utility took on the role of project manager for both power plants. One source familiar with the projects said that no engineering firm wanted the work because of “all the variables” around the government’s role. And, because few big infrastructure projects had been undertaken since the end of Apartheid, little insight existed into factors such as labor productivity.

The government anted up billions of dollars in guarantees to support construction of both power plants. But engineering flaws, coupled with inexperienced project managers, led to delays and cost overruns at both sites.

What’s more, in 2015, a U.S. District Court fined equipment supplier Hitachi $19 million for improper payments it made to secure contracts to provide boilers for the power plants. The Japan-based industrial conglomerate—which fell under the reach of U.S. securities law because the company did business in the U.S.—agreed to pay the fine but did not admit guilt.

Hitachi’s power business has since been merged into Mitsubishi Hitachi Power Systems. Mitsubishi Hitachi Power Systems-Africa (MHPS-A) did not reply to requests for an interview.

Earlier this year, a South African government probe questioned fees Eskom paid to U.S.-based engineering group Black & Veatch as far back as 2005. The probe cited alleged price escalations along with purportedly no-bid contracts for work related to the Kusile power plant and other energy infrastructure projects.

In an email response to a request for comment about the inquiry, Black & Veatch spokesperson Patrick MacElroy said, “We were selected by Eskom in an open and transparent process to provide engineering, project management, and construction management support services for the Kusile power station by the Eskom board, the Tender Committee, and by National Treasury. This initial project award was extended, and the scope increased through the addition of approved annual Task Orders including the current agreement governing our operations on the project.”

MacElroy said that over time, the engineering firm’s role expanded, “which required additional resources compared to the initial project plan.” Using boldface, underlined type in his email, MacElroy said that “at no point was Black & Veatch responsible for the design of the boilers or coal ash systems often highlighted as a source of cost overruns and technical issues.”

One source calls the power plants “an albatross around the country’s neck.”

Medupi Power Station is a dry-cooled coal-fired power station with an installed capacity of 4,764 MW, placing it among the largest coal-fired power stations in the world. When work first began on the plant in 2007, Eskom said it could be built for around $4.7 billion. By last year, that estimate had grown to nearly $14 billion.

The first 794 MW unit was commissioned and handed over to Eskom Generation in August 2015. Another five units—each also approaching 800 MW in capacity—were completed and delivered at roughly nine-month intervals. The final unit is expected to achieve commercial status later this year.

A series of technical problems have plagued the plant, including steam-piping pressure problems, boiler weld defects, ash-system blockages, and damage to the mill crushers used to prepare coal for burning.

Most troubling, however, may be design defects with the power plant’s boilers, which were supplied by Hitachi prior to its joining Mitsubishi Hitachi Power Systems-Africa.

“There are fundamental problems” with the boilers, says Mark Swilling. In particular, design engineers “got the boilers wrong” for the type of coal that was to be burned. A May 2020 system status briefing given by two senior Eskom executives reported that technical solutions had been agreed to between Eskom and MHPS-A for boiler defects at both the Medupi and Kusile power stations.

The repairs were first made to Medupi Unit 3 during a 75-day shutdown earlier this year. Medupi Unit 6 was next for a 75-day repair outage. Three additional units are slated to be modified during the remainder of the year.

The World Bank, which provided some of the financing for the two power plants, issued a report this past February saying that the boiler defects were hurting two key plant performance metrics: energy availability factor (EAF) and unplanned capability loss factor (UCLF). Medupi’s numbers were “far from ideal values” for these measures, which would ideally stand at 92% and 2%, respectively, the report said.

The World Bank report said that Eskom’s EAF target for Medupi was 75%. That was a full 17 points below where it should have been. Even with the reduced target, some of the plant’s generating units were still operating at a substandard level.

Meanwhile, the target for UCLF, which represents unplanned energy and capacity losses, stood at 18%, nine times as high as the optimal value. As of February, all units except one were operating above this target, the World Bank report said.
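
For readers unfamiliar with these metrics, the standard definitions run roughly as follows (a sketch, not the World Bank’s exact formulas); a healthy plant has a high EAF and a low UCLF:

```latex
\mathrm{EAF} = \frac{\text{energy the plant was available to generate}}{\text{maximum possible energy output}}
\qquad
\mathrm{UCLF} = \frac{\text{energy lost to unplanned outages and deratings}}{\text{maximum possible energy output}}
```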

The report blamed the poor performance on “major plant defects” along with what it said was an inadequate maintenance and operating regime and difficulty managing spare parts.

The bottom line, says Swilling, is that the Medupi plant will “never, ever” achieve the performance targets that were specified before the generating units were built.

Further substantiating his outlook is another setback: It may not be until the early 2030s that flue gas desulfurization equipment is installed on each unit. A July oversight report by the African Development Bank said that retrofits to add those environmental controls, which were specified back in 2007, could cost another $2.5 billion.

Amid the turmoil, Andre de Ruyter took over as Eskom CEO in early January. Appointed by South African President Cyril Ramaphosa, de Ruyter is tasked with overseeing a plan to split Eskom into three units, for generation, transmission, and distribution, in an attempt to achieve efficiencies and attract investment. The executive has also undertaken a series of initiatives that some say are particularly bold.

First, de Ruyter committed to a schedule of power plant maintenance work “no matter the consequences,” as Swilling puts it. The result has been razor-thin reserve margins that have led to frequent electric service disruptions as some generating units are repaired while still others suffer forced outages.

Second, de Ruyter has said that Eskom will never build another coal-fired power plant. And third, he has called for the utility to lead an energy transformation in South Africa to decarbonize the grid and the economy.

The utility may have no other choice but to move away from fossil fuels. The average age of its coal-dominated power generating fleet is approaching 40 years. The legacy of deferred maintenance has not been kind. As a result, the price tag to repair the existing assets could exceed the cost of shutting them down and replacing the capacity with renewable energy resources.

That’s a tall order. After all, the utility has around 37 gigawatts of installed generating capacity, much of it coal-fired. By comparison, the country’s nascent, largely private renewable energy sector has installed around 5 GW of capacity over the past half-decade. An unprecedented boom in utility-scale wind and solar projects would need to ramp up rapidly just to keep pace with an expected surge of coal plant retirements.

But here, too, politics may get in the way. The country’s Minister of Public Enterprises supports a shift to renewable energy while the Minister of Mineral Resources and Energy does not.

“There is a lot of political uncertainty,” Swilling observes.

Whether or not Eskom—and, by extension, South Africa—shoulders this effort to shift from coal to renewables remains unclear. For now, at least, the utility’s glass appears to be nearly empty, drained by decades of bad management, meddlesome politicians, and mishandled infrastructure investments.

However, if Eskom in particular, and South Africans in general, embrace a plan to decarbonize the grid and build massive amounts of renewable energy, then the country may be able to fill its glass to overflowing as it shows the world how to restructure an economy in a way that fully embraces renewable energy.

Spherical Solar Cells Soak Up Scattered Sunlight

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/energywise/energy/renewables/spherical-solar-cells-soak-up-scattered-sunlight

Flat solar panels still face big limitations when it comes to making the most of the available sunlight each day. A new spherical solar cell design aims to boost solar power harvesting potential from nearly every angle without requiring expensive moving parts to keep tracking the sun’s apparent movement across the sky. 

Dethroned! Renewables Generated More Power than King Coal in April

Post Syndicated from Sandy Ong original https://spectrum.ieee.org/energywise/energy/renewables/renewables-generated-more-power-than-coal-in-april

For the first time ever, renewable energy supplied more power to the U.S. electricity grid than coal-fired plants for 47 days straight. The run is impressive because it trounces the previous record of nine continuous days last June and exceeds the total number of days renewables beat coal in all of 2019 (38 days).

In a recent report, the Institute for Energy Economics and Financial Analysis (IEEFA) details how the streak was first observed on 25 March and continued through to 10 May, the day the data was last analyzed. 

“We’ll probably track it again at the end of May, so the period could actually be longer,” says Dennis Wamsted, an energy analyst at IEEFA. Already, the figures for April speak volumes: wind, hydropower, and utility-scale solar sources produced 58.7 terawatt-hours (TWh) of electricity compared with coal’s 40.6 TWh—or 22.2% and 15.3% of the market respectively.  

In reality, the gap between the two sources is likely to be much larger, says Wamsted. That’s because the U.S. Energy Information Administration (EIA) database, from which IEEFA obtains its data, excludes power generated by rooftop solar panels, which is itself a huge power source.

The news that renewables overtook coal in the month of April isn’t surprising, says Brian Murray, director of the Duke University Energy Initiative. The first time this happened was last year, also in April. The month marks “shoulder season,” he says, “when heating is coming off but air-conditioning hasn’t really kicked in yet.” It’s when electricity demand is typically the lowest, which is why many power plants schedule their yearly maintenance during this time.

Spring is also when wind and hydropower generation peak, says Murray. Various thermal forces come into play with the Sun’s new positioning, and the melting snowpacks feed rivers and fill up reservoirs. 

“Normally you would expect some sort of rebound of coal generation in the summer, but I think there’s a variety of reasons why that’s not going to happen this year,” he says. “One has to do with coronavirus.”

With the pandemic placing most of the country in lockdown and economic activity declining, the EIA estimates that U.S. demand for electric power will fall by 5% in 2020. This, in turn, will drive coal production down by a quarter. In contrast, renewables are still expected to grow by 11%. The difference is partly due to how energy is dispatched to the grid: because of their cheaper operating costs, renewables are used first when available, followed by nuclear power, natural gas, and finally coal, as the sketch below illustrates.
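
This "merit order" dispatch logic can be sketched in a few lines of Python; the capacities and marginal costs below are invented placeholders, not EIA figures:

```python
# Merit-order dispatch: fill demand from the cheapest source upward.
# Capacities (GW) and marginal costs ($/MWh) are illustrative only.
SOURCES = [  # (name, capacity_gw, marginal_cost_usd_per_mwh)
    ("renewables",  30, 0),
    ("nuclear",     20, 10),
    ("natural gas", 40, 25),
    ("coal",        35, 35),
]

def dispatch(demand_gw: float) -> dict:
    """Return how much each source generates, cheapest first."""
    met, schedule = 0.0, {}
    for name, capacity, _cost in sorted(SOURCES, key=lambda s: s[2]):
        take = min(capacity, demand_gw - met)
        if take <= 0:
            break
        schedule[name] = take
        met += take
    return schedule

print(dispatch(70))   # low demand: coal never gets dispatched
print(dispatch(110))  # high demand: coal runs, but only last
```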

Coronavirus aside, the transition has been a long time coming. “Renewables have been on an inexorable rise for the last 10 years, increasingly eating coal’s lunch,” says Mike O’Boyle, director of electricity policy at Energy Innovation, a San Francisco-based think tank. The average coal plant in the U.S. is 40 years old, and these aging, inefficient plants are finding it increasingly difficult to compete against ever-cheaper renewable energy sources.

A decade ago, the average coal plant generated as much as 67% of the electricity its capacity allowed; today, that figure has dropped to 48%. And in the next five years, U.S. coal capacity is expected to fall to two-thirds of its 2014 level—a decline of 90 gigawatts (GW)—as increasing numbers of plants shut.

“And that’s without policy changes that we anticipate will strengthen in the U.S., in which more than a third of people are in a state, city, or utility with a 100% clean energy goal,” says O’Boyle. Already, 30 states have renewable portfolio standards, or policies designed to increase electricity generation from renewable resources.

The transition towards renewables is one that’s being observed all across the world today. Global use of coal-powered electricity fell 3% last year, the biggest drop in nearly four decades. In Europe, the figure was 24%. The region has been remarkably progressive in its march towards renewable energy—last month saw both Sweden and Austria close their last remaining coal plants, while the U.K. went through its longest coal-free stretch (35 days) since the Industrial Revolution more than 230 years ago.

But coal is still king in many parts of the world. For developing countries where electricity can be scarce and unreliable, the fossil fuel is often seen as the best option for power. 

The good news, however, is that the world’s two largest consumers of coal are investing heavily in renewables. Although China is still heavily reliant on coal, it also boasts the largest capacity of wind, solar, and hydropower in the world today. India, with its abundant sunshine, is pursuing an aggressive solar plan. It is building the world’s largest solar park, and Prime Minister Narendra Modi has pledged that the country will have 100 GW of solar power—five times what the U.S. generates—by 2022.

Today, renewable energy sources offer the cheapest form of power in two-thirds of the world, and they look set to get cheaper. They now supply up to 30% of global electricity demand, a figure that is expected to grow to 50% by 2050. As a recent United Nations report put it: renewables are now “looking all grown up.”

Next-Gen Solar Cells Can Harvest Indoor Lighting for IoT Devices

Post Syndicated from Maria Gallucci original https://spectrum.ieee.org/energywise/energy/renewables/next-gen-solar-cells-harvest-indoor-lighting-iot-devices

Billions of Internet-connected devices now adorn our walls and ceilings, sensing, monitoring, and transmitting data to smartphones and far-flung servers. As gadgets proliferate, so too does their electricity demand and need for household batteries, most of which wind up in landfills. To combat waste, researchers are devising new types of solar cells that can harvest energy from the indoor lights we’re already using.

The dominant material used in today’s solar cells, crystalline silicon, doesn’t perform as well under lamps as it does beneath the blazing sun. But emerging alternatives—such as perovskite solar cells and dye-sensitized materials—may prove to be significantly more efficient at converting artificial lighting to electrical power. 

A group of researchers from Italy, Germany, and Colombia is developing flexible perovskite solar cells specifically for indoor devices. In recent tests, their thin-film solar cell delivered power conversion efficiencies of more than 20 percent under 200 lux, the typical amount of illuminance in homes. That’s about triple the indoor efficiency of polycrystalline silicon, according to Thomas Brown, a project leader and engineering professor at the University of Rome Tor Vergata.
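
For a rough sense of scale, that reported efficiency can be turned into harvested power with one assumed conversion factor: white-LED light carries roughly 300 lumens per optical watt (the exact figure depends on the lamp’s spectrum). A minimal sketch under that assumption:

```python
# Back-of-envelope harvest from a small indoor cell. The 300 lm/W
# luminous-efficacy figure for white-LED light is an assumption; the
# real value depends on the light source's spectrum.

LUMENS_PER_OPTICAL_WATT = 300.0

def indoor_pv_power_uw(lux, cell_area_cm2, efficiency):
    """Estimated harvested power in microwatts."""
    irradiance_w_m2 = lux / LUMENS_PER_OPTICAL_WATT
    area_m2 = cell_area_cm2 / 1e4
    return irradiance_w_m2 * area_m2 * efficiency * 1e6

# 200 lux on a 10-cm^2 cell at 20% conversion efficiency:
print(indoor_pv_power_uw(200, 10, 0.20))  # ~133 microwatts
```

On the order of a hundred microwatts from a cell a few centimeters across is enough to trickle-charge many low-power sensors between transmissions.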

A Bright Spot for Solar Windows Powered By Perovskites

Post Syndicated from Sandy Ong original https://spectrum.ieee.org/energywise/energy/renewables/solar-windows-powered-perovskites

To most of us, windows are little more than glass panes that let light in and keep bad weather out. But to some scientists, windows represent possibility—the chance to take passive parts of a building and transform them into active power generators.

Anthony Chesman is one researcher working to develop such solar windows. “There are a lot of windows in the world that aren’t being used for any other purpose than to allow lighting in and for people to see through,” says Chesman, who is from Australia’s national science agency CSIRO. “But really, there’s a huge opportunity there in turning those windows into a space that can also generate electricity,” he says.

EnergySails Aim to Harness Wind and Sun To Clean Up Cargo Ships

Post Syndicated from Maria Gallucci original https://spectrum.ieee.org/energywise/energy/renewables/energysails-harness-wind-sun-clean-up-cargo-ships

The global shipping industry is experiencing a wind-powered revival. Metal cylinders now spin from the decks of a half-dozen cargo ships, easing the burden on diesel engines and curbing fuel consumption. Projects involving giant towing kites, vertical suction wings, and telescoping masts are well underway, while canvas sails flutter once more on smaller vessels.

The latest development in “wind-assisted propulsion” comes from Japan. Eco Marine Power (EMP) recently unveiled a full-scale version of its EnergySail system at the Onomichi Marine Tech Test Center in Hiroshima Prefecture. The rigid, rectangular device is slightly curved and can be positioned into the wind to create lift, helping propel vessels forward. Marine-grade solar panels along the face can supply electricity for onboard lighting and equipment.

Greg Atkinson, EMP’s chief technology officer, says the 4-meter-tall sail will undergo shore-based testing this year, in preparation for sea trials. The device will deliver 1 kilowatt of peak solar power (kWp), though the startup is still evaluating which type of photovoltaic panel to use. The sail’s potential propulsive power has yet to be determined, he says.

The EnergySail is one piece of EMP’s larger technology platform. The Fukuoka-based firm is also developing an integrated system that includes deck-mounted solar panels; recyclable marine batteries; charging systems; and computer programs that automatically rotate sails to capture optimal amounts of wind, or lower the devices when not in use or during bad weather. Atkinson notes that moving an EnergySail (mainly to optimize its wind collection) may affect how much sunlight it receives, though the panels can still collect solar power when lying flat.

The startup’s ultimate goal is to hoist about a dozen EnergySails on a tanker or freighter that has the available deck space. Models show an array of that size could deliver fuel savings of up to 15 percent, depending on wind conditions and the vessel’s size.

Gavin Allwright, secretary of the International Windship Association, says that figure is in line with projections for other wind-assisted technologies, which can help watercraft achieve between 5 and 20 percent fuel savings compared to typical ships. (EMP is not a member of the association.) For instance, the Finnish company Norsepower recently outfitted a Maersk oil tanker with two spinning rotor sails. The devices lowered the vessel’s fuel use by 8.2 percent on average during a 12-month trial period.

Shipping companies are increasingly investing in clean energy as international regulators move to slash global greenhouse gas emissions. Nearly all commercial cargo ships use oil or gas to carry goods across the globe; together, they contribute up to 3 percent of the world’s total annual fossil fuel emissions. Zero-emission alternatives like hydrogen fuel cells and ammonia-burning engines are still years from commercialization. But wind-assisted propulsion represents a more immediate, if partial, solution. 

For its EnergySail unit, EMP partnered with Teramoto Iron Works, which built the first rigid sails in the 1980s. Those devices—called JAMDA sails, after the Japan Marine Machinery Development Association—were shown to reduce fuel use by 10 to 30 percent on smaller coastal vessels, despite some technical issues. The experiment was short-lived, however: plunging oil prices eroded the business case for efficiency upgrades, and shipowners later took the sails down.

EMP is currently talking with several shipowners to start installing its full energy system, potentially later this year. For the sea trial, the startup plans to install a deck-mounted solar array with up to 25 kWp; battery packs; computer systems; and one or two EnergySails. Atkinson says it may take two to three years of testing to verify whether the equipment can weather harsh conditions, including fierce winds and corrosive saltwater. 

Separately, EMP has started testing the non-sail portion of its platform. In May 2019, the company installed a 1.2-kWp solar array on a large crane vessel owned by Singaporean carrier Masterbulk. The setup also includes a 3.6-kilowatt-hour VRLA (valve regulated lead acid) battery pack made by Furukawa Battery Co. An onboard monitoring system automatically reports and logs fuel-consumption data in real time and calculates daily emissions of carbon dioxide and sulfur dioxide.
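
The arithmetic behind such a calculation is straightforward. The sketch below uses standard factors (roughly 3.114 tonnes of CO2 per tonne of heavy fuel oil, an IMO figure, and two tonnes of SO2 per tonne of sulfur burned, from the molar-mass ratio), not anything published by EMP or Furukawa:

```python
# Emissions from a day's fuel burn, using standard conversion factors.
# The 0.5% sulfur content matches the current IMO fuel cap.

CO2_PER_TONNE_HFO = 3.114   # tonnes CO2 per tonne of heavy fuel oil
SO2_PER_TONNE_S   = 2.0     # SO2 weighs twice the sulfur it contains

def daily_emissions(fuel_tonnes, sulfur_fraction=0.005):
    """Return (tonnes CO2, tonnes SO2) for one day's fuel consumption."""
    co2 = fuel_tonnes * CO2_PER_TONNE_HFO
    so2 = fuel_tonnes * sulfur_fraction * SO2_PER_TONNE_S
    return co2, so2

# A vessel burning 20 tonnes of 0.5%-sulfur fuel in a day:
print(daily_emissions(20))  # (62.28 t CO2, 0.2 t SO2)
```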

EMP previously tested Furukawa’s batteries on a vessel in Greece. During the day, solar panels recharged the batteries, which kept the voltage stable and could directly power the vessel’s lighting load. The batteries also stored excess solar power to keep the lights on at night. It took the partners about five years of testing to ensure the system was stable.

Atkinson says that, so far, the COVID-19 pandemic hasn’t disrupted the company’s work or halted its plans for the year.

“We can do much of the design work remotely and by using cloud-based applications,” he says. “Also, we can use virtual wind tunnels and [Computer Aided Design] applications for much of the initial design work for the sea trials phase.”

Across the industry, however, the coronavirus outbreak is wreaking economic havoc. Allwright says that shipowner interest in wind-assisted propulsion was “absolutely crazy” until a few weeks ago. “Now, shipping companies are saying, ‘Look, we can’t invest in new technology right now because we’re trying to survive,’” he says. 

Still, some technology developers are accelerating their design work, in the hopes of launching projects as soon as the industry bounces back. “This pause gives the providers an extra 12 months to get these things tested and ready for action,” Allwright says.

Redox-Flow Cell Stores Renewable Energy as Hydrogen

Post Syndicated from Sandy Ong original https://spectrum.ieee.org/energywise/energy/renewables/storing-renewable-energy-hydrogen-redoxflow-cell

When it comes to renewables, the big question is: How do we store all that energy for use later on? Because such energy is intermittent in nature, storing it when there is a surplus is key to ensuring a continuous supply—for rainy days (literally), at night, or when the wind doesn’t blow.

Using today’s lithium-ion batteries for long-term grid storage isn’t feasible for a number of reasons. For example, they have fixed charge capacities and don’t hold charge well over extended periods of time. 

The solution, some propose, is to store energy chemically—in the form of hydrogen fuel—rather than electrically. This involves using devices called electrolyzers that make use of renewable energy to split water into hydrogen and oxygen gas. 
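
To see what that buys in storage terms, here is a minimal sketch assuming roughly 50 kilowatt-hours of electricity per kilogram of hydrogen produced (typical of today’s electrolyzers) and hydrogen’s lower heating value of 33.3 kWh per kilogram:

```python
# Storing a renewable surplus as hydrogen. Both conversion figures are
# typical published values, stated here as assumptions.

KWH_IN_PER_KG  = 50.0   # electricity consumed by the electrolyzer
KWH_OUT_PER_KG = 33.3   # chemical energy stored (lower heating value)

def store_surplus(surplus_kwh):
    """Convert surplus electricity to hydrogen; return (kg H2, kWh stored)."""
    kg_h2 = surplus_kwh / KWH_IN_PER_KG
    return kg_h2, kg_h2 * KWH_OUT_PER_KG

kg, kwh = store_surplus(1000)   # a 1-MWh midday solar surplus
print(f"{kg:.0f} kg of hydrogen, storing {kwh:.0f} kWh")  # 20 kg, ~666 kWh
```

A fuel cell converting that hydrogen back to electricity at roughly 50 percent efficiency brings the round trip to about a third; the compensating appeal is that hydrogen, unlike a charged battery, can sit in a tank for months without losing energy.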

Saitec Teams Up With German Utility RWE to Test Floating Wind Turbines in Bay of Biscay

Post Syndicated from Lynne Peskoe-Yang original https://spectrum.ieee.org/energywise/energy/renewables/saitec-rwe-test-floating-wind-turbines

Kilometers off the coast of Basque Country in northern Spain, a new twist on offshore wind energy will soon face its final test. The Spanish firm Saitec Engineering made headlines late last year with its distinctive floating turbine concept, and promised to deploy a prototype in April. Last week, that launch took on new significance when Saitec announced a partnership with the renewables division of the German energy titan RWE.

The potential to harvest wind from beyond the shoreline is substantial. “The farther from shore [the wind farm is located], the bigger the wind resource is,” said Luis González-Pinto, chief operating officer of Saitec Offshore Technologies.

Prototype Offers High Hopes for High-Efficiency Solar Cells

Post Syndicated from Maria Gallucci original https://spectrum.ieee.org/energywise/energy/renewables/prototype-high-efficiency-solar-cells-news

Scientists continue to tinker with recipes for turning sunlight into electricity. By testing new materials and components, in varying sizes and combinations, their goal is to produce solar cells that are more efficient and less expensive to manufacture, allowing for wider adoption of renewable energy. 

The latest development in that effort comes from researchers in St. Petersburg, Russia. The group recently created a tiny prototype of a high-efficiency solar cell using gallium phosphide and nitrogen. If successful, the cells could nearly double today’s efficiency rates—that is, the degree to which incoming solar energy is converted into electrical power.

The new approach could theoretically achieve efficiencies of up to 45 percent, the scientists said. By contrast, conventional silicon cells are typically less than 20 percent efficient. 

AltaRock Energy Melts Rock With Millimeter Waves for Geothermal Wells

Post Syndicated from Maria Gallucci original https://spectrum.ieee.org/energy/renewables/altarock-energy-melts-rock-with-millimeter-waves-for-geothermal-wells

A vast supply of heat lies beneath our feet. Yet today’s drilling methods can barely push through dense rocks and high-pressure conditions to reach it. A new generation of “enhanced” drilling systems aims to obliterate those barriers and unlock unprecedented supplies of geothermal energy.

AltaRock Energy is leading an effort to melt and vaporize rocks with millimeter waves. Instead of grinding away with mechanical drills, scientists use a gyrotron—a specialized high-frequency microwave-beam generator—to open holes in slabs of hard rock. The goal is to penetrate rock at faster speeds, to greater depths, and at a lower cost than conventional drills do.

The Seattle-based company recently received a US $3.9 million grant from the U.S. Department of Energy’s Advanced Research Projects Agency–Energy (ARPA-E). The three-year initiative will enable scientists to demonstrate the technology at increasingly larger scales, from burning through hand-size samples to room-size slabs. Project partners say they hope to start drilling in real-world test sites before the grant period ends in September 2022.

AltaRock estimates that just 0.1 percent of the planet’s heat content could supply humanity’s total energy needs for 2 million years. Earth’s core, at a scorching 6,000 °C, radiates heat through layers of magma, continental crust, and sedimentary rock. At extreme depths, that heat is available in constant supply anywhere on the planet. But most geothermal projects don’t reach deeper than 3 kilometers, owing to technical or financial restrictions. Many wells tap heat from geysers or hot springs close to the surface.

That’s one reason why, despite its potential, geothermal energy accounts for only about 0.2 percent of global power capacity, according to the International Renewable Energy Agency.

“Today we have an access problem,” says Carlos Araque, CEO of Quaise, an affiliate of AltaRock. “The promise is that, if we could drill 10 to 20 km deep, we’d basically have access to an infinite source of energy.”

The ARPA-E initiative uses technology first developed by Paul Woskov, a senior research engineer at MIT’s Plasma Science and Fusion Center. Since 2008, Woskov and his colleagues have used a 10-kilowatt gyrotron to produce millimeter waves at frequencies between 30 and 300 gigahertz. Elsewhere, millimeter waves are used for many purposes, including 5G wireless networks, airport security, and astronomy. While producing those waves requires only milliwatts of power, it takes several megawatts to drill through rocks.

To start, MIT researchers place a piece of rock in a test chamber, then blast it with high-powered, high-frequency beams. A metallic waveguide directs the beams to form holes. Compressed gas is injected to prevent plasma breakdown, in which the beam ionizes gas in the hole and ignites it, hampering the process. In trials, millimeter waves have bored holes through granite, basalt, sandstone, and limestone.

The ARPA-E grant will allow the MIT team to develop their process using megawatt-size gyrotrons at Oak Ridge National Laboratory, in Tennessee. “We’re trying to bring forward a disruption in technology to open up the way for deep geothermal energy,” Araque says.

Other enhanced geothermal systems now under way use mechanical methods to extract energy from deeper wells and hotter sources. In Iceland, engineers are drilling 5 km deep into magma reservoirs, boring down between two tectonic plates. Demonstration projects in Australia, Japan, Mexico, and the U.S. West—including one by AltaRock—involve drilling artificial fractures into continental rocks. Engineers then inject water or liquid biomass into the fractures and pump it to the surface. When the liquid surpasses 374 °C and 22,100 kilopascals of pressure, it becomes a “supercritical” fluid, meaning it can transfer energy more efficiently and flow more easily than water from a typical well.
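
Some quick arithmetic shows why those depths matter. This sketch assumes an average continental geothermal gradient of about 30 °C per kilometer (local gradients vary widely) and uses the 374 °C supercritical threshold cited above:

```python
# Temperature versus depth under an assumed average continental
# gradient of ~30 degrees C per kilometer (real gradients vary).

SURFACE_TEMP_C    = 15.0
GRADIENT_C_PER_KM = 30.0
CRITICAL_TEMP_C   = 374.0   # water's critical temperature

def temp_at_depth_c(depth_km):
    return SURFACE_TEMP_C + GRADIENT_C_PER_KM * depth_km

for depth_km in (3, 10, 20):
    t = temp_at_depth_c(depth_km)
    label = "supercritical possible" if t > CRITICAL_TEMP_C else "subcritical"
    print(f"{depth_km} km: ~{t:.0f} C ({label})")
# 3 km: ~105 C -- near today's practical drilling limit
# 10 km: ~315 C
# 20 km: ~615 C -- well past the supercritical threshold
```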

However, such efforts can trigger seismic activity, and projects in Switzerland and South Korea were shut down after earthquakes rattled surrounding cities. Such risks aren’t expected for millimeter-wave drilling. Araque says that while beams could spill outside their boreholes, any damage would be confined deep below ground.

Maria Richards, coordinator at Southern Methodist University’s Geothermal Laboratory, in Dallas, says that one advantage of using millimeter waves is that the drilling can occur almost anywhere—including alongside existing power plants. At shuttered coal facilities, deep geothermal wells could produce steam to drive the existing turbines.

The Texas laboratory previously explored using geothermal power to help natural-gas plants operate more efficiently. “In the end, it was too expensive. But if we could have drilled deeper and gotten higher temperatures, a project like ours would’ve been more profitable,” Richards says. She notes that millimeter-wave beams could also reach high-pressure offshore oil and gas reservoirs that are too dangerous for mechanical drills to tap.

This article appears in the March 2020 print issue as “AltaRock Melts Rock For Geothermal Wells.”

The Pros and Cons of the World’s Biggest Solar Park

Post Syndicated from Peter Fairley original https://spectrum.ieee.org/energy/renewables/the-pros-and-cons-of-the-worlds-biggest-solar-park

It’s 10 a.m. and Indian peanut farmer Venkeapream is relaxing at his family compound in Pavagada, an arid area north of Bangalore. The 67-year-old retired three years ago upon leasing his land to the Karnataka state government. That land is now part of a 53-square-kilometer area festooned with millions of solar panels. As his fields yield carbon-free electricity, Venkeapream pursues his passion full time: playing the electric harmonium, a portable reed organ.

With a capacity of 2 gigawatts and counting, Pavagada’s arrays represent the world’s largest cluster of photovoltaics. It’s also one of the most successful examples of a solar “park,” whereby governments provide multiple companies with land and transmission—two big hurdles that slow solar development. Solar parks account for much of the 25.5 GW of solar capacity India has added in the last five years. The states of Rajasthan and Gujarat have, respectively, 2.25-GW and 5.29-GW solar parks under way, and Egypt’s 1.8-GW installation is one of several new international projects.

Alas, even as they speed the growth of renewable energy, solar parks also concentrate some of solar energy’s liabilities.

Sheshagiri Rao, an agricultural researcher and farmer based near Pavagada, says lease payments give peanut farmers such as Venkeapream a steadier income. But Rao says shepherds who held traditional rights to graze their fields were fenced out without compensation, and many have sold off their flocks. In Venkeapream’s village, flocks once totaled 2,000 to 3,000 sheep. There are now only about 600 left.

The constant need to keep dust off the panels, meanwhile, has put more strain on already overtapped groundwater supplies. Local farmers bring water to clean the more than 400,000 panels at the Pavagada site of Indian energy developer Acme Cleantech Solutions. “At least 2 liters of water is required to clean one panel. This is huge,” says B. Prabhakar, Acme’s site manager. Robotic dusters allow Acme to clean just twice a month, but most operators lack such equipment.
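
The totals add up quickly. A back-of-envelope sketch, in which the cleaning frequency for operators without robots is an assumption:

```python
# Water used for panel cleaning at one site. The panel count and the
# 2-liter figure come from the article; the 4-cleanings-per-month
# manual schedule is an assumption for comparison.

PANELS = 400_000
LITERS_PER_PANEL = 2   # "at least 2 liters" per panel per cleaning

def monthly_water_liters(cleanings_per_month):
    return PANELS * LITERS_PER_PANEL * cleanings_per_month

print(monthly_water_liters(2))  # with robots, twice monthly: 1,600,000 L
print(monthly_water_liters(4))  # assumed weekly manual cleaning: 3,200,000 L
```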

Then there are the power surges and drops created as clouds pass over Pavagada—generation swings that must be countered with coal-fired and hydropower plants. Balancing renewable energy swings is a growing challenge for grid operators in Karnataka, which leads India in solar capacity and also has more than 4 GW of variable wind power.

Karnataka capped new solar parks at 0.2 GW after launching Pavagada. Analysts heralded the state’s apparent shift toward distributed installations, such as rooftop solar systems, during a November 2019 meeting on sustainable energy in neighboring state Tamil Nadu. As Saptak Ghosh, who leads renewable energy programs at the Bangalore-based Center for Study of Science, Technology & Policy (CSTEP), put it: “Pavagada will be the end of big solar parks in Karnataka. Smaller is the future.”

Just a few days later, though, news broke that Karnataka’s renewable energy arm was acquiring land for three 2.5-GW solar megaparks. The state’s move may reflect pressure from the national government to accelerate solar installations, as well as confidence that Pavagada’s shortcomings can be fixed.

Instead of harming shepherds, for example, solar operators could open their gates. Grass and weeds growing amidst the panels pose a serious fire risk, according to Acme’s Prabhakar. Increasingly, operators in other countries rely on sheep to keep vegetation down.

Higher-tech solutions may ultimately address Pavagada’s water consumption and cloud-induced power swings. Israeli robotics firm Ecoppia is already providing what it calls “water free” cleaning at the Pavagada site operated by Fortum, a Finnish energy company.

Karnataka’s solution for power swings at its new megaparks, meanwhile, is to plug the parks straight into the national grid’s biggest power lines. The three parks are a joint project with the national-government-owned Solar Energy Corporation of India and are designed to export renewable electricity to other states. Power stations outside of Karnataka will balance the solar parks’ generation, according to Ghosh’s colleague, CSTEP senior research engineer and power-grid specialist Milind R.

India’s government is eager to help, having promised to boost renewable capacity to 175 GW by March 2022 and to 450 GW by 2030. As Thomas Spencer, research fellow at the Energy and Resources Institute, a New Delhi–based nonprofit, noted at the November meeting in Tamil Nadu, India is “well off the track” for meeting either target.

This article appears in the February 2020 print issue as “India Grapples With Vast Solar Park.”

Bon Voyage for the Autonomous Ship Mayflower

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/energy/renewables/bon-voyage-for-the-autonomous-ship-mayflower

In September a modern-day Mayflower will launch from Plymouth, a seaside town on the English Channel. And as its namesake did precisely 400 years earlier, this boat will set a course for the other side of the Atlantic.

Weather permitting, the 2020 voyage will follow the same course, but that’s about the only thing the two ships will have in common. Instead of carrying pilgrims intent on beginning a new life in the New World, this ship will be fully autonomous, with no crew or passengers on board. It will be powered in part by solar panels and a wind turbine at its stern. The boat has a backup electrical generator on board, although there are no plans to refuel the boat at sea if the backup runs dry.

The ship will cross the Atlantic to Plymouth, Mass., in 12 days instead of the 60 days of the 1620 voyage. It’ll be made of aluminum and composite materials. And it will measure 15 meters and weigh 5 metric tons—half as long and 1/36 as heavy as the original wooden boat. Just as a spacefaring mission would, the new Mayflower will contain science bays for experiments to measure oceanographic, climate, and meteorological data. And its trimaran design makes it look a little like a sleek, scaled-down, seagoing version of the Battlestar Galactica, from the TV series of the same name.

“It doesn’t conform to any specific class, regulations, or rules,” says Rachel Nicholls-Lee, the naval architect designing the boat. But because the International Organization for Standardization has a set of standards for oceangoing vessels under 24 meters in length, Nicholls-Lee is designing the boat as close to those specs as she can. Of course, without anyone on board, the new Mayflower also sets some of its own standards, too. For instance, the tightly waterproofed interior will barely have room for a human to crawl around in and to access its computer servers.

“It’s the best access we can have, really,” says Nicholls-Lee. “There won’t be room for people. So it is cozy. It’s doable, but it’s not somewhere you’d want to spend much time.” She adds that there’s just one meter between the waterline and the top of the boat’s hull. Atop the hull will be a “sail fin” that juts up vertically to harness the wind for propulsion, while a vertical-axis turbine generates electricity from it.

Nicholls-Lee’s architectural firm, Whiskerstay, based in Cornwall, England, provides the nautical design expertise for a team that also includes the project’s cofounders, Greg Cook and Brett Phaneuf. Cook (based in Chester, Conn.) and Phaneuf (based in Plymouth, England) jointly head up the marine exploration nonprofit Promare.

Phaneuf, who’s also the managing director of the Plymouth-based submersibles consulting company MSubs, says the idea for the autonomous Mayflower quadricentennial voyage came up at a Plymouth city council meeting in 2016. With the 400th anniversary of the original voyage fast approaching, Phaneuf said Plymouth city councillors were chatting about ideas for commemorating the historical event.

“Someone said, ‘We’re thinking about building a replica,’ ” Phaneuf says. “[I said], ‘That’s not the best idea. There’s already a replica in Massachusetts, and I grew up not far from it and Plymouth Rock.’ Instead of building something that is a 17th-century ship, we should build something that represents what the marine enterprise for the next 400 years is going to look like.” The town’s officials liked the idea, and they gave him the resources to start working on it.

The No. 1 challenge was clear from the start: “How do you get a ship across the ocean without sinking?” Phaneuf says. “The big issue isn’t automation, because automation is all around us in our daily lives. Much of the modern world is automated—people just don’t realize it. Reliability at sea is really the big challenge.”

But the team’s budget constrained its ability to tackle the reliability problem head-on. The ship will be at sea on its own with no crewed vessel tailing it. In fact, its designers are assuming that much of the Atlantic crossing will have to be done with spotty satellite communications at best.

Phaneuf says that the new Mayflower will have little competition in the autonomous sailing ship category. “There are lots of automated boats, ranging in size from less than one meter to about 10 meters,” he says. “But are they ships? Are they fully autonomous? Not really.” Not that the team’s Mayflower is going to be vastly larger than 10 meters in length. “We only have enough money to build a boat that’s 15 meters long,” he says. “Not big, by the ocean’s standards. And even if it was as big as an aircraft carrier, there’s a few of them at the bottom of the ocean from the years gone by.”

Cook, who consults with Phaneuf and the Mayflower project from a distance in his Connecticut office, says the 400-year-anniversary deadline was always on researchers’ minds.

“There are a lot of days when you think we’ll never get this done,” Cook says. “And you just keep your head down and power through it. You go over it, you go under it, you go around it, or you go through it. Because you’ve got to get it. And we will.”

When IEEE Spectrum contacted Cook in October, he was negotiating with the shipyard in Gdańsk, Poland, that’s building the new Mayflower’s hull. The yard needed plans executed to a level of detail that the team was not quite ready to provide. But parts of the boat needed to be completed promptly, so the day’s balancing act was already under way.

The next day’s challenge, Cook says, involved the output from the radar systems on the boat. The finest commercial radars in the world, he says, are worthless if they can’t output raw radar data—which the computers on the ship will need to process. So finding a radar system that represents the best mix of quality, affordability, and versatility with respect to output was another struggle.

Nicholls-Lee specializes in designing sustainable energy systems for boats, so she was up to the challenge of developing a boat that one day might not need to refuel. The ship will have 15 solar panels, each just 3 millimeters thick, which means they’ll follow the curve of the hull. “They’re very low profile; they’re not going to get washed off or anything like that,” Nicholls-Lee says. On a clear day, the panels could potentially generate some 2.5 kilowatts.

The sail fin is expected to propel the boat to its currently projected average cruising speed of 10 knots. When it operates just on electricity—to be stored in a half-ton battery bank in the hull—the Mayflower should make from 4 to 5 knots.

The ship’s eyes and ears sit near the stern, Nicholls-Lee says. Radar, cameras, lights, antennas, satellite-navigation equipment, and sonar pods will all be perched above the hull on a specially outfitted mast.

Nicholls-Lee says she’s been negotiating “with the AI team, who want the mast with all the equipment on it as high up as possible.” The mast really can’t be placed further forward on the boat, she says, because anything that’s closer to the bow gets the worst of the waves and the weather. And although the boat could keep moving if its sail fin snapped off, the loss of the mast would leave the Mayflower unable to navigate, leaving it more or less dead in the water.

The problem with putting the sensors behind the sail fin, Nicholls-Lee says, is that it means losing a fair portion of the field of view. That’s a trade-off the engineers are willing to work with if it helps to reduce their chances of being dismasted by a particularly nasty wave or swell. In the worst case, in which the sail fin gets stuck in one position, blocking the radar, sonar, and cameras, the fin has an emergency clutch. Resorting to that clutch would deprive the ship of the wind’s propulsive power, but at least it wouldn’t blind the ship.

Behind all that hardware is the software, which of course ultimately does the piloting. IBM supplies the AI package, together with cloud computing power.

The 8 to 10 core team members are now adapting the hardware and software to the problem of transatlantic navigation, Phaneuf says. An example of what they’re tweaking is an element of the Mayflower’s software stack called the operational decision manager.

“It’s a thing that parses rules,” Phaneuf says. “It’s used in fiscal markets. It looks at bank swaps or credit card applications, making tens of thousands or millions of decisions, over and over again, all day. You put in a set of rules textually, and it keeps refining as you give it more input. So in this case we can put in all the collision regulations and all sorts of exceptions and alternative hypotheses that take into account when people don’t follow the rules.”
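
As a toy illustration of that idea (not IBM’s actual Operational Decision Manager), a rule engine can be as simple as a prioritized list of condition-action pairs, with additional rules covering the exceptions:

```python
# A toy rule engine in the spirit of the Mayflower's decision manager
# (not IBM's product): condition-action rules evaluated in priority
# order, with extra rules for exceptions -- here, the case where the
# other vessel fails to give way as required.

RULES = [
    # (priority, condition, action)
    (1, lambda s: s["collision_risk"] and s["other_bearing"] == "starboard",
        "turn_right"),       # give way to a vessel on your starboard bow
    (2, lambda s: s["collision_risk"] and not s["other_gave_way"],
        "slow_down"),        # exception: act even with right of way
    (3, lambda s: s["weather"] == "storm_ahead", "change_course"),
    (9, lambda s: True, "hold_course"),  # default when nothing else applies
]

def decide(situation):
    """Return the action of the highest-priority matching rule."""
    for _priority, condition, action in sorted(RULES, key=lambda r: r[0]):
        if condition(situation):
            return action

print(decide({"collision_risk": True, "other_bearing": "port",
              "other_gave_way": False, "weather": "clear"}))  # slow_down
```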

Eric Aquaronne, a cloud-and-AI strategist for IBM in Nice, France, says that ultimately the Mayflower’s software must output a perhaps deceptively simple set of decisions. “In the end, it has to decide, Do I go right, left, or change my speed?” Aquaronne says.

Yet within those options, at every instant during the boat’s voyage are hidden a whole universe of weather, sensor, and regulatory data, as well as communications with the IBM systems onshore that continue to train the AI algorithms. (The boat will sometimes lose the satellite connection, Phaneuf notes, at which point it is really on its own, running its AI inference algorithms locally.)

Today very little weather data is collected from the ocean’s surface, Phaneuf notes. A successful Mayflower voyage that gathered such data for months on end could therefore make a strong case for having more such autonomous ships out in the ocean.

“We can help refine weather models, and if you have more of these things out on the ocean, you could make weather prediction ever more resolute,” he says. But, he adds, “it’s the first voyage. So we’re trying not to go too crazy. I’m really just worried about getting across. I’m letting the other guys worry about the science packages. I’m mostly concerned with the ‘not sinking’ part now—and the ‘get there relatively close to where it’s supposed to be’ part. After that, the sky’s the limit.”

For Two Power Grid Experts, Hurricane Maria Became a Huge Experiment

Post Syndicated from Jean Kumagai original https://spectrum.ieee.org/energywise/energy/renewables/for-two-power-grid-experts-hurricane-maria-became-a-huge-experiment

In research, sometimes the investigator becomes part of the experiment. That’s exactly what happened to Efraín O’Neill-Carrillo and Agustín Irizarry-Rivera, both professors of electrical engineering at the University of Puerto Rico Mayagüez, when Hurricane Maria hit Puerto Rico on 20 September 2017. Along with every other resident of the island, they lost power in an islandwide blackout that lasted for months.

The two have studied Puerto Rico’s fragile electricity infrastructure for nearly two decades and, considering the island’s location in a hurricane zone, had been proposing ways to make it more resilient.

They also practice what they preach. Back in 2008, O’Neill-Carrillo outfitted his home with a 1.1-kilowatt rooftop photovoltaic system and a 5.4-kilowatt-hour battery bank that could operate independently of the main grid. He was on a business trip when Maria struck, but he worried a bit less knowing that his family would have power.

Irizarry-Rivera wasn’t so lucky. His home in San Germán also had solar panels. “But it was a grid-tied system,” he says, “so of course it wasn’t working.” It didn’t have storage or the necessary control electronics to allow his household to draw electricity directly from the solar panels, he explains.

“I estimated I wouldn’t get [grid] power until March,” Irizarry-Rivera says. “It came back in February, so I wasn’t too far off.” In the meantime, he spent more than a month acquiring and installing batteries, charge controllers, and a new stand-alone inverter. His family then relied exclusively on solar power for 101 days, until grid power was restored.

In “How to Harden Puerto Rico’s Grid Against Hurricanes,” the two engineers describe how Puerto Rico could benefit from community microgrids made up of similar small PV systems. The amount of power they produce wouldn’t meet the average Puerto Rican household’s typical demand. But, Irizarry-Rivera points out, you quickly learn to get by with less.

“We got a lot of things done with 4 kilowatt-hours a day,” he says of his own household. “We had lighting and our personal electronics working, we could wash our clothes, run our refrigerator. Everything else is just luxuries and conveniences.”
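
Four kilowatt-hours does not stretch far, but a sketch with typical appliance wattages (assumed values, not measurements from his home) shows how it can cover the essentials:

```python
# A daily energy budget of 4 kWh, itemized with assumed typical loads.

DAILY_BUDGET_WH = 4000

loads = {                       # (watts, hours per day)
    "refrigerator":    (60, 24),   # efficient unit, compressor duty-cycled
    "led_lighting":    (40, 5),
    "laptop_phones":   (50, 4),
    "washing_machine": (500, 0.5),
}

total = sum(watts * hours for watts, hours in loads.values())
print(f"{total:.0f} Wh of {DAILY_BUDGET_WH} Wh used")  # ~2090 Wh: within budget
```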

This article appears in the November 2019 print issue as “After Maria.”

Alphabet’s Makani Tests Wind Energy Kites in the North Sea

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/energywise/energy/renewables/alphabets-makani-tests-wind-energy-kites-in-the-north-sea

The idea is simple: Send kites or tethered drones hundreds of meters up in the sky to generate electricity from the persistent winds aloft. With such technologies, it might even be possible to produce wind energy around the clock. However, the engineering required to realize this vision is still very much a work in progress.

Dozens of companies and research groups developing technologies that generate power from winds high in the sky gathered at a conference in Glasgow, Scotland, last week. They presented studies, experiments, field tests, and simulations describing the efficiency and cost-effectiveness of various technologies collectively described as airborne wind energy (AWE).

In August, Alameda, Calif.-based Makani Technologies ran demonstration flights of its airborne wind turbines—which the company calls energy kites—in the North Sea, some 10 kilometers off the coast of Norway. According to Makani CEO Fort Felker, the North Sea tests consisted of a launch and “landing” test for the flyer followed by a flight test, in which the kite stayed aloft for an hour in “robust crosswind(s).” The flights were the first offshore tests of the company’s kite-and-buoy setup. The company has, however, been conducting onshore flights of various incarnations of their energy kites in California and Hawaii.

Wind Turbines Just Keep Getting Bigger, But There’s a Limit

Post Syndicated from Vaclav Smil original https://spectrum.ieee.org/energy/renewables/wind-turbines-just-keep-getting-bigger-but-theres-a-limit

Wind turbines have certainly grown up. When the Danish firm Vestas began the trend toward gigantism, in 1981, its three-blade machines were capable of a mere 55 kilowatts. That figure rose to 500 kW in 1995, reached 2 MW in 1999, and today stands at 5.6 MW. In 2021, MHI Vestas Offshore Wind’s V164 will rise 105 meters high at the hub, swing 80-meter blades, and generate up to 10 MW, making it the first commercially available double-digit-megawatt turbine. Not to be left behind, GE Renewable Energy is developing a 12-MW machine with a 260-meter tower and 107-meter blades, also rolling out by 2021.

That is clearly pushing the envelope, although it must be noted that still larger designs have been considered. In 2011, the UpWind project released what it called a predesign of a 20-MW offshore machine with a rotor diameter of 252 meters (three times the wingspan of an Airbus A380) and a hub diameter of 6 meters. So far, the limit of the largest conceptual designs stands at 50 MW, with height exceeding 300 meters and with 200-meter blades that could flex (much like palm fronds) in furious winds.

To imply, as an enthusiastic promoter did, that building such a structure would pose no fundamental technical problems because it stands no higher than the Eiffel Tower, constructed 130 years ago, is to choose an inappropriate comparison. If the constructible height of an artifact were the determinant of wind-turbine design, then we might as well refer to the Burj Khalifa in Dubai, a skyscraper that topped 800 meters in 2010, or to the Jeddah Tower, which will reach 1,000 meters in 2021. Erecting a tall tower is no great problem; it’s quite another proposition, however, to engineer a tall tower that can support a massive nacelle and rotating blades for many years of safe operation.

Larger turbines must face the inescapable effects of scaling. Turbine power increases with the square of the radius swept by its blades: A turbine with blades twice as long would, theoretically, be four times as powerful. But the expansion of the surface swept by the rotor puts a greater strain on the entire assembly, and because blade mass should (at first glance) increase as a cube of blade length, larger designs should be extraordinarily heavy. In reality, designs using lightweight synthetic materials and balsa can keep the actual exponent to as little as 2.3.
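
To put numbers on that scaling argument, a two-line sketch comparing the naive cubic law with the roughly 2.3 exponent achieved in practice:

```python
# Doubling blade length: the naive cubic mass law versus the ~2.3
# exponent that composite-and-balsa designs achieve in practice.

for exponent in (3.0, 2.3):
    print(f"mass grows {2 ** exponent:.1f}x when blade length doubles")
# cubic law: 8.0x; lightweight designs: ~4.9x
```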

Even so, the mass (and hence the cost) adds up. Each of the three blades of Vestas’s 10-MW machine will weigh 35 metric tons, and the nacelle will come to nearly 400 tons. GE’s record-breaking design will have blades of 55 tons, a nacelle of 600 tons, and a tower of 2,550 tons. Merely transporting such long and massive blades is an unusual challenge, although it could be made easier by using a segmented design.

Exploring likely limits of commercial capacity is more useful than forecasting specific maxima for given dates. Available wind turbine power is equal to half the density of the air (which is 1.23 kilograms per cubic meter) times the area swept by the blades (pi times the radius squared) times the cube of wind velocity. Assuming a wind velocity of 12 meters per second and an energy-conversion coefficient of 0.4, then a 100-MW turbine would require rotors nearly 550 meters in diameter.
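
That calculation is easy to verify. A minimal sketch of the formula, solved for the rotor diameter a 100-MW machine would need:

```python
from math import pi, sqrt

# The formula from the text: P = 0.5 * rho * A * v**3 * Cp,
# rearranged to find the rotor diameter a given power requires.

RHO = 1.23   # air density, kg/m^3
V   = 12.0   # wind speed, m/s
CP  = 0.4    # energy-conversion coefficient

def rotor_diameter_m(power_w):
    swept_area = power_w / (0.5 * RHO * CP * V ** 3)   # m^2
    return 2 * sqrt(swept_area / pi)

print(rotor_diameter_m(100e6))   # ~547 m, i.e. "nearly 550 meters"
```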

To predict when we’ll get such a machine, just answer this question: When will we be able to produce 275-meter blades of plastic composites and balsa, figure out their transport and their coupling to nacelles hanging 300 meters above the ground, ensure their survival in cyclonic winds, and guarantee their reliable operation for at least 15 or 20 years? Not soon.

This article appears in the November 2019 print issue as “Wind Turbines: How Big?”