Tag Archives: energy

Exclusive: Airborne Wind Energy Company Closes Shop, Opens Patents

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/energywise/energy/renewables/exclusive-airborne-wind-energy-company-closes-shop-opens-patents

This week, a 13-year experiment in harnessing wind power using kites and modified gliders finally closes down for good. But the technology behind it is open-sourced and is being passed on to others in the field.

As of 10 September, the airborne wind energy (AWE) company Makani Technologies has officially announced its closure. A key investor, the energy company Shell, also released a statement to the press indicating that “given the current economic environment” it would not be developing any of Makani’s intellectual property either. Meanwhile, Makani’s parent company, X, Alphabet’s moonshot factory, has made a non-assertion pledge on Makani’s patent portfolio. That means anyone who wants to use Makani patents, designs, software, and research results can do so without fear of legal reprisal.

Makani’s story, recounted last year on this site, is now the subject of a 110-minute documentary called Pulling Power from the Sky—also free to view.

Paula Echeverri, once Makani’s chief engineer, said that when she was emerging from graduate studies at MIT in 2009, the company was a compelling team to join, especially for a former aerospace engineering student.

“Energy kite design is not quite aircraft design and not quite wind turbine design,” she said.

The idea behind the company’s technology is to raise the altitude of wind energy harvesting to hundreds of meters in the sky—where the winds are typically both stronger and steadier. Because a traditional wind turbine tower reaching anything close to these heights would be impractical, Makani was looking into kites or gliders that could first ascend to altitude, fastened to the ground by a tether. Only then would the flyer begin harvesting energy from the wind.

Pulling Power recounts Makani’s story from its very earliest days, circa 2006, when kites like the ones kite surfers use were the wind energy harvester of choice. Using kites, however, means drawing power from the tug on the kite’s tether, a scheme that, as the company’s early experiments revealed, couldn’t compete with propellers on a glider plane.

What became the Makani basic flyer, the M600 Energy Kite, looked like an oversized hobbyist’s glider but with a bank of propellers across the wing. These props would first be used to loft the glider to its energy-harvesting altitude. Then the motors would shut off and the glider would ride the air currents, using the props as mini wind turbines.

According to The Energy Kite, a free 1,180-page ebook (released online in three parts), the company soon found a potentially profitable niche in operating offshore.

Just in terms of tonnage, AWE had a big advantage over traditional offshore wind farms. Wind turbines (in shallow water) fixed to the seabed might require 200 to 400 tons of metal for every megawatt of power the turbine generated. And floating deep-water turbines, anchored to the seabed by cables, typically involve 800 tons or more per megawatt. Meanwhile, a Makani AWE platform—which can be anchored in even deeper water—weighed only 70 tons per rated megawatt of generating capacity.
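To put those figures in perspective, here is the metal budget for a hypothetical 100-megawatt installation under each approach (our arithmetic, using the ebook’s per-megawatt figures, with midpoints where a range is given):

```python
# Metal required for a hypothetical 100 MW offshore installation,
# using the per-megawatt figures quoted in the ebook.
tons_per_mw = {
    "fixed-bottom offshore turbine": 300,  # midpoint of 200-400 t/MW
    "floating deep-water turbine": 800,    # "800 tons or more"
    "Makani AWE platform": 70,
}
farm_mw = 100
for tech, tons in tons_per_mw.items():
    print(f"{tech}: {tons * farm_mw:,} tons for {farm_mw} MW")
```

By this measure the AWE platform needs less than a tenth of the metal of a floating deep-water farm of the same rated capacity.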

Yet, according to the ebook, in real-world tests, Makani’s M600 proved difficult to fly at optimum speed. In high winds, it couldn’t fly fast enough to pull as much power out of the wind as the designers had hoped. In low winds, it often flew too fast. In all cases, the report says, the rotors just couldn’t operate at peak capacity through much of the flyer’s maneuvers. The upshot: The company had a photogenic oversized model airplane, but not the technology that’d give regular wind turbines a run for their money.

Don’t take Makani’s word for it, though, says Echeverri. Not only is the company releasing its patents into the wild, it’s also giving away its code base, flight logs, and a Makani flyer simulation tool called KiteFAST.

“I think that the physics and the technical aspects are still such that, in floating offshore wind, there’s a ton of opportunity for innovation,” says Echeverri.

One of the factors the Makani team didn’t anticipate in the company’s early years, she said, was how precipitously electricity prices would continue to drop, leaving precious little room at the margins for new technologies like AWE to blossom and grow.

“We’re thinking about the existing airborne wind industry,” Echeverri said. “For people working on the particular problems we’d been working on, we don’t want to bury those lessons. We also found this to be a really inspiring journey for us as engineers—a joyful journey… It is worthwhile to work on hard problems.”

Exclusive: GM Can Manage an EV’s Batteries Wirelessly—and Remotely

Post Syndicated from Lawrence Ulrich original https://spectrum.ieee.org/cars-that-think/energy/batteries-storage/ieee-spectrum-exclusive-gm-can-manage-an-evs-batteries-wirelesslyand-remotely

When the battery dies in your smartphone, what do you do? You complain bitterly about its too-short lifespan, even as you shell out big bucks for a new device. 

Electric vehicles can’t work that way: Cars need batteries that last as long as the vehicles do. One way of getting to that goal is by keeping close tabs on every battery in every EV, both to extend a battery’s life and to learn how to design longer-lived successors.

IEEE Spectrum got an exclusive look at General Motors’ wireless battery management system. It’s a first in any EV anywhere (not even Tesla has one). The wireless technology, created with Analog Devices, Inc., will be standard on a full range of GM EVs, with the company aiming for at least 1 million global sales by mid-decade. 

Those vehicles will be powered by GM’s proprietary Ultium batteries, produced at a new US $2.3 billion plant in Ohio in partnership with South Korea’s LG Chem.

Unlike today’s battery modules, which link up to an on-board management system through a tangle of orange wiring, GM’s system features RF antennas integrated on circuit boards. The antennas allow the transfer of data via a 2.4-gigahertz wireless protocol similar to Bluetooth but with lower power. Slave modules report back to an onboard master, sending measurements of cell voltages and other data.  That onboard master can also talk through the cloud to GM. 

The upshot is cradle-to-grave monitoring of battery health and operation, including real-time data from drivers in wildly different climates or usage cases. That all-seeing capability includes vast inventories of batteries—even before workers install them in cars on assembly lines.

“You can have one central warehouse monitoring all these devices,” says Fiona Meyer-Teruel, GM’s lead engineer for battery system electronics.

GM can essentially plug-and-play battery modules for a vast range of EVs, including heavy-duty trucks and sleek performance cars, without having to redesign wiring harnesses or communications systems for each. That can help the company speed models to market and ensure the profitability that has eluded most EV makers. GM engineers and executives said they’ve driven the cost of Ultium batteries, with their nickel-cobalt-manganese-aluminum chemistry, below the $100 per kilowatt-hour mark—long a Holy Grail for battery development. And GM has vowed that it will turn a profit on every Ultium-powered car it makes.

The wireless management system will let those EVs balance the charge within individual battery cell groups for optimal performance. Software and battery nodes can be reprogrammed over-the-air. With that in mind, the system was designed with end-to-end encryption to prevent hacking.
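For illustration only, here is a sketch of the kind of per-group report and balancing check such a system might perform. The field names and threshold are our invention, not GM’s or Analog Devices’ actual protocol:

```python
# Hypothetical sketch of a wireless cell-monitor node's report and a
# simple balancing check -- illustrative only, not GM's real design.
from dataclasses import dataclass

@dataclass
class CellGroupReport:
    """One wireless node's snapshot of a group of cells (invented schema)."""
    node_id: int
    cell_voltages_mv: list   # one reading per cell in the group
    temperature_c: float

def needs_balancing(report: CellGroupReport, spread_mv: int = 20) -> bool:
    """Flag a group whose cells have drifted apart by more than spread_mv."""
    return max(report.cell_voltages_mv) - min(report.cell_voltages_mv) > spread_mv

report = CellGroupReport(node_id=3, cell_voltages_mv=[3650, 3648, 3671],
                         temperature_c=31.5)
print(needs_balancing(report))  # True: the 23 mV spread exceeds the 20 mV threshold
```

In a real system the master would aggregate such reports, command balancing in the outlier group, and forward the telemetry to the cloud for fleet-wide analysis.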

Repurposing partially spent batteries also gets easier because there’s no need to overhaul the management system or fiddle with hard-to-recycle wiring. Wireless packs can go straight into their new roles, typically as load-balancing workhorses for  the grid.

“You can easily rescale the batteries for a second life, when they’re down to, say, 70-percent performance,” says Meyer-Teruel. 

The enormous GM and LG Chem factory, now under construction, will have the capacity to produce 30 gigawatt-hours of batteries each year, 50 percent more than Tesla’s Gigafactory in Nevada. The plant investment is a fraction of the $20 billion that GM is slated to pour into electric and autonomous cars by 2025, en route to the “all-electric future” touted by CEO Mary Barra. 

A reborn, electric GMC Hummer with up to 1,000 horsepower will be the first of about 20 GM Ultium-powered models, mainly for the U.S. and Chinese markets, when it reaches showrooms next year. It will be followed by a Cadillac Lyriq crossover SUV in 2022, and soon thereafter by an electric Chevrolet pickup.   

Andy Oury, GM’s lead architect for high-voltage batteries, said those customers will see benefits from the wireless system, without necessarily having to buy a new car. 

“Say, seven years from now, a customer needs an Ultium 1.0 battery, but we’re already using 2.0,” Oury said. “As long as they’re compatible, we can install the better one: Just broadcast the new chemistry, and incorporate new calibration tables to run it.” 

Tim Grewe, GM’s director of global electrification, says consumers may soon expect batteries to last four to five times as long as today’s, and companies need to respond. To that end, the wireless system stores metadata from each cell. Real-time battery health checks will refocus the network of modules and sensors when needed, safeguarding battery health over the vehicle’s lifespan. Vehicle owners will be able to opt in or out of more extensive monitoring of driving patterns. Analyzing that  granular data, Grewe said, can tease out tiny differences between battery batches, suppliers, or performance in varying regions and climates. 

“It’s not that we’re getting bad batteries today, but there’s a lot of small variations,” says Grewe. “Now we can run the data: Was that electrolyte a little different, was the processing of that electrode coating a little different?
 
“Now, no matter where it is—in the factory, assembling the car, or down the line—we have a record of cloud-based data and machine learning to draw upon.” 

The eco-friendly approach eliminates about a kilogram per vehicle, as well as three meters of wiring. Jettisoning nearly 90 percent of pack wiring ekes out another advantage: Throughout the industry, wired battery connectors demand enough physical clearance for human techs to squeeze two fingers inside. Eliminating the wiring and touchpoints carves out room to stuff more batteries into a given space, with a lower-profile design. Which leaves plenty of room for a thumbs-up. 

Solar Closing in on “Practical” Hydrogen Production

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/energywise/energy/renewables/solar-closing-in-on-practical-hydrogen-production

Israeli and Italian scientists have developed a renewable energy technology that converts solar energy to hydrogen fuel — and it’s reportedly at the threshold of “practical” viability.

The new solar tech would offer a sustainable way to turn water and sunlight into storable energy for fuel cells, whether that stored power feeds into the electrical grid or goes to fuel-cell powered trucks, trains, cars, ships, planes or industrial processes.

Think of this research as a sort of artificial photosynthesis, said Lilac Amirav, associate professor of chemistry at the Technion — Israel Institute of Technology in Haifa. (If it could be scaled up, the technology could eventually be the basis of “solar factories” in which arrays of solar collectors split water into stores of hydrogen fuel—as well as, for reasons discussed below, one or more other industrial chemicals.)

“We [start with] a semiconductor that’s very similar to what we have in solar panels,” says Amirav. But rather than taking the photovoltaic route of using sunlight to liberate a current of electrons, the reaction they’re studying harnesses sunlight to efficiently and cost-effectively peel off hydrogen from water molecules.

The big hurdle to date has been that hydrogen and oxygen just as readily recombine once they’re split apart—that is, unless a catalyst can be introduced to the reaction that shunts water’s two component elements away from one another.

Enter the rod-shaped nanoparticles Amirav and co-researchers have developed. The wand-like rods (50-60 nanometers long and just 4.5 nm in diameter) are all tipped with platinum spheres 2–3 nm in diameter, like nano-size marbles fastened onto the ends of drinking straws.

Since 2010, when the team first began publishing papers about such specially tuned nanorods, they’ve been tweaking the design to maximize its ability to extract as much hydrogen and excess energy as possible from “solar-to-chemical energy conversion.”

Which brings us back to those “other” industrial chemicals. Because creating molecular hydrogen out of water also yields oxygen, they realized they had to figure out what to do with that byproduct. “When you’re thinking about artificial photosynthesis, you care about hydrogen—because hydrogen’s a fuel,” says Amirav. “Oxygen is not such an interesting product. But that is the bottleneck of the process.”

There’s no getting around the fact that oxygen liberated from split water molecules carries energy away from the reaction, too. So, unless it’s harnessed, it ultimately represents just wasted solar energy—which means lost efficiency in the overall reaction.

So, the researchers added another reaction to the process. Not only does their platinum-tipped nanorod catalyst use solar energy to turn water into hydrogen, it also uses the liberated oxygen to convert the organic molecule benzylamine into the industrial chemical benzaldehyde (commonly used in dyes, flavoring extracts, and perfumes).

All told, the nanorods convert 4.2 percent of the energy of incoming sunlight into chemical bonds. Considering the energy in the hydrogen fuel alone, they convert 3.6 percent of sunlight energy into stored fuel.

These might seem like minuscule figures. But 3.6 percent is still considerably better than the 1-2 percent range that previous technologies had achieved. And according to the U.S. Department of Energy, 5-10 percent efficiency is all that’s needed to reach what the researchers call the “practical feasibility threshold” for solar hydrogen generation.
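As a rough illustration of what 3.6 percent efficiency would mean in practice, here is a back-of-envelope estimate. The insolation figure is our assumption, not a number from the research:

```python
# What a 3.6% solar-to-hydrogen efficiency implies per square meter of
# collector. Insolation value is an assumption for a sunny region.
insolation_kwh_per_m2_day = 5.0      # assumed daily solar input per m^2
efficiency = 0.036                   # fraction of sunlight stored as hydrogen fuel
h2_energy_kwh_per_kg = 33.3          # lower heating value of hydrogen

stored_kwh = insolation_kwh_per_m2_day * efficiency
h2_grams = stored_kwh / h2_energy_kwh_per_kg * 1000
print(f"~{stored_kwh:.2f} kWh stored per m^2 per day (~{h2_grams:.1f} g of H2)")
```

A few grams of hydrogen per square meter per day sounds modest, which is exactly why reaching the 5-to-10-percent threshold matters for practical deployment.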

Between February and August of this year, Amirav and her colleagues published about the above innovations in the journals NanoEnergy and Chemistry Europe. They also recently presented their research at the fall virtual meeting of the American Chemical Society.

In their presentation, which hinted at future directions for their work, they teased further efficiency improvements courtesy of new work with AI data-mining experts.

“We are looking for alternative organic transformations,” says Amirav. This way, she and her collaborators hope, their solar factories can produce hydrogen fuel plus an array of other useful industrial byproducts. In the future, their artificial photosynthesis process could yield low-emission energy, plus some beneficial chemical extracts as a “practical” and “feasible” side-effect.

Emrod Chases The Dream Of Utility-Scale Wireless Power Transmission

Post Syndicated from David Wagman original https://spectrum.ieee.org/energywise/energy/the-smarter-grid/emrod-chases-the-dream-of-utilityscale-wireless-power-transmission

California wildfires knock out electric power to thousands of people; a hurricane destroys transmission lines that link electric power stations to cities and towns; an earthquake shatters homes and disrupts power service. The headlines are dramatic and seem to occur more and more often.

The fundamental vulnerability in each case is that the power grid relies on metal cables to carry electricity every meter along the way. Since the days of Nikola Tesla and his famous coil, inventors and engineers have dreamt of being able to send large amounts of electricity over long distances, and all without wires.

During the next several months, a startup company, a government-backed innovation institute and a major electric utility will aim to scale up a wireless electric power transmission system that they say will offer a commercially viable alternative to traditional wire transmission and distribution systems.

The underlying idea is nothing new: energy is converted into electromagnetic radiation by a transmitting antenna, picked up by a receiving antenna, and then distributed locally by conventional means. This is the same thing that happens in any radio system, but in radio the amount of power that reaches the receiver can be minuscule; picking up a few picowatts is all that is needed to deliver an intelligible signal. In wireless power transfer, by contrast, the amount of raw energy delivered is what matters, which makes the fraction of transmitted energy that is actually received the key design parameter.
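To see why focusing matters, consider the unfocused case. A minimal sketch of the standard Friis transmission equation (illustrative numbers only, not Emrod’s parameters) shows how little power an ordinary radio link delivers over distance:

```python
# Friis transmission equation: P_r / P_t = G_t * G_r * (lambda / (4*pi*d))^2
# Illustrative numbers only -- not Emrod's actual design parameters.
import math

def friis_fraction(gain_tx, gain_rx, freq_hz, distance_m):
    """Fraction of transmitted power received over a free-space radio link."""
    wavelength = 3e8 / freq_hz
    return gain_tx * gain_rx * (wavelength / (4 * math.pi * distance_m)) ** 2

# A conventional 2.4 GHz link over 1 km with modest antenna gains of 10:
frac = friis_fraction(gain_tx=10, gain_rx=10, freq_hz=2.4e9, distance_m=1000)
print(f"Fraction of power received: {frac:.2e}")
```

The result is on the order of a hundred-millionth of the transmitted power, which is ample for a data signal but useless for power delivery; a tightly collimated beam, as Emrod proposes, is what pushes the received fraction toward unity.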

What’s new here is how New Zealand startup Emrod has borrowed ideas from radar and optics and used metamaterials to focus the transmitted radiation even more tightly than previous microwave-based wireless power attempts.

The “quasi-optical” system shapes the electromagnetic pulse into a cylindrical beam, thus making it “completely different” from the way a cell phone tower or radio antenna works, said Dr. Ray Simpkin, chief science officer at Emrod, which has a Silicon Valley office in addition to its New Zealand base. Simpkin’s background is in radar technology and he is on loan from Callaghan Innovation, the New Zealand government-sponsored innovation institute that is backing the wireless power startup.

Emrod’s laboratory prototype currently operates indoors at a distance of just 2 meters. Work is under way to build a 40-meter demonstration system, but it, too, will be indoors, where conditions can be easily managed. Sometime next year, though, Emrod plans a field test at a still-to-be-determined grid-connected facility operated by Powerco, New Zealand’s second largest utility with around 1.1 million customers.

In an email, Powerco said that it is funding the test with an eye toward learning how much power the system can transmit and over what distance. The utility also is providing technical assistance to help Emrod connect the system to its distribution network. Before that can happen, however, the system must meet a number of safety, performance and environmental requirements.

One safety feature will be an array of lasers spaced along the edges of flat-panel receivers that are planned to catch and then pass along the focused energy beam. These lasers are pointed at sensors at the transmitter array so that if a bird in flight, for example, interrupted one of the lasers, the transmitter would pause a portion of the energy beam long enough for the bird to fly through.

Emrod’s electromagnetic beam operates at frequencies classified as industrial, scientific and medical (ISM). The company’s founder, Greg Kushnir, said in a recent interview that the power densities are roughly the equivalent of standing in the sun outside at noon, or around 1 kW per square meter.

Emrod sees an opportunity for utilities to deploy its technology to deliver electric service to remote areas and locations with difficult terrain. The company is looking at the feasibility of spanning a 30-km strait between the southern tip of New Zealand and Stewart Island. Emrod estimates that a 40-square-meter transmitter would do the job. And although he offered no detailed cost estimates, Simpkin said the system could cost around 60 percent as much as a subsea cable.
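Combining two figures from the article, the roughly 1 kW/m² beam density and the 40-square-meter aperture, gives a rough sense of scale. This arithmetic is ours, not an Emrod specification:

```python
# Back-of-envelope estimate from two figures quoted in the article:
# ~1 kW/m^2 beam power density and a 40-square-meter aperture.
# Our own arithmetic, not an Emrod specification.
power_density_kw_per_m2 = 1.0
aperture_m2 = 40.0
transmitter_efficiency = 0.70   # transmitter electronics cited as ~70% efficient

beam_power_kw = power_density_kw_per_m2 * aperture_m2
delivered_kw = beam_power_kw * transmitter_efficiency
print(f"Beam power: ~{beam_power_kw:.0f} kW; after transmitter losses: ~{delivered_kw:.0f} kW")
```

Tens of kilowatts is enough for a small remote community, which is consistent with the remote-area use cases the company describes.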

Another potential application would be in post-disaster recovery. In that scenario, mobile transmitters would be deployed to close a gap between damaged or destroyed transmission and distribution lines.

The company has a “reasonable handle” on costs, Simpkin said, with the main areas for improvement coming from commercially available transmitter components. Here, the company expects that advancements in 5G communications technology will spur efficiency improvements. At present, its least efficient point is at the transmitter where existing electronic components are no better than around 70 percent efficient.

“The rule book hasn’t really been written,” he said, for this effort to meld wireless power transfer with radar and optics. “We are taking a softly, softly approach.”

ITER Celebrates Milestone, Still at Least a Decade Away From Fusing Atoms

Post Syndicated from Payal Dhar original https://spectrum.ieee.org/tech-talk/energy/nuclear/iter-fusion-reactor

It was a twinkle in U.S. President Ronald Reagan’s eye, an enthusiasm he shared with General Secretary Mikhail Gorbachev of the Soviet Union: boundless stores of clean energy from nuclear fusion.

That was 35 years ago. 

On July 28, 2020, the product of these Cold Warriors’ mutual infatuation with fusion, the International Thermonuclear Experimental Reactor (ITER) in Saint-Paul-lès-Durance, France inaugurated the start of the machine assembly phase of this industrial-scale tokamak nuclear fusion reactor. 

An experiment to demonstrate the feasibility of nuclear fusion as a virtually inexhaustible, waste-free and non-polluting source of energy, ITER has already been 30-plus years in planning, with tens of billions invested. And if there are new fusion reactors designed based on research conducted here, they won’t be powering anything until the latter half of this century.

Speaking from Elysée Palace in Paris via an internet link during last month’s launch ceremony, President Emmanuel Macron said, “[ITER] is proof that what brings together people and nations is stronger than what pulls them apart. [It is] a promise of progress, and of confidence in science.” Indeed, as the COVID-19 pandemic continues to baffle modern science around the world, ITER is a welcome beacon of hope.

ITER comprises 35 collaborating countries, including members of the European Union, China, India, Japan, Russia, South Korea and the United States, which are directly contributing to the project either in cash or in kind with components and services. The EU has contributed about 45%, while the others pitch in about 9% each. The total cost of the project could be anywhere between $22 billion and $65 billion, though the latter figure has been disputed.

The idea for ITER was sparked back in 1985, at the Geneva Superpower Summit, where President Ronald Reagan of the United States and General Secretary Mikhail Gorbachev of the Soviet Union spoke of an international collaboration to develop fusion energy. A year later, at the US–USSR Summit in Reykjavik, an agreement was reached between the European Union’s Euratom, Japan, the Soviet Union and the United States to jointly start work on the design of a fusion reactor. At that time, controlled release of fusion power hadn’t even been demonstrated—that only happened in 1991, by the Joint European Torus (JET) in the UK.

The first big component to be installed at ITER was the 1,250-metric-ton cryostat base, which was lowered into the tokamak pit in late May 2020. The cryostat is India’s contribution to the reactor; placing components weighing hundreds of tonnes to positioning tolerances of a few millimeters requires specialized tools procured for ITER by the Korean Domestic Agency. Machine assembly is scheduled to finish by the end of 2024, and by mid-2025, we are likely to see first plasma production.

Anil Bhardwaj, group leader of the cryostat team, tells IEEE Spectrum, “First plasma will only verify [various] compliances for initial preparation of the plasma. That does not mean that we are achieving fusion.”

That will come another decade or so down the line.

If everything goes to plan, the first deuterium–tritium fusion experiments will be demonstrated by 2035, and will in essence be replicating the fusion reactions that take place in the sun. ITER estimates that for 50 MW of power injected into the tokamak to heat the plasma (up to 150 million degrees Celsius), 500 MW of thermal power for 400- to 600-second periods will be output, a tenfold return (expressed as Q ≥ 10). The existing record as of now is Q = 0.67, held by the JET tokamak.
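The gain target is simple arithmetic on the figures above:

```python
# ITER's target fusion gain and per-pulse energy, from the article's figures.
p_in_mw = 50     # heating power injected into the plasma
p_out_mw = 500   # target thermal fusion output

q = p_out_mw / p_in_mw
print(f"Q = {q:.0f}")  # the tenfold return, i.e. Q >= 10

# Thermal energy released over one 400- to 600-second pulse:
for pulse_s in (400, 600):
    energy_gj = p_out_mw * 1e6 * pulse_s / 1e9
    print(f"{pulse_s} s pulse -> {energy_gj:.0f} GJ of thermal energy")
```

For comparison, JET’s record of Q = 0.67 means it consumed more heating power than its fusion reactions returned.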

Despite recent progress, there is still a lot of uncertainty around ITER. Critics decry the hyperbole around it, especially of it being “a magic-bullet solution to the world’s energy problems,” in the words of Daniel Jassby, a former researcher at the Princeton Plasma Physics Lab. His 2017 article explains why “scaling down the sun” may not be the ideal fallback plan.

“In the most feasible terrestrial fusion reaction [using deuterium–tritium fuel], 80% of the fusion output is in the form of barrages of neutron bullets, whose conversion to electrical energy is a dubious endeavor,” he said in an interview. Switching to a different type of reactor based on much weaker fusion reactions might result in less neutron production, but such reactors are also unlikely to produce net energy of any kind.

Delays and mismanagement have also plagued ITER, something that Jassby contends was a result of poor leadership. “There are only a few people in the world who have the technological, administrative and political expertise that allow them to make continuous progress in directing and completing a multinational project,” he said. Bernard Bigot, who took over as director-general five years ago, possesses the requisite skillset, in Jassby’s opinion. At present, ITER is running about six years behind schedule.

Critics of ITER are also concerned about diverting resources from developing existing renewable energies. “The greatest energy issue of our time is not supply, but how to choose among the plethora of existing energy sources for wide-scale deployment,” Jassby said. ITER’s value, however, he said, lies in delinking the fantasy of electricity from fusion energy, thus saving hundreds of billions of dollars in the long run.

Jassby thinks that if successful, ITER will allow physicists to study long-lived, high-temperature fusioning plasmas or the development of neutron sources. There are practical applications for fusion neutrons, he says, such as isotope production, radiography and activation analysis. He adds that ITER can have significant benefits if the new technologies it fosters, such as superconducting magnets, new materials and novel fabrication techniques, find application in other fields.

Philippa Browning, professor of astrophysics at the University of Manchester, believes that only something of the scale of ITER can test how things work in fusion reactors. “It may well be that in future alternative devices turn out to be better, but those advantages could be incorporated into the successor to ITER which will be a demonstration fusion power station… The route to fusion power is slow, [so] we can hope that it will be ready when it is really needed in the second half of this century.” Meanwhile, she added, “it is important that other approaches to fusion are explored in parallel, smaller and more agile projects.”

One of the most impressive things about ITER, Browning said, is the combination of a truly international cooperation pushing at the frontiers in many ways. “Understanding how plasmas interact with magnetic fields is a hugely challenging scientific problem… There are all sorts of scientific and technological spin-offs, as well as the direct contribution to achieving, hopefully, a fusion power station.”

A Battery That’s Tough Enough To Take Structural Loads

Post Syndicated from Philip E. Ross original https://spectrum.ieee.org/energywise/energy/batteries-storage/a-structural-battery-that-makes-up-the-machine-that-it-powers

Batteries can add considerable mass to any design, and they have to be supported using a sufficiently strong structure, which can add significant mass of its own. Now researchers at the University of Michigan have designed a structural zinc-air battery, one that integrates directly into the machine that it powers and serves as a load-bearing part. 

That feature saves weight and thus increases effective storage capacity, adding to the already hefty energy density of the zinc-air chemistry. And the very elements that make the battery physically strong help contain the chemistry’s longstanding tendency to degrade over many hundreds of charge-discharge cycles. 

The research is being published today in Science Robotics.

Nicholas Kotov, a professor of chemical engineering, is the leader of the project. He would not say how many watt-hours his prototype stores per gram, but he did note that zinc-air—because it draws on ambient air for its electricity-producing reactions—is inherently about three times as energy-dense as lithium-ion cells. And because using the battery as a structural part means dispensing with an interior battery pack, you could free up perhaps 20 percent of a machine’s interior. Along with other factors, the new battery could in principle provide as much as 72 times the energy per unit of volume (not of mass) of today’s lithium-ion workhorses.

“It’s not as if we invented something that was there before us,” Kotov says. ”I look in the mirror and I see my layer of fat—that’s for the storage of energy, but it also serves other purposes,” like keeping you warm in the wintertime.  (A similar advance occurred in rocketry when designers learned how to make some liquid propellant tanks load bearing, eliminating the mass penalty of having separate external hull and internal tank walls.)

Others have spoken of putting batteries, including the lithium-ion kind, into load-bearing parts in vehicles. Ford, BMW, and Airbus, for instance, have expressed interest in the idea. The main problem to overcome is the tradeoff in load-bearing batteries between electrochemical performance and mechanical strength.

The Michigan group gets both qualities by using a solid electrolyte (which can’t leak under stress) and by covering the electrodes with a membrane whose nanostructure of fibers is derived from Kevlar. That makes the membrane tough enough to suppress the growth of dendrites—branching fibers of metal that tend to form on an electrode with every charge-discharge cycle and which degrade the battery.

The Kevlar need not be purchased new but can be salvaged from discarded body armor. Other manufacturing steps should be easy, too, Kotov says. He has only just begun to talk to potential commercial partners, but he says there’s no reason why his battery couldn’t hit the market in the next three or four years.

Drones and other autonomous robots might be the most logical first application because their range is so severely limited by their battery capacity. Also, because such robots don’t carry people about, they face less of a hurdle from safety regulators leery of a fundamentally new battery type.

“And it’s not just about the big Amazon robots but also very small ones,” Kotov says. “Energy storage is a very significant issue for small and flexible soft robots.”

Here’s a video showing how Kotov’s lab has used batteries to form the “exoskeleton” of robots that scuttle like worms or scorpions.

The Electric Weed-Zapper Renaissance

Post Syndicated from David Schneider original https://spectrum.ieee.org/tech-talk/energy/environment/the-electric-weed-zapper-renaissance

In the 1890s, U.S. railroad companies struggled with what remains a problem for railroads across the world: weeds. The solution that 19th-century railroad engineers devised made use of a then-new technology—high-voltage electricity, which they discovered could zap troublesome vegetation overgrowing their tracks. Somewhat later, the people in charge of maintaining tracks turned to using fire instead. But the approach to weed control that they and countless others ultimately adopted was applying chemical herbicides, which were easier to manage and more effective.

The use of herbicides, whether on railroad rights of way, agricultural fields, or suburban gardens, later raised health concerns, though. More than 100,000 people in the United States, for example, have claimed that Monsanto’s Roundup weed killer caused them to get cancer—claims that Bayer, which now owns Monsanto, is trying hard of late to settle.

Meanwhile, more and more places are banning the use of Roundup and similar glyphosate herbicides. Currently, half of all U.S. states have legal restrictions in place that limit the use of such chemical weed killers. Such restrictions are also in place in 19 other countries, including Austria, which banned the chemical in 2019, and Germany, which will be phasing it out by 2023. So, it’s no wonder that the concept of using electricity to kill weeds is undergoing a renaissance.

Actually, the idea never really died. A U.S. company called Lasco has been selling electric weed-killing equipment for decades. More recently, another U.S. company has been marketing this technology under the name “The Weed Zapper.” But the most interesting developments along these lines are in Europe, where electric weed control seems to be gaining real traction.

One company trying to replace herbicides with electricity is RootWave, based in the U.K. Andrew Diprose, RootWave’s CEO, is the son of Michael Diprose, who spent much of his career as a researcher at the University of Sheffield studying ways to control weeds with electricity.

Electricity, the younger Diprose explains, boasts some key benefits over other non-chemical forms of weed control, which include using hot water, steam, and mechanical extraction. In particular, electric weed control doesn’t require any water. It’s also considerably more energy efficient than using steam, which requires an order of magnitude more fuel. And unlike mechanical means, electric weed killing is also consistent with modern “no till” agricultural practices. What’s more, Diprose asserts, the cost is now comparable with chemical herbicides.

Unlike the electric weed-killing gear that’s long been sold in the United States, RootWave’s equipment runs at tens of kilohertz—a much higher frequency than the power mains. This brings two advantages. For one, it makes the equipment lighter, because the transformers required to raise the voltage to weed-zapping levels (thousands of volts) can be much smaller. It also makes the equipment safer, because higher frequencies pose less of a threat of electrocution. Should you accidentally touch a live electrode “you will get a burn,” says Diprose, but there is much less of a threat of causing cardiac arrest than there would be with a system that operated at 50 or 60 hertz.
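Diprose’s first point about lighter equipment follows from the standard transformer EMF equation, V_rms = 4.44 · f · N · A · B: for a given voltage, turns count, and flux density, the required core cross-section shrinks in proportion to frequency. A rough numeric sketch (all values invented for illustration; RootWave’s actual converter design isn’t described here):

```python
# Transformer EMF equation: V_rms = 4.44 * f * N * A_core * B_max.
# For fixed voltage, turns, and flux density, core area scales as 1/f.

def required_core_area(v_rms, freq_hz, turns, b_max_tesla):
    """Core cross-section (m^2) implied by the transformer EMF equation."""
    return v_rms / (4.44 * freq_hz * turns * b_max_tesla)

v, n, b = 5000.0, 200, 1.2                      # illustrative values only
a_mains = required_core_area(v, 50, n, b)       # 50 Hz mains frequency
a_hf = required_core_area(v, 20_000, n, b)      # 20 kHz switching frequency

print(f"50 Hz core area:  {a_mains * 1e4:.1f} cm^2")
print(f"20 kHz core area: {a_hf * 1e4:.3f} cm^2")
print(f"reduction factor: {a_mains / a_hf:.0f}x")  # 400x, i.e. 20 kHz / 50 Hz
```

The 400-fold reduction in core area is simply the ratio of the two frequencies, which is why kilohertz-range converters can use far smaller, lighter magnetics than mains-frequency transformers.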

RootWave has two systems, a hand-carried one operating at 5 kilowatts and a 20-kilowatt version carried by a tractor. The company is currently collaborating with various industrial partners, including another U.K. startup called Small Robot Company, which plans to outfit an agricultural robot for automated weed killing with electricity. 

And RootWave isn’t the only European company trying to revive this old idea. Netherlands-based CNH Industrial is also promoting electric weed control with a tractor-mounted system it has dubbed “XPower.” As with RootWave’s tractor-mounted system, the electrodes are swept over a field at a prescribed height, killing the weeds that poke up higher than the crop to be preserved.

Among the many advantages CNH touts for its weed-electrocution system (which presumably apply to all such systems, going back to the 1890s) is “No specific resistance expectable.” I should certainly hope not. But I do think that a more apropos wording here, for something that destroys weeds by placing them in a high-voltage electrical circuit, might be a phrase that both Star Trek fans and electrical engineers could better appreciate: “Resistance is futile.”

Turning Bricks Into Supercapacitors

Post Syndicated from Maria Gallucci original https://spectrum.ieee.org/energywise/energy/batteries-storage/turning-red-bricks-into-energy-storage-devices

As solar panels and wind turbines multiply, the big problem is how to store all the excess electricity produced when the sun is up or the wind is blowing so it can be used at other times. Potential solutions have been suggested in many forms, including massive battery banks, fast-spinning flywheels, and underground vaults of air. Now a team of researchers say a classic construction material—the red fired brick—could be a contender in the quest for energy storage.

The common brick is porous like a sponge, and its red color comes from pigmentation that is rich in iron oxide. Both features provide ideal conditions for growing and hosting conductive polymers, Julio D’Arcy and colleagues have found. The team at Washington University in St. Louis transformed basic blocks into supercapacitors that can illuminate a light-emitting diode.

Supercapacitors are of interest because, unlike batteries, they can deliver blindingly fast bursts of power and they recharge quickly. The downside is that, kilogram for kilogram, they store relatively little energy compared to batteries. In an electric vehicle, a supercapacitor supports acceleration, but the lithium-ion module is what provides power for hundreds of miles. Yet many scientists and technology developers are hoping supercapacitors can replace conventional batteries in many applications, owing to the steep environmental toll of mining and disposing of metals. 

The building brick proof-of-concept project presents new possibilities for the world’s many brick walls and structures, said D’Arcy, an assistant professor of chemistry at Washington University. Rooftop solar panels connected by wires could charge the bricks, which in turn could provide in-house backup power for emergency lighting or other applications.

“If we’re successful [in scaling up], you’d no longer need batteries in your house,” he said by phone. “The brick itself would be the battery.”

The novel device, described in Nature Communications on Tuesday, is a far cry from the megawatt-scale storage projects underway in places like California’s desert and China’s countryside. But D’Arcy said the paper shows, for the first time, that bricks can store electrical energy. It offers “food for thought” in a sector that’s searching for ideas, he noted. 

Researchers began by buying armfuls of 65-cent red bricks at a big-box hardware store. At the lab, they studied the material’s microstructure and filled the bricks’ many pores with vapors. Next, bricks went into an oven heated to 160° Celsius. The iron oxide triggered a chemical reaction, coating the bricks’ cavities with thin layers of PEDOT, the polymer known as poly(3,4- ethylenedioxythiophene). 

Bricks emerged from the oven with a blackish-blue hue—and the ability to conduct electricity.

D’Arcy’s team then attached copper leads to two coated bricks. To stop the blocks from shorting out while stacked together, the researchers separated the blocks with a thin plastic sheet of polypropylene. A sulfuric-acid-based solution was used as a liquid electrolyte, and the bricks were connected via the copper leads to a AAA battery for about one minute. Once charged, the bricks could power a white LED for 11 minutes.

If applied to 50 bricks, the supercapacitor could power 3 watts’ worth of lights for about 50 minutes, D’Arcy said. The current set-up can be recharged 10,000 times and still retain about 90 percent of its original capacitance. Researchers are developing the polymer’s chemistry further in an effort to reach 100,000 recharges. 
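The 50-brick figure works out to a very modest amount of energy per brick, which puts the “two orders of magnitude” gap mentioned later in context. A back-of-the-envelope check using only the numbers reported in the text:

```python
# Back-of-the-envelope check of the reported 50-brick figures.
watts = 3.0      # reported load: 3 W of lights
minutes = 50.0   # reported runtime
bricks = 50      # reported brick count

total_joules = watts * minutes * 60   # energy delivered over the full run
per_brick = total_joules / bricks     # energy contributed per brick

print(f"total energy: {total_joules:.0f} J ({total_joules / 3600:.2f} Wh)")
print(f"per brick:    {per_brick:.0f} J (~{per_brick / 3600 * 1000:.0f} mWh)")
```

That is 9,000 joules in total, or roughly 180 J (about 50 milliwatt-hours) per brick—a proof of concept rather than a household power reserve.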

However, the St. Louis researchers are not alone in the quest to use everyday (if unusual) materials to make supercapacitors.

In Scotland, a team at the University of Glasgow has developed a flexible device that can be fully charged with human sweat. Researchers applied a thin layer of PEDOT to a piece of polyester cellulose cloth that absorbs the wearer’s perspiration, creating an electrochemical reaction and generating electricity. The idea is that these coated cloths could power wearable electronics, using a tiny amount of sweat to keep running.

The Indian Institute of Technology-Hyderabad is exploring the use of corn husks in high-voltage supercapacitors. India’s corn producing states generate substantial amounts of husk waste, which researchers say can be converted into activated carbon electrodes. The biomass offers a potentially cheaper and simpler alternative to electrodes derived from polymers and similar materials, according to a recent study in Journal of Power Sources.

However, to really make inroads into the dominance of batteries, where a chemical reaction drives creation of a voltage, supercapacitors will need to significantly increase their energy density. D’Arcy said his electrically charged bricks are “two orders of magnitude away” from lithium-ion batteries, in terms of the amount of energy they can store. 

“That’s another thing we’re trying to do—make our polymer store more energy,” he said. “A lot of groups are trying to do this,” he added, “but they didn’t do it in bricks.”

How to Track the Emissions of Every Power Plant on the Planet from Space

Post Syndicated from Heather D. Couture original https://spectrum.ieee.org/energywise/energy/fossil-fuels/how-to-track-the-emissions-of-every-power-plant-on-the-planet-from-space

Fossil-fuel power plants are one of the largest emitters of the greenhouse gases that cause climate change. Collectively, these 18,000 or so plants account for 30 percent of global greenhouse gas emissions, including an estimated 15 billion metric tons of carbon dioxide per year. The pollutants produced by burning fossil fuels also seriously degrade air quality and public health. They contribute to heart and respiratory diseases and lung cancer and are responsible for nearly 1 in 10 deaths worldwide.

Averting the most severe impacts of air pollution and climate change requires understanding the sources of emissions. The technology exists to measure CO2 and other gases in the atmosphere, but not with enough granularity to pinpoint who emitted what and how much. Last month, a new initiative called Climate TRACE was unveiled, with the aim of accurately tracking man-made CO2 emissions right to the source, no matter where in the world that source is. The coalition of nine organizations and former U.S. Vice President Al Gore has already begun to track such emissions across seven sectors, including electricity, transportation, and forest fires.

I’m a machine-learning researcher, and in conjunction with the nonprofits WattTime, Carbon Tracker, and the World Resources Institute (with funding from Google.org), I’m working on the electricity piece of Climate TRACE. Using existing satellite imagery and artificial intelligence, we’ll soon be able to estimate emissions from every fossil-fuel power plant in the world. Here’s how we’re doing it.

The current limits of monitoring emissions from space

The United States is one of the few countries that publicly releases high-resolution data on emissions from individual power plants. Every major U.S. plant has on-site emissions monitoring equipment and reports data to the Environmental Protection Agency. But the costs of installing and maintaining these systems make them impractical for use in many countries. Monitoring systems can also be tampered with. Other countries report annual emissions totals that may be rough estimates instead of actual measurements. These estimates lack verification, and they may under-report emissions.

Greenhouse gas emissions are surprisingly difficult to estimate. For one thing, not all of them are man-made. CO2 and methane releases from the ocean, volcanoes, decomposition, and soil, plant, and animal respiration also put greenhouse gases into the atmosphere. Then there are the non-obvious man-made contributors such as cement production and fertilizers. Even if you know the source, it can be tricky to estimate quantities because the emissions fluctuate. Power plants burning fossil fuels adjust their generation depending on local demand and electricity prices, among other factors.

Concentrations of CO2 are measured locally at observatories such as Mauna Loa, in Hawaii, and globally by satellites such as NASA’s OCO-2. Rather than directly measuring the concentration, satellites estimate it based on how much of the sunlight reflected from Earth is absorbed by carbon dioxide molecules in the air. The European Space Agency’s Sentinel-5P uses similar technology for measuring other greenhouse gases. These spectral measurements are great for creating regional maps of atmospheric CO2 concentrations. Such regional estimates have been particularly revealing during the pandemic, as stay-at-home orders led to decreased pollutants reported around cities, largely driven by decreases in transportation.

But the resolution of these measurements is too low. Each measurement from OCO-2, for example, represents a 1.1-square-mile (2.9-square-kilometer) area on the ground, so it can’t reveal how much an individual power plant emitted (not to mention CO2 from natural sources in the area). OCO-2 provides daily observations of each location, but with a great deal of noise due to clouds, wind, and other atmospheric changes. To get a reliable signal and suppress noisy data points, multiple observations of the same site should be averaged over a month.
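The value of averaging a month of observations follows from basic statistics: if the noise in each observation is roughly independent, the spread of the mean of n observations shrinks as 1/√n. A quick simulation with synthetic numbers (not real OCO-2 data; the 410 ppm level and 2 ppm noise are illustrative assumptions):

```python
# Why monthly averaging suppresses noisy data points: the standard error of
# the mean of n independent observations falls as 1 / sqrt(n).
import random
import statistics

random.seed(0)
true_ppm, noise_ppm = 410.0, 2.0  # illustrative concentration and noise level

def observe():
    """One simulated noisy satellite retrieval of CO2 concentration."""
    return random.gauss(true_ppm, noise_ppm)

# Spread of single observations vs. spread of 30-observation monthly means.
single = [observe() for _ in range(1000)]
monthly = [statistics.mean(observe() for _ in range(30)) for _ in range(1000)]

print(f"single-shot spread: {statistics.stdev(single):.2f} ppm")
print(f"30-obs mean spread: {statistics.stdev(monthly):.2f} ppm")  # ~2/sqrt(30)
```

Averaging 30 cloud-free observations cuts the effective noise by a factor of about 5.5, which is why a month of OCO-2 passes yields a usable regional signal where a single pass does not.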

To estimate emissions at the source, we need both spatial resolution that’s high enough to see plant operations and frequent observations to see how those measurements change over time.

How to model power plant emissions with AI

We’re fortunate that at any given moment, dozens of satellite networks and hundreds of satellites are capturing the kind of high-resolution imagery we need. Most of these Earth-observing satellites observe in the visible spectrum. We also use thermal infrared to detect heat signatures.

Having human analysts review images from multiple satellites and cross-referencing them with other data would be too time-consuming, expensive, and error-prone. Our prototype system is starting with data from three satellite networks, from which we collect about 5,000 non-cloudy images per day. The number of images will grow as we incorporate data from additional satellites. Some observations contain information at multiple wavelengths, which means even more data to analyze and a finely tuned eye to interpret accurately. No human team could process that much data within a reasonable time frame.

With AI, the game has changed. Using the same deep-learning approach being applied to speech recognition and to obstacle avoidance in self-driving cars, we’re creating algorithms that lead to much faster prediction of emissions and an enhanced ability to extract patterns from satellite images at multiple wavelengths. The exact patterns the algorithm learns are dependent on the type of satellite and the power plant’s technology.

We start by matching historical satellite images with plant-reported power generation to create machine-learning models that can learn the relationship between them. Given a novel image of a plant, the model can then predict the plant’s power generation and emissions.
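The supervised-learning idea here can be sketched in a few lines. The real system trains deep networks on raw multi-spectral imagery; this toy version substitutes a linear least-squares fit on three made-up image-derived features (plume extent, cooling-vapor score, thermal signature—all invented for illustration, as is the 1.0 tCO2/MWh emission factor):

```python
# Toy sketch of the training pipeline: match image-derived features to
# plant-reported generation, fit a model, then predict on a new image.
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "features extracted from satellite images" for 500 observations.
n = 500
X = rng.uniform(0, 1, size=(n, 3))           # plume, vapor, thermal (made up)
true_w = np.array([300.0, 150.0, 550.0])     # MW contribution per feature
y = X @ true_w + rng.normal(0, 20, size=n)   # plant-reported generation (MW)

# Fit with least squares (stand-in for deep-network training).
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Given a novel image's features, predict generation, then convert to CO2
# using a fuel-specific emission factor (rough coal-plant figure, assumed).
new_image_features = np.array([0.8, 0.2, 0.6])
predicted_mw = new_image_features @ w
emission_factor = 1.0  # tCO2 per MWh, illustrative

print(f"predicted generation: {predicted_mw:.0f} MW")
print(f"implied emissions:    {predicted_mw * emission_factor:.0f} tCO2 per hour")
```

The essential structure—historical imagery paired with reported generation as training labels, then inference on unlabeled plants worldwide—is the same regardless of whether the model is a linear fit or a deep network.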

We have enough ground truth on power generation to train the models. The United States and Taiwan are two of the few countries that report both plant emissions and power generation at hourly intervals. Australia and countries in Europe report generation only, while still other countries report daily aggregated generation. Knowing the power generation and fuel type, we can estimate emissions where that data isn’t reported.

Once our models have been trained on plants with known power generation, we can apply the models worldwide to any power plant. Our algorithms create predictive models for various satellites and various types of power plants, and we can aggregate the predictions to estimate emissions over a period of time—say, one month.

What our deep-learning models look for in satellite images

In a typical fossil-fuel power plant, greenhouse gases exhaust through a chimney called the flue stack, producing a telltale smoke plume that our models can spot. Plants that are more efficient or have secondary collection measures to reduce emissions may have plumes that are difficult to see. In those cases, our models look for other visual and thermal indicators when the power plant’s characteristics are known.

Another sign the models look for is cooling. Fossil-fuel plants burn fuel to boil water that creates steam to spin a turbine that generates electricity. The steam must then be cooled back into water so that it can be reused to produce more electricity. Depending on the type of cooling technology, a large water vapor plume may be produced from cooling towers, or heat may be released as warm water discharged to a nearby source. We use both visible and thermal imaging to quantify these signals.

Applying our deep-learning models to power plant emissions worldwide

So far, we have created and validated an initial set of models for coal-burning plants using generation data from the United States and Europe. Our cross-disciplinary team of scientists and engineers continues to gather and analyze ground-truth data for other countries. As we begin to test our models globally, we will also validate them against reported annual country totals and fuel consumption data. We are starting with CO2 emissions but hope to expand to other greenhouse gases.

Our goal is global coverage of fossil-fuel power plant emissions—that is, for any fossil fuel plant in any country, we will be able to accurately predict its emissions of greenhouse gases. Our work for the energy sector is not happening in isolation. Climate TRACE grew out of our project on power plants, and it now has a goal to cover 95 percent of man-made greenhouse gas emissions in every sector by mid-2021.

What comes next? We will make the emissions data public. Renewable energy developers will be able to use it to pinpoint locations where new wind or solar farms will have the most impact. Regulatory agencies will be able to create and enforce new environmental policy. Individual citizens can see how much their local power plants are contributing to climate change. And it may even help track progress toward the Paris Agreement on climate, which is set to be renegotiated in 2021.

About the Author

Heather D. Couture is the founder of the machine-learning consulting firm Pixel Scientia Labs, which guides R&D teams to fight cancer and climate change more successfully with AI.

Dispute Erupts Over What Sparked an Explosive Li-ion Energy Storage Accident

Post Syndicated from David C. Wagman original https://spectrum.ieee.org/energywise/energy/batteries-storage/dispute-erupts-over-what-sparked-an-explosive-liion-energy-storage-accident

A little after 8:00 p.m. on April 19, 2019, a captain with the Peoria, Arizona, fire department’s Hazmat unit opened the door of a container filled with more than 10,000 energized lithium-ion battery cells, part of a utility-scale storage system that had been deployed two years earlier by the local utility, Arizona Public Service.

Earlier that evening, at around 5:41 p.m., dispatchers had received a call alerting them to smoke and a “bad smell” in the area around the McMicken Battery Energy Storage System (BESS) site in suburban Phoenix.

Sirens blaring, three fire engines arrived at the scene within 10 minutes. Shortly after their arrival, first responders realized that energized batteries were involved and elevated the call to a Hazmat response. After consulting with utility personnel and deciding on a plan of action, a fire captain and three firefighters approached the container door shortly before 8:00 p.m., preparing to open it. The captain, identified in a later investigation as “Captain E193,” opened the door and stepped inside. The other three stood nearby.

The BESS was housed in a container arranged to hold 36 vertical racks separated into two rows on either side of a 3-ft-wide hallway. Twenty-seven of the racks each held 14 battery modules manufactured by LG Chem, an 80 kW inverter manufactured by Parker, an AES Advancion node controller used for data collection and communication, and a Battery Protection Unit (BPU) manufactured by LG Chem.

The battery modules in turn contained 28 lithium-ion battery cells of nickel manganese cobalt (NMC) chemistry. These modules were connected in series, providing a per-rack nominal voltage of 721 V. The entire system had a nameplate capacity to supply 2 MW of power over one hour, for a total energy rating of 2 MWh. With 27 full racks, there were 10,584 cells in the container. After a full day of charging, the batteries were at around 90 percent of capacity.
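The reported figures are internally consistent, and they convey just how much energy the firefighters were approaching. A quick check using only values from the text:

```python
# Sanity check of the reported McMicken BESS numbers.
cells_per_module = 28
modules_per_rack = 14
full_racks = 27

cells = cells_per_module * modules_per_rack * full_racks
print(f"{cells} cells in the container")        # matches the reported 10,584

energy_wh = 2_000_000                            # 2 MWh nameplate rating
per_cell_wh = energy_wh / cells
print(f"~{per_cell_wh:.0f} Wh per cell")         # ~189 Wh per cell

# At roughly 90 percent of capacity when the incident began:
stored_wh = energy_wh * 0.90
print(f"~{stored_wh / 1e6:.1f} MWh stored at the time of the incident")
```

Roughly 1.8 megawatt-hours of stored energy, distributed across more than ten thousand cells, with no way to discharge them once the control electronics burned—this is the hazard profile the Hazmat crew was facing.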

With the door to the BESS container open and Captain E193 at its threshold, combustible gases that had built up inside since the incident began several hours before received a breath of oxygen and found an ignition source.

The gases erupted in what was described as a “deflagration event.” Firefighters just outside of the incident hot zone said they heard a loud noise and saw a “jet of flame” extend some 75 ft out and 20 ft up from the door.

In the explosion, Captain E193 and firefighter E193 were thrown against and under a chain-link fence surrounding the facility. The captain landed more than 70 feet from the open door; the firefighter landed 30 ft away.

The captain’s injuries included a traumatic brain injury, an eye injury, spine damage, broken ribs, a broken scapula, thermal and chemical burns, internal bleeding, two broken ankles, and a broken foot.

The firefighter suffered a traumatic brain injury, a collapsed lung, broken ribs, a broken leg, a separated shoulder, laceration of the liver, thermal and chemical burns, a missing tooth, and facial lacerations.

The timeline and series of events is not generally disputed. However, a dispute has erupted in recent weeks over what exactly happened inside the BESS container at around 4:54 p.m. that initiated a thermal runaway that cascaded across multiple battery cells.

In a report released in late July, the utility and its third-party investigator, DNV-GL, said that their review of the evidence pointed to the failure of a single lithium-ion cell as triggering the events.

In a separate, preliminary report filed days later with state officials, LG Chem, which supplied the li-ion batteries, challenged that finding. The South Korea-based battery supplier said the APS report missed a number of details about the accident. Those details, LG Chem told regulators, indicated that the cell thermal runaway began due to “intense heating” caused by a heat source “such as external electrical arcing” on one of the battery racks.

Scott Bordenkircher, who served as APS’ Director of Technology Innovation & Integration at the time of the accident, said in an interview that the utility accepts the findings of its third-party accident investigation, which was completed by Davion Hill, Ph.D., the U.S. Energy Storage Leader for DNV GL. “We have confidence in our third-party investigator,” Bordenkircher said.

In its 78-page report [PDF], DNV GL said that what was first thought to be a fire was in fact an extensive cascading thermal runaway event within the BESS. That event was initiated by an internal cell failure within one battery cell, identified as cell 7-2 on Rack 15. The failure was caused by “abnormal lithium metal deposition and dendritic growth” within the cell, the report said.

Once the failure occurred, thermal runaway cascaded from cell 7-2 through every other cell and module in Rack 15 via heat transfer. The runaway was aided by the “absence of adequate thermal barrier protections” between battery cells, which otherwise might have stopped or slowed the thermal runaway.

As the event progressed, a large amount of flammable gas was produced within the BESS. Lacking ventilation to the outside, the gases created a flammable atmosphere within the container. Around three hours after thermal runaway began, when firefighters opened the BESS door, flammable gases made contact with a heat source or spark and exploded.

It was a “tragic incident,” Bordenkircher said.

It also was not the first time that a lithium-ion battery had failed.

The APS report listed events reaching back to 2006 that involved thermal runaway events in lithium-ion batteries. In one widely reported incident in January 2013, a Boeing 787-8 experienced smoke and heat coming from its lithium-ion battery-based auxiliary power unit. It was later determined that the failure was caused by an internal cell defect, which was exacerbated as thermal runaway cascaded through all the cells in the battery pack, releasing flammable electrolyte and gases.

“The state of the industry is that internal defects in battery cells is a known issue,” said Hill. Even so, problems with the technology have not been well communicated between, say, the personal electronics sector and the automotive sector or the aerospace industry and the energy industry.

“Overall, across the industry there was a gap in knowledge,” Bordenkircher said. The technology moved forward so quickly, he said, that standards and knowledge sharing had not kept up.

The McMicken BESS accident also was not the first for APS. In November 2012, a fire destroyed the Scale Energy Storage System (ESS) at an electrical substation in Flagstaff in northern Arizona. The ESS was manufactured by Electrovaya and consisted of a container housing 16 cabinets containing 24 lithium-ion cells.

An investigation into that accident determined that a severely discharged cell degraded and affected a neighboring cell, touching off a fire. The root cause of the 2012 accident was found to be faulty logic used to control the system.

The control logic had been updated more than two dozen times during the 11 months that the BESS operated. But several missed opportunities could have prevented the fire that destroyed the unit, the accident report said. It pointed in particular to an event the previous May in which a cell was “severely discharged” even as the logic was “continuously charging the cell against the intended design.” After the May event, the logic was not changed to address that improper behavior.

An APS spokesperson said that lessons learned from this 2012 incident were incorporated into the design and operation of the McMicken BESS.

In its 162-page rebuttal [PDF] of the McMicken accident report, LG Chem disputed the utility’s finding of fault with its battery.

The battery supplier said that based on available evidence, “metallic lithium plating did not cause an internal cell failure leading to the initial thermal runaway event” at the McMicken BESS facility. Instead, cell thermal runaway began through intense heating of the affected cells caused by an external heat source, such as external electrical arcing on Rack 15.

LG Chem said that its own third-party investigator, Exponent Inc., tested the internal cell failure theory. It did so by forcing a parallel cell configuration into thermal runaway. It then compared the resulting voltage profile to the voltage profile recorded during the incident. It found that the two did not match, leading to the conclusion that the explosion’s cause was unlikely to have been “an internal short within a single cell.”

The battery maker also said that data recorded during the incident showed a discharging current of 4.9 A (amps) present during the voltage excursion. It said that although the APS report acknowledged that the current flipped from -27.9 A charging to 4.9 A discharging, “it offered no explanation for the event.” To LG Chem, however, the fact that the discharging current was at 4.9 A, instead of zero, “means the current indeed flowed to somewhere else,” supporting what it said was a likely double-point electrical isolation failure and not an internal cell short.

(Complicating the post-accident investigation was the fact that the fire destroyed system control electronics within the container. That left dozens of battery modules energized with no way to discharge them. It took seven weeks for the utility to figure out a plan to remove the modules one by one and bleed off their stored energy.)

The reports and their divergent conclusions signal the start of competing interpretations of available data as the utility and its battery supplier work to find a single cause for the accident.

“We don’t want a public argument about it,” said DNV GL’s Davion Hill. For him, the main point is that “we had a cascading thermal runaway that led to an explosive atmosphere” at the APS McMicken BESS. The goal now should be to make storage systems safer through standards development and information sharing.

After the accident, APS placed a hold on BESS deployment across its service territory. The technology is seen as key to meeting the utility’s announced goals to produce 100 percent “clean energy” by 2050. Two other BESS systems that had been operating at the time of the April 2019 accident were taken offline; they will remain idle until retrofits can be designed and installed.

Spacecraft of the Future Could Be Powered By Lattice Confinement Fusion

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/energywise/energy/nuclear/nuclear-fusiontokamak-not-included

Nuclear fusion is hard to do. It requires extremely high densities and pressures to force the nuclei of elements like hydrogen and helium to overcome their natural inclination to repel each other. On Earth, fusion experiments typically require large, expensive equipment to pull off.

But researchers at NASA’s Glenn Research Center have now demonstrated a method of inducing nuclear fusion without building a massive stellarator or tokamak. In fact, all they needed was a bit of metal, some hydrogen, and an electron accelerator.

The team believes that their method, called lattice confinement fusion, could be a potential new power source for deep space missions. They have published their results in two papers in Physical Review C.

“Lattice confinement” refers to the lattice structure formed by the atoms making up a piece of solid metal. The NASA group used samples of erbium and titanium for their experiments. Under high pressure, a sample was “loaded” with deuterium gas, an isotope of hydrogen with one proton and one neutron. The metal confines the deuterium nuclei, called deuterons, until it’s time for fusion.

“During the loading process, the metal lattice starts breaking apart in order to hold the deuterium gas,” says Theresa Benyo, an analytical physicist and nuclear diagnostics lead on the project. “The result is more like a powder.” At that point, the metal is ready for the next step: overcoming the mutual electrostatic repulsion between the positively-charged deuteron nuclei, the so-called Coulomb barrier. 

To overcome that barrier requires a sequence of particle collisions. First, an electron accelerator speeds up and slams electrons into a nearby target made of tungsten. The collision between beam and target creates high-energy photons, just like in a conventional X-ray machine. The photons are focused and directed into the deuteron-loaded erbium or titanium sample. When a photon hits a deuteron within the metal, it splits the deuteron into an energetic proton and neutron. Then the neutron collides with another deuteron, accelerating it.

At the end of this process of collisions and interactions, you’re left with a deuteron that’s moving with enough energy to overcome the Coulomb barrier and fuse with another deuteron in the lattice.

Key to this process is an effect called electron screening, or the shielding effect. Even with very energetic deuterons hurtling around, the Coulomb barrier can still be enough to prevent fusion. But the lattice helps again. “The electrons in the metal lattice form a screen around the stationary deuteron,” says Benyo. The electrons’ negative charge shields the energetic deuteron from the repulsive effects of the target deuteron’s positive charge until the nuclei are very close, maximizing the amount of energy that can be used to fuse.
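A textbook way to picture electron screening (a generic sketch, not the NASA team's specific model) is a screened Coulomb potential: the bare repulsion between two deuterons is multiplied by an exponential falloff, where λ is a screening length set by the surrounding electron density:

```latex
V(r) \;=\; \frac{e^{2}}{4\pi\varepsilon_{0}\,r}\; e^{-r/\lambda}
```

The smaller λ is—that is, the denser the screening electron cloud—the shorter the range over which the repulsion acts, so an incoming deuteron keeps more of its kinetic energy until the nuclei are nearly touching.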

Aside from deuteron-deuteron fusion, the NASA group found evidence of what are known as Oppenheimer-Phillips stripping reactions. Sometimes, rather than fusing with another deuteron, the energetic deuteron would collide with one of the lattice’s metal atoms, either creating an isotope or converting the atom to a new element. The team found that both fusion and stripping reactions produced usable energy.

“What we did was not cold fusion,” says Lawrence Forsley, a senior lead experimental physicist for the project. Cold fusion, the idea that fusion can occur at relatively low energies in room-temperature materials, is viewed with skepticism by the vast majority of physicists. Forsley stresses this is hot fusion, but “We’ve come up with a new way of driving it.”

“Lattice confinement fusion initially has lower temperatures and pressures” than something like a tokamak, says Benyo. But “where the actual deuteron-deuteron fusion takes place is in these very hot, energetic locations.” Benyo says that when she would handle samples after an experiment, they were very warm. That warmth is partially from the fusion, but the energetic photons initiating the process also contribute heat.

There’s still plenty of research to be done by the NASA team. Now that they’ve demonstrated nuclear fusion, the next step is to make the reactions more efficient and more numerous. When two deuterons fuse, they create either a proton and a tritium nucleus (hydrogen with two neutrons), or a helium-3 nucleus and a neutron. In the latter case, the extra neutron can start the process over again, allowing two more deuterons to fuse. The team plans to experiment with ways to coax more consistent and sustained reactions in the metal.
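The two deuteron-deuteron branches occur with roughly equal probability, and their energy releases are standard textbook values (not figures quoted in this article):

```latex
\begin{align*}
d + d &\rightarrow t~(1.01~\mathrm{MeV}) + p~(3.02~\mathrm{MeV})\\
d + d &\rightarrow {}^{3}\mathrm{He}~(0.82~\mathrm{MeV}) + n~(2.45~\mathrm{MeV})
\end{align*}
```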

Benyo says that the ultimate goal is still to be able to power a deep-space mission with lattice confinement fusion. Power, space, and weight are all at a premium on a spacecraft, and this method of fusion offers a potentially reliable source for craft operating in places where solar panels may not be usable, for example. And of course, what works in space could be used on Earth.

South Africa’s Lights Flicker as its Electric Utility Ponders a Future Without Carbon

Post Syndicated from David Wagman original https://spectrum.ieee.org/energywise/energy/renewables/south-africas-lights-flicker-as-its-electric-utility-ponders-a-future-without-carbon

Optimists look to the future of Eskom, South Africa’s electric utility, and see a glass half full. Pessimists see a glass nearly empty.

The optimists look to what they say is a rare opportunity for a national utility to fully embrace renewable energy resources. In so doing, the power provider could decarbonize its fleet of electric generating plants as well as Africa’s largest economy. Tens of thousands of green energy jobs could be created and gigawatts of renewable generating resources could be added each year for the next 10 or 20 years.

The pessimists—some might call them realists—point to a utility buckling under a massive debt load, singed by a legacy of government cronyism, disappointed by the performance problems of two of the world’s newest and biggest coal-fired power plants, and swamped beneath a backlog of deferred maintenance projects.

In particular, deferred maintenance across the utility’s 37,000 megawatts (MW) of installed capacity has led to forced outages and rolling blackouts in recent weeks during this, South Africa’s winter season.

In late July, the utility urged consumers to turn off lights after four of six generating units at the 3,600-MW Tutuka power station shut down following equipment failures. Only days earlier, the utility had restored power after four other power plants suffered unexpected outages.

Eskom’s problems have been decades in the making and are closely tied to the country’s transition away from Apartheid, the policy of racial segregation that ruled South Africa from the late 1940s until the early 1990s.

As a state-owned entity, Eskom made massive investments in coal-fired generating capacity during the 1970s and 1980s. That capacity led to low electricity tariffs, driven still lower by policymakers’ decision not to set aside money for future investment.

But as economic growth and energy consumption accelerated during the 1990s, the government see-sawed over whether new generating capacity should come from private-sector investment in a restructured energy sector or from government-led investment through Eskom.

It ultimately opted for government investment, and in the early 2000s, accepted proposals to build two enormous coal-fired power stations, known as Medupi (meaning “a gentle rain”) and Kusile (meaning “the dawn has come”).

Those decisions to build came at a time when South African President Jacob Zuma was entering office, and utility oversight was weakened when Zuma installed senior managers who “ran amok,” says Mark Swilling, who directs the Sustainable Development Program at Stellenbosch University. Eskom then did “the worst possible thing,” says Swilling: It took on enormous debt loads at a time when fiscal and management oversight was being reduced.

At the same time, the utility took on the role of project manager for both power plants. One source familiar with the projects said that no engineering firm wanted the work because of “all the variables” around the government’s role. And, because few big infrastructure projects had been undertaken since the end of Apartheid, little insight existed into factors such as labor productivity.

The government anted up billions of dollars in guarantees to support construction of both power plants. But engineering flaws, coupled with inexperienced project managers, led to delays and cost overruns at both sites.

What’s more, in 2015, a U.S. District Court fined equipment supplier Hitachi $19 million for improper payments it made to secure contracts to provide boilers for the power plants. The Japan-based industrial conglomerate—which fell under the reach of U.S. securities law because the company did business in the U.S.—agreed to pay the fine but did not admit guilt.

Hitachi has since become part of Mitsubishi. Mitsubishi Hitachi Power Systems-Africa (MHPS-A) did not reply to requests for an interview.

Earlier this year, a South African government probe questioned fees Eskom paid to U.S.-based engineering group Black & Veatch as far back as 2005. The probe questioned alleged price escalations along with supposedly no-bid contracts for work related to the Kusile power plant and other energy infrastructure projects.

In an email response to a request for comment about the inquiry, Black & Veatch spokesperson Patrick MacElroy said, “We were selected by Eskom in an open and transparent process to provide engineering, project management, and construction management support services for the Kusile power station by the Eskom board, the Tender Committee, and by National Treasury. This initial project award was extended, and the scope increased through the addition of approved annual Task Orders including the current agreement governing our operations on the project.”

MacElroy said that over time, the engineering firm’s role expanded, “which required additional resources compared to the initial project plan.” Using boldface, underlined type in his email, MacElroy said that “at no point was Black & Veatch responsible for the design of the boilers or coal ash systems often highlighted as a source of cost overruns and technical issues.”

One source calls the power plants “an albatross around the country’s neck.”

Medupi Power Station is a dry-cooled coal-fired power station with an installed capacity of 4,764 MW, placing it among the largest coal-fired power stations in the world. When work first began on the plant in 2007, Eskom said it could be built for around $4.7 billion. By last year, that estimate had grown to nearly $14 billion.

The first 794 MW unit was commissioned and handed over to Eskom Generation in August 2015. Another five units—each also approaching 800 MW in capacity—were completed and delivered at roughly nine-month intervals. The final unit is expected to achieve commercial status later this year.

A series of technical problems has plagued the plant, including steam piping pressure issues, boiler weld defects, ash system blockages, and damage to the mill crushers used to prepare coal for burning.

Most troubling, however, may be design defects with the power plant’s boilers, which were supplied by Hitachi prior to its joining Mitsubishi Hitachi Power Systems-Africa.

“There are fundamental problems” with the boilers, says Mark Swilling. In particular, design engineers “got the boilers wrong” for the type of coal that was to be burned. A May 2020 system status briefing given by two senior Eskom executives reported that technical solutions had been agreed to between Eskom and MHPS-A for boiler defects at both the Medupi and Kusile power stations.

The repairs were first made to Medupi Unit 3 during a 75-day shutdown earlier this year. Medupi Unit 6 was next for a 75-day repair outage. Three additional units are slated to be modified during the remainder of the year.

The World Bank, which provided some of the financing for the two power plants, issued a report this past February saying the boilers’ defects were hurting two key plant performance metrics: energy availability factor (EAF) and unplanned capability loss factor (UCLF). Medupi’s figures were “far from” the ideal values of 92% and 2%, respectively, the report said.

The World Bank report said that Eskom’s EAF target for Medupi was 75%. That was a full 17 points below where it should have been. Even with the reduced target, some of the plant’s generating units were still operating at a substandard level.

Meanwhile, the target for UCLF, which represents unplanned availability and power capacity losses, stood at 18%, nine times as high as the optimal value. As of February, all units except one were operating above this target, the World Bank report said.

The report blamed the poor performance on “major plant defects” along with what it said was an inadequate maintenance and operating regime and difficulty managing spare parts.

The bottom line, says Swilling, is that the Medupi plant will “never, ever” achieve the performance targets that were specified before the generating units were built.

Further substantiating his outlook is another setback: flue gas desulfurization equipment may not be installed on each unit until the early 2030s. A July oversight report by the African Development Bank said that retrofits to add those environmental controls, which were specified back in 2007, could cost another $2.5 billion.

Amid the turmoil, Andre de Ruyter took over as Eskom CEO in early January. Appointed by South African President Cyril Ramaphosa, de Ruyter is tasked with overseeing a plan to split Eskom into three units for generation, transmission, and distribution, in an attempt to achieve efficiencies and attract investment. The executive has also undertaken a series of initiatives that some say are particularly bold.

First, de Ruyter committed to a schedule of power plant maintenance work “no matter the consequences,” as Swilling puts it. The result has been razor-thin reserve margins that have led to frequent electric service disruptions as some generating units are repaired while still others suffer forced outages.

Second, de Ruyter has said that Eskom will never build another coal-fired power plant. And third, he has called for the utility to lead an energy transformation in South Africa to decarbonize the grid and the economy.

The utility may have no other choice but to move away from fossil fuels. The average age of its coal-dominated power generating fleet is approaching 40 years. The legacy of deferred maintenance has not been kind. As a result, the price tag to repair the existing assets could exceed the cost of shutting them down and replacing the capacity with renewable energy resources.

That’s a tall order. After all, the utility has around 37 gigawatts of installed generating capacity, much of it coal-fired. By comparison, the country’s nascent, largely private renewable energy sector has installed around 5 GW of capacity over the past half-decade. An unprecedented boom in utility-scale wind and solar projects would need to ramp up rapidly just to keep pace with an expected surge of coal plant retirements.

But here, too, politics may get in the way. The country’s Minister of Public Enterprises supports a shift to renewable energy while the Minister of Mineral Resources and Energy does not.

“There is a lot of political uncertainty,” Swilling observes.

Whether or not Eskom—and, by extension, South Africa—shoulders this effort to shift from coal to renewables remains unclear. For now, at least, the utility’s glass appears to be nearly empty, drained by decades of bad management, meddlesome politicians, and mishandled infrastructure investments.

However, if Eskom in particular, and South Africans in general, embrace a plan to decarbonize the grid and build massive amounts of renewable energy, then the country may be able to fill its glass to overflowing as it shows the world how to restructure an economy in a way that fully embraces renewable energy.

Zimbabwe Hopes Rural Electrification Can Stop Deforestation. Here’s Why It Might Not Work

Post Syndicated from Maria Gallucci original https://spectrum.ieee.org/energywise/energy/environment/zimbabwe-hopes-rural-electrification-stop-deforestation-it-might-not-work

In Zimbabwe, where access to the electrical grid is sparse and unreliable, millions of people still burn wood to cook food and heat their homes. The practice is partly to blame for worsening deforestation in the landlocked country. In recent years, government officials have proposed a seemingly straightforward solution: Extend the electric grid into rural villages, and reduce the use of wood for fuel.

But Ellen Fungisai Chipango, a Zimbabwe-born researcher, says that rural electrification isn’t likely to provide any quick fixes. That’s because adding poles, wires, and even off-grid solar systems will do little to alleviate the crushing poverty that leads people to cut large swaths of trees. In her field work, she found that initiatives to expand energy access in Zimbabwe often overlook the larger political and economic forces at play.

Chipango is among researchers worldwide who are closely examining long-held assumptions that electrifying rural homes can boost family incomes, help children study, reduce indoor air pollution, or protect the environment. Stakeholders including scrappy solar startups, major oil and gas companies, and the United Nations have all pledged to work toward improving energy access for one or more of those reasons. But recent studies suggest that, in order to deliver real benefits, programs must be more comprehensive.

“Without addressing these underlying factors, just extending the grid to rural people will be tantamount to an empty gesture of goodwill,” Chipango said by Skype.

Chipango is a postdoctoral research fellow at the University of Johannesburg in South Africa. She recently discussed her findings in an article in The Conversation. Her piece, published in July, updates a case study she conducted in late 2016 and early 2017 in a district of Zimbabwe’s southeastern Manicaland province. 

Over two-thirds of Zimbabwe’s 16.2 million people live in rural areas, and roughly 80 percent of those residents can’t access electricity. In 2002, Zimbabwe created the Rural Electrification Agency to rapidly electrify the countryside. So far, the agency has connected thousands of schools and rural health centers. Yet few power lines extend beyond institutional buildings, and only about 10 percent of villages are electrified.

In Chipango’s study, most participants said they burned wood for cooking and heating. But personal energy needs were only part of the reason why they chopped down trees. Many people are clear-cutting forests out of sheer desperation. Zimbabwe’s economy is on the brink of collapse, and a prolonged drought—made worse by climate change—has brought about the worst hunger crisis in a decade. Rural residents stave off grinding poverty by selling wood. Their customers include city dwellers, who, because they can’t afford diesel generators or battery backup systems, burn the fuel during frequent and prolonged power outages.

When Chipango interviewed participants again earlier this year, she found their economic situations had worsened since 2017. Meanwhile, urban demand for wood has surged. Interviewees said they’d still keep cutting trees even if power lines finally arrived in their villages. One resident, when asked if off-grid solar would help instead, told Chipango: “It will give us light, but light does not put food on the table.”

Poverty also limits the potential of rural energy initiatives in other ways. If residents can’t afford to buy electric stoves, heaters, or other appliances, they can’t take full advantage of the electrons flowing into their homes. Certainly, enterprising people with a little cash on hand can grow their income by setting up neighborhood phone-charging shops or converting their yards into makeshift movie theaters. But many people won’t likely see their living conditions improve so easily.

“For almost all the traditional outcomes that people talk about when expanding energy access, there are two or three other things that people need for that outcome to actually be realized,” said Ken Lee, who leads the India division of the Energy Policy Institute at the University of Chicago (EPIC). “You can’t eat electricity,” he added.

Lee and colleagues led an experiment in Western Kenya comparing the experiences of rural “under grid” households, meaning homes that are located next to, but not connected to, utility infrastructure. Kenya’s Rural Electrification Authority connected randomly selected households, at a cost of more than US $1,000 each; the rest remained disconnected. After 18 months, researchers found no obvious differences between the socioeconomic living standards of both groups. (Aspects of the Kenya experiment are ongoing.)

The initial results, published in March, surprised the team, which expected to see more tangible gains among the electrified households. Not only did budget constraints keep many participants from buying appliances and using the new electricity supply, researchers also found little improvement in children’s test scores. Even if kids could study under a lightbulb at night, they still attended underfunded schools in the morning. 

Utility mismanagement further undermined electrification efforts. During the rural grid expansion, nearly one-fourth of all utility poles were apparently stolen, possibly leaving the connected residents with a less reliable power supply in the long-run.

Globally, the problem of faulty equipment and lack of maintenance plagues many rural initiatives, including efforts to replace polluting indoor cookstoves with cleaner or electric models. In northern India, studies found that households revert to traditional stoves when new stoves break down. Sometimes, they use both at once, in a process known as “stacking” that undermines the health benefits of alternative models.

Lee and fellow researchers Catherine Wolfram and Edward Miguel, both of the University of California at Berkeley, also reviewed studies from other locations globally and reached a similar conclusion: Access to electricity alone isn’t enough to improve economic and noneconomic outcomes in a meaningful way. 

Still, Lee stressed, that doesn’t mean utilities, philanthropists, and companies should stop pursuing programs to bring grid power or off-grid technologies into rural and impoverished places. But it does clarify the need to design initiatives that do more than simply install infrastructure and assume the rest—rising incomes, better education—will naturally follow.

Chipango, reflecting on her Zimbabwe study, put it this way: “Energy access is not the mere presence of a grid. It’s the ability to use that energy.”

 

Power Grids Should Be as Data Driven as the Internet

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/energy/the-smarter-grid/power-grids-should-be-as-data-driven-as-the-internet

Governments are setting ambitious renewable energy goals in response to climate change. The problem is, the availability of renewable sources doesn’t align with the times when our energy demands are the highest. We need more electricity for lights when the sun has set and solar is no longer available, for example. But if utilities could receive information about energy usage in real time, as Internet service providers already do with data usage, it would change the relationship we have with the production and consumption of our energy.

Utilities must still meet energy demands regardless of whether renewable sources are available, and they still have to mull whether to construct expensive new power plants to meet expected spikes in demand. But real-time information would make it easier to use more renewable energy sources when they’re available. Using this information, utilities could set prices in response to current availability and demand. This real-time pricing would serve as an incentive to customers to use more energy when those sources are available, and thus avoid putting more strain on power plants.

California is one example of this strategy. The California Energy Commission hopes that establishing rules for real-time electricity pricing will demonstrate how overall demand and availability affect the cost. It’s like surge pricing for a ride share: The idea is that electricity would cost more during peak demand. But the strategy would likely generate savings for people most of the time.

Granted, most people won’t be thrilled with the idea of paying more to dry their towels in the afternoons and evenings, as the sun goes down and demand peaks. But new smart devices could make the pricing incentives both easier on the customer and less visible by handling most of the heavy lifting that a truly dynamic and responsive energy grid requires.

For example, companies such as Ecobee, Nest, Schneider Electric, and Siemens could offer small app-controlled computers that would sit on the breaker boxes outside a building. The computer would manage the flow of electricity from the breaker box to the devices in the building, while the app would help set priorities and prices. It might ask the user during setup to decide on an electricity budget, or to set devices to have priority over other devices during peak demand.

Back in 2009, Google created similar software called Google PowerMeter, but the tech was too early—the appliances that could respond to real-time information weren’t yet available. Google shut down the service in 2011. Karen Herter, an energy specialist for the California Energy Commission, believes that the state’s rules for real-time pricing will be the turning point that convinces energy and tech giants to build such smart devices again.

This year, the CEC is writing rules for real-time pricing. The agency is investigating rates that update every hour, every 15 minutes, and every 5 minutes. No matter what, the rates will be publicly available, so that breaker box computers at homes and businesses can make decisions about what to power and when.
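In principle, a breaker-box controller of the kind described above could use a published hourly rate schedule to shift a deferrable load, such as a dryer cycle, into the cheapest window of the day. The sketch below is purely illustrative: the rate numbers are invented, and `cheapest_window` is a hypothetical helper, not any vendor's actual API.

```python
def cheapest_window(prices, duration_hours):
    """Return the start hour of the cheapest contiguous run of `duration_hours`."""
    best_start, best_cost = 0, float("inf")
    for start in range(len(prices) - duration_hours + 1):
        cost = sum(prices[start:start + duration_hours])
        if cost < best_cost:
            best_start, best_cost = start, cost
    return best_start

# Hypothetical hourly rates (cents/kWh) for one day: cheap overnight,
# surging around the early-evening demand peak.
rates = [8, 7, 6, 6, 7, 9, 12, 15, 14, 13, 12, 11,
         10, 10, 11, 13, 18, 25, 30, 28, 20, 14, 10, 9]

# A two-hour dryer cycle gets scheduled into the cheapest overnight slot
# rather than the 7 p.m. peak.
start = cheapest_window(rates, 2)
```

Whether rates update every hour or every 5 minutes, the logic is the same; only the resolution of the `rates` list changes.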

We will all need to start caring about when we use electricity—whether to spend more money to run a dryer at 7 p.m., when demand is high, or run it overnight, when electricity may be cheaper. California, with the rules it’s going to have in place by January 2022, could be the first to create a market for real-time energy pricing. Then, we may see a surge of devices and services that could increase our use of renewable energy to 100 percent—and save money on our electric bills along the way.

This article appears in the August 2020 print issue as “Data-Driven Power.”

Coronavirus vs. Climate Change

Post Syndicated from Thor Benson original https://spectrum.ieee.org/energywise/energy/environment/covid19-pandemic-reduce-greenhouse-gas-emissions


Whether their state is opening up or locking down again, Americans are generally staying home more during the COVID-19 pandemic. One result has been a significant reduction in greenhouse gas emissions, which could be as much as 7 percent lower in 2020 than they were in 2019. What remains to be seen is if we’ll be able to keep emissions at this level once the pandemic is over and people return to a more regular lifestyle.

In addition to the fact many Americans are telecommuting instead of driving to an office, more people are ordering groceries from home. Online grocery sales in the U.S. went up from $4 billion in March to a record-setting $7.2 billion in June.

Because we tend to assume the lazy option is the less eco-friendly option, you might think people ordering groceries online is worse for the environment. But research has shown that having one vehicle deliver orders to multiple households, which is how Amazon Fresh and other vendors operate, is significantly better for the environment than having many people in cars going to the store individually. Not only do these service vehicles deliver to several homes on one round trip, they also follow the fastest route to each home, which makes the whole system pretty efficient and can reduce the carbon emissions associated with grocery shopping by 25 to 75 percent.

(Bad news if you use services like Instacart, which has one driver collect groceries for one person at a time: Because they’re not delivering multiple orders during one trip, they don’t really benefit the environment.)
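The emissions argument here is essentially about route consolidation, and a toy calculation makes it concrete. The coordinates below are made up, and the nearest-neighbor heuristic is a stand-in for whatever routing the vendors actually use; the point is only that one multi-stop van travels far less than many individual round trips.

```python
import math

def dist(a, b):
    """Straight-line distance between two (x, y) points, in km."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

store = (0.0, 0.0)
homes = [(2, 1), (3, 4), (1, 5), (4, 2), (5, 5)]  # hypothetical locations, km

# Scenario 1: each household drives to the store and back.
individual = sum(2 * dist(store, h) for h in homes)

# Scenario 2: one van visits every home on a nearest-neighbor route,
# then returns to the store.
remaining, pos, van = list(homes), store, 0.0
while remaining:
    nxt = min(remaining, key=lambda h: dist(pos, h))
    van += dist(pos, nxt)
    pos = nxt
    remaining.remove(nxt)
van += dist(pos, store)
```

Even this crude heuristic cuts the total distance by more than half in this example, consistent in spirit with the 25-to-75-percent range cited above.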

Jesse Keenan, an associate professor of architecture and a social scientist at Tulane University who has studied sustainability extensively, tells Spectrum that grocery delivery also loses its eco-advantage if you still drive to do other errands the same day. In that case, you’re just having someone else run one of your multiple errands.

As for telecommuting, it’s not necessarily the case that everyone will be going back to work in an office once the pandemic abates. Now that some people have gotten used to working from home and have proven to their employers that they can be just as productive there as they were in the office, many companies may choose to continue having employees work remotely part or all of the time once the pandemic ends.

That would be good news for the environment and for corporate bottom lines.

Mikhail Chester, an associate professor of civil, environmental and sustainable engineering at Arizona State University, tells Spectrum that he can imagine some businesses seeing employees continuing to work remotely as a great way to save money.

“Right now, there are companies out there that were renting office space—they had a lease, and the lease expired and all of their employees have been working from home—and they probably made the decision that they’re getting the job done as effectively with a remote workforce and leasing a physical space is not really that necessary,” Chester says.

He adds that work and shopping are just two of many activities that people might continue to do virtually even when they don’t have to. Chester noted that pre-pandemic he used to fly a lot to attend conferences and meet with research partners but has now switched to doing these things virtually, which might be something that outlasts the pandemic.

Keenan says that the effect of more people working from home instead of traveling to an office or another brick-and-mortar business might depend on the city they live in: in some cities, many commuters take public transportation to work, which already produces fewer emissions than driving.

“The problem is that service-based employment that is able to work from home is disproportionately in cities where many people take mass transit,” Keenan says. “But, small reductions—even in cities—could add up to reduce emissions on the margins. I think less business travel is more likely to have an aggregate impact. With Zoom, there could be fewer conferences and business travel—hence reducing air miles that are carbon-intensive.”

Michael Mann, a professor of atmospheric science at Penn State University and a leading expert on climate change, tells Spectrum that he expects that after the pandemic ends, there will be some long-term changes in how people approach work and other activities. But he doesn’t think these long-term changes are going to be nearly enough to beat climate change.

“In the end, personal lifestyle changes won’t yield substantial carbon reductions. Even with the massive reduction in travel and reduced economic activity due to the COVID-19 pandemic, we’ll only see at most about 5 percent reduction in carbon emissions [this] year,” Mann says. “We will need to reduce carbon emissions at least that much (more like 7%), year after year for the next decade and beyond if we’re to stay within our ‘carbon budget’ for avoiding dangerous >1.5°C planetary warming.”

People living more sustainably is important, and we should encourage it in any way possible, but if we’re going to beat climate change, Mann says we need major changes to how society operates. He says we need to “decarbonize” all forms of transportation and generally transition away from fossil fuel use across the board.

The fact we’ve seen such a significant reduction in carbon emissions this year is one good thing that’s come out of this terrible pandemic we’re facing, and overall, this reduction will likely be sustained as long as the pandemic remains a major issue. Perhaps that will buy us some time to get our climate change plans together. However, as Mann says, if we’re going to really beat climate change, it’s going to take a lot more than people making changes in how they live their daily lives. It’s going to take major changes to the economy and how we power the things we use.

“The main lesson is that personal behavioral change alone won’t get us the reductions we need,” Mann says. “We need fundamental systemic change, and that means policy incentives. We won’t get that unless we vote in politicians who will work in our interest rather than the polluting interests.”

Powering Through the Pandemic

Post Syndicated from Neil Savage original https://spectrum.ieee.org/energy/policy/powering-through-the-pandemic


Over their 140-year history, electric utilities have figured out how to deal with all sorts of calamities: floods, hurricanes, monsoons, earthquakes, ice storms, wildfires, even active shooters. But the last time they experienced something comparable to the COVID-19 pandemic, it was 1918, and only a third of U.S. homes had electricity.

In the COVID-19 crisis, most of the utilities’ existing disaster responses were of little use. The pandemic was both geographically boundless and long-lasting, and yet it didn’t pose any immediate threat of physical damage. Rather, the risk was something new: Grid operators worried that if huge numbers of essential workers fell ill, key pieces of infrastructure—generating plants, substations, transmission and distribution lines—could become damaged or inoperable. This was at a time when the importance of uninterrupted electrical service was paramount, because hospitals’ response to COVID-19 depended strongly on suites of medical tools. Those included ultrasound and computed tomography for diagnosis and, in the most serious cases, ventilators for treatment.

Now, as utilities that endured the first wave of outbreaks begin returning to normal operations, they are taking stock of their responses to the pandemic. Meanwhile, as new hot spots emerge, utilities that escaped the first wave are bracing for their own first encounters.

The New York Power Authority (NYPA), the state’s public utility, was one of the organizations buffeted by the first wave in the United States. In mid-March, with COVID-19 cases beginning to soar in parts of the state, NYPA found itself grappling with how to deal with the anticipated onslaught. “We were trying to find strategic ways to make sure we could isolate key staff,” recalls Joseph Keesler, chief operating officer. “One of the things that we considered was our capacity. Should the virus go into our key staff, do we have enough backups?”

Projections suggested that NYPA did, but it hoped to avoid having to find out for sure. Office workers, such as those in management, payroll, and billing, were asked to work from home. But the specialists who run NYPA’s 16 electrical generation facilities and troubleshoot the equipment could not work remotely.

So the utility began bringing beds into its plants, and even towed some dormitory-type trailers to work sites, so key workers could comfortably remain on-site around the clock. After several weeks, NYPA rotated them out and replaced them with a second shift. “Sequestering is one method of social distancing that was helpful for us in terms of making sure our essential personnel were safe,” Keesler says. He himself directed operations from a makeshift office in his home.

The measures managed to keep critical personnel free of infection. As of the end of May, the utility was winding down the sequestration plan, but it remains vigilant and ready to return to that plan, or take new steps should cases of the virus begin to spike again.

In devising their responses to the pandemic, NYPA and other U.S. utilities called upon their experiences during two previous outbreaks, the bird flu of 2005 and the swine flu of 2009, according to Keesler. But COVID-19 spread further and faster than either of those, and it had unique attributes, such as a much higher degree of spread by infected but asymptomatic people. That meant the electricity sector had to adapt existing guidelines to account for the realities of the specific disease, not just for its own sake but for the sake of the countless medical workers and patients depending on a fully functional electricity grid. “Frontline hospital workers and ventilators and first responders don’t work without electricity,” notes Scott Aaronson, vice president for security and preparedness at the Edison Electric Institute, the association representing U.S. investor-owned utility companies.

The industry is now compiling its experiences on how to deal with COVID-19, Aaronson adds. “We are literally writing a book,” he says. “We have crowdsourced from hundreds of experts all across the sector.” He’s referring to a resource guide for dealing with COVID-19 from the Electricity Subsector Coordinating Council, a group made up of utility executives. “It started as a 6-page document, sort of a pamphlet,” Aaronson adds. “It’s 112 pages today.” 

The guide contains recommendations for pandemic-related issues faced by utilities, such as how to identify which personnel are critical, how to practice social distancing in the tight confines of a control facility, how to clean and disinfect control rooms, and how to prepare for difficulties in getting critical equipment.

Another concern is how to handle situations that require mutual aid. Utilities often help each other out during emergencies—for instance, sending repair crews to help with downed power lines after a storm. But that presents challenges when people from different regions are not supposed to be mingling. So the industry developed COVID-19 protocols for mutual aid, which include limiting the times when two people are in a bucket at the top of a pole and restricting personnel to one person per vehicle or per hotel room. Utilities put those protocols into practice in early April, when tornadoes from eastern Texas to the mid-Atlantic region left 1.5 million people without power, and they worked well, Aaronson says. “Some of these protocols are going to be used even after the pandemic,” he says.

Keesler says utilities have also agreed to expand the concept of mutual aid to include the sharing of control-room personnel, if the virus renders key members of one company’s workforce unable to work.

NYPA has also been benefiting from efforts, dating back to 2013, to digitize more of its operations. Under the initiative, the utility built up its information-technology infrastructure, installed remote sensors throughout its generating plants, and provided more handheld instruments to employees. The organization also benefited from prior experimentation with a couple of other technologies. “Drone operation for routine inspections or 3D modeling and 3D printing are things that were novelties but became essentials during this crisis,” Keesler says. He explains that NYPA used 3D modeling to examine its infrastructure and 3D printing to test potential replacement parts, which the utility would then machine with more standard tools.

In the future, NYPA may manufacture those parts with 3D printers, Keesler says. During the pandemic, it even used 3D printing to create plastic face shields for its workers and others. “I think going forward, we’re actually going to double down on all of our digital strategy to make sure that we can respond to the next crisis,” he adds.

Power companies were also forced to confront supply-chain problems during the pandemic. Utilities generally carry what is known as “storm stock”—extra poles, conductors, transformers, and other equipment that might need to be rapidly available. But the surplus stock tends to be somewhat meager, Aaronson says. The industry already has equipment-sharing programs in case one area runs short, and it’s pushing suppliers to ramp up production where possible, although many of those suppliers are also struggling to cope with the pandemic.

Some equipment, such as transformers, turbine wheels, and circuit breakers, is manufactured overseas, and utilities began experiencing disruptions back in January, when factories in China’s Hubei province, where the novel coronavirus is believed to have originated, shut down. On 1 May, U.S. president Donald Trump issued an executive order banning the purchase of electrical-grid equipment that comes from a country designated as a foreign adversary or that might pose a risk of sabotage. The threat identified in the executive order “is nothing new, but it’s shining a flashlight on an issue that has existed over time,” says Massoud Amin, a professor of electrical and computer engineering at the University of Minnesota and a leader in the field of electric-utility security.

Aaronson plays down that threat, however, noting that while potential attackers might see a spiraling crisis as an opportunity to wreak mayhem, the same emergency puts utilities on heightened alert. “During things like a pandemic, our workforce, in particular our cybersecurity workforce, is hypervigilant,” he says.

Even as states continue to grapple with new coronavirus cases and try to figure out how to safely resume activity, epidemiologists are warning about a second wave later this year, which could coincide with a new flu season and also with peak hurricane season in North America. Even as they brace for what’s to come, utilities are looking further ahead. Once the pandemic has passed, experts say, the lessons learned may help the industry not only prepare better for the next emergency but also improve its operations in good times. Some of the changes made to cope with the crisis, such as more remote work and a greater reliance on digital technology, could become permanent. “I don’t think we’re ever going to get back to what we considered normal before,” Keesler says.

About the Author

Neil Savage is a freelance science and technology writer based in Lowell, Mass., and a frequent contributor to IEEE Spectrum. His topics of interest include photonics, physics, computing, materials science, and semiconductors. His most recent article, “Tiny Satellites Could Distribute Quantum Keys,” describes an experiment in which cryptographic keys were distributed from satellites released from the International Space Station. He serves on the steering committee of New England Science Writers.

Executive Order Shines a Light on Cyberattack Threat to the Power Grid

Post Syndicated from Yury Dvorkin original https://spectrum.ieee.org/energywise/energy/policy/executive-order-shines-a-light-on-cyberattack-threat-to-the-power-grid

On 1 May, Donald Trump signed an executive order aimed at securing the U.S. bulk-power system, the backbone of our national electricity infrastructure. Bulk power comprises high-voltage transmission lines and generators delivering energy to large consumption centers. The order spotlights an important issue, but neither the order nor the issue has received the attention it deserves, due in part to the COVID-19 outbreak. The order highlights the U.S. power system’s extreme vulnerability to attacks by hackers, terrorists, state actors, and other malefactors, and it’s a bold and timely attempt to recognize and appropriately deal with these threats.

We know that terrorists and state-sponsored actors already have the capabilities to disrupt a country’s power supply. In 2015, a Russian group launched cyberattacks against the Ukrainian power system, causing temporary blackouts and leaving more than 200,000 people without electricity on a winter day for up to six hours. Similarly, Russians were suspected of cyberattacks on Estonia’s power system in 2019. There is little doubt that many other countries also have this capability, though nobody else has applied it—yet.

As a means of protecting the U.S. bulk-power system, the new executive order bans the purchase of grid equipment sourced from countries designated as foreign adversaries. The supply chain for the power infrastructure is multinational, and many components intended for transformers, circuit breakers, and substation equipment are produced outside of the United States. The imported hardware as well as software could potentially include back doors that would provide critical access to this equipment. If these back doors are triggered remotely, they could disrupt or even lead to the collapse of our national power system.

But the executive order doesn’t account for some major details. The bulk-power system, defined in the order as 69 kilovolts and above, already enjoys tight federal regulation, close oversight, and continuous monitoring. Local power-distribution systems, much of whose energy delivery is below 69 kilovolts, are another story.

This portion of the grid, which may be thought of as the “last mile” to millions of end users, delivers electricity to countless systems, appliances, and devices—including anything you might plug into a standard 110-volt outlet. Because these distribution networks are regulated locally by states and municipalities, and not by the federal government, they fly under the radar of the new executive order. But they are still just as vulnerable to attack.

This vulnerability is particularly pernicious because it allows for a cyberattack that could propagate from a local network to the bulk-power system. Such a hack could target a local power utility or end-user systems, including large numbers of high-wattage devices like dishwashers, HVAC systems, or electric vehicles (EVs). Research at New York University by Samrat Acharya, Ramesh Karri, and me has shown, for example, that large numbers of EVs can be compromised remotely over the Internet and then manipulated to overload power system equipment. Because this type of attack begins with end users, it may remain invisible to operators of bulk-power systems until equipment failure or major abnormalities occur.

The very real danger here is that these sub rosa attacks, while local, can affect the national bulk-power system in the same way that damming a number of tributaries can damage the major river into which the tributaries flow.

Even if an attack doesn’t propagate to the bulk-power system, local end-user attacks in major metropolitan areas could directly and immediately affect vast numbers of people. Consolidated Edison, the electric power utility in New York City, operates 69 distribution system substations supplying 60 billion kilowatt-hours of electricity yearly to more than 3.4 million customers of all socio-demographic groups. A local attack that brought down even part of ConEd’s distribution system could disrupt service to hundreds of thousands of people.

The human costs of power-supply disruptions can go far beyond inconvenience. People on dialysis machines or ventilators or those with heat-sensitive pre-existing conditions are considered “electricity-dependent,” because the consequences for them of even a brief power outage could be dire. The number of electricity-dependent individuals in the United States who reside at home was estimated to be about 685,000 in 2012. Roughly one-fifth of that population could be harmed by even a short three- or four-hour power outage. The executive order needs to look at the many ways local and particularly metropolitan power systems are still at risk.

Another limitation of the executive order is its emphasis on hardware—that word appears in nearly every paragraph—without a corresponding focus on software, another attractive port of entry for attackers. Despite sophisticated protection, power system software can be compromised in many ways that are more difficult to identify than hardware back doors. And this risk exists in both the bulk- and local-power systems. Software is generally tested only against known vulnerabilities, so a valuable addition to the current order would be protocols for software auditing and analysis aimed at discovering new vulnerabilities.

Finally, the executive order focuses explicitly on protecting the power grid against foreign threats. But the danger comes from not only state actors, but also non-state actors and even, unfortunately, U.S. citizens. Consider the sniper attack of 2013 at a PG&E transmission facility near San Jose, Calif., in which a group of gunmen fired on transformers at a substation. While not a cyberattack or even technologically sophisticated, the incident nonetheless inflicted millions of dollars in damage and caused a local blackout—a prime example of a domestic attack by non-state actors. And because cyberattacks on the U.S. power system could be launched remotely, they could be carried out by domestic actors residing overseas.

Overall, the executive order is a big step in the right direction. It is bringing attention to a huge problem and could lead to good ideas and solutions. Still, it would be even stronger if it took a more comprehensive view of both the vulnerabilities in our power systems and the sources of potential threats.

About the Author

Yury Dvorkin is an assistant professor of electrical and computer engineering and a faculty member of the Center for Urban Science and Progress (CUSP) at New York University’s Tandon School of Engineering.

Risk Dashboard Could Help the Power Grid Manage Renewables

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/energywise/energy/the-smarter-grid/risk-dashboard-could-help-the-power-grid-manage-renewables

To fully embrace wind and solar power, grid operators need to be able to predict and manage the variability that comes from changing winds or clouds dimming sunlight.

One solution may come from a $2-million project backed by the U.S. Department of Energy that aims to develop a risk dashboard for handling more complex power grid scenarios.

Grid operators now use dashboards that report the current status of the power grid and show the impacts of large disturbances—such as storms and other weather contingencies—along with regional constraints in flow and generation. The new dashboard being developed by Columbia University researchers and funded by the Advanced Research Projects Agency–Energy (ARPA-E) would improve upon existing dashboards by modeling more complex factors. This could help the grid better incorporate both renewable power sources and demand response programs that encourage consumers to use less electricity during peak periods.

“[Y]ou have to operate the grid in a way that is looking forward in time and that accepts that there will be variability—you have to start talking about what people in finance would call risk,” says Daniel Bienstock, professor of industrial engineering and operations research, and professor of applied physics and applied mathematics at Columbia University.

The new dashboard would not necessarily help grid operators prepare for catastrophic black swan events that might happen only once in 100 years. Instead, Bienstock and his colleagues hope to apply some lessons from financial modeling to measure and manage risk associated with more common events that could strain the capabilities of the U.S. regional power grids managed by independent system operators (ISOs). The team plans to build and test an alpha version of the dashboard within two years, before demonstrating the dashboard for ISOs and electric utilities in the third year of the project.

Variability already poses a challenge to modern power grids that were designed to handle steady power output from conventional power plants to meet an anticipated level of demand from consumers. Power grids usually rely on gas turbine generators to kick in during peak periods of power usage or to provide backup to intermittent wind and solar power.

But such generators may not provide a fast enough response to compensate for the expected variability in power grids that include more renewable power sources and demand response programs driven by fickle human behavior. In the worst cases, grid operators may shut down power to consumers and create deliberate blackouts in order to protect the grid’s physical equipment.

One of the dashboard project’s main goals involves developing mathematical and statistical models that can quantify the risk from having greater uncertainty in the power grid. Such models would aim to simulate different scenarios based on conditions—such as changes in weather or power demand—that could stress the power grid. Repeatedly playing out such scenarios would force grid operators to fine-tune and adapt their operational plans to handle such surprises in real life.

For example, one scenario might involve a solar farm generating 10 percent less power and a wind farm generating 30 percent more power within a short amount of time, Bienstock explains. The combination of those factors might mean too much power begins flowing on a particular power line and the line subsequently starts running hot at the risk of damage.
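To make this concrete, the kind of scenario analysis described above can be sketched as a tiny Monte Carlo simulation. All the numbers here (baseline outputs, deviation spreads, line rating) are hypothetical, and a real risk model would use correlated forecasts and full power-flow equations rather than a simple sum:

```python
import random

# Hypothetical sketch (not the project's actual model): Monte Carlo
# estimate of the chance that solar/wind swings overload a line.
random.seed(42)

BASE_SOLAR_MW = 40.0   # assumed baseline solar output
BASE_WIND_MW = 45.0    # assumed baseline wind output
LINE_LIMIT_MW = 100.0  # assumed thermal rating of the line

def sample_flow():
    # Draw percentage deviations for each resource; the -10%/+30%
    # case in the text is one draw from this distribution.
    solar = BASE_SOLAR_MW * (1 + random.gauss(0, 0.15))
    wind = BASE_WIND_MW * (1 + random.gauss(0, 0.20))
    return solar + wind  # simplistic: both feed one line

trials = 100_000
overloads = sum(sample_flow() > LINE_LIMIT_MW for _ in range(trials))
print(f"P(line overload) ~ {overloads / trials:.3f}")
```

Repeating such draws many times turns the single "what if" scenario into an estimated probability that the line exceeds its rating, which is the quantity a risk dashboard would display.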

Such models would only be as good as the data that trains them. Some ISOs and electric utilities have already been gathering useful data from the power grid for years. Those that already have more experience dealing with the variability of renewable power have been the most proactive. But many of the ISOs are reluctant to share such data with outsiders.

“One of the ISOs has told us that they will let us run our code on their data provided that we actually physically go to their office, but they will not give us the data to play with,” Bienstock says. 

For this project, ARPA-E has been working with one ISO to produce synthetic data covering many different scenarios based on historical data. The team is also using publicly available data on factors such as solar irradiation, cloud cover, wind strength, and the power generation capabilities of solar panels and wind turbines.

“You can look at historical events and then you can design stress that’s somehow compatible with what we observe in the past,” says Agostino Capponi, associate professor of industrial engineering and operations research at Columbia University and external consultant for the U.S. Commodity Futures Trading Commission.

A second big part of the dashboard project involves developing tools that grid operators could use to help manage the risks that come from dealing with greater uncertainty. Capponi is leading the team’s effort to design customized energy volatility contracts that could allow grid operators to buy such contracts for a fixed amount and receive compensation for all the variance that occurs over a historical period of time.
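As a rough illustration of how such a contract might settle (the terms and figures here are hypothetical, not those of the contracts the Columbia team is designing), the buyer pays a fixed variance "strike" and receives whatever variance is actually realized over the settlement window:

```python
import statistics

# Illustrative payoff of a variance-style contract: the grid
# operator pays a fixed variance strike and receives the variance
# realized over the settlement period. All figures are made up.

def variance_contract_payoff(observed, strike_variance, notional_per_unit):
    """Payoff to the buyer: notional * (realized variance - strike)."""
    realized = statistics.pvariance(observed)
    return notional_per_unit * (realized - strike_variance)

# Hypothetical hourly net-load deviations (MW) over one day:
deviations = [3.1, -2.4, 5.0, -6.2, 4.4, -1.0, 7.3, -5.8,
              2.2, -3.9, 6.1, -4.7, 1.5, -2.0, 3.8, -6.6,
              5.4, -1.8, 2.9, -4.1, 6.8, -3.3, 1.1, -5.2]

payoff = variance_contract_payoff(deviations, strike_variance=15.0,
                                  notional_per_unit=1000.0)
print(f"Contract payoff: ${payoff:,.2f}")
```

When realized variance exceeds the strike, the payoff is positive, compensating the operator for the extra volatility it had to absorb; in calm periods the operator pays for that insurance.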

But he acknowledged that financial contracts designed to help offset risk in the stock market won’t apply in a straightforward manner to the realities of the power grid that include delays in power transmission, physical constraints, and weather events.

“You cannot really directly use existing financial contracts because in finance you don’t have to take into account the physics of the power grid,” Capponi says.

Once the new dashboard is up and running, it could begin to help grid operators deal with both near-term and long-term challenges for the U.S. power grid. One recent example comes from the COVID-19 pandemic: behavioral changes such as more people working from home have already increased variability in energy consumption across New York City and other parts of the United States. In the future, the risk dashboard might help grid operators quickly identify areas at higher risk of suffering from imbalances between supply and demand and act quickly to avoid straining the grid or having blackouts.

Knowing the long-term risks in specific regions might also drive more investment in additional energy storage technologies and improved transmission lines to help offset such risks. The situation is different for every grid operator’s particular region, but the researchers hope that their dashboard can eventually help level the speed bumps as the U.S. power grid moves toward using more renewable power.

“The ISOs have different levels of renewable penetration, and so they have different exposures and visibility to risk,” Bienstock says. “But this is just the right time to be doing this sort of thing.”

The Privatization of Puerto Rico’s Power Grid Is Mired in Controversy

Post Syndicated from Maria Gallucci original https://spectrum.ieee.org/energywise/energy/policy/the-privatization-of-puerto-rico-power-grid-mired-in-controversy

When Hurricane Maria razed Puerto Rico in September 2017, the storm laid bare the serious flaws and pervasive neglect of the island’s electricity system. Nearly all 3.4 million residents lost power for weeks, months, or longer—a disaster unto itself that affected hospitals and schools and shut down businesses and factories.

The following January, then-Gov. Ricardo Rosselló signaled plans to sell off parts of the Puerto Rico Electric Power Authority (PREPA), leaving private companies to do what the state-run utility had failed to accomplish. Rosselló, who resigned last year, said it would take about 18 months to complete the transition.

“Our objective is simple: provide better service, one that’s more efficient and that allows us to jump into new energy models,” he said that June, after signing a law to start the process.

Yet privatization to date has been slow, piecemeal, and mired in controversy. Recent efforts seem unlikely to move the U.S. territory toward a cleaner, more resilient system, power experts say.

As the region braces for an “unusually active” 2020 hurricane season, the aging grid remains vulnerable to disruption, despite US $3.2 billion in post-Maria repairs. 

Puerto Rico relies primarily on large fossil fuel power plants and long transmission lines to carry electricity into mountains, coastlines, and urban centers. When storms mow down key power lines, or earthquakes destroy generating units—as was the case in January—outages cascade across the island. Lately, frequent brownouts caused by faulty infrastructure have complicated efforts to confront the COVID-19 outbreak. 

“In most of the emergencies that we’ve had, the centralized grid has failed,” says Lionel Orama Exclusa, an electrical engineering professor at the University of Puerto Rico-Mayagüez and member of Puerto Rico’s National Institute of Energy and Island Sustainability.  

He and many others have called for building smaller regional grids that can operate independently if other parts fail. Giant oil- and gas-fired power plants should similarly give way to renewable energy projects distributed near or within neighborhoods. Last year, Puerto Rico adopted a mandate to get to 100 percent renewable energy by 2050. (Solar, wind, and hydropower supply just 2.3 percent of today’s total generation.)

So far, however, PREPA’s contracts to private companies have mainly focused on retooling existing infrastructure—not reimagining the monolithic system. The companies are also tied to the U.S. natural gas industry, which has targeted Puerto Rico as a place to offload mainland supplies.

In June, Luma Energy signed a 15-year contract to operate and maintain PREPA’s transmission and distribution system. Luma is a newly formed joint venture between infrastructure company Quanta Services and Canadian Utilities Limited. The contract is valued at between $70 million and $105 million per year, plus up to $20 million in annual “incentive fees.”

Wayne Stensby, president and CEO of Luma, said his vision for Puerto Rico includes wind, solar, and natural gas and is “somewhere down the middle” between a centralized and decentralized grid, Greentech Media reported. “It makes no sense to abandon the existing grid,” he told the news site in June, adding that Luma’s role is to “effectively optimize that reinvestment.”

Orama Exclusa says he has “mixed feelings” about the contract.

If the private consortium can effectively use federal disaster funding to fix crumbling poles and power lines, that could significantly improve the system’s reliability, he says. But the arrangement still doesn’t address the “fundamental” problem of centralization.

He is also concerned that the Luma deal lacks transparency. Former utility leaders and consumer watchdogs have noted that regulators did not include public stakeholders in the 18-month selection process. They say they’re wary Puerto Rico may be repeating missteps made in the wake of Hurricane Maria.

As millions of Puerto Ricans recovered in the dark, PREPA quietly inked a no-bid, one-year contract for $300 million with Whitefish Energy Holdings, a two-person Montana firm with ties to then-U.S. Interior Secretary Ryan Zinke. Cobra Acquisitions, a fracking company subsidiary, secured $1.8 billion in federal contracts to repair the battered grid. Last September, U.S. prosecutors charged Cobra’s president and two officials in the Federal Emergency Management Agency with bribery and fraud.

A more recent deal with another private U.S. firm is drawing further scrutiny.

In March 2019, New Fortress Energy won a five-year, $1.5 billion contract to supply natural gas to PREPA and convert two units (totaling 440 megawatts) at the utility’s San Juan power plant from diesel to gas. The company, founded by billionaire CEO Wes Edens, completed the project this May, nearly a year behind schedule. It also finished construction of a liquefied natural gas (LNG) import terminal in the capital city’s harbor. 

“This is another step forward in our energy transformation,” Gov. Wanda Vázquez Garced said in May during a tour of the new facilities. Converting the San Juan units “will allow for a cheaper and cleaner fuel” and reduce monthly utility costs for PREPA customers, she said.

Critics have called for canceling the project, which originated after New Fortress submitted an unsolicited proposal to PREPA in late 2017. The ensuing deal gave New Fortress an “unfair advantage,” was full of irregularities, and didn’t undergo sufficient legal review or financial oversight, according to a June report by CAMBIO, a Puerto Rico-based environmental nonprofit, and the Institute for Energy Economics and Financial Analysis.

The project “would continue to lock in fossil fuels on the island and would prevent the aggressive integration of renewable energy,” Ingrid Vila Biaggi, president of CAMBIO, told the independent news program Democracy Now!

The U.S. Federal Energy Regulatory Commission, which oversees the transmission and wholesale sale of electricity and natural gas, also raised questions about the LNG import terminal.

On 18 June, the agency issued a rare show-cause order demanding that New Fortress explain why it didn’t seek the agency’s approval before building the infrastructure at the Port of San Juan. The company has 30 days to respond.

Concerns over contracts are among the many challenges to revitalizing Puerto Rico’s grid. The island has been mired in a recession since 2006, amid a series of budget shortfalls, financial crises, and mismanagement—which contributed to PREPA filing for bankruptcy in 2017, just months before Maria struck. The COVID-19 pandemic is further eroding the economy, with Puerto Ricans facing widespread unemployment and rising poverty.

The coming months—typically those with the most extreme weather—will show if recent efforts to privatize the grid will alleviate, or exacerbate, Puerto Rico’s electricity problems.

The Software-Defined Power Grid Is Here

Post Syndicated from Patrick T. Lee original https://spectrum.ieee.org/energy/the-smarter-grid/the-softwaredefined-power-grid-is-here

My colleagues and I have been spending a lot of time on a project in Onslow, a remote coastal town of 850 in Western Australia, where a wealth of solar, wind power, and battery storage has come on line to complement the region’s traditional forms of power generation. We’re making sure that all of these distributed energy resources work as a balanced and coordinated system. The team has traveled more than 15,000 kilometers from our company headquarters in San Diego, and everyone is excited to help the people of Onslow and Western Australia’s electric utility Horizon Power.

Like other rural utilities around the world, Horizon faces an enormous challenge in providing reliable electricity to hundreds of small communities scattered across a wide area. Actually, calling this a “wide area” is a serious understatement: Horizon’s territory covers some 2.3 million square kilometers—about one and a half times the size of Alaska. You can’t easily traverse all that territory with high-tension power lines and substations, so local power generation is key. And as the country tries to shrink its carbon footprint, Horizon is working with its customers to decrease their reliance on nonrenewable energy. The incentives for deploying renewables such as photovoltaics and wind turbines are compelling.

But adding more solar and wind power here, as elsewhere, brings its own problems. In particular, it challenges the grid’s stability and resilience. The power systems that most people are connected to were designed more than a century ago. They rely on large, centralized generation plants to deliver electricity through transmission and distribution networks that feed into cities, towns, homes, schools, factories, stores, office buildings, and more. Our 100-year-old power system wasn’t intended to handle power generators that produce electricity only when the sun is shining or the wind is blowing. Such intermittency can cause the grid’s voltage and frequency to fluctuate and spike dangerously when power generation isn’t balanced with demand throughout the network. Traditional grids also weren’t designed to handle energy flowing in two directions, with hundreds or thousands of small generators like rooftop solar panels attached to the network.

The problem is being magnified as the use of renewables grows worldwide. According to the United Nations report Global Trends in Renewable Energy Investment 2019, wind and solar power accounted for just 4 percent of generating capacity worldwide in 2010. That figure was expected to more than quadruple within a decade, to 18 percent. And that trend should continue for at least the next five years, according to the International Energy Agency. It anticipates that renewable energy capacity will rise by 50 percent through 2024, with solar photovoltaics and onshore wind making up the lion’s share of that increase.

Rather than viewing this new capacity as a valuable asset, though, many grid operators fear the intermittency of renewable resources. Instead of finding ways to integrate these sources, they have tried to limit the amount of renewable energy that can connect to their networks, and they routinely curtail its output.

In late 2018, Horizon Power hired my company, PXiSE Energy Solutions, to better integrate renewables across its vast territory. Many electric utilities around the world are grappling with this same challenge. The chief difference between what others are doing and our approach is that we use a special sensor, called a phasor measurement unit, or PMU. This sensor, first developed in the 1980s, measures voltage and current at various points on the grid and then computes the magnitude and phase of the signals, with each digitized measurement receiving a time stamp accurate to within 1 microsecond of true time. Such measurements reveal moment-by-moment changes in the status of the network.

For many years, utilities have deployed PMUs on their transmission systems, but they haven’t fully exploited the sensors’ real-time data. The PXiSE team developed machine-learning algorithms so that our high-speed controller can respond quickly and autonomously to changes in generation and consumption—and also predict likely future conditions on the network. This intelligent system mitigates any grid disturbances while continuously balancing solar generation, battery power, and other available energy resources, making the grid more efficient and reliable. What’s more, our system can be integrated into virtually any type of power grid, regardless of its size, age, or mix of generation and loads. Here’s how it works.

The basic thing a PMU measures is called a phasor. Engineering great Charles Proteus Steinmetz coined this term back in 1893 and described how to calculate it based on the phase and amplitude of an alternating-current waveform. Nearly a century later, Arun G. Phadke and his team at Virginia Tech developed the phasor measurement unit and showed that the PMU could directly measure the magnitude and phase angle of AC sine waves at specific points on the grid.
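To make the idea concrete, here is a minimal sketch of how a phasor can be estimated from sampled voltage data: correlating one cycle of the waveform against sine and cosine references (a single-bin DFT) yields the RMS magnitude and phase angle. The sampling rate and test signal are illustrative, not those of any actual PMU.

```python
import math

def phasor(samples, sample_rate=720, freq=60.0):
    """Estimate the phasor (RMS magnitude, phase angle in degrees) of a
    sampled AC waveform via a single-frequency DFT correlation."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * freq * k / sample_rate)
             for k, s in enumerate(samples))
    im = sum(-s * math.sin(2 * math.pi * freq * k / sample_rate)
             for k, s in enumerate(samples))
    re, im = 2 * re / n, 2 * im / n          # peak-amplitude components
    mag = math.hypot(re, im) / math.sqrt(2)  # convert peak to RMS
    angle = math.degrees(math.atan2(im, re))
    return mag, angle

# One cycle of a 120 V RMS, 60 Hz wave sampled at 12 points per cycle:
wave = [120 * math.sqrt(2) * math.cos(2 * math.pi * k / 12) for k in range(12)]
mag, ang = phasor(wave)   # ≈ 120 V RMS at 0 degrees
```

A real PMU performs this estimation continuously in hardware and attaches a GPS-disciplined time stamp to each result.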

PMUs were commercialized in 1992, at which point utilities began deploying the sensors to help identify outages and other grid “events.” Today, there are tens of thousands of PMUs installed at major substations throughout the United States. (The accuracy of PMU data is dictated by IEEE Standard C37.118 and the newer IEEE/IEC Standard 60255-118-1-2018, which call for more accurate and consistent power measurements than are typically required for other sensors.)

But these devices are valuable for much more than simply tracking blackouts. Today’s high-speed PMUs provide data 60 times per second, which is more than 200 times as fast as the sampling rate of the conventional SCADA (supervisory control and data acquisition) systems used on most electric grids. What’s more, PMU data are time-stamped with great precision using the Global Positioning System (GPS), which synchronizes all measurements worldwide. For that reason, the data are sometimes called synchrophasors.

With that kind of time resolution, PMUs can provide an extremely accurate and detailed snapshot of power quality, indicating how consistently the voltage and current remain within their specified ranges. Wide fluctuations can lead to inefficiency and wasted electricity. If the fluctuations grow too large, they can damage equipment or even trigger a brownout or blackout. The time-stamped data are particularly helpful when your resources are scattered across a wide area, as in Horizon Power’s grid in Western Australia.
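As a hypothetical illustration of that monitoring role, the sketch below scans a stream of time-stamped voltage magnitudes (60 frames per second, as in the synchrophasor data described above) and flags any frame that strays outside an allowed band. The nominal voltage and ±5 percent limit are assumptions for the example, not values from the PXiSE system.

```python
NOMINAL_V = 120.0   # illustrative nominal voltage
LIMIT = 0.05        # illustrative ±5 percent tolerance band

def excursions(readings):
    """readings: list of (timestamp_s, voltage_rms) tuples at 60 frames/s.
    Returns the timestamps at which voltage left the allowed band."""
    lo, hi = NOMINAL_V * (1 - LIMIT), NOMINAL_V * (1 + LIMIT)
    return [t for t, v in readings if not lo <= v <= hi]

# One second of data with a brief 8 V surge starting at t = 0.5 s:
stream = [(k / 60.0, 120.0 + (8.0 if 30 <= k < 33 else 0.0))
          for k in range(60)]
bad = excursions(stream)   # three consecutive frames exceed the +5% limit
```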

The main reason that PMUs haven’t been fully exploited is that most utilities don’t take advantage of modern data communications and advanced control technologies. Even when they have PMUs and data networks in place, they haven’t tied them together to help coordinate their solar, wind, and other energy resources. SCADA systems are designed to send their information every 2 or 3 seconds to a central operations center where human operators are watching and can take action if something goes wrong. But the power electronics used in inverter-based systems like solar PV, wind turbines, and energy storage units operate on the millisecond level, and they require much higher-speed control to maximize their benefits.

I’ve spent my entire 33-year career as a power engineer working for small and large power companies in California, so I’m deeply familiar with the issues that utilities have faced as more distributed-energy resources have come on line. I long had a hunch that PMUs and synchrophasors could play a larger role in grid modernization. In 2015, when I was vice president of infrastructure and technology at Sempra Energy, I began working with Charles Wells, an expert on power system controls who was then at OSIsoft, to figure out whether that hunch was valid and what a synchrophasor-based control system might look like.

Meeting every Friday afternoon for a few hours, we spent close to a year doing calculations and running simulations that confirmed PMU data could be used to both measure and control grid conditions in real time. We then turned to Raymond de Callafon of the University of California, San Diego, and other experts at OSIsoft to figure out how to create a robust commercial system. Our goal was to build a software-based controller that would make decisions autonomously, enabling a transition away from the hands-on adjustments required by traditional SCADA systems.

The controller we envisioned would need to operate with great accuracy and at high speeds. Our system would also need artificial intelligence to be able to forecast grid conditions hourly and daily. Because our platform would be largely software based, it would require a minimum of new hardware, and the integration into an existing network would be quick and inexpensive. In essence, we sought to create a new operating system for the grid.

By early 2017, our system was ready for its first real-world test, which took our team of six engineers to a 21-MW wind farm and battery storage facility on an island in Hawaii. In recent years, Hawaii has dramatically increased solar and wind generation, but this particular island’s grid had reached a limit to integrating more renewable energy, due to the variability of the wind resources. It was less complicated for the grid operator to rely on fossil-fueled generators than to mitigate the intermittent power from the wind farm, which was periodically shut off, or curtailed, at night to ensure grid stability.

We were on a startup’s budget, so we used off-the-shelf equipment, including a standard PC running a digital simulator model that de Callafon created. We rented a house near the wind farm, where we spent several days testing the technology. It was a fantastic moment when we realized we could, in fact, control energy generation at the wind farm from the kitchen table. (We took all due cybersecurity precautions, of course.) With our control system in place, the inherent power fluctuations of the wind farm were smoothed out by real-time dispatch from the battery storage facility. This allowed the wind farm’s generation to be far more reliable.

Soon after completing the Hawaii project, we installed a controller running our PMU-based algorithm to create a microgrid in Sempra Energy’s high-rise office building in downtown San Diego. The controller was designed to optimize the use of the building’s electric-vehicle chargers, solar panels, and storage batteries. Meanwhile, it would also reduce the reliance on grid power in the late afternoon and early evening, when demand and prices typically spike. At such times, the optimization algorithm automatically determines the proper resource mix and schedule, enabling one floor to be served solely by local renewable sources and stored energy rather than the main grid. This shift reduced the building’s utility bill by 20 percent, which we’ve since found to be typical for similar microgrids.

More recently, our team traveled to Jeju Island, in South Korea. Prior to the installation of our grid controller, the island’s electricity primarily came from the mainland via high-voltage underwater cables. The system was designed to interconnect two local battery-storage systems—with capacities of 224 kilowatt-hours and 776 kWh—with a 500-kW solar farm and 600 kW of wind power. To meet the island’s renewable energy goals and save on electricity costs, wind and solar power are stored in the batteries during the day, when power demand and grid prices are low. The stored electricity is then used at night, when power demand and grid prices are high.

The on-site installation of our controller took just four days. In addition to operating the two battery-storage systems throughout the day and night, PXiSE’s controller uses the batteries to provide what’s known as “ramp control” for the wind turbines. Most grids can tolerate only a gradual change in power flow, typically no more than 1 megawatt per minute, so ramp control acts like a shock absorber. It smooths out the turbines’ power output when the wind suddenly changes, which would otherwise cause reliability problems on the grid.
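The ramp-limiting logic can be sketched in a few lines: each control step, the change in grid-facing output is clamped to the allowed rate, and the battery absorbs or supplies the difference. The 1 MW-per-minute limit comes from the text; the function and step size are illustrative.

```python
def ramp_limit(target_mw, current_mw, dt_s, max_mw_per_min=1.0):
    """Clamp the change in dispatched power so the grid never sees a
    ramp steeper than max_mw_per_min."""
    max_step = max_mw_per_min * dt_s / 60.0   # allowed change this step
    delta = target_mw - current_mw
    delta = max(-max_step, min(max_step, delta))
    return current_mw + delta

# Wind output drops from 5 MW to 2 MW. Over one 1-second step, the
# grid-facing output may fall by at most 1/60 MW; the battery covers the rest.
out = ramp_limit(2.0, 5.0, dt_s=1.0)
battery = 5.0 - out   # power the battery must inject during this step
```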

Every 16.7 milliseconds (one cycle of the 60-hertz wave), the controller looks for any change in wind power output. If the wind stops blowing, the system instantly compensates by discharging the batteries to the grid, gradually decreasing the amount of discharged battery power to zero, as power from the main grid gradually increases to match demand. When the wind starts blowing again, the batteries immediately respond by absorbing a portion of the generated wind power, gradually reducing the amount being stored to zero as the controller ramps up the amount feeding into the grid, thereby meeting real-time energy demand.
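The hand-off described above can be sketched as a simple loop: after a sudden wind drop, the battery immediately covers the full shortfall, then cedes a fixed increment to the main grid each control cycle until its contribution reaches zero. The gap size and increment here are assumptions for illustration; the real controller computes them dynamically.

```python
def handoff(gap_mw, steps, step_mw):
    """After a wind drop of gap_mw, the battery covers the full gap at once,
    then cedes step_mw per control cycle to the main grid until it hits zero.
    Returns a list of (battery_mw, grid_mw) pairs, one per cycle."""
    battery, grid, trace = gap_mw, 0.0, []
    for _ in range(steps):
        trace.append((battery, grid))
        shift = min(step_mw, battery)   # never discharge below zero
        battery -= shift
        grid += shift
    return trace

# A 2 MW wind loss, handed from battery to grid in 0.5 MW increments.
# Total supplied power stays constant at 2 MW throughout the hand-off.
t = handoff(2.0, steps=5, step_mw=0.5)
```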

The control system also creates its own forecasts of load and generation based on weather forecasts and AI processing of historical load data. The forecasting models are updated in real time to compute the optimal settings for power and energy storage in the system.
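A minimal sketch of a real-time forecast update, assuming a simple exponentially weighted blend of the running forecast with each new observation (the actual system uses AI models far beyond this; the smoothing factor is purely illustrative):

```python
def update_forecast(forecast_mw, observed_mw, alpha=0.2):
    """Blend the latest observed load into the running forecast.
    alpha controls how quickly the forecast tracks new data."""
    return (1 - alpha) * forecast_mw + alpha * observed_mw

# Demand trends upward; the forecast adjusts toward each new observation.
f = 10.0
for load in (11.0, 11.5, 12.0):
    f = update_forecast(f, load)
```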

At each of our other projects—in Chile, Guam, Hong Kong, Japan, Mexico, and a handful of U.S. sites—the system we provide looks a little different, depending on the customer’s mix of energy resources and priorities. We designed the PXiSE system so it can be deployed in any part of the electric grid, from a rooftop solar array all the way to a centralized power plant connected to a large network. Our customers appreciate that they don’t have to install additional complex equipment beyond the controller, that integrating our technology is fast, and that they see positive results right away.

The work we’re doing is just one part of a far bigger goal: to make the grid smarter and cleaner. This is the great challenge of our time. The power industry isn’t known for being on the cutting edge of technology adoption, but the transition away from a century-old, hardware-dependent analog grid toward a modern software-based digital grid is not just desirable but essential. Our team of enthusiastic entrepreneurs and engineers joins thousands of others around the world who are committed to making a difference in the future of our industry and our planet.

This article appears in the July 2020 print issue as “The Software-Defined Power Grid.”

About the Author

Patrick T. Lee is CEO of PXiSE Energy Solutions and vice president of infrastructure and technology at Sempra Energy, in San Diego.