California wildfires knock out electric power to thousands of people; a hurricane destroys transmission lines that link electric power stations to cities and towns; an earthquake shatters homes and disrupts power service. The headlines are dramatic and seem to occur more and more often.
The fundamental vulnerability in each case is that the power grid relies on metal cables to carry electricity every meter along the way. Since the days of Nikola Tesla and his famous coil, inventors and engineers have dreamt of being able to send large amounts of electricity over long distances, and all without wires.
During the next several months, a startup company, a government-backed innovation institute and a major electric utility will aim to scale up a wireless electric power transmission system that they say will offer a commercially viable alternative to traditional wire transmission and distribution systems.
The underlying idea is nothing new: energy is converted into electromagnetic radiation by a transmitting antenna, picked up by a receiving antenna, and then distributed locally by conventional means. This is the same thing that happens in any radio system, but in radio the amount of power that reaches the receiver can be minuscule; a few picowatts is enough to deliver an intelligible signal. Wireless power transfer, by contrast, is about moving raw energy, which makes the fraction of transmitted energy that is received the key design parameter.
What’s new here is how New Zealand startup Emrod has borrowed ideas from radar and optics and used metamaterials to focus the transmitted radiation even more tightly than previous microwave-based wireless power attempts.
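Why tight focusing matters can be seen in the classic aperture-to-aperture relation for microwave power beaming, attributed to Georg Goubau: the received fraction depends on transmitter aperture, receiver aperture, wavelength, and distance. The sketch below is a back-of-the-envelope illustration only; the 5.8 GHz ISM frequency and the 2 m x 2 m apertures are assumptions for a 40-meter demonstration, not Emrod's published figures.

```python
import math

def beam_transfer_efficiency(a_tx_m2, a_rx_m2, wavelength_m, distance_m):
    """Goubau's relation for a focused beam between two apertures:
    tau = sqrt(At * Ar) / (lambda * D), efficiency ~ 1 - exp(-tau^2)."""
    tau = math.sqrt(a_tx_m2 * a_rx_m2) / (wavelength_m * distance_m)
    return 1.0 - math.exp(-tau ** 2)

# Hypothetical demo geometry: 5.8 GHz (wavelength ~5.2 cm),
# 4-square-meter apertures at both ends, 40 meters apart.
wavelength = 3e8 / 5.8e9
eta = beam_transfer_efficiency(4.0, 4.0, wavelength, 40.0)
print(f"beam efficiency ~ {eta:.1%}")
```

The same relation shows why distance is punishing: stretch the path by orders of magnitude and the apertures must grow accordingly to keep the received fraction high.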
The “quasi-optical” system shapes the electromagnetic pulse into a cylindrical beam, thus making it “completely different” from the way a cell phone tower or radio antenna works, said Dr. Ray Simpkin, chief science officer at Emrod, which has a Silicon Valley office in addition to its New Zealand base. Simpkin’s background is in radar technology and he is on loan from Callaghan Innovation, the New Zealand government-sponsored innovation institute that is backing the wireless power startup.
Emrod’s laboratory prototype currently operates indoors at a distance of just 2 meters. Work is under way on a 40-meter demonstration system, but it, too, will be indoors, where conditions can be easily managed. Sometime next year, though, Emrod plans a field test at a still-to-be-determined grid-connected facility operated by Powerco, New Zealand’s second-largest utility, with around 1.1 million customers.
In an email, Powerco said that it is funding the test with an eye toward learning how much power the system can transmit and over what distance. The utility also is providing technical assistance to help Emrod connect the system to its distribution network. Before that can happen, however, the system must meet a number of safety, performance and environmental requirements.
One safety feature will be an array of lasers spaced along the edges of the flat-panel receivers that are meant to catch the focused energy beam and pass it along. These lasers point at sensors on the transmitter array, so that if a bird in flight, for example, interrupted one of them, the transmitter would pause that portion of the energy beam long enough for the bird to fly through.
Emrod’s electromagnetic beam operates at frequencies classified as industrial, scientific and medical (ISM). The company’s founder, Greg Kushnir, said in a recent interview that the power densities are roughly the equivalent of standing in the sun outside at noon, or around 1 kW per square meter.
Emrod sees an opportunity for utilities to deploy its technology to deliver electric service to remote areas and locations with difficult terrain. The company is studying the feasibility of spanning the 30-km strait between the southern tip of New Zealand and Stewart Island. Emrod estimates that a 40-square-meter transmitter would do the job. And while he offered no detailed cost estimates, Simpkin said the system could cost around 60 percent as much as a subsea cable.
Another potential application would be in post-disaster recovery. In that scenario, mobile transmitters would be deployed to close a gap between damaged or destroyed transmission and distribution lines.
The company has a “reasonable handle” on costs, Simpkin said, with the main areas for improvement coming from commercially available transmitter components. Here, the company expects that advancements in 5G communications technology will spur efficiency improvements. At present, its least efficient point is at the transmitter where existing electronic components are no better than around 70 percent efficient.
“The rule book hasn’t really been written,” he said, for this effort to meld wireless power transfer with radar and optics. “We are taking a softly, softly approach.”
Governments are setting ambitious renewable energy goals in response to climate change. The problem is, the availability of renewable sources doesn’t align with the times when our energy demands are the highest. We need more electricity for lights when the sun has set and solar is no longer available, for example. But if utilities could receive information about energy usage in real time, as Internet service providers already do with data usage, it would change the relationship we have with the production and consumption of our energy.
Utilities must still meet energy demands regardless of whether renewable sources are available, and they still have to mull whether to construct expensive new power plants to meet expected spikes in demand. But real-time information would make it easier to use more renewable energy sources when they’re available. Using this information, utilities could set prices in response to current availability and demand. This real-time pricing would serve as an incentive to customers to use more energy when those sources are available, and thus avoid putting more strain on power plants.
California is one example of this strategy. The California Energy Commission hopes establishing rules for real-time pricing for electricity use will demonstrate how overall demand and availability affect the cost. It’s like surge pricing for a ride share: The idea is that electricity would cost more during peak demand. But the strategy would likely generate savings for people most of the time.
Granted, most people won’t be thrilled with the idea of paying more to dry their towels in the afternoons and evenings, as the sun goes down and demand peaks. But new smart devices could make the pricing incentives both easier on the customer and less visible by handling most of the heavy lifting that a truly dynamic and responsive energy grid requires.
For example, companies such as Ecobee, Nest, Schneider Electric, and Siemens could offer small app-controlled computers that would sit on the breaker boxes outside a building. The computer would manage the flow of electricity from the breaker box to the devices in the building, while the app would help set priorities and prices. It might ask the user during setup to decide on an electricity budget, or to set devices to have priority over other devices during peak demand.
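The scheduling logic such a breaker-box computer might run can be sketched simply: shed the lowest-priority circuits first whenever the real-time price pushes projected spending past the user's budget. This is a hypothetical illustration of the idea, not any vendor's actual product; all circuit names, priorities, and prices are invented.

```python
def circuits_to_power(circuits, price_per_kwh, hourly_budget):
    """circuits: list of (name, priority, kw), lower priority = more important.
    Greedily keeps circuits energized, most important first, while the
    projected hourly cost stays within the user's budget."""
    powered, cost = [], 0.0
    for name, _, kw in sorted(circuits, key=lambda c: c[1]):
        if cost + kw * price_per_kwh <= hourly_budget:
            powered.append(name)
            cost += kw * price_per_kwh
    return powered

# Hypothetical home during a peak-price hour: only essentials stay on.
home = [("refrigerator", 0, 0.2), ("lights", 1, 0.3),
        ("water heater", 2, 4.5), ("dryer", 3, 3.0)]
keep = circuits_to_power(home, price_per_kwh=0.50, hourly_budget=1.50)
print(keep)  # the water heater and dryer wait for cheaper electricity
```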
Back in 2009, Google created similar software called Google PowerMeter, but the tech was too early—the appliances that could respond to real-time information weren’t yet available. Google shut down the service in 2011. Karen Herter, an energy specialist for the California Energy Commission, believes that the state’s rules for real-time pricing will be the turning point that convinces energy and tech giants to build such smart devices again.
This year, the CEC is writing rules for real-time pricing. The agency is investigating rates that update every hour, every 15 minutes, and every 5 minutes. No matter what, the rates will be publicly available, so that breaker box computers at homes and businesses can make decisions about what to power and when.
We will all need to start caring about when we use electricity—whether to spend more money to run a dryer at 7 p.m., when demand is high, or run it overnight, when electricity may be cheaper. California, with the rules it’s going to have in place by January 2022, could be the first to create a market for real-time energy pricing. Then, we may see a surge of devices and services that could increase our use of renewable energy to 100 percent—and save money on our electric bills along the way.
This article appears in the August 2020 print issue as “Data-Driven Power.”
To fully embrace wind and solar power, grid operators need to be able to predict and manage the variability that comes from changes in the wind or clouds dimming sunlight.
One solution may come from a $2-million project backed by the U.S. Department of Energy that aims to develop a risk dashboard for handling more complex power grid scenarios.
Grid operators now use dashboards that report the current status of the power grid and show the impacts of large disturbances—such as storms and other weather contingencies—along with regional constraints in flow and generation. The new dashboard being developed by Columbia University researchers and funded by the Advanced Research Projects Agency–Energy (ARPA-E) would improve upon existing dashboards by modeling more complex factors. This could help the grid better incorporate both renewable power sources and demand response programs that encourage consumers to use less electricity during peak periods.
“[Y]ou have to operate the grid in a way that is looking forward in time and that accepts that there will be variability—you have to start talking about what people in finance would call risk,” says Daniel Bienstock, professor of industrial engineering and operations research, and professor of applied physics and applied mathematics at Columbia University.
The new dashboard would not necessarily help grid operators prepare for catastrophic black swan events that might happen only once in 100 years. Instead, Bienstock and his colleagues hope to apply some lessons from financial modeling to measure and manage risk associated with more common events that could strain the capabilities of the U.S. regional power grids managed by independent system operators (ISOs). The team plans to build and test an alpha version of the dashboard within two years, before demonstrating the dashboard for ISOs and electric utilities in the third year of the project.
Variability already poses a challenge to modern power grids that were designed to handle steady power output from conventional power plants to meet an anticipated level of demand from consumers. Power grids usually rely on gas turbine generators to kick in during peak periods of power usage or to provide backup to intermittent wind and solar power.
But such generators may not provide a fast enough response to compensate for the expected variability in power grids that include more renewable power sources and demand response programs driven by fickle human behavior. In the worst cases, grid operators may shut down power to consumers and create deliberate blackouts in order to protect the grid’s physical equipment.
One of the dashboard project’s main goals involves developing mathematical and statistical models that can quantify the risk from having greater uncertainty in the power grid. Such models would aim to simulate different scenarios based on conditions—such as changes in weather or power demand—that could stress the power grid. Repeatedly playing out such scenarios would force grid operators to fine-tune and adapt their operational plans to handle such surprises in real life.
For example, one scenario might involve a solar farm generating 10 percent less power and a wind farm generating 30 percent more power within a short amount of time, Bienstock explains. The combination of those factors might mean too much power begins flowing on a particular power line and the line subsequently starts running hot at the risk of damage.
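A toy version of that kind of stress scenario can be run as a simple Monte Carlo: perturb solar and wind output within the stated ranges many times and count how often a line limit is exceeded. The line limit and generation figures below are illustrative assumptions, not numbers from the ARPA-E project.

```python
import random

def overload_probability(solar_mw, wind_mw, line_limit_mw,
                         trials=10_000, seed=1):
    """Estimate how often a hypothetical line limit is exceeded when
    solar drops by up to 10% and wind swings up by as much as 30%."""
    random.seed(seed)
    overloads = 0
    for _ in range(trials):
        s = solar_mw * (1 + random.uniform(-0.10, 0.0))
        w = wind_mw * (1 + random.uniform(0.0, 0.30))
        if s + w > line_limit_mw:   # crude proxy for flow on one line
            overloads += 1
    return overloads / trials

p = overload_probability(solar_mw=100, wind_mw=200, line_limit_mw=320)
print(f"P(overload) ~ {p:.1%}")
```

A real dashboard would replace the sum with a power-flow calculation, but the structure, repeated randomized scenarios scored against operating limits, is the same.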
Such models would only be as good as the data that trains them. Some ISOs and electric utilities have already been gathering useful data from the power grid for years. Those that already have more experience dealing with the variability of renewable power have been the most proactive. But many of the ISOs are reluctant to share such data with outsiders.
“One of the ISOs has told us that they will let us run our code on their data provided that we actually physically go to their office, but they will not give us the data to play with,” Bienstock says.
For this project, ARPA-E has been working with one ISO to produce synthetic data covering many different scenarios based on historical data. The team is also using publicly available data on factors such as solar irradiation, cloud cover, wind strength, and the power generation capabilities of solar panels and wind turbines.
“You can look at historical events and then you can design stress that’s somehow compatible with what we observe in the past,” says Agostino Capponi, associate professor of industrial engineering and operations research at Columbia University and external consultant for the U.S. Commodity Futures Trading Commission.
A second big part of the dashboard project involves developing tools that grid operators could use to help manage the risks that come from dealing with greater uncertainty. Capponi is leading the team’s effort to design customized energy volatility contracts that could allow grid operators to buy such contracts for a fixed amount and receive compensation for all the variance that occurs over a historical period of time.
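The payoff structure described resembles a textbook variance swap: the buyer pays a fixed strike and is compensated in proportion to the variance actually realized over the period. The sketch below is that textbook form applied to hypothetical hourly wind output; it is not Capponi's contract design, and every number is invented.

```python
def variance_swap_payoff(observations, strike_variance, notional_per_unit):
    """Pay (realized variance - strike) times a notional per variance unit.
    Positive payoff compensates the holder for higher-than-expected swings."""
    n = len(observations)
    mean = sum(observations) / n
    realized_var = sum((x - mean) ** 2 for x in observations) / n
    return notional_per_unit * (realized_var - strike_variance)

wind_mw = [50, 62, 41, 58, 47, 70, 39, 55]   # hypothetical hourly output
payoff = variance_swap_payoff(wind_mw, strike_variance=80.0,
                              notional_per_unit=10.0)
print(f"payoff: {payoff:.2f}")
```

As the article notes, the physics of the grid (line constraints, transmission delays, weather) means a deployable contract would need considerably more structure than this.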
But he acknowledged that financial contracts designed to help offset risk in the stock market won’t apply in a straightforward manner to the realities of the power grid that include delays in power transmission, physical constraints, and weather events.
“You cannot really directly use existing financial contracts because in finance you don’t have to take into account the physics of the power grid,” Capponi says.
Once the new dashboard is up and running, it could begin to help grid operators deal with both near-term and long-term challenges for the U.S. power grid. One recent example comes from the COVID-19 pandemic: associated behavioral changes, such as more people working from home, have already increased variability in energy consumption across New York City and other parts of the United States. In the future, the risk dashboard might help grid operators quickly identify areas at higher risk of suffering from imbalances between supply and demand and act quickly to avoid straining the grid or having blackouts.
Knowing the long-term risks in specific regions might also drive more investment in additional energy storage technologies and improved transmission lines to help offset such risks. The situation is different for every grid operator’s particular region, but the researchers hope that their dashboard can eventually help level the speed bumps as the U.S. power grid moves toward using more renewable power.
“The ISOs have different levels of renewable penetration, and so they have different exposures and visibility to risk,” Bienstock says. “But this is just the right time to be doing this sort of thing.”
My colleagues and I have been spending a lot of time on a project in Onslow, a remote coastal town of 850 in Western Australia, where a wealth of solar, wind power, and battery storage has come on line to complement the region’s traditional forms of power generation. We’re making sure that all of these distributed energy resources work as a balanced and coordinated system. The team has traveled more than 15,000 kilometers from our company headquarters in San Diego, and everyone is excited to help the people of Onslow and Western Australia’s electric utility Horizon Power.
Like other rural utilities around the world, Horizon faces an enormous challenge in providing reliable electricity to hundreds of small communities scattered across a wide area. Actually, calling this a “wide area” is a serious understatement: Horizon’s territory covers some 2.3 million square kilometers—about one and a half times the size of Alaska. You can’t easily traverse all that territory with high-tension power lines and substations, so local power generation is key. And as the country tries to shrink its carbon footprint, Horizon is working with its customers to decrease their reliance on nonrenewable energy. The incentives for deploying renewables such as photovoltaics and wind turbines are compelling.
But adding more solar and wind power here, as elsewhere, brings its own problems. In particular, it challenges the grid’s stability and resilience. The power systems that most people are connected to were designed more than a century ago. They rely on large, centralized generation plants to deliver electricity through transmission and distribution networks that feed into cities, towns, homes, schools, factories, stores, office buildings, and more. Our 100-year-old power system wasn’t intended to handle power generators that produce electricity only when the sun is shining or the wind is blowing. Such intermittency can cause the grid’s voltage and frequency to fluctuate and spike dangerously when power generation isn’t balanced with demand throughout the network. Traditional grids also weren’t designed to handle energy flowing in two directions, with hundreds or thousands of small generators like rooftop solar panels attached to the network.
The problem is being magnified as the use of renewables grows worldwide. According to the United Nations report Global Trends in Renewable Energy Investment 2019, wind and solar power accounted for just 4 percent of generating capacity worldwide in 2010. That figure was expected to more than quadruple within a decade, to 18 percent. And that trend should continue for at least the next five years, according to the International Energy Agency. It anticipates that renewable energy capacity will rise by 50 percent through 2024, with solar photovoltaics and onshore wind making up the lion’s share of that increase.
Rather than viewing this new capacity as a valuable asset, though, many grid operators fear the intermittency of renewable resources. Rather than finding a way to integrate them, they have tried to limit the amount of renewable energy that can connect to their networks, and they routinely curtail the output of these sources.
In late 2018, Horizon Power hired my company, PXiSE Energy Solutions, to better integrate renewables across its vast territory. Many electric utilities around the world are grappling with this same challenge. The chief difference between what others are doing and our approach is that we use a special sensor, called a phasor measurement unit, or PMU. This sensor, first developed in the 1980s, measures voltage and current at various points on the grid and then computes the magnitude and phase of the signals, with each digitized measurement receiving a time stamp accurate to within 1 microsecond of true time. Such measurements reveal moment-by-moment changes in the status of the network.
For many years, utilities have deployed PMUs on their transmission systems, but they haven’t fully exploited the sensors’ real-time data. The PXiSE team developed machine-learning algorithms so that our high-speed controller can act quickly and autonomously to changes in generation and consumption—and also predict likely future conditions on the network. This intelligent system mitigates any grid disturbances while continuously balancing solar generation, battery power, and other available energy resources, making the grid more efficient and reliable. What’s more, our system can be integrated into virtually any type of power grid, regardless of its size, age, or mix of generation and loads. Here’s how it works.
The basic thing a PMU measures is called a phasor. Engineering great Charles Proteus Steinmetz coined this term back in 1893 and described how to calculate it based on the phase and amplitude of an alternating-current waveform. Nearly a century later, Arun G. Phadke and his team at Virginia Tech developed the phasor measurement unit and showed that the PMU could directly measure the magnitude and phase angle of AC sine waves at specific points on the grid.
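The core computation is compact enough to sketch: extract the magnitude and phase of the 60 Hz fundamental from one cycle of samples using a single-bin discrete Fourier transform. This is an idealized, noise-free illustration of the math, not production PMU firmware; real units must also meet the timing and filtering requirements of the synchrophasor standards.

```python
import math

def estimate_phasor(samples):
    """Single-bin DFT over one cycle of an AC waveform x[k] = A*cos(wk + phi).
    Returns (rms_magnitude, phase_radians) of the fundamental."""
    n = len(samples)
    re = sum(x * math.cos(2 * math.pi * k / n) for k, x in enumerate(samples))
    im = -sum(x * math.sin(2 * math.pi * k / n) for k, x in enumerate(samples))
    amp = 2 * math.hypot(re, im) / n        # peak amplitude A
    phase = math.atan2(im, re)              # phase angle phi
    return amp / math.sqrt(2), phase        # report RMS, as a PMU would

# Synthetic 120 V RMS waveform, 32 samples per cycle, 30-degree phase shift.
n, peak, phi = 32, 120 * math.sqrt(2), math.pi / 6
wave = [peak * math.cos(2 * math.pi * k / n + phi) for k in range(n)]
rms, phase = estimate_phasor(wave)
print(f"{rms:.1f} V at {math.degrees(phase):.1f} degrees")  # 120.0 V at 30.0 degrees
```

GPS time-stamping is what turns these per-site phasors into synchrophasors that can be compared across a continent-scale grid.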
PMUs were commercialized in 1992, at which point utilities began deploying the sensors to help identify outages and other grid “events.” Today, there are tens of thousands of PMUs installed at major substations throughout the United States. (The accuracy of PMU data is dictated by IEEE Standard C37.118 and the newer IEEE/IEC Standard 60255-118-1-2018, which call for more accurate and consistent power measurements than are typically required for other sensors.)
But these devices are valuable for much more than simply tracking blackouts. Today’s high-speed PMUs provide data 60 times per second, which is more than 200 times as fast as the sampling rate of the conventional SCADA (supervisory control and data acquisition) systems used on most electric grids. What’s more, PMU data are time-stamped with great precision using the Global Positioning System (GPS), which synchronizes all measurements worldwide. For that reason, the data are sometimes called synchrophasors.
With that kind of time resolution, PMUs can provide an extremely accurate and detailed snapshot of power quality, indicating how consistently the voltage and current remain within their specified ranges. Wide fluctuations can lead to inefficiency and wasted electricity. If the fluctuations grow too large, they can damage equipment or even trigger a brownout or blackout. The time-stamped data are particularly helpful when your resources are scattered across a wide area, as in Horizon Power’s grid in Western Australia.
The main reason that PMUs haven’t been fully exploited is that most utilities don’t take advantage of modern data communications and advanced control technologies. Even when they have PMUs and data networks in place, they haven’t tied them together to help coordinate their solar, wind, and other energy resources. SCADA systems are designed to send their information every 2 or 3 seconds to a central operations center where human operators are watching and can take action if something goes wrong. But the power electronics used in inverter-based systems like solar PV, wind turbines, and energy storage units operate on the millisecond level, and they require much higher-speed control to maximize their benefits.
I’ve spent my entire 33-year career as a power engineer working for small and large power companies in California, so I’m deeply familiar with the issues that utilities have faced as more distributed-energy resources have come on line. I long had a hunch that PMUs and synchrophasors could play a larger role in grid modernization. In 2015, when I was vice president of infrastructure and technology at Sempra Energy, I began working with Charles Wells, an expert on power system controls who was then at OSIsoft, to figure out whether that hunch was valid and what a synchrophasor-based control system might look like.
Meeting every Friday afternoon for a few hours, we spent close to a year doing calculations and running simulations that confirmed PMU data could be used to both measure and control grid conditions in real time. We then turned to Raymond de Callafon of the University of California, San Diego, and other experts at OSIsoft to figure out how to create a robust commercial system. Our goal was to build a software-based controller that would make decisions autonomously, enabling a transition away from the hands-on adjustments required by traditional SCADA systems.
The controller we envisioned would need to operate with great accuracy and at high speeds. Our system would also need artificial intelligence to be able to forecast grid conditions hourly and daily. Because our platform would be largely software based, it would require a minimum of new hardware, and the integration into an existing network would be quick and inexpensive. In essence, we sought to create a new operating system for the grid.
By early 2017, our system was ready for its first real-world test, which took our team of six engineers to a 21-MW wind farm and battery storage facility on an island in Hawaii. In recent years, Hawaii has dramatically increased solar and wind generation, but this particular island’s grid had reached a limit to integrating more renewable energy, due to the variability of the wind resources. It was less complicated for the grid operator to rely on fossil-fueled generators than to mitigate the intermittent power from the wind farm, which was periodically shut off, or curtailed, at night to ensure grid stability.
We were on a startup’s budget, so we used off-the-shelf equipment, including a standard PC running a digital simulator model that de Callafon created. We rented a house near the wind farm, where we spent several days testing the technology. It was a fantastic moment when we realized we could, in fact, control energy generation at the wind farm from the kitchen table. (We took all due cybersecurity precautions, of course.) With our control system in place, the inherent power fluctuations of the wind farm were smoothed out by real-time dispatch from the battery storage facility. This allowed the wind farm’s generation to be far more reliable.
Soon after completing the Hawaii project, we installed a controller running our PMU-based algorithm to create a microgrid in Sempra Energy’s high-rise office building in downtown San Diego. The controller was designed to optimize the use of the building’s electric-vehicle chargers, solar panels, and storage batteries. Meanwhile, it would also reduce the reliance on grid power in the late afternoon and early evening, when demand and prices typically spike. At such times, the optimization algorithm automatically determines the proper resource mix and schedule, enabling one floor to be served solely by local renewable sources and stored energy rather than the main grid. This shift reduced the building’s utility bill by 20 percent, which we’ve since found to be typical for similar microgrids.
More recently, our team traveled to Jeju Island, in South Korea. Prior to the installation of our grid controller, the island’s electricity primarily came from the mainland via high-voltage underwater cables. The system was designed to interconnect two local battery-storage systems—with capacities of 224 kilowatt-hours and 776 kWh—with a 500-kW solar farm and 600 kW of wind power. To meet the island’s renewable energy goals and save on electricity costs, wind and solar power are stored in the batteries during the day, when power demand and grid prices are low. The stored electricity is then used at night, when power demand and grid prices are high.
The on-site installation of our controller took just four days. In addition to operating the two battery-storage systems throughout the day and night, PXiSE’s controller uses the batteries to provide what’s known as “ramp control” for the wind turbines. Most grids can tolerate only a gradual change in power flow, typically no more than 1 megawatt per minute, so ramp control acts like a shock absorber. It smooths out the turbines’ power output when the wind suddenly changes, which would otherwise cause reliability problems on the grid.
Every 16.7 milliseconds (one cycle of the 60-hertz wave), the controller looks for any change in wind power output. If the wind stops blowing, the system instantly compensates by discharging the batteries to the grid, gradually decreasing the amount of discharged battery power to zero, as power from the main grid gradually increases to match demand. When the wind starts blowing again, the batteries immediately respond by absorbing a portion of the generated wind power, gradually reducing the amount being stored to zero as the controller ramps up the amount feeding into the grid, thereby meeting real-time energy demand.
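The per-cycle loop just described can be reduced to a small sketch: each 60 Hz cycle, limit how fast the combined wind-plus-battery output is allowed to move toward the actual wind output, and let the battery cover the difference. The 1 MW/min ramp limit comes from the article; the control structure and numbers below are an illustrative simplification, not PXiSE's algorithm.

```python
CYCLE_S = 1 / 60                       # one 60 Hz cycle, ~16.7 ms
MW_PER_SECOND = 1.0 / 60               # grid ramp limit: 1 MW per minute
MAX_STEP_MW = MW_PER_SECOND * CYCLE_S  # allowed change per cycle

def ramp_control(net_output_mw, wind_mw):
    """One control cycle. Returns (new_net_output, battery_mw);
    battery_mw > 0 means the battery is discharging to the grid."""
    delta = wind_mw - net_output_mw
    step = max(-MAX_STEP_MW, min(MAX_STEP_MW, delta))  # clamp the ramp
    new_net = net_output_mw + step
    return new_net, new_net - wind_mw

# Wind suddenly drops from 5 MW to 3 MW: the battery instantly covers
# ~2 MW, then tapers off a little each cycle as the net output ramps down.
net, history = 5.0, []
for _ in range(3):
    net, battery = ramp_control(net, wind_mw=3.0)
    history.append(battery)
print([round(b, 6) for b in history])
```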
The control system also creates its own forecasts of load and generation based on weather forecasts and AI processing of historical load data. The forecasting models are updated in real time to compute the optimal settings for power and energy storage in the system.
At each of our other projects—in Chile, Guam, Hong Kong, Japan, Mexico, and a handful of U.S. sites—the system we provide looks a little different, depending on the customer’s mix of energy resources and what their priorities are. We designed the PXiSE system so it can be deployed in any part of the electric grid, from a rooftop solar array all the way to a centralized power plant connected to a large network. Our customers appreciate that they don’t have to install additional complex equipment beyond the controller, that integrating our technology is fast, and that they see positive results right away.
The work we’re doing is just one part of a far bigger goal: to make the grid smarter and cleaner. This is the great challenge of our time. The power industry isn’t known for being on the cutting edge of technology adoption, but the transition away from a century-old, hardware-dependent analog grid toward a modern software-based digital grid is not just desirable but essential. Our team of enthusiastic entrepreneurs and engineers joins thousands of others around the world who are committed to making a difference in the future of our industry and our planet.
This article appears in the July 2020 print issue as “The Software-Defined Power Grid.”
If you were trying to identify the handful of companies that created a business out of the emergence of electrical engineering in the Victorian era, certainly the list would have to include General Electric. GE’s lighting business was established by Thomas Edison in 1892, and it was just sold to engineering wunderkind Robert Madonna and his team at Savant Systems, in Hyannis, Mass. Savant builds high-end home-networking systems for the well-heeled.
By buying GE Lighting—and licensing the use of the GE Lighting and GE Smart Home names—Savant hopes to become a force in the emerging market for home networking devices, a business that has been a battleground for some of the biggest names in technology: Amazon, Apple, and Google.
I spoke to Savant president J.C. Murphy about the acquisition and Savant’s plans for the future.
Can you tell me a little about where Savant came from?
Bob [Madonna] tells a funny story about buying a place in [New York City’s] SoHo and getting his first automation system.
And so he bought this system and nothing really worked and it took weeks and weeks of programming to get simple functions up and working and when he would switch out a cable box or he’d switch out a TV, the whole system would come down and it was back to the drawing board.
That was really the beginning. He said, “I can definitely build a company and do this better.” And he took a lot of the thinking that we used previously in the telecommunications space in terms of reliability and redundancy, and fault tolerance, and we’ve imported that into what we build at Savant.
Basically, it’s a combination of hardware and software that ties the way you live in the home together through an elegant user experience. A simple app that’s on a touch pad or on your Android or iOS device, it basically allows the different things in your home to communicate seamlessly together. So your heating, and your lighting systems, and your audio and your video systems, and your security systems are all tied together in this unique app.
Our vision is to be able to expand that same quality look and feel to a broader marketplace. And obviously, we want to maintain the integrity of the Savant brand in this high-end custom home-automation area, but we want to import our technology into the iconic GE Lighting, and soon, the GE Smart Home brand, to bring it to more and more people at a better price point.
So why are lightbulbs central to this vision?
When you look at what you have in GE Lighting, it’s this unbelievable ability to build and manufacture and engineer quality products at scale.
There are some great individual point solutions out there, but there’s no company that has really introduced this at scale across the entire system. Our vision is to take the Savant DNA and technology and really build out an ecosystem of products and solutions that can be deployed very broadly.
This article appears in the July 2020 print issue as “Google Has Nest. Amazon Has Echo and Alexa. Savant Systems Has—GE Lighting!”
As the COVID-19 outbreak swept through Manhattan and the surrounding New York City boroughs earlier this year, electricity usage dropped as businesses shuttered and people hunkered down in their homes. Those changes in human behavior became visible from space as the nighttime lights of the city that never sleeps dimmed by 40 percent between February and April.
That striking visualization of the COVID-19 impact on U.S. electricity consumption came from NASA’s “Black Marble” satellite data. U.S. and Chinese researchers are currently using such data sources in what they describe as an unprecedented effort to study how electricity consumption across the United States has been changing in response to the pandemic. One early finding suggests that mobility in the retail sector—defined as daily visits to retail establishments—is an especially significant factor in the reduction of electricity consumption seen across all major U.S. regional markets.
“I was previously not aware that there is such a strong correlation between the mobility in the retail sector and the public health data on the electricity consumption,” says Le Xie, professor in electrical and computer engineering and assistant director of energy digitization at the Texas A&M Energy Institute. “So that is a key finding.”
Xie and his colleagues from Texas A&M, MIT, and Tsinghua University in Beijing, China, are publicly sharing their Coronavirus Disease-Electricity Market Data Aggregation (COVID-EMDA) project and the software code they have used in their analyses in an online GitHub repository. They first uploaded a preprint paper describing their initial analyses to arXiv on 11 May 2020.
Most previous studies that focused on public health and electricity consumption tried to examine whether changes in electricity usage could provide an early warning sign of health issues. But when the U.S. and Chinese researchers first put their heads together on studying COVID-19 impacts, they did not find other prior studies that had examined how a pandemic can affect electricity consumption.
Beyond using the NASA satellite imagery of the nighttime lights, the COVID-EMDA project also taps additional sources of data about the major U.S. electricity markets from regional transmission organizations, weather patterns, COVID-19 cases, and the anonymized GPS locations of cellphone users.
“Before when people study electricity, they look at data on the electricity domain, perhaps the weather, maybe the economy, but you would have never thought about things like your cell phone data or mobility data or the public health data from COVID cases,” Xie says. “These are traditionally totally unrelated data sets, but in these very special circumstances they all suddenly became very relevant.”
The unique compilation of different data sources has already helped the researchers spot some interesting patterns. The most notable finding suggests that the largest portion of the drop in electricity consumption likely comes from the drop in people’s daily visits to retail establishments as individuals began practicing social distancing and home isolation early in the pandemic. By comparison, the number of new confirmed COVID-19 cases does not seem to have a strong direct influence on changes in electricity consumption.
The Northeastern region of the U.S. electricity sector that includes New York City seems to be experiencing the most volatile changes so far during the pandemic. Xie and his colleagues hypothesize that larger cities with higher population density and commercial activity would likely see bigger COVID-19 impacts on their electricity consumption. But they plan to continue monitoring electricity consumption changes in all the major regions as new COVID-19 hotspots have emerged outside the New York City area.
The biggest limitation of such an analysis comes from the lack of available higher-resolution data on electricity consumption. Each of the major regional transmission organizations publishes power load and price numbers daily for their electricity markets, but this reflects a fairly large geographic area that often covers multiple states.
“For example, if we could know exactly how much electricity is used in each of the commercial, industrial, and residential categories in a city, we could have a much clearer picture of what is going on,” Xie says.
That could change in the near future. Some Texas utility companies have already approached the COVID-EMDA group about possibly sharing such higher-resolution data on electricity consumption for future analyses. The researchers have also heard from economists curious about analyzing and perhaps predicting near-term economic activities based on electricity consumption changes during the pandemic.
One of the next big steps is to “develop a predictive model with high confidence to estimate the impact to electricity consumption due to social-distancing policies,” Xie says. “This could potentially help the public policy people and [regional transmission organizations] to prepare for similar situations in the future.”
The COVID-19 pandemic has affected virtually every facet of life, including scientific research carried out at companies and universities around the world.
As biomedical researchers scramble to find a treatment or vaccine, other scientists and engineers try to continue their own work in the midst of a pandemic. In some cases, this means writing a paper or grant from home rather than at the office. But in many others, the disruption is more pronounced.
Most academic, government, and corporate labs in the United States have scaled back operations or closed temporarily to comply with stay-at-home orders. The impacts of these changes can vary greatly from one field to the next, depending on the nature of the work.
John Verboncoeur, a director of IEEE and associate dean for research and graduate studies at Michigan State University, says, “Our surveys indicate that theoretical research teams—my own included—are operating at around 80 to 90 percent efficiency, with the main challenge being the ability to explain complicated concepts without our traditional ‘waving hands about’ and interactive work at the white- or blackboard.”
For experimentalists, the pandemic is more disruptive, although some experiments may be completed from home. “The early focus [for experimentalists] was on catching up on the literature, completing manuscripts, analyzing existing data, and so on, which led to a productivity of 50 percent or so,” says Verboncoeur. “However, much of that is coming to completion, and we are seeing productivity drop as the activities narrow down to designing upcoming experiments and protocols.”
Engineers in many fields are looking for new ways to remain innovative and productive. Take, for example, those in the green energy sector. While some climate and energy research may continue from home, other projects are more difficult or impossible to complete remotely.
Sally Benson’s lab at Stanford University does a mix of theory, modeling, and experiments to support the transition to a low-carbon future, including studies related to carbon capture and storage. While the theory and modeling aspects of this research are easy enough to continue, the experimental work involves analyzing rock samples at the extreme temperatures and pressures found in underground reservoirs—tests that aren’t feasible to carry out at home.
Despite this limitation, Benson’s group is still finding ways to continue with some aspects of their experimental work. “The good news is that as experimentalists, we tend to collect way, way more data than we can assimilate,” she says. “We generate these immensely rich data sets, where there’s plenty more we can mine out of those data sets.”
The group is now returning to its old data sets and reanalyzing the data to answer new, unexplored questions, in part by applying machine learning. By doing so, the researchers have uncovered previously unknown ways that carbon dioxide interacts with rock. Benson acknowledges, however, that this reuse of old experimental data can’t go on forever.
Further up the coast, at the University of Washington, Brian Johnson is leading two projects funded by the U.S. Department of Energy. Both are designed to facilitate a major shift from electromechanical power grids to grids based on power-electronics systems that will better support renewable energy.
One project involves the design of controllers for these new power grids. The effort launched in April just as the pandemic was taking hold in the United States, but the team was able to get the research started by focusing on pen-and-paper designs and software simulations.
However, the pandemic may prove more problematic for Johnson’s second endeavor. It involves the design of a new breed of high-efficiency power electronics that converts DC power from solar cells into grid-compatible AC power. “For that project, we have a heavy set of milestones coming up in the summer months to actually demonstrate the hardware,” says Johnson. “If we can’t do [tests] in the summer, we’re going to have to start coming up with some contingency plans. Since these experiments necessitate a power lab with specialized equipment, they cannot be done in our homes.”
While the pandemic affects each research project to varying degrees, its overall impact on the broader shift toward green tech—and on the state of engineering research more generally—is still unclear.
Benson says she’s slightly concerned that the pandemic may cause some researchers to shift their focus from climate change to medicine. “To me, the COVID-19 pandemic is sort of a multiyear challenge and a short-term nightmare,” she says. “If we’re not careful, climate change will be a decadal-scale nightmare. So this work needs all of the attention it can get.”
Johnson is less concerned that the pandemic will interfere with the advancement of green tech, saying: “I think that energy is such an integral part of modern life itself and infrastructure that I don’t perceive [the COVID-19 pandemic] fundamentally altering the fact that we all need energy, and cheap energy.”
This article appears in the June 2020 print issue as “COVID-19 Disrupts Research Plans.”
A new study about solar-induced power outages in the U.S. electric grid finds that a few key regions—a portion of the American Midwest and the Eastern Seaboard—appear to be more vulnerable than others.
The good news is that a few preventative measures could drastically reduce the damage done when a solar storm hits Earth. Those include stockpiling electrical transformers in national strategic reserves.
Jeffrey Love is a research geophysicist at the U.S. Geological Survey (USGS) in Golden, Colorado, and co-author of the new USGS solar geoelectric hazard study. He’s one of many voices in the worldwide geophysical community warning that geoelectric “perfect storms” will happen—it’s not a question of if, but when. Such storms can last between one and three days.
Love explains that solar flares and coronal mass ejections that travel through space can slam into Earth’s magnetosphere and generate powerful electric and magnetic fields. These magnetic storms can occasionally be intense enough to interfere with the operation of high-voltage electricity lines.
Depending on the geology of a given region, the currents a geomagnetic storm induces in the power lines can destabilize the power grid’s operation and cause damage to (or even destroy) transformers.
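The mechanism can be sketched with back-of-the-envelope arithmetic: the storm-time geoelectric field, integrated along a transmission line, acts like a DC voltage source that drives current through the line and transformer windings. The field strength, line length, and resistance below are assumed values for illustration, not figures from the USGS study.

```python
# Back-of-the-envelope model of a geomagnetically induced current (GIC).
# All numbers are illustrative assumptions, not values from the USGS study.
e_field_v_per_km    = 1.0    # storm-time geoelectric field, in V/km
line_length_km      = 200.0  # high-voltage line length, in km
loop_resistance_ohm = 3.0    # DC resistance of line plus transformer windings

# The geoelectric field integrated along the line acts like a DC source...
induced_voltage_v = e_field_v_per_km * line_length_km

# ...and drives a quasi-DC current through the grounded transformer windings,
# which can saturate transformer cores and cause overheating.
gic_amps = induced_voltage_v / loop_resistance_ohm

print(f"induced voltage: {induced_voltage_v:.0f} V, quasi-DC current: {gic_amps:.0f} A")
```

Even modest field strengths over long lines can push tens of amps of quasi-DC current through equipment designed only for AC, which is why regional geology and line length both matter.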
Earlier this month, Indian Prime Minister Narendra Modi asked the entire country to simultaneously switch off the lights. The gesture was meant to be a show of solidarity during the coronavirus pandemic, but Modi’s request left power engineers scrambling to prevent a nationwide blackout.
In a televised address on the afternoon of Friday, 3 April, Modi called on Indians to switch off their lights for nine minutes beginning at 9 p.m. on Sunday, 5 April. Within hours, experts raised concerns that the massive drop in electricity demand, followed by a sudden surge nine minutes later, could debilitate the grid and trigger widespread blackouts.
Despite the warnings, the government stood by its request. So India’s power sector had just two days to come up with a strategy to protect the grid. “It was a very challenging situation,” says Sushil Kumar Soonee, a former CEO of grid operator Power System Operation Corporation (POSOCO) who still advises the organization and was involved in planning the response. (India’s Ministry of Power didn’t respond to a request for comment.)
As the sun sets across the Netherlands, streetlights twinkle on, town by town. But it’s not in lockstep: city managers can set their lights to respond to local sunset time, follow a schedule of their own, or control individual lights for local events. That’s because in 2017 those cities adopted a smart grid software platform built by Dutch public utility Alliander that may be the first open smart grid platform in everyday use.
Before, these cities could only operate their lights collectively because they used ripple control technology, a widespread control method that sends a pulse over the grid. While smarter control of streetlights may be handy for cities and save them some energy and cash, Alliander has also re-used the platform to manage a growing number of additional services and, earlier this month, passed control of the platform to LF Energy, part of the Linux Foundation.
“Utilities want to get rid of the black box,” says Shuli Goodman, executive director of LF Energy. Alliander started developing its own black box in 2013 but took it open source in 2015 thanks to lobbying by Sander Jansen, a data architect there.
“What I saw was the big [grid software] vendors had their own roadmap, their own product managers, their own vision and it doesn’t always align with what clients want,” Jansen recalls. Developing their own solution gave Alliander more options and prevented it from being stuck with any one provider’s service. Now that it is open source, it also allows third parties to develop their own uses for the platform.
So far, most of the outside interest has been in smart meters, Jansen says. Another project involves interfacing with municipal charging stations for electric cars. Other projects focus on more traditional grid management concerns such as distribution automation.
The electricity grid’s relationship to open source actually dates back to 1997, if not before, when some North American utilities and research organizations used open source software to simulate local grid management scenarios. Academics also developed their own open source research tools, such as the 2005 grid analysis tool PSAT, developed by Federico Milano of University College Dublin, in Ireland.
But there wasn’t much collaboration between academia and utilities, Milano says: “The [electric utility] community is very closed and not willing to help at all except for some, few individuals. The problem is [the people who use] open source tools are PhD students… Then, when they are hired by some company, they are forced to use some commercial software tool and do not have time to spare to contribute to the community with their code.”
Today, most major transmission and system operators still use commercial software, often from companies such as Siemens and ABB, with custom modifications. They also focus heavily on security, to ensure reliable electricity for hospitals and other critical infrastructure.
But changes in electricity supply may be favoring smarter grids and a more software-focused approach. As energy grids take on more intermittent sources of power, such as solar and wind, it can get harder for ripple control technology to send a reliable signal across the whole grid, Jansen says.
Other changes may also favor more openness, Milano says: “If power system ‘granularity’ is going to increase (e.g., grid-connected microgrids, smart building, aggregators, etc.), then there will be many small companies that will get into the power business from scratch and some of them might be attracted by the ‘open source software’ model.”
And yet the lithium-ion battery is far from perfect. It’s still too pricey for applications requiring long-term storage, and it has a tendency to catch fire. Many forms of the battery rely on increasingly hard-to-procure materials, like cobalt and nickel. Among battery experts, the consensus is that someday something better will have to come along.
That something may well be the lithium-ion battery’s immediate predecessor: the lithium-metal battery. It was developed in the 1970s by M. Stanley Whittingham, then a chemist at Exxon. Metallic lithium is attractive as a battery material because it easily sheds electrons and positively charged lithium ions. But Whittingham’s design proved too tricky to commercialize: Lithium is highly reactive, and the titanium disulfide he used for the cathode was expensive. Researchers later replaced the metallic-lithium anode with graphite, into which lithium ions intercalate, taming the anode’s reactivity, and they swapped in cheaper materials for the cathode. And so the lithium-ion battery was born. Batteries with lithium-metal anodes, meanwhile, seemed destined to remain an interesting side note on the way to the lithium-ion battery’s dominance.
But XNRGI, based in Bothell, Wash., aims to bring lithium-metal batteries into the mainstream. Its R&D team managed to tame the reactivity of metallic lithium by depositing it into a substrate of silicon that’s been coated with thin films and etched with millions of tiny cells. The 3D substrate greatly increases the anode’s surface area compared with a traditional lithium-ion’s two-dimensional anode. When you factor in using metallic lithium instead of a compound, the XNRGI anode has up to 10 times the capacity of a traditional intercalated graphite-lithium anode, says Chris D’Couto, XNRGI’s CEO.
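The roughly tenfold claim is consistent with the widely cited theoretical gravimetric capacities of the two anode chemistries; a quick sanity check, using textbook literature values rather than figures supplied by XNRGI:

```python
# Sanity check of the "up to 10x" anode-capacity claim using textbook
# theoretical gravimetric capacities (literature values, not XNRGI figures).
GRAPHITE_MAH_PER_G = 372    # fully lithiated graphite, LiC6
LI_METAL_MAH_PER_G = 3860   # metallic lithium

ratio = LI_METAL_MAH_PER_G / GRAPHITE_MAH_PER_G
print(f"theoretical capacity ratio: {ratio:.1f}x")  # -> 10.4x
```

Real cells fall short of theoretical numbers, of course, which is why the company frames its figure as "up to" 10 times.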
Of course, new battery technologies are announced all the time, and tech news outlets, including IEEE Spectrum, are more than happy to tout their promising capabilities. But relatively few batteries that appear promising or even revolutionary in the lab actually make the leap to the marketplace.
Commercializing any new battery is a complicated prospect, notes Venkat Srinivasan, an energy-storage expert at Argonne National Laboratory, near Chicago. “It depends on how many metrics you’re trying to satisfy,” he says. For an electric car, the ideal battery offers a driving range of several hundred kilometers, charging times measured in minutes, a wide range of operating temperatures, a 10-year life cycle, and safety in collisions. And of course, low cost.
“The more metrics you have, the more difficult it will be for a new battery technology to satisfy them all,” Srinivasan says. “So you need to compromise—maybe the battery will last 10 years, but the driving range will be limited, and it won’t charge that quickly.” Different applications will have different metrics, he adds, and “industry only wants to look at batteries that are at least as good as what’s already available.”
D’Couto acknowledges that commercializing XNRGI’s batteries has not been easy, but he says several factors gave the company a leg up. Rather than inventing a new manufacturing method, it borrowed some of the same tried-and-true techniques that chipmakers use to make integrated circuits. These include the etching of the 20-by-20-micrometer cavities into the silicon and application of the thin films. Hence the battery’s name: the PowerChip.
Each of those microscopic cells can be considered a microbattery, D’Couto says. Unlike the catastrophic failure that occurs when a lithium-ion battery is punctured, a failure in one cell of a PowerChip won’t propagate to the surrounding cells. The cells also seem to discourage the formation of dendrites, threadlike growths that can cause the battery to fail.
Some flavors of lithium-ion batteries, such as those made by Enovix, Nexeon, Sila Nanotechnologies, and Sion Power, also achieve better performance by replacing some or all of the graphite in the anode with silicon. [See, for example, “To Boost Lithium-Ion Battery Capacity by up to 70%, Add Silicon.”] In those batteries’ anodes, the lithium is intercalated with the silicon, bonding to form Li15Si4.
In XNRGI’s PowerChip, the silicon substrate has a conductive coating that acts as a current collector and a diffusion barrier that prevents the silicon from interacting with the lithium. D’Couto says that the lithium-metal anode’s capacity is about five times that of silicon-intercalated anodes.
For most of its existence, XNRGI was known as Neah Power Systems, and it focused on developing fuel cells. The fuel cells used a novel porous silicon substrate. But the fuel-cell market didn’t take off, and so in 2016, the company got a Department of Energy grant to use the same concept to build a lithium-metal battery.
XNRGI continues to experiment with cathode designs that can keep up with its supercharged anodes. For now, the company is using cathodes made from lithium cobalt oxide and nickel manganese cobalt, which could yield a battery with twice the capacity of traditional lithium-ions. It’s also making sample batteries using cathodes supplied by customers. D’Couto says alternative materials like sulfur could boost the cathode performance even more. “Having a high-performing anode without a corresponding high-performing cathode doesn’t maximize the battery’s full potential,” he says.
“People like me dream of a day where we’ve completely solved all the battery problems,” says Argonne’s Srinivasan. “I want everybody to drive an EV, everybody to have battery storage in their home. I want aviation to be electrified,” he says. “Meanwhile, my cellphone battery is dying.” In batteries as in life, there will always be room for improvement.
Electric utilities routinely adjust power supplies to match the peaks and troughs in demand. But more utilities are working to tweak customers’ habits, too, so that we don’t all gobble energy at the same time and strain the grid.
Measures like “time-of-use” tariffs are proliferating in the United States and globally, with utilities charging higher electricity rates during peak demand periods. In places like sunny California, the idea is to shift more energy usage to the afternoon—when solar power is abundant and cheap—and away from evenings, when utilities rely more on fossil fuel-fired power plants.
Yet such initiatives may have unintended consequences. A new study in the journal Nature Energy found that one utility pilot hit some participants harder than others. Vulnerable groups, including elderly people and those with disabilities, saw disproportionately negative financial and health impacts as a result of paying time-of-use rates.
“You have this potentially really useful tool, but you need to make sure you’re not unintentionally making a worse situation for parts of the population,” said Lee White, the study’s lead author and a research fellow at Australian National University in Canberra.
About 14 percent of U.S. utilities offer residential time-of-use rates, according to the consulting firm Brattle Group. Rate designs can vary from place to place, as do climate conditions and consumer habits, so the study’s findings might not hold true everywhere. Still, the research highlights concerns worth heeding as utilities and regulators design such programs.
“We need to be very careful about how we implement these rates,” White said.
White and Nicole Sintov, an assistant professor at Ohio State University, analyzed data from 7,500 households that voluntarily joined a utility’s 2016 pilot in the southwestern United States. (The company asked to go unnamed.)
Participants were randomly assigned to a control group, or one of two time-of-use rates. The first group paid an extra 0.3451 cents per kilowatt-hour from 2 to 8 p.m. on weekdays. The second group saw tariffs of 0.5326 cents per kilowatt-hour from 5 to 8 p.m. on weekdays.
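For a sense of scale, the surcharges above can be turned into a seasonal bill impact with simple arithmetic. The surcharge values come from the figures reported here; the daily peak-window usage and weekday count are assumptions for illustration, not the pilot’s actual data.

```python
# Rough seasonal bill impact of the pilot's time-of-use surcharges.
# Surcharge values are those reported for the pilot; the usage profile
# and weekday count are illustrative assumptions.
surcharge_2_to_8 = 0.3451 / 100  # $/kWh adder, 2-8 p.m. weekdays (group 1)
surcharge_5_to_8 = 0.5326 / 100  # $/kWh adder, 5-8 p.m. weekdays (group 2)

peak_kwh_group1 = 12.0  # assumed kWh used per weekday in the 2-8 p.m. window
peak_kwh_group2 = 6.0   # assumed kWh used per weekday in the 5-8 p.m. window
weekdays = 65           # roughly the July-September study period

extra_group1 = surcharge_2_to_8 * peak_kwh_group1 * weekdays
extra_group2 = surcharge_5_to_8 * peak_kwh_group2 * weekdays

print(f"group 1 extra cost: ${extra_group1:.2f} for the season")
print(f"group 2 extra cost: ${extra_group2:.2f} for the season")
```

Swapping in different usage assumptions shows how the same adders scale linearly with peak-window consumption; households that cannot shift air-conditioning out of the peak window pay the full surcharge on every kilowatt-hour.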
Researchers studied results from July to September, a sweltering season. All participants paying time-of-use rates saw their bills increase. But households with elderly members or people with disabilities saw even greater bill increases relative to the rest. Elderly folks reported turning off their air-conditioning less than other groups; in general, older adults are especially vulnerable to heat-related illnesses.
Participants with disabilities were more likely to seek medical attention for heat-related reasons when assigned to one of the time-of-use rates—as were customers identified as Hispanic. But researchers found that people within the disability, Hispanic, or low-income groups were more likely to report adverse health outcomes regardless of rates, even in the control group.
White said a “somewhat encouraging” finding is that low-income households and Hispanic participants saw lower bill increases compared to other groups. Yet any extra costs “could still cause additional tensions in the household budget,” she added. According to the U.S. Census, low-income households on average spend 8.2 percent of their income on energy bills—about three times as much as higher-earning households.
The study highlights gaps in “flexibility capital” among electricity users, said Michael Fell, a research associate at the UCL Energy Institute. For example, wealthier households might avoid higher rates by installing energy storage devices or smart appliances with sensors and timers. Healthier individuals can cope with using less AC or heating. But many people can’t spare the expense to their wallets or wellbeing.
“There is already recognition amongst regulators that the transition to a flexible future may come with risks to those in vulnerable situations,” Fell wrote in Nature Energy. “White and Sintov’s study lends nuance to this concern.”
Ryan Hledik, a principal at Brattle Group, said residential time-of-use rates are gaining momentum as smart meters become the norm in households nationwide. While many utilities are now using tariffs to integrate more wind and solar power into the electricity mix, in coming years, such programs could help keep electric-vehicle owners from charging batteries all at once, overtaxing local infrastructure.
“That’s definitely something utilities are going to need to confront, and time-of-use rates are one way to deal with that,” Hledik said.
Nine years before Paradise, California burned to the ground, a similar tragedy unfolded in Australia. On a searing, windy day in 2009 that came to be known as “Black Saturday,” hundreds of fires erupted in the state of Victoria. One of the worst razed the bucolic mountain town of Marysville, northeast of Melbourne. And just as sparks from a Pacific Gas & Electric (PG&E) power line launched the Camp Fire that destroyed Paradise, Marysville’s undoing began with high-voltage current.
In all, the Black Saturday fires killed 173 people and caused an estimated AU $4 billion (about US $2.75 billion) in damage. Fires started by power lines caused 159 of the deaths.
California’s wildfires have “brought it all back,” says Tony Marxsen, an electrical engineering professor at Monash University in Australia. His parents honeymooned in Marysville. “It was a lovely little town nestled up in the hills. To see it destroyed was just wrenching,” he recalls.
Marxsen says faded memories increased Marysville’s death toll. “It had been 26 years since Australia’s last major suite of deadly fires,” he says. “People had come to believe that they could defend their house against a firestorm. Some stayed, and they all died.”
While they go by different names, California’s wildfires and Victoria’s bushfires are driven by the same combination of electrical networks and extreme weather, stoked by climate change. How Victoria responded after the Black Saturday fires—work that continues today—differs significantly from what is happening in California today, especially in PG&E’s territory.
California utility Pacific Gas & Electric (PG&E) delivered a bitter pill last month when it said that deliberate blackouts to keep its lines from sparking wildfires could be the new normal for millions of customers for the next decade—a dangerous disruption to power-dependent communities that California governor Gavin Newsom says “no state in the 21st century should experience.” Grid experts say Newsom is right, because technology available today can slash the risk of grid-induced fires, reducing or eliminating the need for PG&E’s “public safety power shutoffs.”
Equipment to slash grid-related fire risk isn’t cheap or problem-free, but it could be preferable to the most commonly advanced solutions: putting lines underground or equipping California with thousands of “microgrids” to reduce reliance on big lines. Widespread undergrounding and microgrids will be costly. And the latter could create inequalities and weaken investment in the big grids as communities with means isolate themselves from power shutoffs with solar systems and batteries.
Some of the most innovative fire-beating grid technologies are the products of an R&D program funded by the state of Victoria in Australia, prompted by deadly grid-sparked bushfires there 10 years ago. Early this year, utilities in Victoria began a massive rollout of one solution: power diverters that are expected to protect all of the substations serving the state’s high fire risk areas by 2024.
Researchers in Australia have developed a control algorithm that allows electric boats equipped with solar panels to sell power to a microgrid.