Tag Archives: energy

We Don’t Need a Jetsons Future, Just a Sustainable One

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/energy/environment/we-dont-need-a-jetsons-future-just-a-sustainable-one

For decades, our vision of the future has been stuck in a 1960s-era dream of science fiction embodied by The Jetsons and space travel. But that isn’t what we need right now. In fact, what if our vision of that particular technologically advanced future is all wrong?

What if, instead of self-driving cars, digital assistants whispering in our ears, and virtual-reality glasses, we viewed a technologically advanced society as one where everyone had sustainable housing? Where we could manage and then reduce the amount of carbon in our atmosphere? Where everyone had access to preventative health care that was both personalized and less invasive?

What we need is something called cozy futurism, a concept I first encountered while reading a blog post by software engineer Jose Luis Ricón Fernández de la Puente. In the post, he calls for a vision of technology that looks at human needs and attempts to meet those needs, not only through technologies but also cultural shifts and policy changes.

Take space travel as an example. Much of the motivation behind building new rockets or developing colonies on Mars is wrapped up in the rhetoric of our warming planet being something to escape from. In framing Earth as a place to flee, we miss opportunities to fix our home instead.

But we can change our attitudes. What’s more, we are changing. Climate change is a great example: slowly but surely, entrepreneurs who helped build the products and services of the past 20 years’ tech boom are now searching for technologies to address the crisis.

Jason Jacobs, the founder of the fitness app Runkeeper, has created an entire media business called My Climate Journey to find and help recruit tech folks to address climate change. Last year, Jeff Bezos created a US $10 billion fund to make investments in organizations fighting climate change. Bill Gates wrote an entire book, How to Avoid a Climate Disaster: The Solutions We Have and the Breakthroughs We Need.

Mitigating climate change is an easy way to understand the goals of cozy futurism, but I’m eager to see us all go further. What about reducing pollution in urban and poor communities? Nonprofits are already using cheap sensors to pinpoint heat islands in cities, or neighborhoods where air pollution disproportionately affects communities of color. With this information, policy changes can lighten the unfair distribution of harm.

And perhaps if we see the evidence of harm in data, more people will vote to attack pollution, climate change, and other problems at their sources, rather than looking to tech to put a Band-Aid on them or mitigate the effects—or worse, adding to the problem by producing a never-ending stream of throwaway gadgets. We should instead embrace tech as a tool to help governments hold companies accountable for meeting policy goals.

Cozy futurism is an opportunity to reframe the best use of technology as something actively working to help humanity—not individually, like a smartwatch monitoring your health or self-driving cars easing your commute, but in aggregate. That’s not to say we should do away with VR goggles or smart gadgets, but we should think a bit more about how and why we’re using them, and whether we’re overprioritizing them. And what’s better than demonstrating that the existential challenges facing us all are things we can find solutions to, not just for those who can hitch a ride off-world but for everyone?

After all, I’d rather be cozy on Earth than stuck in a bubble on Mars.

This article appears in the August 2021 print issue as “Cozy Futurism.”

Solar-to-Hydrogen Water Splitter Outlasts Next Best Tech By 14x

Post Syndicated from Prachi Patel original https://spectrum.ieee.org/energywise/energy/batteries-storage/solar-hydrogen-converter-outlasts-next-best-by-14x

To split water into hydrogen on a large scale, we need technologies that are sustainable, efficient, scalable, and durable. Using solar energy (or other renewable energy sources) to split water delivers sustainability, and recent research has made key inroads toward efficiency and scalability. Now Japanese researchers say they’ve taken an important step toward durability.

Hydrogen today comes primarily from natural gas, a process that pumps a lot of carbon and methane pollution into the atmosphere. By contrast, the sustainable solar-to-hydrogen approach has concentrated on photoelectrochemical (PEC) water splitting. In PEC systems, which nominally generate no greenhouse gases, special catalyst materials absorb sunlight to directly split water into hydrogen and oxygen. But these devices have been limited by low efficiencies and short lifetimes. While previous PEC technologies have typically lasted only about a week, the new system is dramatically longer-lived.

“We confirmed 100 days durability, which is one of the longest periods among experimentally confirmed PEC water splitting materials,” says Masashi Kato, a professor of electrical and mechanical engineering at Nagoya Institute of Technology. Durability will be key for maintenance-free systems that can be installed at remote locations, he says.

Green hydrogen research and technologies have been gaining momentum around the world. Several companies and initiatives are making it by using wind or solar electricity to split water via electrolysis. 

Direct solar water-splitting using PEC is a more elegant, one-step way to harness solar energy for hydrogen production. But it has proven challenging to do on a large scale. Devices aren’t cheap, efficient, or durable enough yet to move out of the lab. 

Photocatalysts do the heavy lifting in PEC devices. Kato and his colleagues designed a tandem PEC device that uses two electrodes each coated with a different catalyst. One is titanium dioxide, a material commonly used in white paint and sunscreen, and the other is a cubic silicon carbide that Kato’s team has developed and reported previously. 

The two catalysts absorb different parts of the light spectrum and work in a complementary way to split water. Titanium dioxide is an n-type photocatalyst, which soaks up ultraviolet light and generates electrons, triggering chemical reactions that produce oxygen. And the silicon carbide material the researchers have made is a p-type catalyst that absorbs visible light to produce hydrogen.

Together, the two reactions sustain each other to split water into hydrogen and oxygen when a voltage is applied across the device placed in water. The result is 100 days of operation, a roughly 14-fold longevity boost over previous technologies, Kato says.

The efficiency of the system reported in the journal Solar Energy Materials and Solar Cells is relatively low at 0.74 percent. Most solar-to-hydrogen technologies have achieved efficiencies in the 1-2 percent range, but some research teams have achieved substantially higher efficiencies. Researchers from Italy and Israel recently reported a method that harnesses semiconductor nanorods topped with platinum spheres that convert almost 4 percent of solar energy into hydrogen fuel. 

A Belgian research team at KU Leuven in 2019 reported a solar panel prototype that absorbs moisture from the air and splits it into hydrogen and oxygen with 15 percent efficiency. According to the U.S. Department of Energy, 5-10 percent efficiency should be enough for a practical solar hydrogen system. 
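For readers who want to see where such percentages come from, here is a minimal sketch of the standard solar-to-hydrogen (STH) efficiency calculation, ignoring any correction for applied bias. The irradiance value and the hydrogen production rate are illustrative assumptions, not figures from the Nagoya paper:

```python
# Minimal solar-to-hydrogen (STH) efficiency calculation.
# Assumptions (illustrative): standard AM1.5G sunlight at 1,000 W/m^2,
# and the Gibbs free energy of water splitting, ~237 kJ per mole of H2.

GIBBS_H2_J_PER_MOL = 237_000   # chemical energy stored per mole of H2
IRRADIANCE_W_PER_M2 = 1_000    # incident solar power density

def sth_efficiency(h2_mol_per_s: float, area_m2: float) -> float:
    """Fraction of incident solar power stored as chemical energy in H2."""
    chemical_power = h2_mol_per_s * GIBBS_H2_J_PER_MOL   # watts
    solar_power = IRRADIANCE_W_PER_M2 * area_m2          # watts
    return chemical_power / solar_power

# A 1 m^2 device making ~31 micromoles of H2 per second works out to
# ~0.74 percent STH, the efficiency reported for the Nagoya device.
print(f"{sth_efficiency(31.2e-6, 1.0):.2%}")
```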

Kato says that it’s the titanium dioxide electrode that limits the efficiency of the system, and the team is now looking for other photocatalysts to boost efficiency that would still work in concert with the silicon carbide electrode. However, the combination of durability and efficiency still sets their device apart, he says. 

Learn More About Important Digitizer Features and System-Level Aspects

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/learn-more-about-important-digitizer-features-and-systemlevel-aspects

High-Speed Data Acquisition for Partial Discharge Detection

In this white paper, we discuss how digitizer and ADC specifications influence the performance of partial discharge detection systems. How do you estimate the required analog input bandwidth? What is the benefit of using multi-channel digitizers? What is the impact of the Effective Number Of Bits (ENOB)? Download to learn more!

How to Prevent a Power Outage From Becoming a Crisis

Post Syndicated from Yury Dvorkin original https://spectrum.ieee.org/energywise/energy/policy/how-to-prevent-a-power-outage-from-becoming-a-crisis

On 4 August 2020, a tropical storm knocked out power in many parts of New York City as well as neighboring counties and states. The electricity utility, Consolidated Edison, was able to fully restore service in Manhattan within a few hours. Meanwhile, in the surrounding boroughs of the Bronx, Brooklyn, Queens, and Staten Island, thousands of customers remained without electricity for days. There are technical reasons that contributed to faster repairs in Manhattan, but in general the neighborhoods that waited the longest to have their power restored tended to be poorer and less white.

For most people, a power outage is an inconvenience. But for some, it is an emergency that can quickly turn deadly, especially when the outage occurs during a heat wave or a winter freeze. Extended exposure to temperatures above 32 °C can quickly cause health crises, especially in the elderly, children, and people with heart disease, poor blood circulation, and other pre-existing conditions. The recent record-breaking heat in Oregon and Washington state, for example, claimed more than 200 lives. Extreme cold can have similarly dire consequences, as we saw during February’s massive power outage in Texas.

Public health experts refer to those who are most at risk during power outages as “electricity vulnerable” or “electricity dependent.” In the United States, hundreds of thousands of people are in that category. A 2017 study estimated that about 685,000 Americans who live at home and have medical insurance are electricity dependent; of that group, roughly one fifth are vulnerable to even short power outages of 3 to 4 hours.

Normally during a heat wave, people have the option of escaping their homes and seeking cooler temperatures in public spaces like libraries, coffee shops, and stores. COVID-19 changed all that. The pandemic created a work-at-home paradigm that shifted electricity usage away from commercial buildings to residential neighborhoods, in ways that few expected and fewer planned for. It made finding relief from the heat logistically difficult. And it slowed urgent repair and maintenance of the power grid, with work crews having to practice social distancing due to the pandemic.

Step 1: Identify outages in real time

There’s a better way to do things. It requires that providers like New York City’s ConEd revise their priorities for repairs during outages. Instead of first serving areas with the greatest density of customers, as they do now, utilities would make repairs in those areas with a greater share of customers whose health is immediately endangered by the outage. This strategy would correct an endemic imbalance that puts greater stressors on less affluent neighborhoods and the electricity vulnerable. The existence of this imbalance isn’t just theoretical, as the storm last August demonstrated.

To help implement this strategy, my group at New York University has been developing a Power Outage Dashboard for New York City. The dashboard, created with funding from the National Science Foundation, collects data from ConEd about power outages in the city and integrates that data with open-source socio-demographic and environmental data to evaluate the severity of each outage for electricity-vulnerable groups.

Based on this evaluation, we compute a rank for each of New York City’s 300-plus zip codes that takes into account demographic information like household income, age, race, and gender, as well as public health data and the presence of low-income and senior housing; the Zip Code Rank also factors in dynamically changing environmental data, such as ambient temperature, humidity, and precipitation. From the Zip Code Rank, we can determine an Overall Severity Rank for the outages in each zip code, which can be used to prioritize repairs.

To aggregate this data, we designed a crawler that collects real-time outage data from Con Edison; we also have archives of historical data on hundreds of thousands of past outages. The addresses, zip codes, and demographic information come from NYC Open Data, a comprehensive set of public databases published by New York City agencies and their partners. A composite algorithm that we developed ranks the outages by the relative vulnerability of the customers in the zip code. This data is superimposed on a real-time outage map of New York City and color-coded by vulnerability—red for most vulnerable, blue for least. The dashboard is designed to allow users, including the public, to know which outages should have higher priority.
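The article doesn’t publish NYU’s ranking formula, but the idea of a composite score is straightforward. Here is a hypothetical sketch, with invented weights and field names, of how outages might be ordered by customer vulnerability:

```python
# Hypothetical sketch of a composite outage-severity score, in the spirit
# of the dashboard's Zip Code Rank. The weights and field names are
# invented for illustration; they are not NYU's actual algorithm.

WEIGHTS = {
    "pct_low_income": 0.25,
    "pct_over_65": 0.20,
    "pct_minority": 0.15,
    "electricity_dependent_per_1k": 0.25,  # homebound, medically dependent
    "heat_index": 0.15,                    # dynamic environmental factor
}

def severity_score(zip_data: dict) -> float:
    """Weighted sum of indicators, each normalized to the 0-1 range."""
    return sum(weight * zip_data[key] for key, weight in WEIGHTS.items())

def rank_outages(outages: list) -> list:
    """Order active outages so the most vulnerable zip codes come first."""
    return sorted(outages, key=lambda o: severity_score(o["zip_data"]),
                  reverse=True)

outage = {"zip": "10456", "zip_data": {
    "pct_low_income": 0.8, "pct_over_65": 0.4, "pct_minority": 0.9,
    "electricity_dependent_per_1k": 0.6, "heat_index": 0.7}}
print(f"{severity_score(outage['zip_data']):.2f}")  # higher = repair sooner
```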

Even a cursory look at the dashboard shows that outages in Manhattan tend to be green or blue, while those in the outer boroughs tend to be yellow, orange, or red. For example, on 8 July 2021, there were 41 relatively large outages in New York City. Of these, 6 were in more affluent areas of Manhattan, and our algorithm coded most of them as blue. In Brooklyn, by contrast, there were 17 outages coded orange or red.

This wasn’t a one-off. When we look at the historical data, we can see that residents in the outer boroughs are more likely to lose power, with a clear correlation between the number and duration of power outages and the ethnic and class makeup of neighborhoods. A poor neighborhood with a larger minority population in the Bronx is much more likely to suffer an extended power outage than is a wealthier, whiter neighborhood in lower Manhattan.

There are a number of ways to explain this disparity. The outer boroughs have more overhead power lines compared to Manhattan, where the cables run underground, and overhead power lines are more prone to faults. Likewise, the residential buildings in the Bronx, Brooklyn, and Queens tend to be older or less well maintained, compared to the office buildings and luxury condos of lower Manhattan. However you explain it, though, there’s still an underlying problem of social injustice and societal inequality that is leaving vulnerable people in jeopardy and that must be corrected.

We hope to offer the dashboard as an open-source framework for use by utilities. In the future we will be designing functions to help route service vehicles to where they’re needed, based on the availability of repair teams.

Step 2: Prioritize repairs for the most vulnerable customers

Beyond just knowing where outages are and which groups of customers are being affected, a utility also needs to be able to forecast demand—predicting how much electricity it will need to supply to customers in the coming hours and days. This is of particular importance now, when many people are suffering from the lingering effects of COVID-19—so-called “long COVID” patients. Some of them are likely homebound and are now counted among the ranks of the electricity vulnerable.

Demand forecasting tools rely on historical trends in electricity use. But in New York City, analyses showed that demand forecasting errors surged in the aftermath of the pandemic’s stay-at-home orders. That’s because the COVID-19 pandemic was a sui generis phenomenon for which there was no historical data. As consumption patterns shifted from commercial buildings to residential ones, the forecasting tools were rendered ineffective.

Any plan that could significantly alter demand forecasting must be considered with the power grid in mind. Last summer, for example, New York City’s mayor, Bill de Blasio, put $55 million into a heatwave plan that included installing more than 74,000 air-conditioning units for low-income senior citizens. Although these units are providing necessary relief to a vulnerable population, they are also raising electricity demand in residential areas and putting additional stress on ConEd’s distribution system.

Now that many offices and businesses are reopening, it may be difficult or even impossible for utilities to predict exactly how electricity demand will change this summer and when, where, and what the actual demand peak will be. Just because a utility experiences reduced demand in one part of its system does not mean it will be able to accommodate increased demand in another part. Basic network limits, such as voltage and power-flow constraints, restrict the ability to transfer electricity from one part of the system to another.

Grid operators must therefore proactively analyze the impacts of shifting demand and the reduced accuracy of demand forecasting tools on their systems. And they must factor their electricity-vulnerable customers into their planning. Electricity infrastructure is a complex engineering system, and its reliability cannot be 100-percent guaranteed, despite the best efforts of engineers, managers, and planners. Hence, it is important for a utility to consider every possible contingency and plan for mitigation and corrective actions. Such planning should be transparent and open for public comment and evaluation by experts from leading academic institutions, government labs, professional organizations, and so on.

Some readers may find it odd to link the power grid to social justice, but when you look at historic patterns, it’s hard to ignore that certain groups in our society have been marginalized and underserved. Going forward, we must do a better job of protecting vulnerable populations. Utilities can engage with the local community by surveying customers about their electricity needs. Companies will then be in a good position to assist their most vulnerable customers as soon as any power outage is reported.

Thankfully, New York City made it through last summer with relatively few heat crises. However, the pandemic didn’t end once the weather turned cool. Circumstances could be much worse this summer. The city needs a fundamental change and the tools to effect it, with repairs prioritized in such a way that the most vulnerable, not the most affluent, are serviced first. And ConEd and electricity providers like it need to begin planning now.

About the Author

Yury Dvorkin is an assistant professor of electrical and computer engineering at New York University’s Tandon School of Engineering and a faculty member of NYU’s Center for Urban Science and Progress.

Tomorrow’s Hydropower Begins With Retrofitting Existing Dams

Post Syndicated from Rahul Rao original https://spectrum.ieee.org/energywise/energy/renewables/tomorrows-hydropower-begins-retrofitting-dams

With wind and solar prices dropping, it can be easy to forget that two-thirds of the globe’s renewable energy comes from hydropower. But hydro’s future is muddied by daunting costs—and sometimes dubious environmental consequences.

Most dams, in fact, weren’t built for hydropower. They help stop floods and supply farms and families with water, but they don’t generate electricity—especially in developing countries. Nearly half of Europe’s large dams are used primarily for hydropower, but fewer than a sixth of Asian dams and a tenth of African dams generate substantial amounts of electricity, according to Tor Haakon Bakken, a civil engineer at the Norwegian University of Science and Technology (NTNU).

People like Bakken see such dams as opportunities. He’s one of a few researchers proposing to retrofit old, non-generating dams by installing turbines at their bases. That, he thinks, would create electricity without adding an ecological burden.

Bakken’s group and one of his graduate students, Nora Rydland Fjøsne, modeled doing just that for numerous dams in a part of southern Spain. Their study found that, in many cases, retrofitting was an economically viable approach.

Bakken hopes that hydro developers consider retrofitting before building anew. “I think, for the case in Spain, we have proven that this is both a technically and economically viable alternative,” he says. “And I think it’s the case in many other places too.”

Even power-generating dams could be productively retrofitted. The Brazilian Energy Research Office estimates that updating aging hydro plants could add 3 to 11 gigawatts of generating capacity—above and beyond Brazil’s existing 87-GW hydropower base. Meanwhile, scientists at the National Renewable Energy Laboratory (NREL) in the United States have proposed using reservoirs as beds for floating solar panels, something they think could theoretically generate terawatts of power.

But if you do need to build new hydropower facilities, other scientists believe the best course of action is to stay small: focus on establishing so-called “run-of-the-river” dams, which try to keep the river and its environmental conditions intact.

“This kind of turbine, you don’t need to flood large areas,” says Michael Craig, an energy systems researcher at the University of Michigan, and formerly of NREL. 

Craig and some of his colleagues from NREL and the private-sector Natel Energy modeled a sequence of run-of-the-river dams on a river in California, all linked such that they could be easily controlled. They found that this approach was, like retrofitting dams, economically viable.

“The smaller the facility, the more energy you can get out of that, I think is definitely a great strategy,” says Ilissa Ocko, a climate scientist at Environmental Defense Fund.

Righting the course of old hydro

Hydropower’s behemoth dams of old can singlehandedly rewrite the courses of rivers, creating winding reservoirs in their wake. They can prevent floods and supply water, but they can also displace countless communities upstream and constrict river flow downstream. Such dams disproportionately hurt rural and indigenous people who rely on rivers for a living.

Moreover, some hydro plants generate alarming amounts of greenhouse gases. The culprits, scientists now know, are those very reservoirs—and the biological material trapped underneath. “You’re basically flooding a whole area that has all this vegetation on it that now is just decomposing underwater,” says Ocko.

The result? Greenhouse gases. On top of carbon dioxide, the decomposing vegetation can generate methane, which—while not as long-lasting in the atmosphere—is more potent at warming. Reservoirs that are larger in surface area, and reservoirs with warmer waters—such as those near the equator—are especially prone to burping up copious amounts of methane.

Take the Brazilian Amazon, for instance. “Over the last decades, basically all the potential for hydropower expansion that Brazil had close to consuming markets has been exhausted, and Amazonia became the new frontier,” says Dailson José Bertassoli, Jr., a geochemist at the University of São Paulo.

But many dam reservoirs in the Amazon are comparable to fossil fuel plants in their capacity to generate greenhouse gases. So too are their counterparts in Western Africa. In fact, one Environmental Defense Fund study found that nearly 7 percent of the roughly 1,500 hydro plants it examined around the world emit more greenhouse gases per unit of energy than fossil fuel plants.

That’s ample reason to eschew building new dams and instead look to what we have. “In terms of environmental impacts, it makes sense to focus on areas that already are not in their natural conditions,” says Bertassoli.

So, in the face of its mounting environmental costs, does hydro have a future? 

Yes, say climate scientists, but with a caveat. The key is to minimize the future of those large, greenhouse-gas-excreting reservoirs. 

“I would never say that we should stay away from hydropower. From a climate perspective, I think we need all the solutions we can get,” says Ocko. But, she adds, “We can’t make an assumption and put it into this bucket of being a renewable energy source just like solar and wind. It’s not.”

The Battery Revolution Is Just Getting Started

Post Syndicated from Rodney Brooks original https://spectrum.ieee.org/energy/batteries-storage/the-battery-revolution-is-just-getting-started

How I loved the Macintosh Portable, Apple’s first laptop, which I bought shortly after it was introduced in 1989. Today, my infatuation seems pretty incomprehensible, though. The thing cost US $7,300. Its display had no backlight. And, most crushingly, the unit weighed more than 7 kilograms (16 pounds)—much of that weight from a lead-acid battery.

Thirty-two years later, our laptops are powered by much, much lighter lithium-ion or lithium-polymer batteries. My lap is relieved, but the battery life is the same—it’s still about 10 hours, as it was with the Macintosh Portable. Batteries have gotten a whole lot better, but they have not gotten Moore’s-Law better.

We’re living in battery-powered times. Incredible amounts of capital are going into gigaplants that produce millions of battery cells per day, and there are rivers of cash flowing to R&D on advanced batteries. There are fortunes to be made and new megacorporations to be built. Not only will batteries be needed for our phones, laptops, power tools, cars, ships, and planes, but also to store energy from renewable sources. Batteries are key to decarbonizing our economies.

It would be great if batteries could improve exponentially, à la Moore’s Law. But it’ll never happen. Gordon Moore himself told us so in his seminal 1965 magazine article, in which he predicted that we would be able to double the number of components in a digital integrated circuit every year for the next 10 years (turns out he was a pessimist). He also said that the same sort of performance increase would not happen for devices that needed to store energy.

In digital electronics, all you need to do is detect a voltage—or not—to establish whether a binary digit is a “1” or a “0.” The actual amount of current the voltage can drive does not really matter. So you can repeatedly halve the amount of matter in each transistor and still have a working circuit. For batteries, however, we need to store energy in a material, using a reversible mechanism so that we can tap that energy later. And because we pack that matter as full of energy as we can, halving the amount of matter halves the amount of energy we can store. No Moore’s Law. Not ever.

So the huge improvements in batteries over the past 30 years are all the more astounding. According to one recent study, by the metric of price per energy capacity, lithium-ion batteries have improved by 13 percent per year since their commercial introduction in 1991. These improvements came as the batteries became more environmentally benign, less dependent on rare earth elements, easier to manufacture, more reliable, and longer-lived.

Today’s chemical batteries can produce only low voltages, so you need to connect a lot of them in series to get a high voltage and also many in parallel so that they are both manufacturable and still able to deliver high power. All of today’s batteries, in anything larger than a cellphone, are made up of multiple smaller battery cells. Even ordinary 9-volt batteries contain six separate cylindrical, 1.5-volt cells, packed two by three. Within each of those cells is a chemical paste that stores the charges.
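As a back-of-the-envelope illustration of that series-and-parallel arithmetic, here is a short sketch; the cell values are generic lithium-ion figures, not any particular product’s specs:

```python
# Series cells add voltage; parallel strings add capacity.
# Cell values are generic Li-ion figures, for illustration only.

CELL_VOLTAGE = 3.6     # volts, nominal
CELL_CAPACITY = 5.0    # amp-hours

series = 100           # cells per string -> sets pack voltage
parallel = 40          # parallel strings -> sets pack capacity

pack_voltage = series * CELL_VOLTAGE                    # 360 V
pack_capacity = parallel * CELL_CAPACITY                # 200 Ah
pack_energy_kwh = pack_voltage * pack_capacity / 1000   # 72 kWh

# 4,000 small cells combine into something resembling an EV pack.
print(pack_voltage, pack_capacity, pack_energy_kwh)
```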

Today, the big breakthroughs in batteries are being made for EVs. Tesla is now, arguably, a battery company. Other car companies are buying up or investing in battery startups, with financing rounds now in the billions of dollars each.

For grid-scale storage, researchers are trying to move beyond collections of small individual cells and are experimenting with large vats that store energy in fixed battery farms. We are also now starting to see utility-scale energy-storage trials where chemistry has been replaced by gravity. Large weights are cranked up tens of meters, and then lowered when power is needed, driving generators.
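The physics of those gravity systems fits in one line: stored energy equals mass times gravitational acceleration times height. A quick sketch, with illustrative numbers, shows why the weights must be enormous:

```python
# Gravity storage in one line: energy = mass * g * height.
# Numbers are illustrative.

G = 9.81  # m/s^2

def stored_kwh(mass_tonnes: float, height_m: float) -> float:
    joules = mass_tonnes * 1_000 * G * height_m
    return joules / 3.6e6  # joules -> kilowatt-hours

# A 1,000-tonne weight raised 50 meters stores only ~136 kWh, a few
# EV batteries' worth, which is why these designs use many weights
# or very tall shafts to reach grid scale.
print(f"{stored_kwh(1_000, 50):.0f} kWh")
```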

There’ll be no Moore’s Law, but there will be lots of reward for every little improvement in batteries and their manufacturing technologies. If I were to offer advice to an ambitious young graduate today, I’d have one word for her: “Batteries.”

Hydrogen Battery “Sponges” Store Solar for the Grid

Post Syndicated from Maria Gallucci original https://spectrum.ieee.org/energywise/energy/batteries-storage/hydrogen-batteries-solar-energy

A novel project in Australia aims to harness the sun’s energy in two different ways: by storing it and by using it to produce green hydrogen.

Dozens of solar farms in the country’s southeastern region are slated to use “hydrogen batteries” in coming years. The dual-purpose devices can fit inside of shipping containers and pack a bounty of technologies: lithium batteries, electrolyzers, fuel cells, and canisters of a hydrogen-metal compound. Operators can use the systems to store energy from solar panels and deliver power to the grid during cloudy days or at night. Or, they can supply the green hydrogen to other industries, such as cargo shipping and steel production.

At least that’s the vision Alan Yu and his partners share. Yu is CEO of Lavo, the Sydney-based company that makes hydrogen storage systems for utility and residential markets. He’s also co-founder of the investment firm Providence Asset Group, which is developing solar power projects in the states of Victoria and New South Wales. 

In early July, Providence Asset Group signed an agreement to sell output from more than 30 of its solar farms to SmartestEnergy Australia, a retail energy provider that’s owned by the Japanese conglomerate Marubeni. Eleven of the solar projects are fully operational, and the rest are expected to be up and running by early 2023. Altogether, they’ll represent 300 megawatts of installed solar capacity.

Meanwhile, Lavo plans to commence work on its first utility-scale trial unit by the end of this year. Each hydrogen battery system—which it dubs HEOS—will provide about 13 megawatt-hours of storage at the solar sites.

The initiative comes as the global electricity sector is clamoring for grid-storage solutions. The rise of intermittent renewables like solar and wind is driving a need for systems that can absorb excess power supplies and discharge them at a moment’s notice, to match the ebb and flow of power demand. By 2030, the global energy storage market could see a five-fold increase, from 800 gigawatt-hours today to as much as 4,000 gigawatt-hours, according to the U.S. National Renewable Energy Laboratory. (The figures include both stationary storage installations and transportation applications, such as electric vehicle batteries.)

At the same time, green hydrogen is gaining favor as a way to clean up long-distance transportation, chemical manufacturing, aviation, and other sectors that are difficult to electrify. Estimates for green hydrogen growth vary widely, and there’s little consensus as to what demand might look like in coming decades, Canary Media recently reported. However, the Hydrogen Council said it expects green hydrogen production to reach nearly 550 million metric tons by 2050—a significant jump from the roughly 0.36 million metric tons produced in 2019.

Lavo’s hydrogen battery aims to capitalize on both energy trends, Yu said.

The system builds on years of research at the University of New South Wales, which patented the hydrogen-metal compound, or metal hydride, technology in 2019. Here’s how it works: Solar panels feed electricity into the unit and charge a 5-kilowatt-hour lithium battery. Once the battery is fully charged, any additional electricity runs through an electrolyzer, which splits water into hydrogen and oxygen. The oxygen is released into the air, while the hydrogen flows into the metal canisters. Inside the red-top tubes, hydrogen is stored in solid form by combining it with a fibrous metal alloy made from common minerals.
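A minimal sketch of that charging logic, as described above, might look like the following; the function names and the one-hour timestep are assumptions for illustration:

```python
# Sketch of the dispatch logic described above: solar power fills the
# lithium buffer battery first; surplus then drives the electrolyzer,
# and the hydrogen is bound in metal-hydride canisters. The battery
# size comes from the article; everything else is assumed.

BATTERY_CAPACITY_KWH = 5.0

def dispatch(solar_kw: float, battery_kwh: float, dt_h: float = 1.0):
    """Return (new battery level in kWh, surplus kWh sent to electrolyzer)."""
    energy = solar_kw * dt_h
    headroom = BATTERY_CAPACITY_KWH - battery_kwh
    to_battery = min(energy, headroom)
    to_electrolyzer = energy - to_battery  # surplus becomes hydrogen
    return battery_kwh + to_battery, to_electrolyzer

level, h2_energy_kwh = dispatch(solar_kw=8.0, battery_kwh=3.5)
print(level, h2_energy_kwh)  # battery tops out at 5.0; 6.5 kWh makes H2
```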

“Our long-duration storage can act as a solar sponge to absorb…to reduce pressure and add stability to the grid,” Yu said.

The system also works in reverse, converting the solid metal hydride back into hydrogen, which then runs through a fuel cell and supplies electricity to the grid. Yu said the systems can deliver more than 20,000 charge cycles, giving the components an expected lifetime of 30 years—about as long as a solar farm lasts. Alternatively, the metal hydride canisters can be plucked out of the system and placed on a truck or cargo ship for export.

Stored at room temperature and low pressures, the canisters are safer and easier to transport than hydrogen that’s stored in pressurized tanks or converted into ammonia, according to Kondo-Francois Aguey-Zinsou, who has worked on the Lavo technology and leads the university’s Hydrogen Energy Research Center in Sydney.

Lavo began testing its first prototype at the research center last year. That unit is smaller than the ones that will operate at solar farms; instead of a shipping container, it’s about the size of a double-door refrigerator. The technology firm has started marketing this more compact version for use in homes and businesses. With a storage capacity of about 40 kilowatt-hours, it purportedly stores three times as much energy as Tesla’s Powerwall 2.

Yu said Lavo initially planned to commercialize its utility-scale units first. But manufacturing delays and other disruptions due to the Covid-19 outbreak spurred Lavo to pivot its focus to the home storage market. As the company fills orders for the fridge-sized systems, it will also be developing the larger hydrogen batteries to roll out alongside solar farms in southeastern Australia.

This Laser Scans Skies for Air Pollution and Greenhouse Gases

Post Syndicated from Rahul Rao original https://spectrum.ieee.org/energywise/energy/environment/nists-laser-comb-scan-greenhouse-gases

In a world gravely threatened by greenhouse gas emissions, actually measuring those greenhouse gases can be surprisingly tricky. You might need to grab a sample of the air or force it through an analyzer. And many of these techniques can only measure one greenhouse gas or one pollutant at a time.

Lasers, however, provide another way. While laser spectroscopic technologies that track individual compounds have been around for decades, researchers at the National Institute of Standards and Technology (NIST) have developed a system that can measure four greenhouse gases at once: methane, carbon dioxide, water vapor, and nitrous oxide.

“It’s a nice, robust, sort of no-moving-parts package, but you still get really high spectral resolution,” says Kevin Cossel, a researcher at NIST who was part of the project.

The technology behind this package is called an optical frequency comb, a development that helped win the 2005 Nobel Prize in Physics. It’s essentially a tool that fires laser light at specific, evenly spaced frequencies. As depicted on a spectral chart, those frequencies look like the teeth of a comb, hence the name.

Frequency combs are also very, very precise, and scanning technology takes advantage of that precision. In particular, it relies on a dual-comb setup: two combs with slightly different line spacings, whose interference patterns encode the absorption spectrum. The setup has no complex gratings or moving parts.
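To make the dual-comb idea concrete, here is a toy calculation, with illustrative comb parameters, of how optical frequencies map down to radio frequencies that ordinary electronics can digitize (carrier-offset terms are ignored for simplicity):

```python
# Toy dual-comb arithmetic. Two combs with line spacings f_rep and
# f_rep + delta_fr interfere; the n-th pair of lines beats at
# n * delta_fr, compressing an optical spectrum into the RF domain.
# Parameter values are illustrative; carrier-offset terms are ignored.

F_REP = 200e6      # comb line spacing, 200 MHz
DELTA_FR = 50.0    # difference between the two combs' spacings, Hz

def optical_freq(n: int) -> float:
    return n * F_REP           # frequency of the n-th comb line

def rf_beat(n: int) -> float:
    return n * DELTA_FR        # where that line shows up after mixing

n = 1_000_000  # a comb line near 200 THz, i.e. ~1.5 um light
print(f"{optical_freq(n)/1e12:.0f} THz line -> {rf_beat(n)/1e6:.0f} MHz beat")
# The whole spectrum is squeezed by F_REP/DELTA_FR (4 million times),
# so one photodetector and a digitizer can read it out.
```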

NIST has been using combs for this purpose for several years now. Initially, the researchers tuned their laser combs to wavelengths in the near-infrared, around 1.6 μm. That allowed them to look at gases like methane, water vapor, and, of course, carbon dioxide.

This system has another key characteristic: it’s open path. Because the combs are tuned to frequencies that are only weakly absorbed by the atmosphere itself, the lasers can travel a long distance—a kilometer, say—and see everything in between. Rather than looking at emissions from a single point, you can set up a grid to look at emissions over a designated area.

It also means that you can compare those measurements to larger-scale atmospheric models of gas emissions. “If you’re measuring over this open path, you’re already matching the grid size of the models,” says Cossel. “So the models might look at dispersion and air quality with grid sizes of hundreds of meters or a kilometer, for the really high-resolution ones. So you’re kind of matching that.”

One of the system’s initial focuses was on measuring methane, which has more potential to cause warming than carbon dioxide. Humans release methane from burning fossil fuels (especially oil and natural gas) and from industrial-scale agriculture (notoriously, burps and flatulence of ruminants like cows and sheep).

So the NIST group took their technology into the field—literally, to gauge the emissions from a field full of cows. It’s now widely used for that purpose. It’s also used to detect gas leaks.

But methane is only one piece of the greenhouse gas puzzle. The NIST researchers thought that if they could lengthen their combs’ wavelengths—pushing deeper into the infrared, closer to 5 μm, a band that also allows for open paths—they could detect a few other gases. They’ve successfully tested the device and published their results in the journal Laser & Photonics Reviews in June.

So, in addition to carbon dioxide, methane, and water vapor, NIST’s system can now measure nitrous oxide. And on top of those four key greenhouse gases, the comb can also be used to measure ozone and carbon monoxide, both common air pollutants that are especially prevalent where there are loads of cars.

“We’re working right now on making it a much more compact system,” Cossel says.

He hopes that, now that the technology has been demonstrated to work, it can be used to study things like urban air quality and the impacts of wildfires. He also wants to use it to study nitrous oxide emissions from traffic and from agriculture, which he says aren’t well understood.

Free Whitepaper: Safety Advancements of Loadbreak Separable Connectors

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/free-whitepaper-safety-advancements-of-loadbreak-separable-connectors

Make it a sure-break solution

This paper highlights the industry innovations that successfully mitigated partial-vacuum-induced flashovers and improved lineperson safety and system reliability.

Engineers: You Can Disrupt Climate Change

Post Syndicated from David Fork original https://spectrum.ieee.org/energy/renewables/engineers-you-can-disrupt-climate-change

Seven years ago, we published an article in IEEE Spectrum titled “What It Would Really Take to Reverse Climate Change.” We described what we had learned as Google engineers who worked on a well-intentioned but ultimately failed effort to cut the cost of renewable energy. We argued that incremental improvements to existing energy technologies weren’t enough to reverse climate change, and we advocated for a portfolio of conventional, cutting-edge, and might-seem-crazy R&D to find truly disruptive solutions. We wrote: “While humanity is currently on a trajectory to severe climate change, this disaster can be averted if researchers aim for goals that seem nearly impossible. We’re hopeful, because sometimes engineers and scientists do achieve the impossible.”

Today, still at Google, we remain hopeful. And we’re happy to say that we got a few things wrong. In particular, renewable energy systems have come down in price faster than we expected, and adoption has surged beyond the predictions we cited in 2014.

Our earlier article referred to “breakthrough” price targets (modeled in collaboration with the consulting firm McKinsey & Co.) that could lead to a 55 percent reduction in U.S. emissions by 2050. Since then, wind and solar power prices have met the targets set for 2020, while battery prices did even better, plummeting to the range predicted for 2050. These better-than-expected price trends, combined with cheap natural gas, caused U.S. coal usage to drop by half. The result: By 2019, U.S. emissions had fallen to the level that the McKinsey scenario forecast for 2030—a decade sooner than our model predicted.

And thanks to this progress in decarbonizing electricity production, engineers are seeking and finding numerous opportunities to switch existing systems based on the combustion of fossil fuels to lower-carbon electricity. For example, electric heat pumps are becoming a cost-effective replacement for heating fuel, and electric cars are coming down in price and going up in range.

Even with all this progress, though, we’re still on a trajectory to severe climate change: a 3 °C rise by 2100. Many countries are not meeting the emissions reductions they pledged in the 2015 Paris Agreement. Even if every country were to meet its pledge, it would not be enough to limit planetwide warming to 1.5 °C, which most experts consider necessary to avoid environmental disaster. Meeting pledges today would require a drastic slashing of emissions. If these wholesale emission reductions don’t happen, as we think likely, then other strategies will be needed to keep temperatures within bounds.

Here are some key numbers: To reverse climate change, even partially, we’ll need to bring atmospheric carbon dioxide levels down to a safer threshold of 350 parts per million; on Earth Day 2021 the figure stood at 417 ppm. We estimate that meeting that target will require removing on the order of 2,000 gigatonnes of CO2 from the atmosphere over the next century. That wholesale removal is necessary both to draw down existing atmospheric CO2 as well as the CO2 that will be emitted while we transition to a carbon-negative society (one that removes more carbon from the atmosphere than it emits).

Our opening battles in the war on climate change need engineers to work on the many existing technologies that can massively scale up. As already illustrated with wind, solar, and batteries, such scale-ups often bring dramatic drops in costs. Other industrial sectors require technological revolutions to reduce emissions. If you experiment with your own mix of climate-mitigation techniques using the En-ROADS interactive climate tool, you’ll see how many options you have to max out to change our current trajectory and achieve 350 ppm CO2 levels and a global temperature rise of no more than 1.5 °C.

So what’s an engineer who wants to save the planet to do? Even as we work on the changeover to a society powered by carbon-free energy, we must get serious about carbon sequestration, which is the stashing of CO2 in forests, soil, geological formations, and other places where it will stay put. And as a stopgap measure during this difficult transition period, we will also need to consider techniques for solar-radiation management—deflecting some incoming sunlight to reduce heating of the atmosphere. These strategic areas require real innovation over the coming years. To win the war on climate change we need new technologies too.

We’re optimistic that the needed technology will emerge within a couple of decades. After all, engineers of the past took mere decades to design engines of war, build ships that could circle the globe, create ubiquitous real-time communication, speed up computation over a trillionfold, and launch people into space and to the moon. The 1990s, 2000s, and 2010s were the decades when wind power, solar power, and grid-scale batteries respectively started to become mainstream. As for which technologies will define the coming decades and enable people to live sustainably and prosperously on a climate-stable planet, well, in part, that’s up to you. There’s plenty to keep engineers hard at work. Are you ready?

Before we get to the technology challenges that need your attention, allow us to talk for a moment about policy. Climate policy is essential to the engineering work of decarbonization, as it can make the costs of new energy technologies plummet and shift markets to low-carbon alternatives. For example, by 2005, Germany was offering extremely generous long-term contracts to solar-energy producers (at about five times the average price of electricity in the United States). This guaranteed demand jump-started the global market for solar photovoltaic (PV) panels, which has since grown exponentially. In short, Germany’s temporary subsidies helped create a sustainable global market for solar panels. People often underestimate how much human ingenuity can be unleashed when it’s propelled by market forces.

This surge in solar PV could have happened a decade earlier. Every basic process was ready by 1995: Engineers had mastered the technical steps of making silicon wafers, diffusing diode junctions, applying metal grids to the solar-cell surfaces, passivating the semiconductor surface to add an antireflective coating, and laminating modules. The only missing piece was supportive policy. We can’t afford any more of these “lost decades.” We want engineers to look at energy systems and ask themselves: Which technologies have everything they need to scale up and drive costs down—except the policy and market?

Economics Nobel laureate William Nordhaus argues that carbon pricing is instrumental to tackling climate change in his book The Climate Casino (Yale University Press, 2015). Today, carbon pricing applies to about 22 percent of global carbon emissions. The European Union’s large carbon market, which currently prices carbon at above €50 per ton (US $61), is a major reason why its airlines, steel manufacturers, and other industries are currently developing long-term decarbonization plans. But economist Mark Jaccard has pointed out that while carbon taxes are economically most efficient, they often face outsize political opposition. Climate-policy pioneers in Canada, California, and elsewhere have therefore resorted to flexible (albeit more complicated) regulations that provide a variety of options for industries to meet decarbonization objectives.

Engineers may appreciate the simplicity and elegance of carbon pricing, but the simplest approach is not always the one that enables progress. While we engineers aren’t in the business of making policy, it behooves us to stay informed and to support policies that will help our industries flourish.

Tough decarbonization challenges abound for ambitious engineers. There are far too many to enumerate in this article, so we’ll pick a few favorites and refer the reader to Project Drawdown, an organization that assesses the impact of climate efforts, for a more complete list.

Let’s consider air travel. It accounts for 2.5 percent of global carbon emissions, and decarbonizing it is a worthy goal. But you can’t simply capture airplane exhaust and pipe it underground, nor are engineers likely to develop a battery with the energy density of jet fuel anytime soon. So there are two options: Either pull CO2 directly from the air in amounts that offset airplane emissions and then stash it somewhere, or switch to planes that run on zero-carbon fuels, such as biofuels.

One interesting possibility is to use hydrogen for aviation fuel. Airbus is currently working on designs for a hydrogen-powered plane that it says will be in commercial service in 2035. Most of today’s hydrogen is decidedly bad for the climate, as it’s made from fossil methane gas in a process that emits CO2. But clean hydrogen production is a hot research topic, and the 200-year-old technique of water electrolysis—in which H2O is split into oxygen and hydrogen gas—is getting a new look. If low-carbon electricity is used to power electrolysis, the clean hydrogen produced could be used to manufacture chemicals, materials, and synthetic fuels.

Policy, particularly in Europe, Japan, and Australia, is driving hydrogen research forward. For example, the European Union published an ambitious strategy for 80 gigawatts of capacity in Europe and neighboring countries by 2030. Engineers can help drive down prices; the first goal is to reach $2 per kilogram (down from about $3 to $6.50 per kilogram now), at which point clean hydrogen would be cheaper than a combination of natural gas with carbon capture and sequestration.
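A rough sketch shows why cheap electricity is the crux of that $2-per-kilogram goal; the efficiency and price inputs below are illustrative assumptions:

```python
# Electricity sets the floor on electrolytic hydrogen cost.
# Assumed inputs: H2's lower heating value (33.3 kWh/kg) and a
# typical electrolyzer efficiency of ~70 percent.

LHV_H2_KWH_PER_KG = 33.3

def h2_cost_per_kg(elec_price_per_kwh: float, efficiency: float) -> float:
    kwh_needed = LHV_H2_KWH_PER_KG / efficiency
    return elec_price_per_kwh * kwh_needed

# At 3 cents/kWh of renewable power, electricity alone comes to about
# $1.43/kg, leaving room under the ~$2/kg target for capital costs.
print(f"${h2_cost_per_kg(0.03, 0.70):.2f} per kg")
```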

Climate-friendly hydrogen could also lead to another great accomplishment: decarbonizing the production of metals. The Stone Age gave way to the Iron Age only when people figured out how to deploy energy to remove the oxygen from the metal ores found in nature. Europe was deforested in part to provide charcoal to burn in the crucibles where metalsmiths heated iron ore, so it was considered an environmental win when they switched from charcoal to coal in the 18th century. Today, thanks to the European Union’s carbon market, engineers are piloting exciting new methods to remove oxygen from metal ore using hydrogen and electric arc furnaces.

There’s still much work to do in decarbonizing the generation of electricity and production of clean fuels. Worldwide, humans use roughly one zettajoule per year—that’s 10²¹ joules every year. Satisfying that demand without further contributing to climate change means we’ll have to drastically speed up deployment of zero-carbon energy sources. Providing 1 ZJ per year with only solar PV, for example, would require covering roughly 1.6 percent of the world’s land area with panels. Doing it with nuclear energy alone would necessitate building three 1-gigawatt plants every day between now and 2050. It’s clear that we need a host of cost-effective and environmentally friendly options, particularly in light of large regional variations in resources.
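That land-area figure is easy to sanity-check. The sketch below uses a rough, assumed value for the average power a PV installation delivers per square meter, and it lands in the same ballpark:

```python
# Sanity check: land area needed to supply 1 ZJ/year from solar PV.
# The delivered-power density below folds together insolation, panel
# efficiency, and capacity factor; it is a rough assumption.

DEMAND_J_PER_YEAR = 1e21
SECONDS_PER_YEAR = 3.15e7
PV_DELIVERED_W_PER_M2 = 14     # assumed average output per m^2 of panel
LAND_AREA_M2 = 1.49e14         # Earth's total land surface

avg_power_w = DEMAND_J_PER_YEAR / SECONDS_PER_YEAR   # ~32 TW
panel_area_m2 = avg_power_w / PV_DELIVERED_W_PER_M2

print(f"{avg_power_w/1e12:.0f} TW of average demand")
print(f"{panel_area_m2/LAND_AREA_M2:.1%} of global land area")  # ~1.5%
```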

While we consider those options, we’ll also need to make sure those sources of energy are steady and reliable. Critical infrastructure such as hospitals, data centers, airports, trains, and sewage plants need around-the-clock electricity. (Google, for one, is aggressively pursuing 24/7 carbon-free energy for its data centers by 2030.) Most large industrial processes, such as the production of glass, fertilizer, hydrogen, synthesized fuels, and cement, are currently cost-effective only when plants are operated nearly continuously, and often need high-temperature process heat.

To provide that steady carbon-free electricity and process heat, we should consider new forms of nuclear power. In the United States and Canada, new policies support advanced nuclear-energy development and licensing. Dozens of advanced nuclear-fission companies offer engineers a variety of interesting challenges, such as creating fault-tolerant fuels that become less reactive as they heat up. Other opportunities can be found in designing reactors that recycle spent fuel to reduce waste and mining needs, or that destroy long-lived waste components via new transmutation technologies.

Engineers who are drawn to really tough quests should consider nuclear fusion, where the challenges include controlling the plasma within which the fusion occurs and achieving net electric power output. This decade’s competition in advanced nuclear-energy technologies may produce winners that get investors excited, and a new round of policies could push these technologies down the cost curve, avoiding a lost decade for advanced nuclear energy.

Global-scale climate preservation is an idea that engineers should love, because it opens up new fields and career opportunities. Earth’s climate has run open loop for over 4 billion years; we are lucky that our planet’s wildly fluctuating climate was unusually stable over the 10,000 years that modern civilization arose and flourished. We believe that humankind will soon start wrapping a control loop around Earth’s climate, designing and introducing controlled changes that preserve the climate.

The basic rationale for climate preservation is to avoid irreversible climate changes. The melting of the Greenland ice sheet could raise sea levels by 6 meters, and the runaway thawing of permafrost could release enough greenhouse gas to add an additional degree of global warming. Scientists agree that a continuation of unchecked emissions will trigger such tipping points, although there’s uncertainty about when that would happen. The economist Nordhaus, applying the conservative precautionary principle to climate change, argues that this uncertainty justifies earlier and larger climate measures than if tipping-point thresholds were precisely known.

We believe in aggressively pursuing carbon dioxide removal because the alternative is both too grim and too expensive. Some approaches to carbon dioxide removal and sequestration are technically feasible and are now being tried. Others, such as ocean fertilization of algae and plankton, caused controversy when attempted in early experiments, but we need to learn more about these as well.

The Intergovernmental Panel on Climate Change’s recommendation for capping warming at 1.5 °C requires cutting net global emissions almost in half by 2030, and to zero by 2050, but nations are not making the necessary emission cuts. (By net emissions, we mean actual CO2 emissions minus the CO2 that we pull out of the air and sequester.) The IPCC estimates that achieving the 1.5 °C peak temperature goal and, over time, drawing CO2 concentrations down to 350 ppm actually requires negative emissions of more than 10 Gt of CO2 per year within several decades—and this may need to continue as long as there remain atmospheric litterbugs who continue to emit CO2.

The En-ROADS tool, which can be used to model the impact of climate-mitigation strategies, shows that limiting warming to 1.5 °C requires maxing out all options for carbon sequestration—including biological means, such as reforestation, and nascent technological methods that aren’t yet cost effective.

We need to sequester CO2, in part, to compensate for activities that can’t be decarbonized. Cement, for example, has the largest carbon footprint of any man-made material, creating about 8 percent of global emissions. Cement is manufactured by heating limestone (mostly calcite, or CaCO3), to produce lime (CaO). Making 1 tonne of cement lime releases about 1 tonne of CO2. If all the CO2 emissions from cement manufacturing were captured and pumped underground at a cost of $80 per tonne, we estimate that a 50-pound bag (about 23 kg) of concrete mix, one component of which is cement, will cost about 42 cents more. Such a price change would not stop people from using concrete nor significantly add to building costs. What’s more, the gas coming out of smokestacks at cement plants is rich in CO2 compared with the diluted amount in the atmosphere, which means it’s easier to capture and store.
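That 42-cent figure can be reproduced with simple arithmetic. In the sketch below, the cement fraction of the mix is an assumed value chosen for illustration:

```python
# Reproducing the ~42-cent estimate for a bag of concrete mix.
# The cement fraction of the mix is an assumed value for illustration.

BAG_KG = 23.0                 # a 50-pound bag of concrete mix
CEMENT_FRACTION = 0.23        # assumed share of the mix that is cement
CO2_T_PER_T_CEMENT = 1.0      # ~1 tonne CO2 per tonne, per the article
CAPTURE_COST_PER_TONNE = 80.0 # dollars, per the article

cement_kg = BAG_KG * CEMENT_FRACTION
co2_tonnes = cement_kg * CO2_T_PER_T_CEMENT / 1_000
print(f"${co2_tonnes * CAPTURE_COST_PER_TONNE:.2f} extra per bag")  # ~$0.42
```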

Capturing cement’s emissions will be good practice as we get ready for the bigger lift of removing 2,000 Gt of CO2 directly from the atmosphere over the next 100 years. Therein lies one of the century’s biggest challenges for scientists and engineers. A recent Physics Today article estimated the costs of directly capturing atmospheric CO2 at between $100 and $600 per tonne. The process is expensive because it requires a lot of energy: Direct air capture involves forcing enormous volumes of air over sorbents, which are then heated to release concentrated CO2 for storage or use.

We need a price breakthrough in carbon capture and sequestration that rivals what we have seen in wind power, solar energy, and batteries. We estimate that at $100 per tonne, removing those 2,000 Gt of CO2 would account for roughly 2.8 percent of global GDP for 80 years. Compare that cost with the toll of hitting a climate tipping point, which no amount of spending could undo.
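The 2.8 percent figure follows from simple arithmetic, given an assumed global GDP of about $90 trillion per year:

```python
# Checking "roughly 2.8 percent of global GDP for 80 years."
# Global GDP is an assumed round number (~$90 trillion per year).

CO2_TONNES = 2_000e9           # 2,000 gigatonnes
COST_PER_TONNE = 100.0         # dollars
GLOBAL_GDP = 90e12             # dollars per year, assumed
YEARS = 80

total_cost = CO2_TONNES * COST_PER_TONNE        # $200 trillion
share = total_cost / (GLOBAL_GDP * YEARS)
print(f"{share:.1%} of GDP sustained for {YEARS} years")  # ~2.8%
```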

In principle, there are enough subterranean rock formations to store not just gigatonnes but teratonnes of CO2. But the scale of the sequestration required, and the urgency of the need for it, calls for outside-the-box thinking. For example, massive-scale, low-cost carbon removal may be possible by giving nature an assist. During the planet’s Carboniferous period, 350 million years ago, nature sequestered so much carbon that it reduced atmospheric CO2 from over 1,000 ppm to our preindustrial level of 260 ppm (and created coal in the process). The mechanism: Plants evolved the fibrous carbon-containing material lignin for their stems and bark, millions of years before other creatures evolved ways to digest it.

Now consider that the ocean absorbs and almost completely reemits about 200 Gt of CO2 per year. If we could prevent 10 percent of this reemission for 100 years, we would meet the goal of sequestering 2,000 Gt of CO2. Perhaps some critter in the ocean’s food chain could be altered to excrete an organic biopolymer like lignin that’s hard to metabolize, which would settle to the seafloor and sequester carbon. Phytoplankton reproduce quickly, offering a quick path to enormous scale. If our legacy of solving climate change is a few millimeters of indigestible carbon-rich poop at the bottom of the ocean, we’d be okay with that.

Altering radiative forcing—that is, reflecting more sunlight to space—could be used as a temporary, stopgap measure to limit warming until we’ve made a dent in reducing atmospheric CO2 levels. Such efforts could avoid the worst physical and economic impacts of temperature rise, and would be decommissioned once the crisis has passed. For example, we could reduce the formation of airplane contrails, which trap heat, and make roofs and other surfaces white to reflect more sunlight. These two measures, which could reduce our expected planetary warming by about 3 percent, would help the public better appreciate that our collective actions affect climate.

There are more ambitious proposals that would reflect more sunlight, but there is much to debate about the positive and negative consequences of such actions. We believe that the most responsible path forward is for engineers, chemists, biologists, and ecologists to test all the options, particularly those that can make a difference at a planetary scale.

We don’t claim to know which technologies will prevent a dystopian world that’s over 2 °C warmer. But we fervently believe that the world’s engineers can find ways to deliver tens of terawatts of carbon-free energy, radically decarbonize industrial processes, sequester vast amounts of CO2, and temporarily deflect the necessary amounts of solar radiation. Effective use of policies that support worthy innovations can help move these technologies into place within the next three or four decades, putting us well on our way to a stable and livable planet. So, engineers, let’s get to work. Whether you make machines or design algorithms or analyze numbers, whether you tinker with biology, chemistry, physics, computers, or electrical engineering, you have a role to play.

The views expressed here are solely those of the authors and do not represent the positions of Google or the IEEE.

About the Authors

In 2014, two distinguished Google engineers wrote for IEEE Spectrum about the sobering lessons they’d learned while trying to develop renewable-energy systems that were as cheap as coal. That article, titled “What It Would Really Take to Reverse Climate Change,” struck a chord. By the metric of online readership, it was the seventh most popular article Spectrum published in the 2010s. The piece bluntly described the enormous scale of the challenge.  

Seven years later, the authors, David Fork and Ross Koningstein, are back with a new message, and it’s surprisingly hopeful. “It’s stunning how rapidly things have been moving since the first article was published,” says Fork. The scope of the challenge is still enormous, of course, but experts now have a better understanding of how a variety of technologies could be combined to prevent catastrophic climate change, the coauthors say. Many renewable-energy systems, for example, are already mature and just need to be scaled up. Some innovations need significant development, including new processes to produce steel and concrete, and geoengineering techniques to sequester carbon and temporarily reduce solar radiation. The one commonality among all these promising technologies, they conclude, is that engineers can make a difference on a planetary scale.

“We need engineers to recognize where these opportunities are, and then not step on the gas pedal but step on the accelerator of an electric vehicle,” says Koningstein. Concerned about the pessimistic tone of most climate coverage, the authors argue that wise policies, market pressure, and human creativity can get the job done. “When you put the right incentives in place, you capture the ingenuity of the masses,” says Fork. “All of us are smarter than any of us.”


  • Disrupting Climate Change Is a Math Problem

    And we’re showing our work

    We, the authors of this article, work at Google on the renewable energy team, and in our jobs we could never get away with presenting a bold idea if we didn’t have the math to back it up. So we’re presenting here some data and calculations to support the biggest claims in our article.

    We need to remove about 2,000 gigatonnes of CO2 from the atmosphere

    The Intergovernmental Panel on Climate Change (IPCC) special report “Global Warming of 1.5 °C” states that cumulative CO2 emissions from 1876 to the end of 2010 were 1,930 gigatonnes of CO2, and that by the end of 2017 the amount had reached 2,220 Gt CO2. These emissions were accompanied by an estimated 1 °C surface temperature change over that time span. So to reverse the effects of climate change, we need to remove at least 2,000 Gt of CO2 from the atmosphere and oceans. Add to that total our emissions going forward, which are currently 40 Gt of CO2 per year.

    There are other ways to get to a number of the same scale. That same IPCC report also states: “Pathways that aim for limiting warming to 1.5 °C by 2100 after a temporary temperature overshoot rely on large-scale deployment of carbon dioxide removal (CDR) measures.” This acknowledgment has led to including large-scale (as in tens of Gt of CO2 per year) carbon removal in models for reducing net carbon emissions. It’s important to understand that net carbon emissions means actual CO2 emissions minus the CO2 that we pull out of the air and sequester.

    The IPCC report states a goal of reducing net emissions to zero by 2050, using both significant emissions cuts and carbon removal to limit global warming to 1.5 °C. A graphic in the report shows net emissions gradually declining to almost negative 20 Gt of CO2 per year by the end of the century, and they would clearly have to continue at that scale. If about 20 Gt of CO2 per year is removed for the next 100 years, that would be 2,000 Gt of CO2.

    We don’t need to know the precise amount of necessary CO2 removal to evaluate the suitability of potential approaches to this problem: Whether the number is about 2,000 Gt or even higher, net negative emissions would need to continue for perhaps a century at 20 Gt of CO2 per year to restore the atmosphere (and oceans) to desired levels. So when setting targets for carbon removal, think tens of gigatonnes of CO2 per year—think big!

    Humans use about 1 zettajoule of energy per year

    In 2017, global energy consumption was about 160,000 terawatt-hours. This is about 6 × 10²⁰ joules. Over time, energy consumption has always increased as development has improved the quality of people’s lives. It therefore seems likely that later this century, humans will be using even more than 6 × 10²⁰ joules of energy per year. In our article, we round up humanity’s energy use to 1 zettajoule (10²¹ joules) for simplicity.
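
    As a quick check of that unit conversion, here is a minimal Python sketch; the figures are the ones quoted above:

        # Convert 160,000 TWh of annual energy consumption to joules.
        twh = 160_000
        joules = twh * 1e12 * 3600   # 1 TWh = 1e12 Wh, and 1 Wh = 3600 J
        print(joules)                # ~5.8e20 J, which we round up to 1e21 J = 1 zettajoule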

    To supply 1 ZJ of energy per year with photovoltaic solar panels, we’d need to cover 1.6 percent of the planet’s land surface

    Solar installations are rated in terms of their peak capacity, which is their power production in full sun. To determine the output of an installation, we need to know its capacity factor, which is the average utilization of its peak capacity. In a good location for solar PV, the capacity factor can be about 20 percent. For every watt of peak capacity with a 20 percent capacity factor, one year of operation will produce 6,307,200 joules of energy (365 days x 24 hours x 60 minutes x 60 seconds x 0.2 capacity factor). Dividing this number into 1 ZJ reveals that it would require solar panels with a peak capacity of 160 terawatts to produce 1 ZJ per year of electricity generation.

    A utility-scale solar farm typically has a ⅓ ratio of panel area to land area, which translates to about 66 peak megawatts of power generation per square kilometer. To produce 160 TW of peak solar generation capacity, we would need about 2.4 million square kilometers, or about 1.6 percent of Earth’s land area. For comparison, farming claims about 40 percent of Earth’s land surface. About one-third of the planet’s land surface is desert; hence, in a scenario where desert land is used for solar energy generation, 1 ZJ of energy per year could be produced without significantly impacting the food supply.
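
    The whole solar estimate fits in a few lines of Python. The inputs are exactly the assumptions stated above, plus Earth’s land area of roughly 149 million square kilometers:

        # Peak solar capacity and land area needed to generate 1 ZJ per year.
        seconds_per_year = 365 * 24 * 3600
        capacity_factor = 0.20
        joules_per_peak_watt = seconds_per_year * capacity_factor   # 6,307,200 J per year
        peak_watts = 1e21 / joules_per_peak_watt                    # ~1.6e14 W, i.e., 160 TW
        mw_per_km2 = 66                                             # utility-scale farm density
        area_km2 = (peak_watts / 1e6) / mw_per_km2                  # ~2.4 million km^2
        earth_land_km2 = 149e6
        print(area_km2 / earth_land_km2)                            # ~0.016, about 1.6 percent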

    To supply 1 ZJ of energy per year with nuclear power, we’d have to build three 1-gigawatt plants per day for 30 years

    A typical nuclear power plant today generates about 1 gigawatt of power. Let’s say that the plant operates with a capacity factor of 95 percent, meaning that it’s up and running at its design capacity 95 percent of the time. This one plant will produce 3 × 10¹⁶ joules per year. One zettajoule is 10²¹ joules. So it would take a little over 33,000 1-GW plants to provide the capacity for generating 1 ZJ per year. Given 30 years to build this quantity of nuclear power plants, the average rate of construction would be about 3 plants per day.
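
    The same arithmetic in Python, using the 95 percent capacity factor assumed above:

        # How many 1-GW nuclear plants would it take to generate 1 ZJ per year?
        plant_watts = 1e9
        capacity_factor = 0.95
        joules_per_plant_year = plant_watts * capacity_factor * 365 * 24 * 3600  # ~3e16 J
        plants_needed = 1e21 / joules_per_plant_year    # ~33,000 plants
        plants_per_day = plants_needed / (30 * 365)     # ~3 per day over a 30-year build-out
        print(plants_needed, plants_per_day)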

    A 50-pound bag of concrete mix will cost about 42 cents more if the emissions from cement manufacturing are captured and stored

    Making a tonne of cement by burning fossil fuels to provide process heat releases CO2 in two ways: It’s released during the combustion process and also comes from the heated feedstock of carbonate rock. Combined, the emissions are about 0.93 tonnes of carbon dioxide per tonne of cement. The roughly “1 tonne per tonne” rule of thumb makes it rather easy to compute the cost of producing zero-carbon cement. As of this writing, the price of cement averages around $125 per tonne. If a cement plant were to attach machinery that captured and sequestered its carbon emissions for a cost of $80 per tonne of CO2 (an optimistic cost estimate), this process would add 60 percent to the cost of cement (0.93 x $80 / $125 is about 0.6).

    A 50-pound bag of cement weighs 0.023 tonnes and produces emissions of about 0.021 tonnes of CO2; hence, at a sequestration cost of $80 per tonne, the added cost would be about $1.69. Ready-mix concrete is about 25 percent cement by weight. That cement would add about $0.42 (0.25 x $1.69) to the cost of a 50-pound bag of concrete. Cement is such a valuable material that humanity would likely keep using lots of it even if the price were to increase significantly more than this amount, or even to double.
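
    Here is the cement arithmetic as a short Python sketch (the text rounds the bag’s emissions to 0.021 tonne, which is why it reports $1.69 rather than the unrounded $1.71):

        # Added cost of capturing and sequestering cement's CO2 emissions.
        co2_per_tonne_cement = 0.93    # tonnes of CO2 per tonne of cement
        capture_cost = 80.0            # $ per tonne of CO2, an optimistic estimate
        cement_price = 125.0           # $ per tonne of cement
        print(co2_per_tonne_cement * capture_cost / cement_price)  # ~0.6, a 60 percent increase
        bag_tonnes = 0.023             # a 50-pound bag of cement
        added_cement = bag_tonnes * co2_per_tonne_cement * capture_cost   # ~$1.71
        added_concrete = 0.25 * added_cement   # concrete mix is ~25 percent cement
        print(added_concrete)                  # ~$0.43 unrounded; ~$0.42 using the rounded figure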

    Removing 2,000 Gt of CO2 would account for roughly 2.8 percent of global GDP for 80 years

    In the article, we suggest that 2.8 percent of global gross domestic product would be a reasonable amount of money to spend to pull down the levels of CO2 in the atmosphere. In 2021, global gross domestic product will be about US $90 trillion. Choosing the timeframe for drawing down 2,000 Gt of CO2 is a very uncertain task; 80 years seems like a reasonable number to get the planet into decent shape by early next century. This time horizon gives us an annual CO2 capture and sequestration target of about 25 Gt of CO2 per year.

    The cost of mechanical direct air capture of CO2 and sequestration is currently hundreds of dollars per tonne; in the article we note that the area is ripe for creative R&D. If the price could be reduced to $100 per tonne, drawing down those 25 Gt of CO2 per year works out to $2.5 trillion per year, or about 2.8 percent of current global GDP.
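
    And the GDP estimate, in the same back-of-the-envelope style:

        # What share of global GDP would $100-per-tonne removal consume?
        global_gdp = 90e12            # ~US $90 trillion in 2021
        annual_gt = 2000 / 80         # 25 Gt of CO2 removed per year for 80 years
        price_per_tonne = 100.0       # the hoped-for cost breakthrough
        annual_cost = annual_gt * 1e9 * price_per_tonne   # $2.5 trillion per year
        print(annual_cost / global_gdp)                   # ~0.028, about 2.8 percent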

    Carbon capture is perhaps the best lever to use for stabilizing Earth’s climate in the long term—meaning a time scale of centuries. Drawing down the greenhouse gas concentrations to below current levels will eventually cool the land and ocean temperatures, and will reverse ocean acidification as well.

Why Does the World Harbor So Many Different Voltages, Plugs, and Sockets?

Post Syndicated from Vaclav Smil original https://spectrum.ieee.org/energy/the-smarter-grid/why-does-the-world-harbor-so-many-different-voltages-plugs-and-sockets

Standardization makes life easier, but it is often impossible to introduce it to systems that have a messy evolutionary history. Electricity supply is a case in point.

Edison’s pioneering 1882 Pearl Street station transmitted direct current at 110 volts, and the same voltage was used when alternating current at 60 hertz took over in American homes. Later the standard was raised a bit to 120 V, and in order to accommodate heavy-duty appliances and electric heating, North American homes can also access 240 V. In contrast, in 1899 Berliner Elektrizitäts-Werke was the first European utility to switch to 220 V, and this led eventually to the continent-wide norm of 230 V.

Japan has the lowest voltage (100 V) and the dubious distinction of operating on two frequencies. This, too, is a legacy from the earliest days of electrification, when Tokyo’s utility bought German 50-Hz generators and Osaka, 500 kilometers to the west, imported American 60-Hz machines. Eastern Honshu and Hokkaido island operate at 50 Hz. The rest of the country, to the west, is at 60 Hz, and the capacity of four frequency-converter stations allows only a limited exchange between the two systems.

Elsewhere, the world is divided between the minority of countries with voltages centered on 120 V (110–130 V and 60 Hz) and the majority using 230 V (220–240 V and 50 Hz). North and Central America and most countries of South America combine single voltages between 110 and 130 V and the frequency of 60 Hz; exceptions include Argentina and Chile (220/50), Peru (220/60), and Bolivia (230/50). Africa, Asia (aside from Japan), Australia, and Europe work with the higher voltages: 220 V in Russia and Ethiopia; 230 V in South Africa; and 240 V in Brunei, Kenya, and Kuwait.

The variety of plugs and sockets is even more confusing, reflecting the enduring effect of early choices and of a multitude of originally separate systems. The International Electrotechnical Commission recognizes 15 different plug and socket types, with two combinations dominant. In North and Central America, as well as in Colombia, Ecuador, and Venezuela, there are just two standard plugs, A and B: A is ungrounded with two flat parallel pins (for plug-in lights and some small appliances), B has two flat parallel pins and a round grounding pin. A and B plugs are also used in Japan but the neutral pin on many American A plugs is wider than the live pin. However, in Japan both pins have the same width, so Japanese A plugs always work in the United States but the U.S. plugs may not work in Japan.

The second dominant combination (C and F) is used in Europe and in a number of Asian countries. The C type (Europlug) has two round pins, and its use is limited to appliances requiring 2.5 amperes or less. The F plug (rated at 16 A) has two 4.8-millimeter round pins set 19 mm apart and two ground clips on the side. But the United Kingdom does not use C and F, and its G plug (also used in Ireland and some former colonies) has three rectangular blades set in a triangular pattern and an incorporated fuse (3 A for smaller appliances, 13 A for heavy-duty uses). Italy uses C and F and also the L plug, with two round pins and a grounding pin in the middle, and Switzerland has the J plug, with the same pins but in a triangular configuration.

A few countries (including Lebanon and Thailand) use five different plugs. The Maldives, an archipelago in the Indian Ocean, is one of the world’s smallest countries, but it shares the record for number of plugs: six (C, D, G, J, K, L).

So, have we learned from this history to now do better at standardizing the plugs for mass-produced globally distributed electronics? Just try to plug your USB connector from an iPad into a Samsung tablet!

This article appears in the July 2021 print issue as “Voltages, Frequencies, Plugs, and Sockets.”

White-Hot Blocks as Renewable Energy Storage?

Post Syndicated from Prachi Patel original https://spectrum.ieee.org/energywise/energy/batteries-storage/could-storing-electricity-in-whitehot-blocks-give-supercheap-renewables-storage

In five years, operating a coal or natural gas power plant is going to be more expensive than building wind and solar farms. In fact, according to a new study by Bloomberg New Energy Finance, building a new solar farm is already cheaper than operating coal and natural gas plants in many regions of the world. 

Yet a full shift to intermittent energy sources desperately calls for low-cost, reliable energy storage that can be built anywhere. Some nascent startups believe the answer lies in the process that lights up toaster coils by electrically heating them to scorching temperatures.

Antora Energy in Sunnyvale, Calif., wants to use carbon blocks for such thermal storage, while Electrified Thermal Solutions in Boston is seeking funds to build a similar system using conductive ceramic blocks. Their vision is similar: Use excess renewable electricity to heat the blocks to over 1,500 °C, and then turn that heat back into electricity for the grid when needed.

To beat the cost of the natural gas plants that today back up wind and solar, storing energy would have to cost around $10 per kilowatt-hour. Both startups say their Joule heating systems will meet that price. Lithium-ion batteries, meanwhile, are now at approximately $140/kWh, according to a recent study by MIT economists, and could drop to as low as $20/kWh, although only in 2030 or thereafter.

Justin Briggs, Antora’s co-founder and Chief Science Officer, says he and his co-founders Andrew Ponec and David Bierman, who launched the company in 2018, considered several energy-storage technologies to meet that goal. These included today’s dominant method, pumped hydro, in which water pumped to a higher elevation spins turbines as it falls, and the similar new gravity storage method, which involves lifting 35-ton bricks and letting them drop.

In the end, heating carbon blocks won for its impressive energy density, simplicity, low cost, and scalability. The energy density is on par with lithium-ion batteries at a few hundred kWh/m³, hundreds of times higher than pumped hydro or gravity storage, which also “need two reservoirs separated by a mountain, or a skyscraper-sized stack of bricks,” Briggs says.

Antora uses the same graphite blocks that serve as electrodes in steel furnaces and aluminum smelters. “[These] are already produced in 100 million ton quantities so we can tap into that supply chain,” he says. Briggs imagines blocks roughly the size of dorm fridges packed in modular units and wrapped in common insulating materials like rockwool.

“After you heat this thing up with electricity, the real trick is how you retrieve the heat,” he says. One option is to use the heat to drive a gas turbine. But Antora chose thermophotovoltaics, solar cell-like devices that convert infrared radiation and light from the glowing-hot carbon blocks into electricity. The price of these semiconductor devices drops dramatically when made at large scale, so they work out cheaper per watt than turbines. Plus, unlike turbines, which work best when built big, thermophotovoltaics perform well regardless of power output.

Fukushima’s Legacy: Japan’s Hard Turn Toward Renewables

Post Syndicated from John Boyd original https://spectrum.ieee.org/energy/batteries-storage/fukushimas-legacy-japans-hard-turn-toward-renewables

When the tsunami generated by the Great East Japan Earthquake struck the Fukushima Daiichi Nuclear Power Plant on 11 March 2011, it not only knocked out the plant but eventually led to the shutdown of all the country’s 54 nuclear reactors as a safety precaution. Ten years on, just nine reactors have come back on line. And while nuclear energy in Japan today is anything but dead (the central government now hopes nuclear could provide 20 percent of the nation’s power by 2030), the prospect of a zero-carbon future in Japan still leaves the lion’s share to renewables.

The magnitude 9.0 earthquake also killed nearly 20,000 people, with 2,500 still missing. As of last December, some 42,000 of the total 470,000 evacuees remained evacuated, even as the disaster’s 10th anniversary loomed. The government has directed its decontamination efforts to reducing an individual’s radiation dose to 1 millisievert a year, a generally accepted international standard. Nevertheless, some 337 square kilometers within seven Fukushima municipalities continue to be designated “difficult-to-return zones,” while a critical Greenpeace radiation survey report published in 2019 warned that forests in the region, which have never been decontaminated, “will continue to be long-term sources of recontamination.”

To help both revitalize the stricken area and advance the country’s decarbonization efforts, the government in 2014 established the Fukushima Renewable Energy Institute, AIST (FREA) in Koriyama, Fukushima prefecture, says Masaru Nakaiwa, FREA’s director-general. (“AIST” stands for the National Institute of Advanced Industrial Science and Technology.) FREA’s mandate is to work with industry and academia to improve photovoltaic and wind-turbine performance, optimize ground-source heat pumps and geothermal resources, and develop technologies for hydrogen-energy carriers and hydrogen-energy systems.

“Fukushima prefectural government has set a target of producing all of Fukushima’s energy demands from renewable sources by 2040,” says Nakaiwa. To do this, the government is working with FREA, industry, and universities to help commercialize research in renewable technologies and increase the use of solar, biomass, and wind generation in the prefecture. Hydrogen is also viewed as an important new energy resource. The prefecture is now home to the Fukushima Hydrogen Energy Research Field, the world’s largest green-hydrogen production facility, capable of supplying 1,200 cubic meters of hydrogen an hour. This new focus is in keeping with past and recent central government announcements on hydrogen and the goal to make Japan carbon neutral by 2050.

Achieving the 2050 target won’t be easy. Whereas nuclear accounted for 30 percent of the country’s energy use before the accident, today it provides just 6 percent. Making up the shortfall, Japan now relies more on coal (25 percent), natural gas (23 percent), and oil (39 percent), with renewables and hydro accounting for the rest, as of April 2018.

To encourage industry to work toward carbon neutrality, the government will provide capital investment, tax relief, and deregulation in areas such as wind power; carbon capture, utilization, and storage; and the mass production of storage batteries.

At the end of 2018, some 55 gigawatts of solar power equipment had been installed around Japan, putting the country on track to surpass the government’s target of 64 GW by 2030. Regarding wind power, however, Japan had only 3.6 GW of capacity installed in 2018, which is why Japan’s Ministry of Economy, Trade and Industry has flagged wind as a technology to invest in.

More notable is the country’s embrace of hydrogen as a versatile energy-storage medium. Hydrogen can be produced from various kinds of natural resources, in particular from water via electrolysis, a process that emits no carbon dioxide, says Satoshi Hirano, FREA’s deputy director-general. And hydrogen can be compressed, stored, transported, and converted into electricity or heat when needed, without emitting CO2.

Hydrogen’s major downside is the high cost of production. Hence FREA and other national research institutes are developing efficient, low-cost hydrogen-production technologies powered by renewable energies, says Manabu Ihara, director of the Tokyo Tech Academy of Energy and Informatics at the Tokyo Institute of Technology.

FREA has already demonstrated a green-hydrogen supply chain and a hydrogen cofiring generator system, as well as the successful synthesis of ammonia (NH3) from green hydrogen, and its use to fuel a modified micro gas-turbine generator. (Hydrogen could also be used in ammonia-powered cargo ships.) Currently FREA is working with IHI Corp. and Tohoku University to develop larger generator systems using liquid ammonia spray injection, says Hirano.

Other countries are also developing green-hydrogen projects. China has a major project underway in Inner Mongolia slated to produce 454,000 metric tons annually; the European Union estimates spending €430 billion (about US $520 billion) over the next 10 years on hydrogen technologies, while South Korea is aiming to become a leader in developing clean hydrogen.

Meanwhile, Japan is creating international supply chains for shipping green hydrogen and “blue” hydrogen (made using carbon capture and storage) to the country, and has established pilot projects in Brunei and Australia to test the feasibility of the scheme. These overseas and domestic sources of clean hydrogen, fueling large-scale modified gas turbines, will eventually take on the role of supplying base-load power to the electric grid, replacing nuclear power, says Ihara, of the Tokyo Institute of Technology. “And we should see this partly realized before 2030.”

This article appears in the March 2021 print issue as “Japan’s Renewables Renaissance.”

Reversing Climate Change by Pulling Carbon Out of the Air

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/energy/environment/reversing-climate-change-by-pulling-carbon-out-of-the-air

Steven Cherry Hi, this is Steven Cherry for Radio Spectrum.

Let’s face it. The United States, and, really, the entire world, has squandered much of the time that has elapsed since climate change first became a concern more than forty years ago.

Increasingly, scientists are warning that taking coal plants off line, building wind and solar farms here and there, and planting trees, even everywhere, aren’t going to keep our planet from heating to the point of human misery. Twenty years from now, we’re going to wish we had started thinking about not just carbon-zero technologies, but carbon-negative ones.

Last year we spoke with the founder of Air Company, which makes carbon-negative vodka by starting with liquid CO2 and turning it into ethanol, and then further refining it into a product sold in high-end liquor stores. Was it possible to skip the final refining steps and just use the ethanol as fuel? Yes, we were told, but that would be a waste of what was already close to being a premium product.

Which leads to the question, are there any efforts under way to take carbon out of the atmosphere on an industrial scale? And if so, what would be the entire product chain?

One company already doing that is Global Thermostat, and its CEO is our guest today.

Graciela Chichilnisky is, in addition to heading the startup, an Argentine-born Professor of Economics and Mathematical Statistics at Columbia University and Director of the school’s Consortium for Risk Management. She’s also co-author of a July 2020 book, Reversing Climate Change.

Welcome to the podcast.

Graciela Chichilnisky Thank you, Steven. Pleasure to be here.

Steven Cherry Graciela, you have two pilot facilities in California; they will each have the capacity to remove 3,000 to 4,000 metric tons of CO2 per year. How exactly do they operate?

Graciela Chichilnisky The actual capacity varies depending on the equipment, but you are right on the whole, and the facility is at SRI, which used to be the Stanford Research Institute. They work by removing CO2 directly from the air. The technology is called “direct-air-capture” and our firm, Global Thermostat, is the only American firm doing that. And it is the world leader.

The technology, essentially, scrubs air. So you move a lot of air over capture equipment and chemicals that have a natural affinity for CO2, so as the air moves by, the CO2 is absorbed by the solvents, and then you separate the combination of the solvent with the CO2 and lo and behold, you’ve got yourself 98 percent pure CO2 coming out as a gas at one atmosphere. That is, at a very, very high level, how it works.

And the details are, of course, much more complex and very, very interesting. What is most interesting, perhaps, is that chemists who are used to working with constrained capture in limited facilities—hence limited volumes—find that the natural chemical and physical properties of the process change when you are acting in an unconstrained area (in fact, the whole atmosphere). You are using the air directly from the atmosphere to remove the CO2. And that’s why it is possible to do that in a way that we have patented—we have about 70 patents right now—in a way that actually is economically feasible. It is possible to do it, save the CO2, and make money. And that is, in fact, the business plan for our company, which includes reversing climate change through this process.

Steven Cherry Yes, so let’s take the next step of the process, what happens with the CO2 once it’s at its 98 percent purity?

Graciela Chichilnisky The CO2—and this is perhaps a very good secret for most people—you see, CO2 is a very valuable gas, and even though it’s a nuisance and is dangerous depending on its concentration in the atmosphere, here on Earth it sells for anywhere between $100 per tonne and $1,500 to $1,800 per tonne. So if you think about that, all you need to know is that the cost of obtaining the CO2 from the air should be lower than the price it sells for.

The question is what markets would satisfy that. And I’m going to give you a case in which we are already working and selling, and one in which we are not working yet. We’re already working with the production of synthetic fuels, in particular synthetic gasoline. Gasoline can be produced by combining CO2 and hydrogen, the CO2 from the air, the hydrogen from water—the hydrogen is produced using electrolysis—and the CO2 comes from the air using our technology. Combining those two gives you hydrocarbons, and when properly mixed, you obtain a chemical which is molecule by molecule identical to gasoline, except it comes from water and air instead of coming from petroleum. So if you burn it, you still produce CO2, but the CO2 that is emitted came from the atmosphere in the production of the gasoline, and therefore you have a closed circle. In net terms you’re emitting nothing, using the gasoline that is produced from CO2 and hydrogen—from air and water. These markets, in our case, in addition to our synthetic gasoline, include the water desalination market. We work with a company that is the largest desalinator of water in the world, in Saudi Arabia.

And they need a lot of CO2, because the process of desalinating water for human consumption requires the use of CO2. In addition to those two examples of applications and commercial uses, synthetic gasoline and desalinated water, there are carbonated beverages, for example, beer and Coca-Cola. Indeed, we work with Coca-Cola, and we work with Siemens, with AME, and with automobile companies such as Porsche, to produce clean gasoline, the synthetic gasoline I mentioned.

From the CO2, you can actually produce elements of cement and other building materials. So as a whole, McKinsey has documented that there is a $1 trillion market per year globally for CO2. So CO2 is a very valuable chemical on Earth, even though it’s a nuisance and dangerous in the atmosphere. So the notion is—the notion of Global Thermostat is—bring it down. In other words, take it from the atmosphere where it is dangerous; bring it down to earth, where it is valuable.

Steven Cherry I love that our first carbon negative podcast involved vodka and our second one now involves beer. So that’s the economic case for what you’re doing. There’s also the question of the carbon budget. There’s a certain amount of energy used in the processes of removing CO2 from the air and then using it for some of these applications; what would be a typical net carbon budget?

Graciela Chichilnisky Negative. In other words, what happens is that we don’t use electricity, which is mostly produced from fossil fuels right now. We use heat, and our heat can be produced as waste heat from other processes; it doesn’t have to be electricity. In fact we use very little electricity.

But think of it this way: In the year 2020, for the first time in history, humans are able to produce electricity directly from the sun less expensively than by using fossil fuels. Two and a half cents or less, and continually falling, is the going price for solar photovoltaic production of electricity. It’s the lowest cost; two cents a kilowatt-hour is really the lowest possible cost.

Steven Cherry One wonderful thing about this is that you’re an economist and so you’re determined not just to develop technologies, but ensure that they find a home in the marketplace because that’s the most practical way to implement them at scale.

In 2019, Global Thermostat started working with Exxon Mobil. I understand they provided some money and I believe initially 10 employees. I gather the idea is for them to be one organization commercializing this technology further. How would that work?

Graciela Chichilnisky Well, first of all, I do have two Ph.D.s; I started in pure mathematics at MIT. That was my first Ph.D. My second Ph.D. was in economics at UC Berkeley. So I do have the mathematics as well as the economics in my background. What we’re doing requires several forms of expertise. You said it: Global Thermostat has made a joint development agreement with Exxon, is working with Coca-Cola, is working now with Siemens, and is working with a company called HIF, which is in Chile.

So, how does that work? As you probably know, Exxon Mobil is a multifaceted company. In addition to fossil fuels, they have a huge expertise in carbon capture technology, the old fashioned, I would say traditional, type. And by that I mean capture of CO2 from the fumes of power plants, for example.

They have the resources and the know-how, and we are a small company and we want to expand our production. So they offered an opportunity for us to work with an advanced company in the area of carbon capture of the more traditional kind, one that is willing to experiment and willing to advance commercially the removal of CO2 directly from the atmosphere.

So, with them, in our contract, we intend to build a one-gigaton plant; that’s what we contracted to do, which means that we will then scale up our technology. So it can eventually remove one billion—with a ‘b’ as in boy—tons of CO2 from the atmosphere every year. That’s the scale-up I’m talking about, and that is the main purpose of our partnership with Exxon Mobil.

And if you think about it—you said it yourself—you want to know what the carbon budget really is. Roughly speaking, and don’t forget that I worked on the Kyoto Protocol and created the carbon market of the Kyoto Protocol, so I know a lot about carbon budgets, how demanding they are, and how far we are from what we need to do: We need to essentially remove 40 gigatons of CO2 every year from the atmosphere in order to reverse climate change. And what I’m telling you is that with these types of partnerships with companies like Exxon, we can do one gigaton—you’re within shooting distance of that goal. And that’s why the contract with Exxon is to scale up our technology to remove one gigaton of CO2 per year. And then if we had 40 of those plants, we would be removing all the CO2 that humans need to remove from the atmosphere right now in order to reverse climate change.

Steven Cherry It seems paradoxical that it would make more sense to take carbon directly out of the air, the direct air capture, rather than focusing on concentrated sources of carbon and carbon dioxide, such as a power plant smokestack. How is that paradox resolved? How is it more sensible to take it directly out of the atmosphere?

Graciela Chichilnisky First of all, it is not just sensible, it’s very creative, very unique, and it has never been done—what we’re doing has never been done. And there is a good reason why it wasn’t done, because, as you point out, it’s actually more difficult and more expensive to remove CO2 from the air than to remove it from a concentrated source. So why would we be doing that? The answer is, if you remove CO2 from the chimneys of any industrial facility, the best you can do—the best best best possible—is to make that facility carbon neutral; to remove all the CO2 that it is emitting.

That’s the best, if you’re really lucky, right? Okay, that’s not enough anymore. When I used to be the lead author of the IPCC, the Intergovernmental Panel on Climate Change, working on this topic, I found—and it is well known now—that going carbon neutral does not suffice. I think you said that in your introduction. Now we have to go carbon negative, which means we have to remove, in net terms, more CO2 than what is emitted. And the CO2 that we remove should be stabilized on Earth. I’m not saying sequestered in the ground, but I’m saying stabilized. You know, it could be in materials or instruments or whatever, stabilized on Earth after it’s removed.

If you need to remove more CO2 than what you emit, and we need to remove 40 gigatons more than what we emit right now, you cannot do it from industrial facilities; the best that you can achieve there is carbon neutrality. You need to go carbon negative. For that you have to go and remove CO2 from air.

Steven Cherry I said that 20 years from now, we’ll wish we had started all this 20 years earlier, but you actually started this process a decade ago; you already foresaw that we would need carbon-negative processes. But at the same time, as you mentioned, you were also working to develop the Kyoto Protocols, specifically creating carbon markets. Was that just a stopgap before getting to this point that you’re at now?

Graciela Chichilnisky No. No, no. The carbon market solution was the solution, an easy solution. Let me explain. The problem is that our prices are all wrong, and when we try to maximize economic performance, we maximize our GDP, in which we don’t take into account the enormous damage that excessive CO2 emissions are causing to humans, to our economy, to our world, and even to our survival as a species. So the invention of the carbon market—I invented it, designed it, and rolled it into the Kyoto Protocol in 1997—was done with the purpose of changing the system of values.

In other words, introducing prices and values that make it more desirable to be clean rather than to over-emit. Right now, if we were to cut all the trees in the United States and produce toilet paper, our economic system, the way we measure economic performance, would say that we are much better off. After all, more trees are being cut down and used to produce toilet paper than before.

So I decided that this had to change. And in fact, the carbon market I designed and created in the Kyoto Protocol became international law in 2005. It is now what’s called the European Union Emissions Trading System, which encompasses 27 nations, and it is also used in China and in 14 states in the United States; essentially 25 percent of humankind is now using the carbon market that I designed and wrote into the protocol originally in 1997. But the most important statistic for me is this: In the December 2019 issue of Physics Today, there is an article on the carbon market, which says the carbon market has succeeded; since 2005, when it became international law, the nations that use the carbon market have decreased their emissions by 30 percent from the base year.

Another way of saying it is that if the whole world were using the carbon market, not just the 25 percent that I mentioned, we would be 30 percent below the level of emissions of 2005. And you know what? We really wouldn’t have the climate disaster, the catastrophe, that we fear. We would not have it, because we would be containing the emissions of CO2 through the use of the carbon market, as was done in all the nations that adopted the carbon market when it became international law in 2005.

So that’s a solution, but we haven’t adopted it; only 25 percent of the world succeeded. The rest of the world went south. We emitted even more. So now, in addition to decreasing emissions—and you cannot avoid decreasing emissions; that’s critical—you have to remove the CO2, the legacy CO2, that we put into the atmosphere and which is still in the atmosphere after all these years. From the physical point of view, you have to know that CO2 doesn’t decay, or doesn’t decay as fast as other gases; once emitted, it remains in the atmosphere for decades, even hundreds of years in some cases. As a result, we have a lot of legacy CO2 that doesn’t decay.

Steven Cherry The title of your book is Reversing Climate Change. The subtitle is How Carbon Removals Can Resolve Climate Change and Fix the Economy. Perhaps you want to say another word about the fix the economy part.

Graciela Chichilnisky Yeah, I will do it with two sentences. Sentence number one: I just want to quote new President Biden, who said, “When I think about climate change, I think jobs, jobs, jobs.” A technological evolution of this nature, which could even be a revolution, is creating a lot of jobs, and it is creating the infrastructure that will allow us to solve the problem and grow the economy at the same time, because every time you remove CO2, you make money now. It doesn’t cost money. You have to invest initially, but you make money.

The second issue—which [Biden] doesn’t address, because he doesn’t know the level of detail or this type of focus—is that the problem of the environment and resources is very closely tied to the problem of inequity. And you must be aware, because there have been a number of books prominently published and reviewed about it, of the increase in inequity in the global economy—not just internationally, which we know is huge (it has increased 20 times since 1945), but also within nations, like the United States. Well, what’s interesting is that these new technologies not only solve the problem at the technological level, and not only can bring jobs, as I mentioned when I quoted Biden, but in addition, these technologies sponsor equity. And I will give you two examples very quickly. As I mentioned already, there is the solar photovoltaic revolution, in which the cost of producing electricity from photovoltaics has decreased 80 percent in the last 20 years.

That revolution has created the most accessible form of energy ever, because while fossil fuels were the main raw material for the production of electricity in the $60 trillion power-plant economy, they are really not very equitable at all. Fossil fuels come from a few regions in the world; they have to be extracted from under the earth, etc. And the result is that our whole energy production system lies at the foundation of the inequity of the modern economy, the industrial revolution. If you replace fossil fuels, natural gas, petroleum, and coal, with the sun as an input, you have a major equalizer, because everybody in the world has access to the sun in the same amount. So the input is no longer fossil fuels, which come from a few places that make a lot of money. The input now is the sun, which comes from everywhere, and everybody has access to it; that input creates energy. Now, that’s more equitable; it’s a huge difference, huge difference.

And the other difference is that there is new technology that transforms CO2 into materials for construction, or even into clean forms of energy like the synthetic gasoline I explained before. That is based on air as an input, and the air has a property: It has the same concentration of CO2 all over the planet, and this means an equalizer again. So we can now produce cement, let’s say, and beverages and food. You can even produce protein from CO2, of course, because of the carbon molecules; you can actually produce all the materials that we need, and even food and drinks, from air. And the air is equitably distributed; it’s one of the last few public goods that everybody has access to, as is the sun. So we are now going into a new economy, powered by the sun and with resources coming from air. And you know what? That solves the problem of equity in a big way; I would say inequity, which is so paralyzing to economies and to the world as a whole. So I want to say not only is this an environmental change, some may say a revolution, but it is in addition a social and economic change, and some would say revolution.

Steven Cherry Yeah, we could do an entire show on things like the resource paradox: Countries that are rich in oil, for example, end up poorer through the extraction process than when they started. Well, Graciela, it’s going to take economists, businesspeople, scientists, and politicians to lead us out of this crisis, and we’re fortunate to have in you someone who is several of those things. Thank you for your research, your book, your company, your teaching, and for joining us today.

Graciela Chichilnisky Great. Thank you very, very much for your time and for your insightful questions.


We’ve been speaking with Graciela Chichilnisky: Columbia University economist, co-author of the 2020 book, Reversing Climate Change, and CEO of Global Thermostat, a startup devoted to pulling carbon out of the air cost-effectively.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers, a professional organization dedicated to advancing technology for the benefit of humanity.

This interview was recorded February 2, 2021, via Zoom and Adobe Audition. Our theme music is by Chad Crouch.

You can subscribe to Radio Spectrum on Spotify, Apple Podcasts, and wherever else you get your podcasts, or listen on the Spectrum website, where you can also sign up for alerts of new episodes. We welcome your feedback on the web or in social media.

For Radio Spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.


We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

What the Texas-Freeze Fiasco Tells Us About The Future of the Grid

Post Syndicated from Robert Hebner original https://spectrum.ieee.org/energywise/energy/the-smarter-grid/what-texas-freeze-fiasco-tells-us-about-future-of-the-grid

“Don’t Mess with Texas” started life as part of an anti-litter campaign, back in 1985, and soon became an internationally recognized slogan. Too bad nature cares not a whit about slogans. In mid-February, a wintry blast hit the state, leaving more than 4 million people without power, most of them in homes not designed to shelter against bitter cold. The prolonged icy temperatures triggered a public health emergency and killed several dozen people in the state, according to press accounts.

So what actually happened, and why? The first question is a lot easier to answer than the second. What everyone agrees on is that the whole state experienced record cold, preceded by ice storms, which were followed by snow.  Central Texas, for example, recorded the coldest temperatures in more than three decades and the most snow—about 15 centimeters—in more than seven decades.  Moreover, the number of hours below freezing was in the triple digits—in a state in which dips below freezing very seldom last more than a few hours.

And bad things happened to the grid.  Ice storms caused tree limbs to fall onto distribution lines, causing power outages.  Wind turbines were taken off line due to icing of their blades.  Distribution of natural gas to power plants was shut off or curtailed when key components in the gas system froze up.  Even a nuclear plant had a cold-weather-related failure.  At the South Texas Project Electrical Generating Station in Bay City, Texas, a 1,300-megawatt unit went off line on 15 February after a pressure sensor in a feedwater line malfunctioned.

At the same time, the frigid weather triggered soaring demand for electricity.  Unfortunately, some plants were off line for maintenance and others were unavailable because of the cold. As the crisis went on, and on, nervous grid operators recognized that surging demand would outstrip supply, causing major parts of the state’s grid—or perhaps its entire grid—to collapse.

So, at 1:25 a.m. on 16 February, about two days after the storm spread across the state, operators began implementing rolling blackouts to assure power-system stability.  But they soon ran into problems, because the curtailment area was so large.  Some places, including Austin, the state’s capital, found that in order to reduce the load by the amount mandated by the state’s electrical authority, they had to shut down all electrical feeders except the ones feeding critical loads, such as water treatment plants and hospitals.  So, the “rolling” blackouts weren’t rolling at all; for nearly all residential customers in and around Austin, once the power was turned off, it stayed off.

Now to the second question: Why did the Texas grid crumble? The weather-triggered problems led to a tidal wave of instant pundits picking over the very limited data to support their preferred theory as to the root cause of the problem. Against renewables? Then obviously the whole sorry episode could be blamed on the iced-over wind turbines.  Anti-fossil fuels? In that case, the maximizing of profits by those plant operators was clearly the fundamental cause. Microgrid proponents said there would not have been a problem if Texas had more microgrids.

And there were twists here, too, related to a couple of unusual technical and economic aspects of the Texas electrical system. Most of the United States and Canada are covered by just three synchronous electrical grids. There’s one for the eastern part of the continent, one for the western part of the continent, and a relatively tiny one that covers most of Texas. That Texas grid is operated by an organization called the Electric Reliability Council of Texas (ERCOT). Not everyone thinks it’s a good idea for Texas to have its own grid, so for these folks, the lack of synchronous connections to the rest of the U.S. was the problem.

Also, since 1999, Texas has had a deregulated, energy-only market structure, which means that suppliers get paid only for the electricity they produce and sell, and the market is not regulated by the Federal Energy Regulatory Commission.  So there were also calls for a transition to a forward-capacity-market structure in which suppliers are paid not only for what they sell but also to maintain the capacity to produce more than they sell. A few observers claimed that a capacity market would have avoided the fiasco.

Focusing on the technical claims and counter-claims for the moment, it is obvious that engineers around the world know how to make wind turbines and fossil-fuel power plants that continue to work under prolonged winter stress.  So why were these tried-and-true engineering approaches not implemented? 

To understand the reason, you first have to consider a fundamental role of state utility commissions, which is to assure that the people of the state get the lowest-cost electricity with acceptable reliability.  It’s always possible to invest more money and get a more reliable electrical system.  So, it’s a mostly non-technical judgment call to properly balance the cost of enhanced reliability against the risk of an unusual calamity. It is this logic that leads to, for example, Buffalo, New York, having considerably more snowplows per kilometer of paved road than San Antonio, Texas.

Not wanting a crisis to go to waste, some are proposing significant structural changes.  For example, the grid covering much of Texas is connected to the rest of the US power grid and the Mexican power grid via five direct-current links.  Some observers saw an opportunity to renew calls for Texas to merge its grid with one or both of the other major continental grids. This could be accomplished by building new high-voltage transmission lines, either AC or DC, tapping into other parts of the country. These would expand the existing electricity import-export market for Texas and better integrate Texas’s grid with the other two, adjacent grid systems.

This won’t be a near-term solution. The time required to build transmission lines is measured in years and the cost will likely exceed US $1 million per mile ($620,000 per km). And this transmission-expansion idea competes with alternatives: distributed generators fueled by propane or natural gas; and storage facilities based on batteries or fuel cells capable of powering a single house or a retail, industrial, or commercial facility.

There are some intriguing transportation-related options for enhanced grid resilience now becoming available, too. These are linked to emerging technologies for the electrification of transportation. The U.S. Department of Transportation, for example, unveiled a fuel-cell-powered-electric transit bus last year that could provide emergency power to a drug store, a supermarket, or some other critical establishment.  It was cost effective for periods up to two weeks compared with leasing a generator.  Ford made news on 18 February when it asked its dealers to loan out stocks of its new F-150 hybrid truck, versions of which are equipped with generators capable of putting out 7.2 kilowatts. In October 2019, the US Departments of Energy and Defense offered up to $1 million to develop a military vehicle with a similar purpose.

A vital fact made very visible by the Texas situation is that population centers increasingly rely on interacting systems.  In Texas, the weather disrupted both transportation and electricity.  These disruptions in turn affected the water supply, telecommunications, emergency response, the food supply, the availability of gasoline, and healthcare—including COVID-19 vaccinations.  For years, to aid in planning and event management, academics, companies, cities and states have been developing models to predict the interconnected effects of disasters in specific locations.  Recently, the Department of Energy, via its laboratories, has addressed this issue.  Better models could help officials prevent major fiascoes in some cases, or, when that’s not possible, react better during crises by giving managers the tools needed for real-time management of complex, interdependent systems. 

Now, in Texas, given the high levels of publicity, political involvement, and consumer anger, it’s a pretty safe bet that the needle will very soon be moved toward higher cost and more reliability. In fact, Texas’s Governor, Greg Abbott, has proposed requiring the implementation of established winterizing technology.

There will be exhaustive, detailed after-action analyses once the immediate crisis is past, and they will probably uncover crucial new details.  For now, though, it seems pretty clear that what happened in Texas was likely preventable with readily accessible and longstanding engineering practices.  But a collective, and likely implicit, judgment was made that the risk to be mitigated was so small that mitigation would not be worth the cost. And nature “messed” with that judgment.

Robert Hebner is Director of the Center for Electromechanics at the University of Texas at Austin. A Fellow of the IEEE, Hebner has served on the IEEE Board of Directors and is also a former member of the IEEE Spectrum editorial board.

The Uneconomics of Coal, Fracking, and Developing ANWR

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/energy/environment/the-uneconomics-of-coal-fracking-and-developing-anwr

Steven Cherry Hi this is Steven Cherry for Radio Spectrum.

Many things have changed in 2020, and it’s an open question which are altered permanently and which are transitory. Work-from-home may be here to stay, as might the shift from movie theaters and cable TV networks to streaming services; pet adoption rates are so high that some animal shelters are empty; and global greenhouse gas emissions declined by record amounts.

That last fact has several causes—the lockdowns and voluntary confinements of the pandemic; an oil glut that preceded the pandemic and continued through it; and the ways renewable energy—especially solar energy—is successfully competing with fossil fuels. According to the Institute for Energy Economics and Financial Analysis, an Ohio-based non-profit that studies the energy economy, more than 100 banks and insurers have divested or are divesting from coal mining and coal power plants. Their analysis also shows that natural gas power plant projects—for example, one that’s been proposed for central Virginia—are a poor investment, due to a combination of clean-energy regulations and the difficulty of amortizing big power-plant construction in the face of a clean-energy pipeline that is expected to grow dramatically over the next four years.

Such continued growth in clean-energy projects is particularly notable, as it comes despite high job losses for the renewable energy industry, slowing construction activity, and difficulty in finding capital financing. Those same headwinds brought about a record number of bankruptcies in the fracking industry.

My guest today is eminently qualified to answer the question, are the changes we’re seeing in the U.S. energy-generation profile temporary or permanent? And what are the consequences for climate change? Kathy Hipple was formerly an analyst at the aforementioned Institute for Energy Economics and Financial Analysis and is a professor in Bard College’s Managing for Sustainability MBA program.

Kathy, welcome to the podcast.

Kathy Hipple Thank you, Steven. It’s great to be here.

Steven Cherry Kathy, your background is broader than most. You did a long stint on Wall Street at Merrill Lynch, but you’re also on the board of Meals on Wheels in Bennington, Vermont. There are issues of environmental justice in our decisions about what kind of energy generation to finance and where, and we’ll get to that. But first, it seems like the economics behind our energy sources are shifting almost faster than we can keep up. Where are we at currently with the economics of fossil fuels—coal, petroleum, natural gas?

Kathy Hipple Well, you're right. It has seemed that 2020 saw an acceleration of trends. But this is not new. This has been going on for at least a decade: fossil fuels have been in decline from a financial standpoint. And the energy sector—which currently includes only oil and gas companies, not renewable energy—finished in last place in the market for the decade between 2010 and 2020. It also finished in last place in 2020, 2019, and 2018. So this is a sector in financial decline, long-term financial decline. And as we know, and because I'm a finance professor, finance is all about the future. So the market is telling us that the future is not fossil fuels. Which is why the energy sector is now only 2 percent—a little over 2 percent—of the S&P 500. And in the 1980s it was more than 28 percent. So we now have a world economy that is much less dependent on fossil fuels financially than it has ever been.

Steven Cherry Wall Street firms have promised to lead the charge toward sustainable energy use, but the world’s largest asset manager, BlackRock, a year after it said it would divest its portfolio from fossil fuels, still has something like $85 billion invested in coal companies, the worst of the fossil fuels in terms of pollution and greenhouse gases.

Kathy Hipple Yes, BlackRock has been a disappointment in many respects. They are not walking their talk. Their talk is impressive, but their follow-through is lacking: As you say, they're still heavily invested in coal, still heavily invested in financing gas and oil projects around the world. And they are also moving into clean energy. But they have not yet done the divestment that many activists have called on them to do and that Larry Fink's letters suggest they will do.

They have not been as transparent as they probably should be in terms of how they are working with the management of companies to see if they are actually promoting the energy transition, or whether they are reporting under the Task Force on Climate-related Financial Disclosures, TCFD. So I do think that they grew their asset base tremendously in 2020, but they have a long way to go before they will become a climate leader on the investment side.

Steven Cherry It's impossible to talk about new drilling without talking about fracking. A 2019 study of 40 dedicated U.S. shale oil companies found that only four of them had a positive cash flow balance. Much of the easiest drilling has already been done. Investors haven't been getting good returns even on them. And the price of oil generally is pretty low. The thing that has puzzled some observers is that, besides the financial damage wrought by fracking, it seems to be driven more by enthusiasm than results. Does fracking make sense financially?

Kathy Hipple Fracking does not make sense financially and it never has. That is the big dirty secret—even when oil prices were well above $100/barrel and natural gas prices were much higher than they are now. These companies, year in and year out since 2010, have been cash flow negative in aggregate. Occasionally you'll get one or two companies that will outperform their peers. But in aggregate, the frackers that are going after oil, largely in the Permian Basin in Texas and New Mexico, have been cash flow negative each and every year; and in even worse shape than the oil-focused frackers are the fossil gas (sometimes called natural gas) producers, largely in the Marcellus-Utica basins in Appalachia.

They have been in extremis and they have produced negative cash flows again, even when gas prices were much higher than they are now. So the business case for fracking has never been proved; it's a poor business model—as you mentioned, the decline rate is very high, which means you have to continue to put money into drilling new wells. And the industry has never found a way to be profitable and cash flow positive.

In fact, one of the former CEOs of the largest gas frackers, EQT, said he had never seen an industry, in a sense, commit suicide the way the fracking industry has done. So you're right, it's been a terrible investment. It's been driven by enthusiasm and a lot of investors saying wait until next year. But largely the investor base has moved away from this sector. The sector has no access to the public markets for either equity or debt. Many banks have walked away from them. They've closed their loan portfolios. One prominent bank sold their entire energy portfolio for roughly 50 to 60 cents on the dollar. So the sector probably can only go forward if it has access to higher-risk capital or higher-cost capital. And these will be investors who are willing to gamble on a sector that has never yet shown a financial success.

Steven Cherry There’s a lot of political momentum behind fracking, especially in western Pennsylvania and places like that, North Dakota. What is one to do when there’s such a disconnect between the politics and the finances?

Kathy Hipple That's a great question, Steven. The industry has lost a tremendous amount of its economic and financial power, but it retains a lot of political power. And that is particularly true in places like Texas and in Pennsylvania, as you mentioned. However, I think that the public view about fracking has started to change. In fact, there was an interesting study showing that the counties in Pennsylvania that had more fracking did not, in fact, vote for Trump at the same level they had four years earlier. And the public is starting to really question whether they want to have pipelines under their land, whether they want orphaned gas and oil wells that have simply been abandoned. And they're really questioning whether the number of jobs the industry promises will ever materialize.

Often the industry comes to a state and says we will produce this many jobs. And in fact, most of the jobs are in construction and they're short-term jobs. And they are reasonably high-paying jobs, but often the construction workers are brought in from outside the state. And once these wells are drilled, they don't require people to man them. So these are not good long-term sources of revenue for these local counties, communities, or states.

Some of my students, interestingly enough, did a study on a wind farm in a small county, Paulding County, in Ohio, and they showed that the long-term revenue produced from the wind farm was actually very stable income, and that the county could make use of these payment-in-lieu-of-taxes, or PILOT, funds to finance its school district, to finance special ed, to finance DARE (Drug Abuse Resistance Education) officers, and that a lot of counties throughout Texas, for example, are really very dependent now on income and revenue streams coming from wind. So I think as more municipalities look at the long-term stable income that comes in from a wind farm, for example, versus the boom-bust cycle of the oil and gas industry, clean energy will begin to be much, much more appealing—even more so than it is now.

Steven Cherry Historically, a lot of that revenue to communities is really … there's sort of no better example of that than Alaska. And in fact, in mid-November 2020, in other words, in the lame-duck period between election and inauguration, the Trump administration opened up ANWR, the Arctic National Wildlife Refuge in Alaska. In fact, this was our impetus for first contacting you for this show. It's now mid-January as we record this. Where are we at with ANWR?

Kathy Hipple Well, it’s a beautiful, pristine part of the world and it’s very high cost to produce oil from that part of the world. And since there’s a glut of oil and a glut of gas on the market worldwide, one questions whether there’s any rational reason for drilling there. But it was one of the final moves by the Trump administration to rush through the process of allowing bidding on these lands.

And it will be interesting to see: Very few bids came in, and it doesn't mean anybody will go forward, because this is not economically producible oil, given current prices. Any firm that puts money into this is likely, at the end of the day, to lose money.

Steven Cherry You know, back in the mid-2010s, Shell ended up abandoning a $7 billion drilling project in the Arctic. Are the oil companies really enthusiastic about drilling there?

Kathy Hipple No, it doesn’t appear that they are. In fact, if you look at most of 2020, there were massive historic write-downs among the big oil companies around the world. The large oil companies did not participate in bidding for the land and water. They … A couple of smaller companies did. But the larger companies have largely stayed away.

Steven Cherry So is ANWR more of a symbol of a conflict between business and environmentalism?

Kathy Hipple I wouldn’t have put it in those terms, but I think that’s an excellent way to put that.

Steven Cherry The Biden administration promised an enormous infrastructure program oriented toward environmental concerns and shifting to a clean energy economy. Leaving aside the political difficulties in getting any such legislation through Congress, how big a program could we have and still remain within the bounds of good economic sense?

Kathy Hipple I don’t know the exact dollar amount to answer that question, but there’s still a tremendous amount of low hanging fruit with infrastructure spending and energy-efficiency spending. We always talk about moving to clean energy and renewable energy, which is fantastic. There’s an enormous need to build that out in this country. But there’s also a lot of low-hanging fruit about just energy efficiency, which ends up getting kind of short shrift when we talk about the energy transition. That could be billions and billions building out an electric-vehicle-charging system around the country. We need to move very quickly to decarbonize. Many of the countries’ plans are 2030, 2040, 2050. The urgency is to act immediately, to act now. And I’m extraordinarily happy that the Biden administration is moving as quickly as they are—just a few days into their administration.

Steven Cherry I was going to ask you about electric vehicles. It looks like Detroit is finally getting serious about them. How does that change the energy generation situation and the grid distribution system five years from now, 10 years from now?

Kathy Hipple Well, it’s essential to decarbonize the economy and much of the use of oil is for vehicle travel. The more vehicles can be electrified, the less need there will be for oil in this country. The United States has fallen behind Europe in terms of EVs and China is coming along very, very quickly and very aggressively. So the United States has a long way to go.

And part of it is that people do have a concern about range anxiety. There are not enough high-speed chargers. Many people live in apartments, and if they live in apartments, they can’t charge their vehicle overnight. They may not be going to an office, which you alluded to in your opening statement. So they can’t charge there. So if you live, for example, in New York City, where I split my time between Vermont and New York City, if you live in an apartment building, it’s very difficult in New York City to reliably have an EV. And that has to change and it has to change very, very quickly.

Steven Cherry Perhaps we could say a word about nuclear power. We've had three really bad accidents in almost three-quarters of a century; four, if you count Three Mile Island. That's either a lot or a little, depending on how you look at these things. France still gets a steady 63 percent of its energy from nuclear. In fact, it only gets 10 percent from fossil fuels. Now, there are a number of new designs, including one that puts small nuclear plants on barges in the ocean. Is there a future for new nuclear construction outside of China, which has been continuing to move that way?

Kathy Hipple I am not the world's expert on nuclear power, but what I see is the cost of solar dropping 90 percent, wind dropping 70 percent, and battery storage dropping quickly. Meanwhile, the cost estimates I keep seeing for new nuclear power are, surprisingly, continuing to increase. So it is very difficult for a new energy plant, whether it's gas or whether it's nuclear, to compete with the declining costs of solar, wind, and battery storage.

So I don't see a future in the United States, certainly not for large nuclear. The question would be: How long do the existing nuclear plants continue to operate in the United States? And most of the energy forecasts to get to net zero by 2030, 2040, or 2050 do assume that the currently existing nuclear plants continue to operate, but they do not generally call for new nuclear.

Steven Cherry Finally, there are issues of environmental justice that are economic, for example, the air pollution caused by fossil fuel extraction and consumption falls disproportionately on minorities and the poor. This is something that you’ve studied as well.

Kathy Hipple I think that the issue of environmental justice has always been there, but it has gained a tremendous amount of traction in the past couple of years, I think, especially in 2020, when it became increasingly clear how disproportionately poor communities were being affected by fossil fuels, which includes petrochemical plants as well.

If you look at Cancer Alley in Louisiana and the number of refineries and petrochemical plants that are in a very small area of Louisiana, it's very difficult not to be very, very concerned about environmental justice issues. And the concept of a just transition is a very interesting one that really needs to be top of mind as we think about accelerating the energy transition. It's simply a matter of basic decency and fairness that we cannot allow the pollution caused by fossil fuels to fall disproportionately on poor communities, and especially Black communities and communities of color. It's terribly unfair.

Steven Cherry In some ways this is a part of a broader question about externalities and how they get paid for either financially or in terms of things like cancer that have tilted our economy toward fossil fuel consumption for a century now. Is there anything that can be done about that?

Kathy Hipple Well, it depends on who you ask. If you ask, for example, Bob Litterman, who chaired the Climate Leadership Council, he has pushed hard for a … essentially a carbon tax. If carbon were taxed and the revenues generated were treated like a dividend, in his view and that of, I think, his fellow Climate Leadership Council board members, that would go a long way toward addressing some of the social costs of carbon pollution. That's one possible solution. Other countries are figuring out how to do it with cap-and-trade. But I think it's only a question of time in this country before we have some kind of a reckoning. And one of the things the Biden administration is doing is trying to actually calculate the social cost of carbon pollution.

Steven Cherry Kathy, we’ve been speaking about oil companies as a sort of hegemony, but are there distinctions you want to make among them?

Kathy Hipple I think that’s a very interesting question, Steven. In the last few years, some of the oil—especially the large oil—companies, we call them oil majors or the integrated oil companies, have started to diverge. So the European oil companies, Shell, BP, Total, in particular, have taken a more forward-looking view toward the energy transition than have their American counterparts, Exxon and Chevron. Exxon and Chevron have largely continued along the path of doubling down on oil and gas production and petrochemicals, whereas Total, for example, has been very forward-thinking for about a decade. Now, are they doing enough? No. Still, a very small percentage of their capital expenditures are directed toward clean energy, but they are at least moving in the right direction. And Shell and BP are very involved as well, at least moving in that direction again—not quickly enough, not aggressively enough, to meet the Paris … To be aligned with Paris. But at least we’re seeing that they are aware of the energy transition and they are not staking their entire future on oil and gas, but trying to move beyond that.

Steven Cherry Companies like BP have even set a date to be out of fossil fuels, 2040 or 2050. How painful is that going to be for them? Are there loopholes that make this more of a PR commitment than a serious one?

Kathy Hipple That's a great question. BP did actually say they would reduce their fossil fuel production, though the loophole is that some of their joint ventures have been carved out of that. But it was one of the most significant commitments, because BP, along with Repsol, another European oil company, did say that they would reduce production. And we need more of that. This industry is mature. It's declining. We need a managed decline for that industry. And that will not happen if they are just making empty statements.

Steven Cherry Well, Kathy, it seems like we’re not really going to get to where we need to on climate change until we restructure the economy around it. So thank you for your work toward that and for joining us today to talk about it.

Kathy Hipple Thank you very much for having me, Steven. And congratulations on the work that you’re doing with your students at NYU.

Steven Cherry We’ve been speaking with Kathy Hipple, of Bard College’s Managing for Sustainability MBA program, about the clean-energy economy.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers, a professional organization dedicated to advancing technology for the benefit of humanity.

This interview was recorded January 25, 2021 via Zoom. Our theme music is by Chad Crouch.

You can subscribe to Radio Spectrum on Spotify, Apple Podcast, and wherever else you get your podcasts, or listen on the Spectrum website, where you can also sign up for alerts of new episodes. We welcome your feedback on the web or in social media.

For Radio Spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.

We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

The First Battery-Powered Tanker is Coming to Tokyo

Post Syndicated from Maria Gallucci original https://spectrum.ieee.org/energywise/energy/batteries-storage/first-battery-powered-tanker-coming-to-tokyo

A new ship powered only by lithium-ion batteries is coming to Japan’s coastline. The 60-meter-long tanker will be the first all-electric vessel of its kind when it launches in Tokyo Bay next year, its developers say.

The “e5” tanker is the latest in a small but growing fleet of vessels that use batteries for propulsion or onboard electricity use. As the global shipping industry works to curb carbon dioxide emissions and eliminate air pollution, shipbuilders and cargo owners are increasingly moving to electrify the freighters, tankers, and other vessels that move goods across the water.

Tokyo-based Asahi Tanker will own and operate the e5 vessel—which, ironically, will carry marine diesel fuels to refill the tanks of other cargo ships in the Bay. The 3.5-megawatt-hour (MWh) energy storage system is about the size of 40 Tesla Model S battery packs. That’s enough capacity to propel the ship for “many hours” before needing to plug into a shoreside charging station, said Sean Puchalski of Corvus Energy, the company supplying the batteries.

Corvus, which has offices in Norway and Canada, has put batteries in nearly 400 ships, roughly a quarter of which are fully electric, he said. Most of these are passenger and car ferries plying the Norwegian fjords, where ship operators face tight restrictions on emissions of CO2 and toxic air pollutants, such as sulfur dioxide and nitrogen oxides.

The Japanese tanker is Corvus's first fully electric coastal freighter project; the company hopes the e5 will be the first of hundreds more just like it. "We see it as a beachhead for the coastal shipping market globally," Puchalski said. "There are many other coastal freighter types that are similar in size and energy demand."

The number of battery-powered ships has ballooned from virtually zero a decade ago to hundreds worldwide. The e5 tanker’s battery is relatively big for today’s electric ships, though several larger projects are also in development. The Yara Birkeland, an 80-meter-long container ship, will use a 9-MWh system for all of its propulsion when it launches in late 2021. Corvus is supplying 10 MWh worth of batteries for AIDAPerla, a 3,330-passenger cruise ship.

Two main factors are giving momentum to maritime batteries. First, lithium-ion technology has become significantly cheaper thanks to the electric car boom on land. Average battery pack prices were about $140 per kilowatt-hour in 2020, down from about $670 in 2013. Prices are expected to drop to about $100 per kilowatt-hour by 2023, BloombergNEF, a research consulting firm, said in a report.
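
Those two data points imply a steep learning curve; here is a quick sanity check of the average annual decline (the arithmetic is ours, the prices are BloombergNEF's):

# Implied average annual decline in battery pack prices, from the figures above
price_2013 = 670.0   # $/kWh
price_2020 = 140.0   # $/kWh
years = 2020 - 2013
annual_factor = (price_2020 / price_2013) ** (1 / years)
print(f"Average annual price decline: {(1 - annual_factor) * 100:.0f}%")
# About 20 percent per year, which makes $100/kWh by 2023 look plausible.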

Second, shipping companies are now required to tackle their carbon footprints. Cargo ships account for nearly 3 percent of annual greenhouse gas emissions, according to the International Maritime Organization, the United Nations body that regulates the industry. In 2018, the IMO agreed to reduce shipping emissions by 50 percent from 2008 levels by 2050—a target that is spurring investment in not only batteries but also cleaner-burning fuels like hydrogen and ammonia.

First-mover projects like the e5 tanker are needed to develop technologies and infrastructure that can eventually scale for larger, longer-distance vessels, said Narve Mjøs, director of the Green Shipping Programme for DNV GL, an international consultancy in Oslo.

“Here in Norway, most of the green technologies and fuels have first been used between our islands and in our fjords,” he said. “But it’s important that these technologies can take the steps toward short-sea and deep-sea shipping,” he added, referring to two sectors with much higher energy requirements.

Mjøs said he believes eventually every ship will have some type of battery system—either to propel the vessel while at sea, or to keep the ship’s lights and equipment running while at berth. But ocean-crossing cargo ships will probably never be only powered by batteries. To sail for days or weeks without recharging, a ship would have to carry so many batteries there’d be no room left for cargo, he said.

That’s why companies like Corvus are expanding their focus. On 1 February, Corvus announced it would begin developing “large scale” hydrogen fuel cell systems for ships, which it will pair with its lithium-ion batteries. (Put simply, fuel cell modules convert chemical energy into electrical energy without burning the fuel.) The company plans to showcase its first combined system by 2023.

“Corvus is definitely interested in pushing the boundary on how applicable we can make battery technology,” Puchalski said. “But where the range of the ship is too far, or is not practical for battery-only, we’ll add the fuel cell.”

Increasing Energy Inefficiency

Post Syndicated from Vaclav Smil original https://spectrum.ieee.org/energy/policy/increasing-energy-inefficiency

Perhaps the most celebrated graphic image of all time was published in 1869 by Charles Joseph Minard, a French civil engineer. He traced the advance of Napoleon's army into Russia and its retreat from 1812 to 1813 by a sequence of thinning bands representing the total number of men. Four hundred twenty-two thousand soldiers crossed eastward into Russia, 100,000 reached Moscow, and 10,000 crossed the Neman River westward to Prussia, at which point the Grande Armée had lost 97.6 percent of its initial force.

A similar graphic technique was employed by a committee of Britain’s Institution of Civil Engineers in its 1897–98 report on thermal efficiency in steam engines. The graphic illustrated the operation of the Louisville-Leavitt pumping steam engine by beginning with the combustion of coal on the boiler’s grate, producing 193,708 kilojoules (183,600 British thermal units) per minute, continuing with the 149,976 kJ per minute that actually reached the engine, and ending with the effective work (brake horsepower) of 26,788 kJ per minute, for an overall efficiency of only 13.8 percent. Soon this representation became known as a Sankey diagram, after Matthew Henry Phineas Riall Sankey, the honorary chairman of the committee.
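
The committee's figures chain together exactly as the diagram's thinning bands do; a quick check of the stage efficiencies (our arithmetic, their numbers):

# Stage-by-stage efficiency of the pumping engine, from the figures above
# (all values in kilojoules per minute).
coal_input = 193_708
reaching_engine = 149_976
effective_work = 26_788
boiler_eff = reaching_engine / coal_input      # ~77.4% survives the boiler
engine_eff = effective_work / reaching_engine  # ~17.9% becomes brake work
overall_eff = effective_work / coal_input      # ~13.8%, as the report states
print(f"boiler {boiler_eff:.1%}, engine {engine_eff:.1%}, overall {overall_eff:.1%}")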

One of the most revealing uses of Sankey diagrams is to trace national energy flows, starting on the left with all primary energy inputs—all fossil fuels and biofuels, together with electricity generated from hydro, nuclear, wind, solar, and geothermal sources—and ending on the right with actual energy services (industrial heat, kinetic and chemical energies, residential and commercial heating and air-conditioning, all forms of transportation). A set of these graphs is available for the United States for 1950, 1960, 1970, and then for every year from 1978 through 2019; they can be downloaded from two Lawrence Livermore National Laboratory websites. The latest Sankey diagram, for 2019, shows that the nation's useful energies (energy services) added up to 32.6 percent of the total primary energy input, a considerably poorer performance than in 1950, when the overall mean was 50.8 percent!

Two realities explain this retrogression. First, transportation has taken a larger share of the energy budget. The average efficiency of car engines has been improving since the mid-1970s, and flying has seen even more impressive efficiency gains per passenger-kilometer. However, rising car ownership, heavier vehicles, much more frequent flying, and longer distances traveled per year per capita explain the sector’s higher share of final energy use (37 percent in 2019, 30 percent in 1950) and its slight efficiency drop from 26 percent to 21 percent during the past 70 years.

The second reality is the decline in the average conversion efficiency of residential and commercial energy use, from about 70 percent to 65 percent, as the gains from more efficient heating have been more than erased by the mass adoption of air-conditioning. Electricity for air-conditioning comes mostly from fossil-fuel-powered plants with their considerable inherent conversion losses: In 2019, the average efficiency of U.S. electricity-generating coal-fired plants was about 32 percent and that of the now-dominant gas-fired stations 44 percent.

The decline of average conversion efficiency has been much more pronounced in the industrial sector, from 70 percent to 49 percent, which is explained largely by the sector's ongoing electrification (which displaced former direct fuel uses) and by the expansion of electricity-intensive manufacturing. This is a common paradox that has accompanied improved design and higher efficiency of individual energy converters: Even as their specific performance gets better, the overall performance gets worse. The United States is now wasting significantly more energy than it did a lifetime ago. About two-thirds of the total primary input goes directly into heating the universe without first performing any useful work, and only a third provides desired energy services, while in 1950 it was a 50/50 split. Another example of progressing by regressing.
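
The paradox described here, where each converter improves while the overall average worsens, is a composition effect: the mix shifts toward less efficient uses. A toy model with invented illustrative numbers (not LLNL's data) shows how that can happen:

# Toy composition effect: both sectors' efficiencies rise, but demand
# shifts toward the less efficient sector, so the overall average falls.
# All shares and efficiencies here are invented for illustration only.
sectors_1950 = {"heating": (0.70, 0.70), "transport": (0.30, 0.26)}  # (share, eff)
sectors_2019 = {"heating": (0.45, 0.75), "transport": (0.55, 0.30)}  # both effs up

def overall_efficiency(sectors):
    return sum(share * eff for share, eff in sectors.values())

print(f"1950 overall: {overall_efficiency(sectors_1950):.1%}")  # about 57%
print(f"2019 overall: {overall_efficiency(sectors_2019):.1%}")  # about 50%, lower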

This article appears in the February 2021 print issue as “Energy-Conversion Efficiency Is Falling.”

Bright X-Rays, AI, and Robotic Labs—A Roadmap for Better Batteries

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/energy/batteries-storage/bright-xrays-ai-and-robotic-labsa-roadmap-for-better-batteries

Steven Cherry Hi, this is Steven Cherry for Radio Spectrum.

Batteries have come a long way. The cells that used to power flashlights and toys, Timex watches and Sony Walkmans, are now found in everything from phones and laptops to cars and planes.

Batteries all work the same way: Chemical energy is converted to electrical energy by creating a flow of electrons from one material to another; that flow of electrons is the electrical current.

Yet batteries are also wildly different, both because the light bulb in a flashlight and the engine in a Tesla have different needs, and because battery technology keeps improving as researchers fiddle with every part of the system: the two chemistries that make up the anode and the cathode, and the electrolyte and how the ions pass through it from one to the other.

A Chinese proverb says, “Give a man a fish, and you feed him for a day. Teach a man to fish, and you feed him for a lifetime.” The Christian Bible says, “follow me and I will make you fishers of men.”

In other words, a more engineering-oriented proverb would say, “let’s create a lab and develop techniques for measuring the efficacy of different fishing rods, which will help us develop different rods for different bodies of water and different species of fish.”

The Argonne National Laboratory is one such lab. There, under the leadership of Venkat Srinivasan, director of its Collaborative Center for Energy Storage Science, a team of scientists has developed a quiver of techniques for precisely measuring the velocity and behavior of ions and comparing them to mathematical models of battery designs.

Venkat Srinivasan is also deputy director of Argonne's Joint Center for Energy Storage Research, a national program that looks beyond the current generation of lithium–ion batteries. He was previously a staff scientist at Lawrence Berkeley National Laboratory, wrote a popular blog, "This Week in Batteries," and is my guest today via Teams.

Venkat, welcome to the podcast.

Venkat Srinivasan Thank you so much. I appreciate the time. I always love talking about batteries, so it’d be great to have this conversation.

Steven Cherry I think I gave about as simplistic a description of batteries as one could give. Maybe we could start with: What are the main battery types today, and why is one better than another for a given application?

Venkat Srinivasan So, Steve, there are two kinds of batteries that I think all of us use in our daily lives. One of them is a primary battery. The ones that you don’t recharge. So a common one is something that you might be putting in your children’s toys or something like that.

The second kind, which I think is the one that is sort of powering everything that we think of, things like electric cars and grid storage, is rechargeable batteries. These are the ones where we have to go back and charge them again. So let's talk a little bit more about rechargeable batteries; there are a number of them sitting somewhere in the world. You have lead–acid batteries that are sitting in your car today. They've been sitting there for the last 30, 40 years, where they're used to start the car and for lighting the car up when the engine is not on. This is something that will continue to be in our cars for quite some time.

You're also seeing lithium–ion batteries that are now powering the car itself. Instead of having an internal combustion engine and gasoline, you're seeing more pure electric vehicles coming out that have lithium–ion batteries. And then the third kind of battery, which we sort of don't see but have in different places, are nickel–cadmium and nickel–metal-hydride batteries. These are kind of going away slowly. But the Toyota Prius is a great example of a nickel–metal-hydride hybrid. Many people still drive Priuses—I have one—that still have nickel–metal-hydride batteries in them. These are some of the classes of materials that are more common. But there are others, like flow batteries, that people probably haven't really thought about and haven't seen, which are being researched quite a bit; there are companies that are trying to install flow batteries for grid storage, which are also rechargeable batteries of a different type.

The most prevalent of these is lithium–ion; that's the chemistry that has completely changed electric vehicle transportation. It's changed the way we speak on our phones. The iPhone would not be possible if not for the lithium–ion battery. It's the battery that has pretty much revolutionized all of transportation. And it's the reason why the Nobel Prize two years ago went to the developers of the lithium–ion battery, for the discovery and ultimately the commercialization of the technology—it's because it had such a wide impact.

Steven Cherry I gather that remarkably, we’ve designed all these different batteries and can power a cell phone for a full day and power a car from New York to Boston without fully understanding the chemistry involved. I’m going to offer a comparison and I’d like you to say whether it’s accurate or not.

We developed vaccines for smallpox beginning in 1798; we ended smallpox as a threat to humanity—all without understanding the actual mechanisms at the genetic level or even the cellular level by which the vaccine confers immunity. But the coronavirus vaccines we’re now deploying were developed in record time because we were able to study the virus and how it interacts with human organs at those deeper levels. And the comparison here is that with these new techniques developed at Argonne and elsewhere, we can finally understand battery chemistry at the most fundamental level.

Venkat Srinivasan That is absolutely correct. If you go back in time and ask yourself, what about batteries like the lead–acid batteries and the nickel–cadmium batteries—did we invent them in some systematic fashion? Well, I guess not, right?

Certainly, once the materials were discovered, there was a lot of innovation that went into them, using what were the state-of-the-art techniques at that time, to make them better and better and better. But to a large extent, the story that you just told about the smallpox vaccine is probably very similar to the kinds of things that happened with the older battery chemistries.

The world has changed now. If you look at the kinds of things we are doing today, like you said, there are a variety of techniques, both experimental and mathematical, meaning computer simulations have now come to our aid, and we're able to gain a deeper understanding of how batteries behave and then use that to discover new materials—first, maybe on a computer, but certainly in the lab at some point. So this is something that is also happening in the battery world. The kinds of innovations you are seeing now with COVID vaccines are the kinds of things we are seeing happen in the battery world in terms of discovering the next big breakthrough.

Steven Cherry So I gather the main technology you're using now is ultrabright X-rays, and you're using them to measure for the first time how much of the current the lithium ions carry, something known as the transport number. Let's start with the X-rays.

Venkat Srinivasan It used to be that we would cycle the battery, and things would happen to it. We then had to open up the battery and see what happened on the inside. And as you can imagine, when you open up a battery, you hope that nothing changes by the time you take it to your experimental technique of choice to look at what's happening on the inside. But oftentimes things change. So what you have inside the battery during its operation may not be the same as what you're probing when you open up the cell. So a trend that's been going on for some time now is to say, well, maybe we should be thinking about in situ or operando methods, meaning inside the battery's environment during operation, trying to find more information in the cell.

Typically, all battery people will do is send a current into the battery and then measure the potential, or vice versa. That's a common thing that's done. So what we are trying to do now is one more thing on top of that: Can we probe something on the inside without opening up the cell? X-rays come into play because they are extremely powerful light; they can go through the battery casing, go into the cell, and you can actually start seeing things inside the battery itself during operando operation, meaning you can pass current, keep the battery in the environment you want it to be in, and send in the X-ray beam and see what's happening on the inside.

So this is a trend that we've been slowly exploring, going back a decade. And a decade ago, we probably did not have the resolution to be able to see things at a very minute scale. So we were seeing maybe a few tens of microns of what was happening in these batteries. Maybe we were measuring things once every minute or so. But we're slowly getting better and better; we're making the resolution tighter, meaning we can see smaller features, and we are trying to improve the time resolution so that we can see things faster and faster. So that trend is something that is helping us, and will continue to help us, make batteries better.

Steven Cherry So if I could push my comparison a little further, we developed the COVID vaccines in record time and with stunning efficiency. I mean, 95 percent effective right out of the gate. Will this new ability to look inside the battery while it's in operation create new generations of better batteries in record time?

Venkat Srinivasan That will be the hope. And I do want to bring in two aspects that I think work complementarily with each other. One is the experimental techniques like X-rays—and we should not forget that there are non-X-ray techniques also that give us information that can be crucially important. But along with that, there has been this revolution in computing that has really come to the forefront in the last five to 10 years. What this computing revolution means is that basically, because computers are getting more and more powerful and computing resources are getting cheaper, we are able to start to calculate all sorts of things on computers. For example, we can calculate how much lithium a material can hold—without actually having to go into the lab. And we can do this in a high-throughput fashion: screen a variety of materials and start to see which of these looks the most promising. Similarly, we can do the same thing to ask: Can we find ion conductors for, say, solid-state battery materials, using the same techniques?

Now, once you have these kinds of materials in play and you do them very, very fast using computers, you can start to think about how do you combine them with these X-ray techniques. So you could imagine that you’re finding a material on the computer. You’re trying to synthesize them and during the synthesis you try to watch and see, are you making the material you were predicting or did something happen during synthesis where you were not able to make the particular material?

And using this complementary way of looking at things, I think in the next five to 10 years you're going to see this amazing acceleration of materials discovery between the computing and the X-ray sources and other experimental methods. We're going to see this incredible acceleration in terms of finding new things. You know, the big trick in materials—and this is certainly true for battery materials—is that if you look at a thousand materials, maybe one of them looks interesting. So the job here is to cycle through those thousand as quickly as possible to find that one nugget that can be exciting. And so what we're seeing now with computing and with these X-rays is the ability to cycle through many materials very quickly, so that we can start to pin down which one among those thousand looks the most promising, and then spend a lot more resources and time on it.
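
As a cartoon of that thousand-to-one funnel, here is a minimal high-throughput screening loop; the capacity values and the cutoff are invented placeholders, not Argonne's actual criteria:

# Minimal sketch of high-throughput screening: score many candidates with a
# cheap computed property, keep only the best few for expensive lab work.
# Capacities and the cutoff are invented placeholders.
import random

random.seed(42)
# Pretend these capacities (mAh/g) came from fast computer simulations.
candidates = {f"material_{i}": random.uniform(100, 400) for i in range(1000)}

CUTOFF = 390  # assumed threshold worth taking into the lab
shortlist = {name: cap for name, cap in candidates.items() if cap > CUTOFF}
print(f"{len(shortlist)} of {len(candidates)} candidates pass the screen")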

Steven Cherry We’ve been relying on lithium–ion for quite a while. It was first developed in 1985 and first used commercially by Sony in 1991. These batteries are somewhat infamous for occasionally exploding in phones and laptops and living rooms and on airplanes and even in the airplanes themselves in the case of the Boeing 787. Do you think this research will lead to safer batteries?

Venkat Srinivasan Absolutely. The first thing I should clarify is that the lithium–ion from the 1990s is not the same lithium–ion we use today. There have been many generations of materials that have changed over time; they've gotten better; the energy density has actually gone up by a factor of three in those twenty-five years, and there's a chance that it's going to continue to go up by another factor of two in the next decade or so. The reality is that when we use the word lithium–ion, we're actually talking about a variety of material classes that go into the anodes, the cathodes, and the electrolytes that make up the lithium–ion batteries. So the first thing to notice is that these materials are changing continuously. What the new techniques are bringing is a way for us to push the boundaries of lithium–ion, meaning there is still a lot of room left for lithium–ion to get better, and these new techniques are allowing us to invent the next generation of cathode materials, anode materials, and electrolytes that could be used in the system to continue to push on things like energy density, fast-charge capability, cycle life. These are the kinds of big problems we're worried about. So these techniques are certainly going to allow us to get there.

There is another important thing to think about for lithium–ion, which is recyclability. I think it’s been pretty clear that as the market for batteries starts to go up, they’re going to have a lot of batteries that are going to reach end-of-life at some stage and we do not want to throw them away. We want to take out the precious metals in them, the ones that we think are going to be useful for the next generation of batteries. And we want to make sure we dispose of them in a very sort of a safe and efficient manner for the environment. So I think that is also an area of R&D that’s going to be enabled by these kinds of techniques.

The last thing I'd say is that we're thinking hard about systems that go beyond lithium–ion, things like solid-state batteries, things like magnesium-based batteries … And for those kinds of chemistries, we really feel like taking these modern techniques and putting them into play is going to accelerate the development time frame. So you mentioned 1985 and 1991; lithium–ion battery research started in the 1950s and 60s, and it took decades before we could get to a stage where Sony could actually go and commercialize it. And we think we can accelerate the timeline pretty significantly for things like solid-state batteries or magnesium-based batteries because of all the modern techniques.

Steven Cherry Charging time is also a big area for potential improvement, especially in electric cars, which still only have a driving range that maybe gets to 400 kilometers, in practical terms. Will we be getting to the point where we can recharge in the time it takes to get a White Chocolate Gingerbread Frappuccino at Starbucks?

Venkat Srinivasan That's the dream. So Argonne actually leads a project for the Department of Energy, working with multiple other national labs, on enabling 10-minute charging of batteries. I will say that in the last two or three years, there's been tremendous progress in this area. Instead of a 45-minute or one-hour charge being considered a fast charge, we now feel there is a possibility of getting under 30 minutes of charging. These approaches still have to be proven out. They have to be implemented at large scale. But more and more we are learning—there is a lot of work happening at the Advanced Photon Source looking at fast charging of batteries, trying to understand the phenomena that are stopping us from charging very fast. These same techniques are allowing us to think about how to solve the problem.

And I'll take a bet that in the next five years, we'll start to look at 10-minute charging as something that is going to be possible. Three or four years ago, I would not have said that. But in the next five years, I think we are going to start saying, hey, you know, I think there are ways in which you can start to get to this kind of charging time. Certainly it's a big challenge. It's not just a challenge on the battery side; it's a challenge in how we are going to get the electricity to reach the electric car. I mean, there's going to be a problem there. There's a lot of heat generation that happens in these systems. We've got to find a way to pull it out. So there's a lot of challenges that we have to solve. But I think these techniques are slowly giving us answers to why it is a problem to begin with, and allowing us to start to test various hypotheses to find ways to solve the problem.
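
The grid-side challenge mentioned here is easy to see with rough numbers; the pack size and charger efficiency below are our illustrative assumptions, not figures from the interview:

# Rough charging-power estimate for a 10-minute fast charge.
pack_kwh = 80.0            # assumed EV pack size
charge_hours = 10 / 60     # a 10-minute charge
charger_efficiency = 0.95  # assumed; the losses are the heat he mentions
power_kw = pack_kwh / charge_hours / charger_efficiency
print(f"Required charging power: about {power_kw:.0f} kW per car")
# Roughly half a megawatt, which is why delivery and cooling are hard.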

Steven Cherry The last area where I think people are looking for dramatic improvement is weight and bulk. It’s important in our cell phones and it’s also important in electric cars.

Venkat Srinivasan Yeah, absolutely. So frankly, it's not just in electric cars. At Argonne we're starting to think about light-duty vehicles, which are our passenger cars, but also heavy-duty vehicles. Right? I mean, what happens when you start to think about trucking across the country carrying a heavy payload? We are trying to think hard about aviation, about marine, and about rail. As you start to get to these kinds of applications, the energy density requirement goes up dramatically.

I'll give you some numbers. If you look at today's lithium–ion batteries at the pack level, the energy density is approximately 180 watt-hours per kilogram, give or take. Depending on the company, that could be a little bit higher or lower, but approximately 180 Wh/kg. If we look at a 737 going across the country or a significant distance carrying a number of passengers, the kinds of energy density you would need are upwards of 800 Wh/kg. So just to give you a sense for that: We said it's 180 for today's lithium–ion. We're talking about four to five times the energy density of today's lithium–ion before we can start to think about electric aviation. So energy density, both gravimetric and volumetric, is going to be extremely important in the future. Much of the R&D that we are doing is trying to discover materials that allow us to increase energy density. The hope is that you will increase energy density, make the battery charge very fast, and get it to last very long, all simultaneously. That tends to be a big deal, because it is all about compromising among these different competing metrics—cycle life, calendar life, cost, safety, performance—all of them tend to play against each other. But the big hope is that we are able to improve the energy density without compromising on these other metrics. That's kind of the big focus of the R&D that's going on worldwide, but certainly at Argonne.
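
The 737 comparison reduces to a simple ratio, which matches the "four to five times" figure:

# The gap between today's packs and electric aviation, per the numbers above
today_wh_per_kg = 180      # today's lithium-ion packs, pack level
aviation_wh_per_kg = 800   # rough requirement for a 737-class mission
improvement = aviation_wh_per_kg / today_wh_per_kg
print(f"Needed energy-density improvement: about {improvement:.1f}x")  # ~4.4x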

Steven Cherry I gather there's also a new business model for conducting this research, a nonprofit organization that brings corporate, government, and academic research all under one aegis. Tell us about CalCharge.

Venkat Srinivasan Yeah, if you kind of think about the battery world (and this is true for many of these hard technologies, the cleantech or greentech as people have come to call them), there is a lot of innovation that is needed, which means the in-lab R&D, the kinds of techniques and models that we're talking about, is crucially important. But it's also important for us to find a way to take them to market, meaning you have to be able to take that lab innovation, you've got to be able to manufacture it, and you've got to get it into the hands of, say, a car company that's going to test it and ultimately qualify it and then integrate it into the vehicle.

So this is a long road to go from lab to market. And the traditional way you've thought about this is you want to throw it across the fence. So, say we at Argonne National Lab invent something, and then we throw it across the fence to industry, and then you hope that industry takes it from there, runs with it, and solves the problems. That tends to be an extremely inefficient process. That's because oftentimes where a national lab might stop is not enough for an industry to run with it—there are multiple gaps that show up. There are problems that show up when you integrate these devices with a company's existing components, problems that show up when you get up to manufacturing at a larger scale, and problems that show up when you make a pack with it. And oftentimes the solution to these problems goes back to the material. So the fundamental principle that I and many others have started thinking about is that you do not want to keep the R&D, the manufacturing, and the market separate. You have to find a way to connect them up.

And if you connect them up very closely, then the market starts to drive the R&D, the R&D innovation starts to get the people in the manufacturing world excited, and there is this close connection among all of these three things that makes everything go faster and faster. We've seen this in other industries, and it certainly will be true in the battery world. So we've been trying very, very hard to enable these kinds of what I would call public–private partnerships, ways in which we, the public, meaning the national lab systems, can start to interact with the private companies and find ways to move this along. So this is a concept that I and a few others have been thinking about for quite some time. Before I moved to Argonne, I was at Lawrence Berkeley. And the Bay Area has a very rich ecosystem of battery companies, especially startup companies.

So I created this entity called CalCharge, which was a way to connect up the local ecosystem in the San Francisco Bay Area to the national labs in the area—Lawrence Berkeley, SLAC, and Sandia National Labs in Livermore. So those are the three that were connected. And the idea behind this is: How do we take the national lab facilities, the people, and the amazing brains that they have, and use them to start to solve some of the problems that industry is facing? And how do we take the IP that is sitting in the lab and move it to market using these startups, so that we can continuously work with each other, make sure that we don't have these valleys of death, as we've come to call them, when we move from lab to market, and try to accelerate that? I've been doing very similar things at Argonne in the last four years, thinking hard about how you do this, but on a national scale.

So we've been working closely with the Department of Energy, working with various entities both in the Chicagoland area and in the wider U.S. community, to start to think about enabling these kinds of ecosystems where national labs like ours and others across the country—there are 17 Department of Energy national labs, and maybe a dozen of them have expertise that can be used for the battery world—can connect up. And the local universities in different parts of the country with amazing expertise—how do you connect them up to these startups, the big companies, the manufacturers, the car companies that are coming in, but also the material companies, the companies that are providing lithium, from a supply-chain perspective? So my dream is that we would have this big ecosystem of everybody talking to each other, finding ways to leverage each other, and ultimately making this technology something that can reach the market as quickly as possible.

Steven Cherry And right now, who is waiting on whom? Is there enough new research that it's up to the corporations to do something with it? Or are they looking for specific improvements that they need to wait for you to make?

Venkat Srinivasan All of the above. There is probably quite a bit of R&D going on that industry is not aware of, and that tends to be a big problem—there's a visibility problem when it comes to the kinds of things that are going on in the national labs and the academic world. There are also things where we are not aware of the problems that industry is facing. And I think these kinds of disconnects, where sometimes a lack of awareness keeps things from happening fast, are what we need to solve. And the more connections we have, the more interactions we have, the more conversations we have with each other, the more the exposure increases. And when the exposure increases, we have a better chance of being able to solve these kinds of problems, where a lack of information stops us from getting the kinds of innovation that we could get.

Steven Cherry And at your end, at the research end, I gather one immediate improvement you’re looking to make is the brightness of the X-rays. Is there anything else that we should look forward to?

Venkat Srinivasan Yeah, there are a couple of things that I think are very important. The first one is the brightness of the X-rays. There's an upgrade coming for the Advanced Photon Source that's going to change the time resolution with which we can see these batteries. So, for example, when you're charging the batteries very fast, you can get data very quickly. So that's going to be super important. The second one is you can also start to see features that are even smaller than the kinds of features we see today. So that's the first big thing.

The second thing, which is connected to that, is artificial intelligence and machine learning, which is permeating all forms of research, including battery research. We use AI and ML for all sorts of things. But one thing we've been thinking about is how to connect AI and ML to the kinds of X-ray techniques we've been using. So, for example, instead of looking all over the battery to see if there is a problem, can we use signatures of where the problems could be occurring, so that these machine learning tools can quickly go in and identify the spot where things could be going wrong? Then you can spend all your time and energy taking data at that particular spot, so that, again, we're being very efficient with the time that we have and making sure we're catching the problems we have to catch. So I think the next big thing is this whole artificial-intelligence-and-machine-learning approach that is going to be integral for us in the battery discovery world.
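
One simple version of the idea, flagging anomalous spots in an X-ray map so detailed beam time goes where the trouble is, might look like the sketch below; this is a generic statistical filter, not Argonne's actual machine-learning pipeline:

# Minimal sketch: flag anomalous pixels in a 2-D X-ray intensity map so
# follow-up measurements can focus there. Generic z-score statistics,
# not the actual Argonne ML pipeline; the "defect" is injected by hand.
import numpy as np

rng = np.random.default_rng(0)
intensity = rng.normal(1.0, 0.05, size=(64, 64))  # stand-in for a scan
intensity[40:43, 10:13] += 0.5                    # injected anomaly

z_scores = (intensity - intensity.mean()) / intensity.std()
hotspots = np.argwhere(z_scores > 4)              # assumed threshold
print(f"{len(hotspots)} pixels flagged for follow-up")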

The last thing, which is an emerging trend, is what are called automated labs or self-driving labs. The idea behind this is that instead of a human being synthesizing a material starting in the morning and finishing in the evening, then characterizing it the next day, finding out what happened, and then going back and trying the next material, could we start to do this using robotics? This has been a trend for a while now. But where things are heading is that more and more, robots can start to do things that a human being could do. So you could imagine robots synthesizing electrolyte molecules, mixing them up, testing for conductivity, and trying to see if the conductivity is higher than the one you had before. If it's not, they go back and iterate, finding a new molecule based on the previous results, so that you can efficiently find an electrolyte that is more conductive than your baseline. Robots work 24/7, which makes this a very, very useful way for us to think about innovating. Robots generate a lot of data, which we now know how to handle because of all the machine learning tools we've been developing in the last three, four, five years. So all of a sudden, the synergy, the intersection between machine learning, the ability to analyze a lot of data, and robotics is starting to come into play. And I think we're going to see that open up new ways to discover materials in a rapid fashion.
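
And a cartoon of the closed loop described here, in which the robot measures a candidate, keeps it if it beats the best so far, and iterates; the "measurement" is a stand-in function, not real chemistry:

# Cartoon of a self-driving lab: propose a candidate, "measure" it, keep
# improvements, iterate. measure_conductivity stands in for a robotic
# experiment; the peak location and noise are invented.
import random

random.seed(1)

def measure_conductivity(additive_fraction):
    # Pretend conductivity peaks at an unknown additive fraction of 0.31.
    return 1.0 - (additive_fraction - 0.31) ** 2 + random.gauss(0, 0.01)

best_x = 0.5
best_y = measure_conductivity(best_x)
for _ in range(50):  # robots can run this loop 24/7
    x = min(max(best_x + random.gauss(0, 0.05), 0.0), 1.0)
    y = measure_conductivity(x)
    if y > best_y:   # keep the improvement and search around it next
        best_x, best_y = x, y
print(f"Best additive fraction found: {best_x:.2f}")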

Steven Cherry Well, Venkat, if you will forgive a rather obvious pun, the future of battery technology seems bright. And I wish you and your colleagues at Argonne and CalCharge every success. Thank you for your role in this research and for being here today.

Venkat Srinivasan Thank you so much. I appreciate the time you’ve taken to ask me these questions.

We’ve been speaking with Venkat Srinivasan of Argonne National Lab about a newfound ability to study batteries at the molecular level and about improvements that might result from it.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers, a professional organization dedicated to advancing technology for the benefit of humanity.

This interview was recorded January 6, 2021, using Adobe Audition and edited in Audacity. Our theme music is by Chad Crouch.

You can subscribe to Radio Spectrum on the Spectrum website, where you can also sign up for alerts, or on Spotify, Apple, Google—wherever you get your podcasts. We welcome your feedback on the web or in social media.

For Radio Spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.

We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

See Also:

Battery of tests: Scientists figure out how to track what happens inside batteries

Concentration and velocity profiles in a polymeric lithium-ion battery electrolyte

Carbon Engineering’s Tech Will Suck Carbon From the Sky

Post Syndicated from Maria Gallucci original https://spectrum.ieee.org/energy/fossil-fuels/carbon-engineerings-tech-will-suck-carbon-from-the-sky


West Texas is a hydrocarbon hot spot, with thousands of wells pumping millions of barrels of oil and billions of cubic feet of natural gas from the Permian Basin. When burned, all that oil and gas will release vast amounts of greenhouse gases into the atmosphere.

A new facility there aims to do the opposite. Rows of giant fans spread across a flat, arid field will pull carbon dioxide from the air and then pump it deep underground. When completed, the project could capture 1 million metric tons of carbon dioxide per year, doing the air-scrubbing work of some 40 million trees.
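A quick back-of-the-envelope check of that comparison, using our own arithmetic rather than the company’s stated methodology: dividing the plant’s annual capture by the tree count implies roughly 25 kilograms of CO2 per tree per year, in line with common estimates for a mature tree.

```python
# Sanity-checking the "40 million trees" figure (illustrative arithmetic only):
plant_tonnes_per_year = 1_000_000
tree_equivalents = 40_000_000
kg_per_tree_per_year = plant_tonnes_per_year * 1000 / tree_equivalents
print(f"{kg_per_tree_per_year:.0f} kg of CO2 per tree per year")  # -> 25
```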

Canadian firm Carbon Engineering is designing and building this “direct-air capture” facility with 1PointFive, a joint venture between a subsidiary of Occidental Petroleum Corp. and the private equity firm Rusheen Capital Management. Carbon Engineering will devote much of 2021 to front-end engineering and design work in Texas, with construction slated to start the following year and operations by 2024, the partners say. The project is the biggest of its kind in the world and will likely cost hundreds of millions of dollars to develop.

Carbon Engineering is among a handful of companies with major direct-air capture developments underway this year. Zurich-based Climeworks is expanding across Europe, while Dublin’s Silicon Kingdom Holdings plans to install its first CO2-breathing “mechanical tree” in Arizona. Global Thermostat, headquartered in New York City, has three new projects in the works. All the companies say they intend to curb the high cost of capturing carbon by optimizing technology, reducing energy use, and scaling up operations.

The projects arrive as many climate experts warn that current measures to reduce emissions—such as adopting renewable energy and electrifying transportation—are no longer sufficient to avert catastrophe. To limit global warming to 1.5 °C, the world must also use “negative-emission technologies,” according to the United Nations Intergovernmental Panel on Climate Change’s 2018 report.

Global CO2 emissions from fossil fuels reached 33 billion metric tons in 2019. Existing direct-air capture projects would eliminate a tiny fraction of that total, and not all of the captured CO2 is expected to be permanently sequestered. Some of it will likely return to the atmosphere when used in synthetic fuels or other products. Companies say the goal is to continuously capture and “recycle” the greenhouse gas to avoid creating new emissions, while also generating revenue that can fund the technology.

Carbon removal can help compensate for sectors that are difficult to decarbonize, such as agriculture, cement making, and aviation, says Jennifer Wilcox, a chemical engineer and senior fellow at the World Resources Institute. “The climate models are saying clearly that if we don’t do carbon removal in addition to avoiding emissions, we will not reach our climate goals.”

Carbon Engineering’s plant in Texas will use banks of fans, each about 8.5 meters in diameter, to draw air into a large structure called a contactor. The air is pushed through a plastic mesh coated with a potassium hydroxide solution, which binds with the carbon dioxide. A series of chemical processes concentrates the CO2 into tiny white pellets, which are then heated to 900 °C to release the carbon dioxide as a gas. Steve Oldham, CEO of Carbon Engineering, likens the plant to a refinery that produces chemicals at an industrial scale. “That’s the type of capability we’re going to need, to make a material impact on climate change,” he says.

At its pilot plant in British Columbia, Carbon Engineering combines the pure CO2 with hydrogen to produce synthetic crude oil. The facility can capture 1 metric ton of carbon dioxide per day; by comparison, the Texas operation is expected to capture over 2,700 metric tons daily. At the larger site, the captured gas will be injected into older oil wells, both sequestering the CO2 underground and forcing up any remaining oil. In addition to the work in Texas, the company is scaling up its Canadian operations, Oldham says. In 2021, it will open a new business and advanced-development center and expand research operations; the new facility will capture up to 4 metric tons of CO2 per day from the air.
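The daily figure follows directly from the annual target; here is a quick check using only the article’s own numbers.

```python
# How "over 2,700 metric tons daily" falls out of the 1-million-ton annual
# target (illustrative arithmetic only):
annual_tonnes = 1_000_000        # Texas plant's planned capture per year
pilot_daily_tonnes = 1           # British Columbia pilot, tons per day
daily_tonnes = annual_tonnes / 365
print(round(daily_tonnes))                        # -> 2740
print(round(daily_tonnes / pilot_daily_tonnes))   # ~2,740x the pilot's scale
```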

Other direct-air capture firms are opting for a modular approach. Climeworks’ carbon collectors can be stacked to build facilities of any size. The system also uses fans, but the air runs over a solid filter material. Once saturated with CO2, the filter is heated to between 80 and 100 °C, releasing highly concentrated CO2 gas, which can be used in various ways.

For example, at Climeworks’ pilot site in Iceland—which is powered by geothermal energy—the company’s partner Carbfix reacts the concentrated CO2 with basaltic rock to lock it below ground. The site is now being expanded to capture 4,000 metric tons of carbon dioxide a year; it should be operational in the first half of 2021, says Daniel Egger, head of marketing and sales for Climeworks. The CO2 could also be used to make a more sustainable form of jet fuel; Climeworks is seeking financing for two CO2-to-fuel projects in Norway and the Netherlands.

Meanwhile, the company will continue working with the online-payments company Stripe and the e-commerce platform Shopify. To cancel their carbon footprints, the two companies have committed to purchasing carbon credits from Climeworks, reflecting the amount of CO2 that Climeworks has removed from the air. Major tech firms in general are investing in carbon-reducing schemes to help meet their corporate environmental goals. Microsoft has pledged to be carbon negative by 2030 and to spend $1 billion to accelerate the development of technology for carbon reduction and removal.

“For all these companies that have targets to bring their emissions to ‘net zero,’ technologies like ours are absolutely needed,” Egger says.

Global energy giants are also backing direct-air capture to undo some of the damage caused by their products and operations. In September, for instance, ExxonMobil expanded an agreement with Global Thermostat to help scale the startup’s technology. Global Thermostat’s machines are the size of a shipping container and capture CO2 using amine-based adsorbents on honeycombed ceramic cubes, akin to a car’s catalytic converter.

Cofounder Peter Eisenberger, a professor of Earth and environmental science at Columbia University, says Global Thermostat’s goal is to remove billions of tons of carbon dioxide every year by licensing its technology to other firms. He believes the world will have to remove 50 billion metric tons of carbon dioxide over the next two decades to avoid catastrophic climate shifts. In 2021, the company will add three pilot projects, including a 2,000-metric-ton plant in Chile to produce synthetic fuels, as well as facilities in Latin America and the Middle East that will provide CO2 for bubbly beverages and water desalination, respectively.

Unlike its peers, Silicon Kingdom Holdings uses a passive system to draw in air. Klaus Lackner, a professor at Arizona State University, developed the company’s mechanical-tree technology. Each tree will have stacks of 150 disks coated in a carbon-adsorbing material; as wind blows over the disks, they trap carbon on their surfaces. The disks are then lowered into a bottom chamber, where an “energy-efficient process” releases the CO2 from the sorbent, says Pól Ó Móráin, CEO of Silicon Kingdom Holdings. The high-purity gas could be sequestered or reused in beverages, cement, fertilizer, or other industrial products. The startup plans to build and operate the first commercial-scale 2.5-meter-tall tree near the ASU campus in Tempe in 2021.

Ó Móráin says a dozen trees can capture 1 metric ton of carbon dioxide daily. The goal is to install carbon farms worldwide, each with up to 120,000 mechanical trees.
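Scaling those two figures up, again using only the numbers in this article, puts a single full-size farm at about 10,000 metric tons per day.

```python
# What a full "carbon farm" implies if 12 trees capture 1 metric ton per day
# (illustrative arithmetic only):
trees_per_farm = 120_000
tonnes_per_day = trees_per_farm / 12
print(tonnes_per_day, tonnes_per_day * 365)  # -> 10000.0 per day, 3.65 million per year
```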

Wilcox of the World Resources Institute says there’s “no clear winner” among these emerging technologies for capturing carbon. They’re distinct from one another, she notes. “I think we need them all.”

An abridged version of this article appears in the January 2021 print issue as “The Carbon-Sucking Fans of West Texas.”