Bend, roll, twist, scrunch, fold, flex. These are terms we might use to describe a lithe gymnast doing a complex floor routine. But batteries?
Yet these are precisely the words the company Jenax in South Korea wants you to use when talking about its batteries. The Busan-based firm has spent the past few years developing J.Flex, an advanced lithium-ion battery that is ultra-thin, flexible, and rechargeable.
With the arrival of so many wearable gadgets, phones with flexible displays, and other portable gizmos, “we’re now interacting with machines on a different level from what we did before,” says EJ Shin, head of strategic planning at Jenax. “What we’re doing at Jenax is putting batteries into locations where they couldn’t be before.” Her firm demonstrated some of those new possibilities last week at CES 2020 in Las Vegas.
A series of earthquakes left Puerto Rico in the dark this week as power outages swept nearly the entire island. About 80 percent of utility customers had power restored by Friday afternoon, yet authorities warned it could take weeks to stabilize the overall system.
A 6.4-magnitude earthquake rocked the U.S. territory on 7 January following days of seismic activity. Temblors and aftershocks leveled buildings, split streets, and severely damaged the island’s largest power plant, Costa Sur. The blackouts hit a system still reeling from 2017’s Hurricane Maria—which knocked out the entire grid and required $3.2 billion in repairs.
Lithium-sulfur batteries seem to be ideal successors to good old lithium-ion. They could in theory hold up to five times the energy per weight. Their low weight makes them ideal for electric airplanes: firms such as Sion Power and Oxis Energy are starting to test their lithium-sulfur batteries in aircraft. And they would be cheaper given their use of sulfur instead of the costly metals, such as cobalt and nickel, used in cathodes today.
But the technology isn’t yet commercial mainly because of its short life span. The cathode starts falling apart after just 40 to 50 charge cycles.
By designing a novel robust cathode structure, researchers have now made a lithium-sulfur battery that can be recharged several hundred times. The cells have an energy capacity four times that of lithium-ion, which typically holds 150 to 200 watt-hours per kilogram (Wh/kg). If translatable to commercial devices, it could mean a battery that powers a phone for five days without needing to recharge, or quadruples the range of electric cars.
That’s unlikely to happen, since energy capacity drops when cells are strung together into battery packs. But the team still expects a “twofold increase at battery pack level when [the new battery is] introduced to the market,” says Mahdokht Shaibani, a mechanical and aerospace engineer at Australia’s Monash University who led the work published recently in the journal Science Advances.
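The quoted figures imply concrete cell-level numbers. A quick back-of-envelope check, using only the values stated in the article (a minimal sketch, not measured data):

```python
# Cell-level energy density implied by the article's figures (illustrative only).
li_ion_wh_per_kg = (150, 200)   # typical lithium-ion cells, per the article
multiplier = 4                  # reported lithium-sulfur advantage at cell level

li_s_wh_per_kg = tuple(m * multiplier for m in li_ion_wh_per_kg)
print(li_s_wh_per_kg)           # (600, 800) Wh/kg at the cell level

# Once cells are strung into packs, the team expects only a twofold gain.
pack_level_gain = 2
```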
Shaibani likens the sulfur cathode in a lithium-sulfur battery to a hard-working, overtaxed office worker. It can take on a lot, but the job demands cause stress and hurt productivity. In battery terms, during discharge the cathode soaks up a large amount of lithium ions, forming lithium sulfide. But in the process, it swells enormously, and then contracts when the ions leave during battery charging. This repeated volume change breaks down the cathode.
Inside a row of nondescript buildings in the small town of Albany, in northeast Indiana—approximately 1,000 kilometers from the nearest coast—Atlantic salmon are sloshing around in fiberglass tanks.
Only in the past five years has it become possible to raise thousands of healthy fish so far from the shoreline without contaminating millions of gallons of fresh water. A technology called recirculating aquaculture systems (RAS) now allows indoor aquaculture farms to recycle up to 99 percent of the water they use. And the newest generation of these systems will help one biotech company bring its unusual fish to U.S. customers for the first time this year.
For AquaBounty Technologies, which owns and operates the Indiana facility, this technology couldn’t have come at a better time. The company has for decades tried to introduce a transgenic salmon it sells under the brand name AquAdvantage to the U.S. market. In this quest, AquaBounty has lost between US $100 million and $115 million (so far).
In June, the company will harvest its first salmon raised in the United States and intended for sale there. Thanks to modifications that involved splicing genetic material into its salmon from two other species of fish, these salmon grow twice as fast and need 25 percent less food to reach the same weight as salmon raised on other fish farms.
Since AquAdvantage salmon are genetically modified, the company has taken special precautions to reduce the odds that these fish could reproduce in the wild. Raising all the salmon indoors, far away from wild populations, is key to that equation. And that strategy wouldn’t be possible without modern recirculating systems.
But it’s not yet clear whether U.S. consumers will buy AquaBounty’s salmon, or even if stores will sell it. Already Costco, Target, Trader Joe’s, Walmart, Whole Foods, and roughly 80 other North American grocery store chains have said they don’t plan to carry it. As of December, AquaBounty was unable to name any restaurants or stores where customers would be able to buy its salmon.
A 2018 report by Diamond Equity Research, paid for by AquaBounty, estimated potential annual sales of $10 million in the United States. Meanwhile, sales in Canada—where AquAdvantage salmon has been sold since 2017—brought in just $140,371 in the first nine months of 2019.
In late October, the biotech firm Intrexon Corp., which held 38.1 percent of AquaBounty’s shares, sold its entire stake to Virginia-based TS AquaCulture for $21.6 million. Both firms are owned by billionaire biotech investor Randal Kirk.
Eric Hallerman, a fisheries scientist at Virginia Tech who served on the U.S. Food and Drug Administration panel that reviewed AquAdvantage salmon, thinks it deserves a place on the table. “People want to eat more meat. We have to do it efficiently,” Hallerman says. “So, I think this has to be part of that.”
The first generation of recirculating systems, which rolled out in the 1980s and 1990s, largely failed. The filters involved couldn’t remove enough waste to maintain water quality at the indoor aquaculture farms that installed them. “Few [of these systems], if any, are still around,” says Brian Vinci, director of the Freshwater Institute, a program sponsored by a nonprofit called the Conservation Fund that has developed recirculation technology. “The ones that [still exist] grow tilapia—a very hardy species that’s able to handle ‘just okay’ water quality.”
These systems use a series of mechanical and biological filters to remove solid waste, ammonia, and carbon dioxide—all produced by the fish—from the water used on the farm. Sensors monitor temperature, pH, and water levels in every tank and track the oxygen content of the water, which must be replenished before it cycles back through. Alarms alert staff to potential problems.
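The sensing-and-alarm loop described above amounts to threshold checks on each tank. A minimal sketch; the temperature range comes from the article, but every other threshold and all the readings are assumptions for illustration, not AquaBounty’s actual setpoints:

```python
# Hypothetical water-quality check for one RAS tank.
# Only the temperature range comes from the article; the pH and
# dissolved-oxygen limits are ASSUMED for illustration.
LIMITS = {
    "temp_c":     (13.0, 15.0),  # article: water kept between 13 and 15 degrees C
    "ph":         (6.5, 8.0),    # assumed acceptable range
    "oxygen_mgl": (7.0, 12.0),   # assumed dissolved-oxygen range
}

def check_tank(readings):
    """Return a list of alarm messages for out-of-range readings."""
    alarms = []
    for key, (lo, hi) in LIMITS.items():
        value = readings[key]
        if not lo <= value <= hi:
            alarms.append(f"{key}={value} outside [{lo}, {hi}]")
    return alarms

print(check_tank({"temp_c": 16.2, "ph": 7.1, "oxygen_mgl": 8.5}))
# ['temp_c=16.2 outside [13.0, 15.0]']
```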
Like all salmon, AquAdvantage fish begin life as fertilized eggs. In AquaBounty’s case, salmon start out at a hatchery on Prince Edward Island, in Canada, where the company keeps a small breeding stock. Technicians there gently massage female fish to extract eggs and prompt males to expel milt, or semen, which the staff mix together to produce fertilized eggs. Aside from the fish used in breeding, all the other salmon the company produces are sterile females, which cannot reproduce with one another or with wild salmon.
When these eggs become “eyed eggs”—so named because two little black eyes suddenly become visible inside each gelatinous orange blob—the eggs are considered stable enough to transport. At this point, they’re moved from the Prince Edward hatchery to AquaBounty’s Indiana farm, where the company had about 150,000 eyed eggs on site in November.
When the eyed eggs arrive, they’re put onto large trays that hold as many as 10,000 at a time. Then they’re placed into one of two incubation units until they hatch (typically within two weeks) and absorb their yolk sac—at which point the fry are said to be “buttoned up.”
The buttoned-up fry then slide into one of 12 small tanks in a nursery, where they begin eating commercial feed (the same kind used on other fish farms) until they weigh about 5 grams. Then they’re transferred into one of 24 tanks—still in the nursery—until they hit 40 to 50 grams.
At that point, the fish are moved from the nursery to a set of “pre–grow out” tanks, which can hold up to 20,000 fish at a time. Once they reach 300 grams, they’re switched over to a set of six tanks where they grow to about 4.5 kilograms.
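The progression of tanks can be read as a simple stage table keyed on fish weight. This sketch uses the weight cutoffs quoted in the article; the stage labels are shorthand, not AquaBounty’s terminology:

```python
# The growth pipeline described above, as a weight-cutoff table.
# Cutoffs (grams) come from the article; stage names are shorthand.
STAGES = [
    (5,    "nursery (12 small tanks)"),
    (50,   "nursery (24 tanks)"),          # article: 40-50 g cutoff
    (300,  "pre-grow-out tanks"),
    (4500, "grow-out tanks (to ~4.5 kg)"),
]

def stage_for(weight_g):
    """Return the stage a fish of the given weight (grams) belongs in."""
    for cutoff, name in STAGES:
        if weight_g <= cutoff:
            return name
    return "ready for purge and harvest"

print(stage_for(120))   # pre-grow-out tanks
```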
Right before harvest, the fish must spend about six days being purged in specially designed tanks that pump in fresh water. Here the fish are rinsed of any compounds that may have built up in the recirculation system and could spoil the salmon’s flavor.
Then, it’s harvest time. Common methods include electrocution or percussive stunning; AquaBounty isn’t yet sure which technique it will use. AquaBounty’s salmon are ready to harvest just 18 months after they hatch. It can take up to three years for wild salmon to reach a market weight of 4.5 kg.
AquaBounty’s recirculating system cleans and recycles water and monitors conditions throughout every stage of a salmon’s life. Mechanical filters, such as the Hydrotech drum filters, capture fish waste. Biological filters containing bacteria convert ammonia to nitrite, and then change nitrite into nitrate. Water temperature is kept between 13 and 15 °C.
One advance developed at Cornell, adopted by the Freshwater Institute and installed at AquaBounty’s facility, is a “self-cleaning” circular fish tank fitted with strategically placed nozzles, which create a whirlpool effect to mechanically separate waste such as uneaten food. “We get the tank to operate like a teacup or coffee cup, so when you swirl the water, the grounds go to the bottom,” Vinci says.
With its recirculating tech, AquaBounty aims to recycle 95 percent of the water used at its Indiana facility. Any water that can’t be recycled will pass through an on-site water treatment plant and then go into wetlands, according to Dave Conley, AquaBounty’s director of communications.
Even with the newest recirculating tech, Vinci at the Freshwater Institute says there’s still room for improvement. “We do use a lot of sensors, and that is one of the weakest parts of the RAS industry, in my opinion,” Vinci says. “I can’t tell you how many different probes we’ve tried.”
He hopes that the machine-vision technology developed by Aquabyte to count sea lice in coastal fish farms will someday be able to recognize individual fish in indoor aquaculture facilities and monitor their health and well-being. Compared with traditional fish farms, AquaBounty’s salmon live in close quarters—there are more than three times as many fish per cubic meter of water at the Indiana facility as there are in traditional fish farms.
Even so, the AquaBounty farm uses no vaccines, antibiotics, or chemical treatments, Conley says. Eyed eggs are disinfected with iodine upon arrival, and technicians clean and disinfect the tanks and incubator trays between each batch (about every three months). Before a fish leaves the nursery, it’s screened for eight different bacterial, parasitic, and viral diseases.
Rosalind Leggatt, a postdoctoral researcher at Fisheries and Oceans Canada who contributed to the agency’s environmental assessment of AquAdvantage salmon, says the development of recirculating technology has dovetailed nicely with AquaBounty’s plans. “The recirculating systems are advancing every six months,” she says. “They might go hand in hand together.”
Now, AquaBounty must try to win over retailers, restaurateurs, and consumers who have plenty of wild-caught and farm-raised salmon from which to choose. AquaBounty plans to produce about 1,200 metric tons of salmon a year. That’s a tiny fraction of the 351,136 metric tons of salmon imported in 2018 to the United States.
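Those two figures put the company’s planned output in perspective:

```python
# AquaBounty's planned annual output as a share of 2018 U.S. salmon imports,
# using the article's figures.
planned_tons = 1_200
imports_2018_tons = 351_136

share = planned_tons / imports_2018_tons
print(f"{share:.2%}")   # 0.34% of 2018 import volume
```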
To entice customers, AquaBounty is touting the environmental benefits of its salmon. The company’s website even declares it to be “The World’s Most Sustainable Salmon.” The fact that this fish consumes far less feed to reach market weight is part of that story, as is the notion that eating farm-raised salmon preserves wild stocks. Decades of overfishing have landed U.S. wild Atlantic salmon populations on the endangered species list, making it illegal to catch them.
AquaBounty also points out that, for U.S. customers, the carbon emissions generated by the transportation of its salmon will be a fraction (1/25, according to the company) of the emissions produced by transporting Atlantic salmon raised on farms in Norway and Chile to the United States. All wild Atlantic salmon and the vast majority of farm-raised Atlantic salmon consumed in the United States are imported—a condition AquaBounty refers to as the “national salmon deficit.”
However, there’s a smattering of U.S. and Canadian fish farms that raise Atlantic salmon either indoors or along the coasts, and it’s not clear how AquaBounty’s sustainability claims would stack up against these homegrown options—or against wild Alaskan stocks that are sustainably caught, says Bruce Bugbee, a crop physiologist at Utah State University. “The question here is not whether it’s good to eat, and not whether it’s profitable. It’s [whether] they should be using the word ‘sustainable’ on their website,” he says. “And that’s a key question.”
Some North American fish farms even tout their products as not genetically modified—possibly to differentiate themselves from AquaBounty’s offering. Scientific reviews have repeatedly found that genetically modified (GM) crops are as safe to eat as non-GM crops. And reviews by the FDA and Environment and Climate Change Canada concluded that the environmental risks of AquAdvantage salmon were extremely low or negligible thanks to the containment measures that AquaBounty has put in place.
Starting this month, companies that produce bioengineered food—defined as food containing genetic material that does not occur naturally and which could not have resulted from conventional breeding—are required by the United States Department of Agriculture to apply a new label to their products. At press time, AquaBounty could not confirm whether its fish would carry the labels or not.
Undeterred, AquaBounty is already moving forward with its second product—gene-edited tilapia cleared for sale in Argentina. These fish grow faster, consume less food, and produce bigger fillets than conventional tilapia do.
With its progress in Argentina, Canada, and the United States, AquaBounty is finally nearing the end of its protracted push to bring bioengineered fish to consumers. But being first brings no guarantees—and for AquaBounty, it’s time to sink or swim.
This article appears in the January 2020 print issue as “Transgenic Salmon Hits U.S. Shelves.”
In September a modern-day Mayflower will launch from Plymouth, a seaside town on the English Channel. And as its namesake did precisely 400 years earlier, this boat will set a course for the other side of the Atlantic.
Weather permitting, the 2020 voyage will follow the same course, but that’s about the only thing the two ships will have in common. Instead of carrying pilgrims intent on beginning a new life in the New World, this ship will be fully autonomous, with no crew or passengers on board. It will be powered in part by solar panels and a wind turbine at its stern. The boat has a backup electrical generator on board, although there are no plans to refuel it at sea if the backup runs dry.
The ship will cross the Atlantic to Plymouth, Mass., in 12 days instead of the 60 days of the 1620 voyage. It’ll be made of aluminum and composite materials. And it will measure 15 meters and weigh 5 metric tons—half as long and 1/36 as heavy as the original wooden boat. Just as a spacefaring mission would, the new Mayflower will contain science bays for experiments to measure oceanographic, climate, and meteorological data. And its trimaran design makes it look a little like a sleek, scaled-down, seagoing version of the Battlestar Galactica, from the TV series of the same name.
“It doesn’t conform to any specific class, regulations, or rules,” says Rachel Nicholls-Lee, the naval architect designing the boat. But because the International Organization for Standardization has a set of standards for oceangoing vessels under 24 meters in length, Nicholls-Lee is designing the boat as close to those specs as she can. Of course, without anyone on board, the new Mayflower sets some of its own standards, too. For instance, the tightly waterproofed interior will barely have room for a human to crawl around in and to access its computer servers.
“It’s the best access we can have, really,” says Nicholls-Lee. “There won’t be room for people. So it is cozy. It’s doable, but it’s not somewhere you’d want to spend much time.” She adds that there’s just one meter between the waterline and the top of the boat’s hull. Atop the hull will be a “sail fin” that juts up vertically to harness wind power for propulsion, while a vertical turbine uses the wind to generate electricity.
Nicholls-Lee’s architectural firm, Whiskerstay, based in Cornwall, England, provides the nautical design expertise for a team that also includes the project’s cofounders, Greg Cook and Brett Phaneuf. Cook (based in Chester, Conn.) and Phaneuf (based in Plymouth, England) jointly head up the marine exploration nonprofit Promare.
Phaneuf, who’s also the managing director of the Plymouth-based submersibles consulting company MSubs, says the idea for the autonomous Mayflower quadricentennial voyage came up at a Plymouth city council meeting in 2016. With the 400th anniversary of the original voyage fast approaching, Phaneuf said Plymouth city councillors were chatting about ideas for commemorating the historical event.
“Someone said, ‘We’re thinking about building a replica,’ ” Phaneuf says. “[I said], ‘That’s not the best idea. There’s already a replica in Massachusetts, and I grew up not far from it and Plymouth Rock.’ Instead of building something that is a 17th-century ship, we should build something that represents what the marine enterprise for the next 400 years is going to look like.” The town’s officials liked the idea, and they gave him the resources to start working on it.
The No. 1 challenge was clear from the start: “How do you get a ship across the ocean without sinking?” Phaneuf says. “The big issue isn’t automation, because automation is all around us in our daily lives. Much of the modern world is automated—people just don’t realize it. Reliability at sea is really the big challenge.”
But the team’s budget constrained its ability to tackle the reliability problem head-on. The ship will be at sea on its own with no crewed vessel tailing it. In fact, its designers are assuming that much of the Atlantic crossing will have to be done with spotty satellite communications at best.
Phaneuf says that the new Mayflower will have little competition in the autonomous sailing ship category. “There are lots of automated boats, ranging in size from less than one meter to about 10 meters,” he says. “But are they ships? Are they fully autonomous? Not really.” Not that the team’s Mayflower is going to be vastly larger than 10 meters in length. “We only have enough money to build a boat that’s 15 meters long,” he says. “Not big, by the ocean’s standards. And even if it was as big as an aircraft carrier, there’s a few of them at the bottom of the ocean from the years gone by.”
Cook, who consults with Phaneuf and the Mayflower project from a distance in his Connecticut office, says the 400-year-anniversary deadline was always on researchers’ minds.
“There are a lot of days when you think we’ll never get this done,” Cook says. “And you just keep your head down and power through it. You go over it, you go under it, you go around it, or you go through it. Because you’ve got to get it. And we will.”
When IEEE Spectrum contacted Cook in October, he was negotiating with the shipyard in Gdańsk, Poland, that’s building the new Mayflower’s hull. The yard needed plans executed to a level of detail that the team was not quite ready to provide. But parts of the boat needed to be completed promptly, so the day’s balancing act was already under way.
The next day’s challenge, Cook says, involved the output from the radar systems on the boat. The finest commercial radars in the world, he says, are worthless if they can’t output raw radar data—which the computers on the ship will need to process. So finding a radar system that represents the best mix of quality, affordability, and versatility with respect to output was another struggle.
Nicholls-Lee specializes in designing sustainable energy systems for boats, so she was up to the challenge of developing a boat that one day might not need to refuel. The ship will have 15 solar panels, each just 3 millimeters thick, which means they’ll follow the curve of the hull. “They’re very low profile; they’re not going to get washed off or anything like that,” Nicholls-Lee says. On a clear day, the panels could potentially generate some 2.5 kilowatts.
The sail fin is expected to propel the boat to its currently projected average cruising speed of 10 knots. When it operates just on electricity—to be stored in a half-ton battery bank in the hull—the Mayflower should make from 4 to 5 knots.
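The power figures allow a rough energy budget for the boat. The pack energy density and sun-hours below are loudly flagged assumptions, not project specifications:

```python
# Rough energy budget for the new Mayflower, from the article's figures
# plus two ASSUMED values (pack energy density, clear-day sun hours).
solar_peak_kw = 2.5      # article: panels could generate ~2.5 kW on a clear day
battery_mass_kg = 500    # article: half-ton battery bank
wh_per_kg = 150          # ASSUMED pack-level density, typical of lithium-ion

battery_kwh = battery_mass_kg * wh_per_kg / 1000
print(battery_kwh)       # 75.0 kWh of onboard storage, under these assumptions

sun_hours = 8            # ASSUMED clear-day equivalent hours
daily_solar_kwh = solar_peak_kw * sun_hours
print(daily_solar_kwh)   # 20.0 kWh harvested per clear day
```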
The ship’s eyes and ears sit near the stern, Nicholls-Lee says. Radar, cameras, lights, antennas, satellite-navigation equipment, and sonar pods will all be perched above the hull on a specially outfitted mast.
Nicholls-Lee says she’s been negotiating “with the AI team, who want the mast with all the equipment on it as high up as possible.” The mast really can’t be placed further forward on the boat, she says, because anything that’s closer to the bow gets the worst of the waves and the weather. And although the boat could keep moving if its sail fin snapped off, the loss of the mast would render the Mayflower unable to navigate, leaving it more or less dead in the water.
The problem with putting the sensors behind the sail fin, Nicholls-Lee says, is that it means losing a fair portion of the field of view. That’s a trade-off the engineers are willing to work with if it helps to reduce their chances of being dismasted by a particularly nasty wave or swell. In the worst case, in which the sail fin gets stuck in one position, blocking the radar, sonar, and cameras, the fin has an emergency clutch. Resorting to that clutch would deprive the ship of the wind’s propulsive power, but at least it wouldn’t blind the ship.
Behind all that hardware is the software, which of course ultimately does the piloting. IBM supplies the AI package, together with cloud computing power.
The 8 to 10 core team members are now adapting the hardware and software to the problem of transatlantic navigation, Phaneuf says. An example of what they’re tweaking is an element of the Mayflower’s software stack called the operational decision manager.
“It’s a thing that parses rules,” Phaneuf says. “It’s used in fiscal markets. It looks at bank swaps or credit card applications, making tens of thousands or millions of decisions, over and over again, all day. You put in a set of rules textually, and it keeps refining as you give it more input. So in this case we can put in all the collision regulations and all sorts of exceptions and alternative hypotheses that take into account when people don’t follow the rules.”
Eric Aquaronne, a cloud-and-AI strategist for IBM in Nice, France, says that ultimately the Mayflower’s software must output a perhaps deceptively simple set of decisions. “In the end, it has to decide, Do I go right, left, or change my speed?” Aquaronne says.
Yet within those options, at every instant during the boat’s voyage are hidden a whole universe of weather, sensor, and regulatory data, as well as communications with the IBM systems onshore that continue to train the AI algorithms. (The boat will sometimes lose the satellite connection, Phaneuf notes, at which point it is really on its own, running its AI inference algorithms locally.)
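A rules engine of the kind Phaneuf describes can be sketched as an ordered list of condition-decision pairs where the first match wins. The rules below are invented for illustration; they are not the COLREGs logic IBM actually encodes:

```python
# Toy decision engine in the spirit of the "operational decision manager":
# ordered rules map a situation to one of the decisions the article names
# (turn right, turn left, change speed, or hold course). The rules here
# are INVENTED for illustration, not real collision regulations.
RULES = [
    # (condition, decision) - first matching rule wins
    (lambda s: s["obstacle_bearing_deg"] is None, "hold course"),
    (lambda s: s["obstacle_range_m"] > 2000,      "hold course"),
    (lambda s: s["obstacle_bearing_deg"] < 0,     "turn right"),   # contact to port
    (lambda s: s["obstacle_bearing_deg"] > 0,     "turn left"),    # contact to starboard
    (lambda s: True,                              "reduce speed"), # contact dead ahead
]

def decide(situation):
    for condition, decision in RULES:
        if condition(situation):
            return decision

print(decide({"obstacle_bearing_deg": -15, "obstacle_range_m": 800}))  # turn right
```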
Today very little weather data is collected from the ocean’s surface, Phaneuf notes. A successful Mayflower voyage that gathered such data for months on end could therefore make a strong case for having more such autonomous ships out in the ocean.
“We can help refine weather models, and if you have more of these things out on the ocean, you could make weather prediction ever more resolute,” he says. But, he adds, “it’s the first voyage. So we’re trying not to go too crazy. I’m really just worried about getting across. I’m letting the other guys worry about the science packages. I’m mostly concerned with the ‘not sinking’ part now—and the ‘get there relatively close to where it’s supposed to be’ part. After that, the sky’s the limit.”
And yet the lithium-ion battery is far from perfect. It’s still too pricey for applications requiring long-term storage, and it has a tendency to catch fire. Many forms of the battery rely on increasingly hard-to-procure materials, like cobalt and nickel. Among battery experts, the consensus is that someday something better will have to come along.
That something may well be the lithium-ion battery’s immediate predecessor: the lithium-metal battery. It was developed in the 1970s by M. Stanley Whittingham, then a chemist at Exxon. Metallic lithium is attractive as a battery material because it easily sheds electrons and positively charged lithium ions. But Whittingham’s design proved too tricky to commercialize: Lithium is highly reactive, and the titanium disulfide he used for the cathode was expensive. Whittingham and other researchers added graphite to the lithium, allowing the lithium to intercalate and reducing its reactivity, and they swapped in cheaper materials for the cathode. And so the lithium-ion battery was born. Batteries with lithium-metal anodes, meanwhile, seemed destined to remain an interesting side note on the way to lithium-ions.
But XNRGI, based in Bothell, Wash., aims to bring lithium-metal batteries into the mainstream. Its R&D team managed to tame the reactivity of metallic lithium by depositing it into a substrate of silicon that’s been coated with thin films and etched with millions of tiny cells. The 3D substrate greatly increases the anode’s surface area compared with a traditional lithium-ion’s two-dimensional anode. When you factor in using metallic lithium instead of a compound, the XNRGI anode has up to 10 times the capacity of a traditional intercalated graphite-lithium anode, says Chris D’Couto, XNRGI’s CEO.
Of course, new battery technologies are announced all the time, and tech news outlets, including IEEE Spectrum, are more than happy to tout their promising capabilities. But relatively few batteries that appear promising or even revolutionary in the lab actually make the leap to the marketplace.
Commercializing any new battery is a complicated prospect, notes Venkat Srinivasan, an energy-storage expert at Argonne National Laboratory, near Chicago. “It depends on how many metrics you’re trying to satisfy,” he says. For an electric car, the ideal battery offers a driving range of several hundred kilometers, charging times measured in minutes, a wide range of operating temperatures, a 10-year life cycle, and safety in collisions. And of course, low cost.
“The more metrics you have, the more difficult it will be for a new battery technology to satisfy them all,” Srinivasan says. “So you need to compromise—maybe the battery will last 10 years, but the driving range will be limited, and it won’t charge that quickly.” Different applications will have different metrics, he adds, and “industry only wants to look at batteries that are at least as good as what’s already available.”
D’Couto acknowledges that commercializing XNRGI’s batteries has not been easy, but he says several factors gave the company a leg up. Rather than inventing a new manufacturing method, it borrowed some of the same tried-and-true techniques that chipmakers use to make integrated circuits. These include the etching of the 20-by-20-micrometer cavities into the silicon and application of the thin films. Hence the battery’s name: the PowerChip.
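The payoff of etching can be estimated with simple geometry: each cavity adds four side walls to what was a flat patch of silicon. The cavity depth below is an assumption (the article gives only the 20-by-20-micrometer footprint), chosen so the area gain lands near the claimed factor of 10:

```python
# How much surface area do etched cavities add? A flat w-by-w patch has
# area w*w; a cavity of the same footprint adds four side walls of depth d.
w = 20.0   # cavity width in micrometers (from the article)
d = 45.0   # ASSUMED cavity depth in micrometers - not given in the article

flat_area = w * w                  # 400 square micrometers
cavity_area = w * w + 4 * w * d    # bottom plus four walls

print(cavity_area / flat_area)     # 10.0x the flat area, under this assumption
```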
Each of those microscopic cells can be considered a microbattery, D’Couto says. Unlike the catastrophic failure that occurs when a lithium-ion battery is punctured, a failure in one cell of a PowerChip won’t propagate to the surrounding cells. The cells also seem to discourage the formation of dendrites, threadlike growths that can cause the battery to fail.
Some flavors of lithium-ion batteries, such as those made by Enovix, Nexeon, Sila Nanotechnologies, and Sion Power, also achieve better performance by replacing some or all of the graphite in the anode with silicon. [See, for example, “To Boost Lithium-Ion Battery Capacity by up to 70%, Add Silicon.”] In those batteries’ anodes, the lithium is intercalated with the silicon, bonding to form Li15Si4.
In XNRGI’s PowerChip, the silicon substrate has a conductive coating that acts as a current collector and a diffusion barrier that prevents the silicon from interacting with the lithium. D’Couto says that the lithium-metal anode’s capacity is about five times that of silicon-intercalated anodes.
For most of its existence, XNRGI was known as Neah Power Systems, and it focused on developing fuel cells. The fuel cells used a novel porous silicon substrate. But the fuel-cell market didn’t take off, and so in 2016, the company got a Department of Energy grant to use the same concept to build a lithium-metal battery.
XNRGI continues to experiment with cathode designs that can keep up with its supercharged anodes. For now, the company is using cathodes made from lithium cobalt oxide and nickel manganese cobalt, which could yield a battery with twice the capacity of traditional lithium-ions. It’s also making sample batteries using cathodes supplied by customers. D’Couto says alternative materials like sulfur could boost the cathode performance even more. “Having a high-performing anode without a corresponding high-performing cathode doesn’t maximize the battery’s full potential,” he says.
“People like me dream of a day where we’ve completely solved all the battery problems,” says Argonne’s Srinivasan. “I want everybody to drive an EV, everybody to have battery storage in their home. I want aviation to be electrified,” he says. “Meanwhile, my cellphone battery is dying.” In batteries as in life, there will always be room for improvement.
IBM lifted the veil this week on a new battery for EVs, consumer devices, and electric grid storage that it says could be built from minerals and compounds found in seawater. (By contrast, many present-day batteries must source precious minerals like cobalt from dangerous and exploitative political regimes.) The battery is also touted as being non-flammable and able to recharge 80 percent of its capacity in five minutes.
The battery’s specs are, says Donald Sadoway, MIT professor of materials chemistry, “staggering.” Some details are available in a Dec. 18 blog posted to IBM’s website. Yet, Sadoway adds, lacking any substantive data on the device, he has “no basis with which to be able to confirm or deny” the company’s claims.
Electric utilities routinely adjust power supplies to match the peaks and troughs in demand. But more utilities are working to tweak customers’ habits, too, so that we don’t all gobble energy at the same time and strain the grid.
Measures like “time-of-use” tariffs are proliferating in the United States and globally, with utilities charging higher electricity rates during peak demand periods. In places like sunny California, the idea is to shift more energy usage to the afternoon—when solar power is abundant and cheap—and away from evenings, when utilities rely more on fossil fuel-fired power plants.
Yet such initiatives may have unintended consequences. A new study in the journal Nature Energy found that one utility pilot hit some participants harder than others. Vulnerable groups, including elderly people and those with disabilities, saw disproportionately negative financial and health impacts as a result of paying time-of-use rates.
“You have this potentially really useful tool, but you need to make sure you’re not unintentionally making a worse situation for parts of the population,” said Lee White, the study’s lead author and a research fellow at Australian National University in Canberra.
About 14 percent of U.S. utilities offer residential time-of-use rates, according to the consulting firm Brattle Group. Rate designs can vary from place to place, as do climate conditions and consumer habits, so the study’s findings might not hold true everywhere. Still, the research highlights concerns worth heeding as utilities and regulators design such programs.
“We need to be very careful about how we implement these rates,” White said.
White and Nicole Sintov, an assistant professor at Ohio State University, analyzed data from 7,500 households that voluntarily joined a utility’s 2016 pilot in the southwestern United States. (The company asked to go unnamed.)
Participants were randomly assigned to a control group or to one of two time-of-use rates. The first group paid an extra 0.3451 cents per kilowatt-hour from 2 to 8 p.m. on weekdays; the second paid an extra 0.5326 cents per kilowatt-hour from 5 to 8 p.m. on weekdays.
Researchers studied results from July to September, a sweltering season. All participants paying time-of-use rates saw their bills increase, but households with elderly members or people with disabilities saw even greater increases than the rest. Elderly participants reported turning off their air-conditioning less often than other groups did; in general, older adults are especially vulnerable to heat-related illnesses.
Participants with disabilities were more likely to seek medical attention for heat-related reasons when assigned to one of the time-of-use rates—as were customers identified as Hispanic. But researchers found that people within the disability, Hispanic, or low-income groups were more likely to report adverse health outcomes regardless of rates, even in the control group.
White said a “somewhat encouraging” finding is that low-income households and Hispanic participants saw lower bill increases compared to other groups. Yet any extra costs “could still cause additional tensions in the household budget,” she added. According to the U.S. Census, low-income households on average spend 8.2 percent of their income on energy bills—about three times as much as higher-earning households.
The study highlights gaps in “flexibility capital” among electricity users, said Michael Fell, a research associate at the UCL Energy Institute. For example, wealthier households might avoid higher rates by installing energy storage devices or smart appliances with sensors and timers. Healthier individuals can cope with using less AC or heating. But many people can’t spare the expense to their wallets or wellbeing.
“There is already recognition amongst regulators that the transition to a flexible future may come with risks to those in vulnerable situations,” Fell wrote in Nature Energy. “White and Sintov’s study lends nuance to this concern.”
Ryan Hledik, a principal at Brattle Group, said residential time-of-use rates are gaining momentum as smart meters become the norm in households nationwide. While many utilities are now using tariffs to integrate more wind and solar power into the electricity mix, in coming years, such programs could help keep electric-vehicle owners from charging batteries all at once, overtaxing local infrastructure.
“That’s definitely something utilities are going to need to confront, and time-of-use rates are one way to deal with that,” Hledik said.
A team of European scientists proposes using mountains to build a new type of battery for long-term energy storage.
The intermittent nature of energy sources such as solar and wind has made it difficult to incorporate them into grids, which require a steady power supply. To provide uninterrupted power, grid operators must store extra energy harnessed when the sun is shining or the wind is blowing, so that power can be distributed when there’s no sun or wind.
Lithium-ion batteries currently dominate the energy storage market, but these are better suited for short-term storage, says study author Julian Hunt, because the charge they hold dissipates over time. To store sufficient energy for months or years would require many batteries, which is too expensive to be a feasible option.
Eighty years ago, the world’s first industrial gas turbine began to generate electricity in a municipal power station in Neuchâtel, Switzerland. The machine, installed by Brown Boveri, vented the exhaust without making use of its heat, and the turbine’s compressor consumed nearly three-quarters of the generated power. The result was an efficiency of just 17 percent and a net output of about 4 MW.
The disruption of World War II and the economic difficulties that followed made the Neuchâtel turbine a pioneering exception until 1949, when Westinghouse and General Electric introduced their first low-capacity designs. There was no rush to install them, as the generation market was dominated by large coal-fired plants. By 1960 the most powerful gas turbine reached 20 MW, still an order of magnitude smaller than the output of most steam turbo generators.
In November 1965, the great power blackout in the U.S. Northeast changed many minds: Gas turbines could operate at full load within minutes. But rising oil and gas prices and a slowing demand for electricity prevented any rapid expansion of the new technology.
The shift came only during the late 1980s. By 1990 almost half of all new installed U.S. capacity was in gas turbines of increasing power, reliability, and efficiency. But even efficiencies in excess of 40 percent—matching today’s best steam turbo generators—produce exhaust gases of about 600 °C, hot enough to generate steam in an attached steam turbine. These combined-cycle gas turbines (CCGTs) arrived during the late 1960s, and their best efficiencies now top 60 percent. No other prime mover is less wasteful.
Gas turbines have also become much more powerful. Siemens now offers a CCGT for utility generation rated at 593 MW, nearly 40 times as powerful as the Neuchâtel machine and operating at 63 percent efficiency. GE’s 9HA delivers 571 MW in simple-cycle generation and 661 MW (63.5 percent efficiency) by CCGT.
Their near-instant availability makes gas turbines the ideal suppliers of peak power and the best backups for new intermittent wind and solar generation. In the United States they are now by far the most affordable choice for new generating capacities. The levelized cost of electricity—a measure of the lifetime cost of an energy project—for new generation entering service in 2023 is forecast to be about US $60 per megawatt-hour for coal-fired steam turbo generators with partial carbon capture, $48/MWh for solar photovoltaics, and $40/MWh for onshore wind—but less than $30/MWh for conventional gas turbines and less than $10/MWh for CCGTs.
Gas turbines are also used for the combined production of electricity and heat, which is required in many industries and is used to energize central heating systems in many large European cities. These turbines have even been used to heat and light extensive Dutch greenhouses, which additionally benefit from their use of the generated carbon dioxide to speed up the growth of vegetables. Gas turbines also run compressors in many industrial enterprises and in the pumping stations of long-distance pipelines. The verdict is clear: No other combustion machines combine so many advantages as do modern gas turbines. They’re compact, easy to transport and install, relatively silent, affordable, and efficient, offering nearly instant-on power and able to operate without water cooling. All this makes them the unrivaled stationary prime mover.
And their longevity? The Neuchâtel turbine was decommissioned in 2002, after 63 years of operation—not due to any failure in the machine but because of a damaged generator.
This article appears in the December 2019 print issue as “Superefficient Gas Turbines.”
Nine years before Paradise, California burned to the ground, a similar tragedy unfolded in Australia. On a searing, windy day in 2009 that came to be known as “Black Saturday,” hundreds of fires erupted in the state of Victoria. One of the worst razed the bucolic mountain town of Marysville, northeast of Melbourne. And just as sparks from a Pacific Gas & Electric (PG&E) power line launched the Camp Fire that destroyed Paradise, Marysville’s undoing began with high-voltage current.
In all, the Black Saturday fires killed 173 people and caused an estimated A$4 billion (US $2.75 billion) in damage. Fires started by power lines caused 159 of the deaths.
California’s wildfires have “brought it all back,” says Tony Marxsen, an electrical engineering professor at Monash University in Australia. His parents honeymooned in Marysville. “It was a lovely little town nestled up in the hills. To see it destroyed was just wrenching,” he recalls.
Marxsen says faded memories increased Marysville’s death toll. “It had been 26 years since Australia’s last major suite of deadly fires,” he says. “People had come to believe that they could defend their house against a firestorm. Some stayed, and they all died.”
While they go by different names, California’s wildfires and Victoria’s bushfires are driven by the same combination of electrical networks and extreme weather, stoked by climate change. How Victoria responded after the Black Saturday fires—work that continues today—differs significantly from what is happening in California today, especially in PG&E’s territory.
California utility Pacific Gas & Electric (PG&E) delivered a bitter pill last month when it said that deliberate blackouts to keep its lines from sparking wildfires could be the new normal for millions of customers for the next decade—a dangerous disruption to power-dependent communities that California governor Gavin Newsom says “no state in the 21st century should experience.” Grid experts say Newsom is right, because technology available today can slash the risk of grid-induced fires, reducing or eliminating the need for PG&E’s “public safety power shutoffs.”
Equipment to slash grid-related fire risk isn’t cheap or problem-free, but it could be preferable to the most commonly advanced solutions: putting lines underground or equipping California with thousands of “microgrids” to reduce reliance on big lines. Widespread undergrounding and microgrids will be costly. And the latter could create inequalities and weaken investment in the big grids as communities with means isolate themselves from power shutoffs with solar systems and batteries.
Some of the most innovative fire-beating grid technologies are the products of an R&D program funded by the state of Victoria in Australia, prompted by deadly grid-sparked bushfires there 10 years ago. Early this year, utilities in Victoria began a massive rollout of one solution: power diverters that are expected to protect all of the substations serving the state’s high fire risk areas by 2024.
You could say that farming is in my blood: My grandparents on both sides ran large, prosperous farms in Iowa. One of my fondest childhood memories is of visiting my maternal grandparents’ farm and watching the intricate moving mechanisms of the threshing machine. I guess it’s not surprising that I eventually decided to study mechanical engineering at MIT. I never really considered a career in farming.
Shortly after I graduated in 1957 and took a job with the California Institute of Technology’s Jet Propulsion Lab, the Soviets launched Sputnik. I was at the right place at the right time. JPL was soon transferred to the newly formed NASA. And for more than 50 years, I worked with some of the brightest engineers in the world to send unmanned spacecraft—including Mariner, Viking, and Voyager—to all the other planets in the solar system.
But my love of farms and farming never went away, and in 1999, I purchased my paternal grandfather’s 130-hectare (320-acre) property, Pinehurst Farm, which had been out of the family for 55 years. I wasn’t exactly sure what I’d do with the place, but by the time I retired in 2007, there was more and more talk about climate change due to human-caused carbon emissions. I knew that agriculture has a large carbon footprint, and I wondered if there was a way to make farming more sustainable. After all, the most recent numbers are alarming: The World Meteorological Organization reports that the planet is on course for a rise in temperature of 3 to 5 °C by 2100. The U.S. Environmental Protection Agency estimates that agriculture and forestry accounted for almost 10 percent of U.S. greenhouse gas emissions in 2016. While a significant share of those emissions comes from livestock (that is, belches and flatulence), much of the rest comes from burning fuel to grow, harvest, and transport food, as well as from fertilizer production.
I recalled a conversation I’d had with my dad and his friend, Roy McAlister, right after I acquired the farm. Roy was the president of the American Hydrogen Association, and he owned a hydrogen-powered Nissan pickup truck. Both men were vocal advocates for replacing fossil fuels with hydrogen to reduce the United States’ dependence on oil imports. The same transition would also have a big impact on carbon emissions.
And so, in 2008, I decided to create a solar-hydrogen system for Pinehurst Farm as a memorial to my father. I’d use solar power to run the equipment that would generate fuel for a hydrogen-burning tractor. Several years into the project, I decided to also make ammonia (nitrogen trihydride, or NH3) to use as tractor fuel and crop fertilizer.
My aim is to make the public—especially farmers—aware that we will need to develop such alternative fuels and fertilizers as fossil fuels become depleted and more expensive, and as climate change worsens. Developing local manufacturing processes to generate carbon-free fuel and fertilizer and powering those processes with renewable energy sources like solar and wind will eliminate farmers’ reliance on fossil fuels. And doing this all locally will remove much of the cost of transporting large amounts of fuel and fertilizers as well. At our demonstration project at Pinehurst, my colleague David Toyne, an engineer based in Tujunga, Calif., and I have shown that sustainable farming is possible. But much like designing spacecraft, the effort has taken a little longer and presented many more challenges than we initially expected.
The system that we now have in place includes several main components: a retrofitted tractor that can use either hydrogen or ammonia as fuel; generators to create pure hydrogen and pure nitrogen, plus a reactor to combine the two into ammonia; tanks to store the various gases; and a grid-tied solar array to power the equipment. When I started, there were no other solar-hydrogen farms on which I could model my farm, so every aspect had to be painstakingly engineered from scratch, with plenty of revisions, mishaps, and discoveries along the way.
The work began in earnest in 2009. Before actually starting to build anything, I crunched the numbers to see what would be needed to pull off the project. I found that a 112-kilowatt (150-horsepower) tractor burns about 47 liters per hectare (5 gallons per acre) if you’re raising corn and about two-thirds that amount for soybeans. The same area would require 5 kilograms of hydrogen fuel. That meant we needed roughly 1,400 kg of hydrogen to fuel the tractor and other farm vehicles from planting to harvest. Dennis Crow, who farms the Pinehurst land, told me about half the fuel would go toward spring planting and half for fall harvesting. The growing season in Iowa is about 150 days, so we’d need to make about 4.5 kg of hydrogen per day to have 700 kg of hydrogen for the harvest. Spring planting would be easier—we would have 215 days of the year to make the remaining fuel.
To generate the hydrogen, we would split water into hydrogen and oxygen. By my calculations, running the hydrogen generator and related equipment would require about 80 kW of solar power. I decided to use two-axis solar arrays, which track the sun to boost the collection capacity by 30 percent. Based on the efficiency of commodity photovoltaic panels in 2008, we’d need 30 solar arrays, with each array holding 12 solar panels.
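The sizing arithmetic above can be sketched in a few lines. This is a back-of-the-envelope check using the figures quoted (1,400 kg of hydrogen per season, half of it for harvest, a 150-day growing season, and an 80-kW array built from 30 trackers of 12 panels each); the per-panel wattage at the end is a derived number, not one stated in the text.

```python
# Back-of-the-envelope sizing for the full-scale system, using the
# figures quoted above. The per-panel wattage is my own derived number.

SEASON_H2_KG = 1400          # hydrogen for a full planting-to-harvest season
HARVEST_SHARE = 0.5          # roughly half the fuel goes to fall harvest
GROWING_SEASON_DAYS = 150    # days available to stockpile harvest fuel

harvest_h2 = SEASON_H2_KG * HARVEST_SHARE    # 700 kg for the harvest
daily_h2 = harvest_h2 / GROWING_SEASON_DAYS  # ~4.7 kg/day

SOLAR_KW = 80                # power needed to run the hydrogen equipment
ARRAYS = 30
PANELS_PER_ARRAY = 12
watts_per_panel = SOLAR_KW * 1000 / (ARRAYS * PANELS_PER_ARRAY)  # ~222 W

print(f"daily hydrogen target: {daily_h2:.1f} kg")
print(f"implied panel rating:  {watts_per_panel:.0f} W")
```

The ~4.7 kg/day result is close to the roughly 4.5 kg/day target cited above.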
That’s a lot of solar panels to install, operate, and maintain, and a lot of hydrogen to generate and store. I soon realized I could not afford to build a complete operational system. Instead, I focused on creating a demonstration system at one-tenth scale, with three solar arrays instead of 30. While the tractor would be full size, we would make only 10 percent of the hydrogen needed to fuel it. I decided that even a limited demonstration would be a worthwhile proof of concept. Now we had to figure out how to make it happen, starting with the tractor.
As it turns out, I wasn’t the first to think of using hydrogen as a tractor fuel. Back in 1959, machinery manufacturer Allis-Chalmers demonstrated a tractor powered by hydrogen fuel cells. Fifty-two years later, New Holland Agriculture did the same. Unfortunately, neither company produced a commercial model. After some further research, I decided that fuel cells were (and still are) far too expensive. Instead, I would have to buy a regular diesel tractor and convert it to run on hydrogen.
Tom Hurd, an architect in Mason City, Iowa, who specializes in renewable-energy installations, assisted with the farm’s overall design. At his suggestion, I contacted the Hydrogen Engine Center in nearby Algona, Iowa. The company’s specialty was modifying internal combustion engines to burn hydrogen, natural gas, or propane. Ted Hollinger, the center’s president, agreed to provide a hydrogen-fueled engine for the tractor.
Hollinger’s design started with a gasoline-fueled Ford 460 V-8 engine block. He suggested that we include a small propane tank as backup in case the tractor ran out of hydrogen out in the field. Several months later, though, he recommended that we use ammonia instead of propane, to avoid fossil fuels completely. Since the idea was to reduce the farm’s carbon footprint, I liked the ammonia idea.
Scott McMains, who looks after the old cars that I store on the farm, located a used 7810 John Deere tractor as well as a Ford 460 engine. The work of installing the Ford engine into the tractor was done by Russ Hughes, who lives in Monticello, Iowa, and was already restoring my 1947 Buick Roadmaster sedan.
The tractor would need to carry several large, heavy fuel tanks for the hydrogen and ammonia. Bob Bamford, a retired JPL structural-design analyst, took a look at my plans for the fuel tanks’ support structure and redesigned it. In my original design, the support structure was bolted together, but Bamford’s design used welds for increased strength. I had the new and improved design fabricated in California.
The completed tractor was delivered to the farm in late 2014. With the flick of a switch in the cab, our tractor can toggle between burning pure hydrogen and burning a mixture of hydrogen and ammonia gas. Pure ammonia won’t burn in an internal combustion engine; you first need to mix it with about 10 percent hydrogen. The energy content of a gallon of ammonia is about 35 percent that of diesel. The fuel is then mixed with the intake air and injected into the tractor’s computer-controlled, spark-ignited engine cylinders. The tractor can run for 6 hours at full power before it needs to be refueled.
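To put the 35 percent figure in perspective, here is a short calculation of my own, assuming comparable engine efficiency on either fuel and reusing the 47 L/ha diesel figure for corn quoted earlier; neither assumption is stated in the text.

```python
# Fuel-equivalence sketch: how many liters of ammonia carry the same
# energy as the diesel burned per hectare of corn. Assumes comparable
# engine efficiency on either fuel, which the article does not state.

AMMONIA_ENERGY_VS_DIESEL = 0.35   # energy per unit volume, vs. diesel
DIESEL_L_PER_HA_CORN = 47         # diesel burned per hectare of corn

ammonia_l_per_ha = DIESEL_L_PER_HA_CORN / AMMONIA_ENERGY_VS_DIESEL
print(f"{ammonia_l_per_ha:.0f} L of ammonia per hectare")  # ~134 L

# The ammonia charge also needs roughly 10 percent hydrogen mixed in
# before it will ignite in the cylinders.
HYDROGEN_FRACTION = 0.10
```

Nearly three times the volume of diesel, which is why the tractor must carry several large fuel tanks.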
While work on the tractor proceeded, we were also figuring out how to generate the hydrogen and ammonia it would burn.
Ramsey Creek Woodworks of Kalona, Iowa, modified the farm’s old hog shed to house the hydrogen generators, control equipment, and the tractor itself. The company also installed the solar trackers and the solar arrays.
We constructed a smaller building to house the pumps that would compress the hydrogen for high-pressure storage. Hydrogen is of course incredibly flammable. For safety, I designed low slots in the walls on two sides so that air could enter and vent out the top, taking with it any leaked hydrogen.
So how does the system actually produce hydrogen? The generator I purchased, from a Connecticut company called Proton OnSite, creates hydrogen and oxygen by splitting water that we pipe in from an on-site well. It is rated to make 90 grams (3 ounces) of hydrogen per hour. With the amount of sunlight Iowa receives, I can make an average of 450 grams of hydrogen per day. We can make more on a summer day, when we have more daylight, than we can in winter.
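Those production numbers imply about five hours of rated output per day, a quick check worth making explicit; the "equivalent full-output hours" framing here is mine, not the article's.

```python
# How a generator rated at 90 g/h averages 450 g/day on Iowa sunlight.
RATED_G_PER_HOUR = 90
AVG_G_PER_DAY = 450

full_output_hours = AVG_G_PER_DAY / RATED_G_PER_HOUR
print(f"{full_output_hours:.1f} equivalent full-output hours per day")

# At the demo's one-tenth scale, 450 g/day is the counterpart of the
# ~4.5 kg/day the full-size farm would need.
```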
The generator was designed to operate continuously. But we’d be relying on solar power, which is intermittent, so David Toyne, who specializes in factory automation and customized systems, worked with Proton to modify it. Now the generator makes less hydrogen on overcast days and enters standby when the solar arrays’ output is too low. At the end of each day, the generator automatically turns off after being on standby for 20 minutes.
Generating ammonia posed some other challenges. I wanted to make the ammonia on-site, so that I could show it was possible for a farm to produce its fuel and fertilizer with no carbon emissions.
A substantial percentage of the world’s population depends on food grown using nitrogen-based fertilizers, including ammonia. It’s hard to beat for boosting crop yields. For example, Adam Sylvester, Pinehurst’s farm manager, told me that if we did not use nitrogen-based fertilizers on our cornfields, the yield would be about 250 bushels per hectare (100 bushels per acre), instead of the 500 bushels we get now. Clearly, the advantages to producing ammonia on location extend beyond just fuel.
But ammonia production also accounts for about 1 percent of all greenhouse emissions, largely from the fossil fuels powering most reactors. And just like hydrogen, ammonia comes with safety concerns. Ammonia is an irritant to the eyes, respiratory tract, mucus membranes, and skin.
Even so, ammonia has been used for years in refrigeration as well as fertilizer. It’s also an attractive carbon-free fuel. A ruptured ammonia tank won’t explode or catch fire as a propane tank will, and the liquid is stored at a much lower pressure than is hydrogen gas (1 megapascal for ammonia versus 70 MPa for hydrogen).
While attending the NH3 Fuel Conference in Sacramento in 2013, I had dinner with Bill Ayres, a director for the NH3 Fuel Association, and we discussed my interest in making ammonia in a self-contained system. Ayres pointed me to Doug Carpenter, who had developed a way to make ammonia on a small scale—provided you already have the hydrogen. Which I did. Carpenter delivered the reactor in 2016, several months before his untimely passing.
We turned again to Ramsey Creek to construct the ammonia-generation building. The 9-square-meter building, similar in design to the hydrogen shed, houses the pumps, valves, controls, ammonia reactor, collector tanks, and 10 high-pressure storage tanks. We make nitrogen by flowing compressed air through a nitrogen generator and removing the atmospheric oxygen. Before entering the reactor, the hydrogen and nitrogen are compressed to 24 MPa (3,500 pounds per square inch).
It’s been a process of trial and error to get the system right. When we first started making ammonia, we found it took too long for the reactor’s preheater to heat the hydrogen and nitrogen, so we added electrical band heaters around the outside of the unit. Unfortunately, the additional heat weakened the outer steel shell, and the next time we attempted to make ammonia, the outer shell split open. The mixed gases, which were under pressure at 24 MPa, caught fire. Toyne was in the equipment room at the time and noticed the pressure dropping. He made it out to the ammonia building in time to take pictures of the flames. After a few minutes, the gas had all vented through the top of the building. Luckily, only the reactor was damaged, and no one was hurt.
After that incident, we redesigned the ammonia reactor to add internal electrical heaters, which warm the apparatus before the gases are introduced. We also insulated the outer pressure shell from the heated inside components. Once started, the reaction forming the ammonia needs no additional heat.
Our ammonia system, like our hydrogen and nitrogen systems, is hooked up to the solar panels, so we cannot run it round the clock. Also, because of the limited amount of solar power we have, we can make either hydrogen or nitrogen on any given day. Once we have enough of both, we can produce a batch of ammonia. At first, we had difficulty producing nitrogen pure enough for ammonia production, but we solved that problem by mixing in a bit of hydrogen. The hydrogen bonds with the oxygen to create water vapor, which is far easier to remove than atmospheric oxygen.
We’ve estimated that our system uses a total of 14 kilowatt-hours to make a liter of ammonia, which contains 3.8 kWh of energy. This may seem inefficient, but it’s comparable to the amount of usable energy we could get from a diesel-powered tractor. About two-thirds of the electrical energy is used to make the hydrogen, one-quarter is used to make the nitrogen, and the remainder is for the ammonia.
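The energy bookkeeping for one liter of ammonia can be made explicit from the shares quoted above; the round-trip percentage at the end is derived from those numbers rather than stated in the text.

```python
# Where the 14 kWh spent on each liter of ammonia goes.
TOTAL_KWH_IN = 14.0
KWH_OUT = 3.8                         # energy content of 1 L of ammonia

h2_share, n2_share = 2 / 3, 1 / 4
nh3_share = 1 - h2_share - n2_share   # the remainder, about 1/12

kwh_hydrogen = TOTAL_KWH_IN * h2_share    # ~9.3 kWh to make hydrogen
kwh_nitrogen = TOTAL_KWH_IN * n2_share    # 3.5 kWh to make nitrogen
kwh_synthesis = TOTAL_KWH_IN * nh3_share  # ~1.2 kWh for the reactor

round_trip = KWH_OUT / TOTAL_KWH_IN       # ~0.27, i.e. ~27 percent
```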
Each batch of ammonia is about 38 liters (10 gallons). It takes 10 batches to make enough ammonia to fertilize 1.2 hectares of the farm’s nearly 61 hectares (3 of 150 acres) of corn. Thankfully, we can use the same ammonia for either application—it has to be liquid regardless of whether we’re using it for fertilizer or fuel.
We now have the basis of an on-site carbon-emission-free system for fueling a tractor and generating fertilizer, but there’s still plenty to improve. The solar arrays were sized to generate only hydrogen. We need additional solar panels or perhaps wind turbines to make more hydrogen, nitrogen, and ammonia. In order to make these improvements, we’ve created the Schmuecker Renewable Energy System, a nonprofit organization that accepts donations.
Toyne compares our system to the Wright brothers’ airplane: It is the initial demonstration of what is possible. Hydrogen and ammonia fuels will become more viable as the equipment costs decrease and more people gain experience working with them. I’ve spent more than US $2 million of my retirement savings on the effort. But much of the expense was due to the custom nature of the work: We estimate that to replicate the farm’s current setup would cost a third to half as much and would be more efficient with today’s improved equipment.
We’ve gotten a lot of interest about what we’ve installed so far. Our tractor has drawn attention from other farmers in Iowa. We’ve received inquiries from Europe, South Africa, Saudi Arabia, and Australia about making ammonia with no carbon emissions. In May 2018, we were showing our system to two employees of the U.S. Department of Energy, and they were so intrigued they invited us to present at an Advanced Research Projects Agency–Energy (ARPA-E) program on renewable, carbon-free energy generation that July.
Humankind needs to develop renewable, carbon-emission-free systems like the one we’ve demonstrated. If we do not harness other energy sources to address climate change and replace fossil fuels, future farmers will find it harder and harder to feed everyone. Our warming world will become one in which famine is an everyday occurrence.
This article appears in the November 2019 print issue as “The Carbon-Free Farm.”
About the Author
Jay Schmuecker worked for more than 50 years building planetary spacecraft at NASA’s Jet Propulsion Laboratory. Since retiring, he has been developing a solar-powered hydrogen fueling and fertilization system at Pinehurst Farm in eastern Iowa.
In research, sometimes the investigator becomes part of the experiment. That’s exactly what happened to Efraín O’Neill-Carrillo and Agustín Irizarry-Rivera, both professors of electrical engineering at the University of Puerto Rico Mayagüez, when Hurricane Maria hit Puerto Rico on 20 September 2017. Along with every other resident of the island, they lost power in an islandwide blackout that lasted for months.
The two have studied Puerto Rico’s fragile electricity infrastructure for nearly two decades and, considering the island’s location in a hurricane zone, had been proposing ways to make it more resilient.
They also practice what they preach. Back in 2008, O’Neill-Carrillo outfitted his home with a 1.1-kilowatt rooftop photovoltaic system and a 5.4-kilowatt-hour battery bank that could operate independently of the main grid. He was on a business trip when Maria struck, but he worried a bit less knowing that his family would have power.
Irizarry-Rivera [top] wasn’t so lucky. His home in San Germán also had solar panels. “But it was a grid-tied system,” he says, “so of course it wasn’t working.” It didn’t have storage or the necessary control electronics to allow his household to draw electricity directly from the solar panels, he explains.
“I estimated I wouldn’t get [grid] power until March,” Irizarry-Rivera says. “It came back in February, so I wasn’t too far off.” In the meantime, he spent more than a month acquiring and installing batteries, charge controllers, and a new stand-alone inverter. His family then relied exclusively on solar power for 101 days, until grid power was restored.
In “How to Harden Puerto Rico’s Grid Against Hurricanes,” the two engineers describe how Puerto Rico could benefit from community microgrids made up of similar small PV systems. The amount of power they produce wouldn’t meet the average Puerto Rican household’s typical demand. But, Irizarry-Rivera points out, you quickly learn to get by with less.
“We got a lot of things done with 4 kilowatt-hours a day,” he says of his own household. “We had lighting and our personal electronics working, we could wash our clothes, run our refrigerator. Everything else is just luxuries and conveniences.”
This article appears in the November 2019 print issue as “After Maria.”
Another devastating hurricane season winds down in the Caribbean. As in previous years, we are left with haunting images of entire neighborhoods flattened, flooded streets, and ruined communities. This time it was the Bahamas, where damage was estimated at US $7 billion and at least 50 people were confirmed dead, with the possibility of many more fatalities yet to be discovered.
A little over two years ago, even greater devastation was wreaked upon Puerto Rico. The back-to-back calamity of Hurricanes Irma and Maria killed nearly 3,000 people and triggered the longest blackout in U.S. history. All 1.5 million customers of the Puerto Rico Electric Power Authority lost power. Thanks to heroic efforts by emergency utility crews, about 95 percent of customers had their service restored after about 6 months. But the remaining 5 percent—representing some 250,000 people—had to wait nearly a year.
After the hurricanes, many observers were stunned by the ravages to Puerto Rico’s centralized power grid: Twenty-five percent of the island’s electric transmission towers were severely damaged, as were 40 percent of the 334 substations. Power lines all over the island were downed, including the critical north-south transmission lines that cross the island’s mountainous interior and move electricity generated by large power plants on Puerto Rico’s south shore to the more populated north.
In the weeks and months following the hurricane, many of the 3.3 million inhabitants of Puerto Rico, who are all U.S. citizens, were forced to rely on noisy, noxious diesel- or gasoline-fired generators. The generators were expensive to operate, and people had to wait in long lines just to get enough fuel to last a few hours. Government emergency services were slow to reach people, and many residents found assistance instead from within their own communities, from family and friends.
The two of us weren’t surprised that the hurricane caused such intense and long-lasting havoc. For more than 20 years, our group at the University of Puerto Rico Mayagüez has studied Puerto Rico’s vulnerable electricity network and considered alternatives that would better serve the island’s communities.
Hurricanes are a fact of life in the Caribbean. Preparing for natural disaster is what any responsible government should do. And yet, even before the storm, we had become increasingly concerned at how the Puerto Rico Electric Power Authority, or PREPA, had bowed to partisan politics and allowed the island’s electrical infrastructure to fall into disrepair. Worse, PREPA, a once well-regarded public power company, chose not to invest in new technology and organizational innovations that would have made the grid more durable, efficient, and sustainable.
In our research, we’ve tried to answer such questions as these: What would it take to make the island’s electricity network more resilient in the face of a natural disaster? Would a more decentralized system provide better service than the single central grid and large fossil-fuel power plants that Puerto Rico now relies on? Hurricane Maria turned our academic questions into a huge, open-air experiment that included millions of unwilling subjects—ourselves included. [For more on our experiences during the storm, see “For Two Power Grid Experts, Hurricane Maria Became a Huge Experiment.”]
As Puerto Rico rebuilds, there is an extraordinary opportunity to rethink the island’s power grid and move toward a flexible, robust system capable of withstanding punishing storms. Based on our years of study and analysis, we have devised a comprehensive plan for such a grid, one that would be much better suited to the conditions and risks faced by island populations. This grid would rely heavily on microgrids, distributed solar photovoltaics, and battery storage to give utilities and residents much greater resilience than could ever be achieved with a conventional grid. We are confident our ideas could benefit island communities in any part of the world marked by powerful storms and other unpredictable threats.
As is typical throughout the world, Puerto Rico designed its electricity infrastructure around large power plants that feed into an interconnected network of high-voltage transmission lines and lower-voltage distribution lines. When this system was built, large-scale energy storage was very limited. So then, as now, the grid’s control systems had to constantly match generation with demand while maintaining the desired voltage and frequency across the network. About 70 percent of Puerto Rico’s fossil-fuel generation is located along the island’s south coast, while 70 percent of the demand is concentrated in the north, which necessitated building transmission lines across the tropical mountainous interior.
The hurricane vividly exposed the system’s vulnerability. Officials finally acknowledged that it made no sense for a heavily populated island sitting squarely in the Caribbean’s hurricane zone to rely on a centralized infrastructure that was developed for continent-wide systems, and based on technology, assumptions, and economics from the last century. After Maria, many electricity experts called for Puerto Rico to move toward a more decentralized grid.
It was a bittersweet moment for us, because we’d been saying the same thing for more than a decade. Back in 2008, for instance, our group at the university assessed the potential for renewable energy [PDF] on the island. We looked at biomass, microhydropower, ocean, photovoltaics (PV), solar thermal, wind, and fuel cells. Of these, rooftop PV stood out. We estimated that equipping about two-thirds of residential roofs with photovoltaics would be enough to meet the total daytime peak demand—about 3 gigawatts—for the entire island.
To be sure, interconnecting so much distributed energy generation to the power grid would be an enormous challenge, as we stated in the report. However, in the 11 years since that study, PV technology—as well as energy storage, PV inverters, and control software—has gotten much better and less costly. Now, more than ever, distributed-solar PV is the way to go for Puerto Rico.
Sadly, though, renewable energy did not take off in Puerto Rico. Right before Maria, renewable sources were supplying just 2.4 percent of the island’s electricity, from a combination of rooftop PV, several onshore wind and solar-power farms, and a few small outdated hydropower plants.
Progress has been hamstrung by PREPA. The utility was founded as a government corporation in 1941 to interconnect the existing isolated electric systems and achieve islandwide electrification at a reasonable cost. By the early 1970s, it had succeeded.
Meanwhile, generous tax incentives had induced many large companies to locate their factories and other facilities in Puerto Rico. The utility relied heavily on those large customers, which paid on time and helped finance PREPA’s infrastructure improvements. But in the late 1990s, a change in U.S. tax code led to the departure of nearly 60 percent of PREPA’s industrial clients. To close the gap between its revenues and operating costs, PREPA periodically issued new municipal bonds. It wasn’t enough. The utility’s operating and management practices failed to adapt to the new reality of more environmental controls, the rise of renewable energy, and demands for better customer service. Having accumulated $9 billion in debt, PREPA filed for bankruptcy in July 2017.
Then the hurricane struck. After the debris was cleared came the recognition—finally—that the technological options for supplying electricity have multiplied. For starters, distributed energy resources like rooftop PV and battery storage are now economically competitive with grid power in Puerto Rico. Over the last 10 years, the residential retail price of electricity has fluctuated between 20 and 27 U.S. cents per kilowatt-hour; for comparison, the average price in the rest of the United States is about 13 cents per kWh. When you factor in the additional rate increases that will be needed to service PREPA’s debt, the price will eventually exceed 30 cents per kWh. That’s more than the levelized cost of electricity (LCOE) from a rooftop PV system plus battery storage, at 24 to 29 cents per kWh, depending on financing and battery type. And if these solar-plus-storage systems were purchased in bulk, the LCOE would be even less.
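The price comparison above is easy to check with the article’s own figures; the breakeven test below is illustrative, not a financial model.

```python
# Rough comparison of grid electricity price vs. rooftop solar-plus-storage
# LCOE in Puerto Rico, using the figures cited above (illustrative only).

grid_price_recent = (0.20, 0.27)   # $/kWh, recent residential retail range
grid_price_with_debt = 0.30        # $/kWh, expected once debt-service rate hikes land
solar_storage_lcoe = (0.24, 0.29)  # $/kWh, rooftop PV + battery, depending on
                                   # financing and battery type

# Even the high end of the solar-plus-storage range undercuts the
# debt-burdened grid price.
assert max(solar_storage_lcoe) < grid_price_with_debt
```

Bulk purchasing would push the solar-plus-storage side of this comparison even lower, widening the gap.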
Also, the technology now exists to match supply and demand locally, by using energy storage and by selectively lowering demand through improved efficiency, conservation, and demand-response actions. We have new control and communications systems that allow these distributed energy resources to be interconnected into a community network capable of meeting the electricity needs of a village or neighborhood.
Such a system is called a community microgrid. It is basically a small electrical network that connects electricity consumers—for example, dozens or hundreds of homes—with one or more sources of electricity, such as solar panels, along with inverters, control electronics, and some energy storage. In the event of an outage, disconnect switches enable this small grid to be quickly isolated from the larger grid that surrounds it or from neighboring microgrids, as the case may be.
Here’s how Puerto Rico’s grid could be refashioned from the bottom up. In each community microgrid, users would collectively install enough solar panels to satisfy local demand. These distributed resources and the related loads would be connected to one another and also tied to the main grid.
Over time, community microgrids could interconnect to form a regional grid. Eventually, Puerto Rico’s single centralized power grid could even be replaced by interconnecting regional grids and community microgrids. If a storm or some other calamity threatens one or more microgrids, neighboring ones could disconnect and operate independently. Studies of how grids are affected by storms have repeatedly shown that a large percentage of power outages are caused by relatively tiny areas of grid damage. So the ability to quickly isolate the areas of damage, as a system of microgrids is able to do, can be enormously beneficial in coping with storms. The upshot is that an interconnection of microgrids would be far more resilient and reliable than Puerto Rico’s current grid and also more sustainable and economical.
Could such a model actually work in Puerto Rico? It certainly could. Starting in 2009, our research group developed a model for a microgrid that would serve a typical community in Puerto Rico. In the latest version, the overall microgrid serves 700 houses, divided into 70 groups of 10 houses. Each of these groups is connected to its own distribution transformer, which serves as the connection point to the rest of the community microgrid. All of the transformers are connected by 4.16-kilovolt lines in a radial network. [See diagram, “A Grid of Microgrids.”]
Each group within the community microgrid would be equipped with solar panels, inverters, batteries, control and communications systems, and protective devices. For the 10 homes in each group, there would be an aggregate PV supply of 10 to 20 kW, or 1 to 2 kW per house. The aggregate battery storage per group would be 128 kWh, which is enough to get the homes through most nights without requiring power from the larger grid. (The amounts of storage and supply in our model are based on measurements of energy demand and variations in solar irradiance in an actual Puerto Rican town; obviously, they could be scaled up or down, according to local needs.)
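The sizing described above aggregates in a straightforward way; this back-of-the-envelope tally uses only numbers from the model (700 houses, 70 groups of 10, 10 to 20 kW of PV and 128 kWh of storage per group).

```python
# Back-of-the-envelope totals for the community microgrid model described above.

houses = 700
group_size = 10
groups = houses // group_size          # 70 groups, one transformer each

pv_per_group_kw = (10, 20)             # i.e., 1 to 2 kW per house
battery_per_group_kwh = 128

total_pv_kw = (groups * pv_per_group_kw[0], groups * pv_per_group_kw[1])
total_storage_kwh = groups * battery_per_group_kwh

print(total_pv_kw)        # (700, 1400) -> 0.7 to 1.4 MW of PV community-wide
print(total_storage_kwh)  # 8960 -> nearly 9 MWh of battery storage
```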
In our tests, we assume that each community microgrid remains connected to the central grid (or rather, a new and improved version of Puerto Rico’s central grid) under normal conditions but also manages its own energy resources. We also assume that individual households and businesses have taken significant steps to improve their energy conservation and efficiency—through the use of higher-efficiency appliances, for instance. Electricity demand must still be balanced with generation, but that balancing is made easier due to the presence of battery storage.
That capability means the microgrids in our model can make use of demand response, a technique that enables customers to cut their electricity consumption by a predefined amount during times of peak usage or crisis. In exchange for cutting demand, the customer receives preferential rates, and the central grid benefits by limiting its peak demand. Many utilities around the world now use some form of demand response to reduce their reliance on fast-starting generating facilities, typically fired by natural gas, that provide additional capacity at times of peak demand. PREPA’s antiquated grid, however, isn’t yet set up for demand response.
During any disruption that knocks out all or part of the central grid, our model’s community microgrids would disconnect from the main grid. In this “islanded” mode, the local community would continue to receive electricity from the batteries and solar panels for essential loads, such as refrigeration. Like demand response, this capability would be built into and managed by the communications and control systems. Such technology exists, but not yet in Puerto Rico.
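The islanded-mode behavior described above can be sketched as a simple dispatch rule. The function, its thresholds, and the example power figures below are illustrative assumptions for the sketch, not details from the article or any real control system.

```python
# Hypothetical sketch of islanding logic: when the main grid goes down, the
# microgrid disconnects and serves only essential loads (e.g., refrigeration)
# from its PV and batteries. All names and numbers here are illustrative.

def dispatch(grid_up: bool, pv_kw: float, battery_kw: float,
             essential_kw: float, total_kw: float) -> tuple:
    """Return the operating mode and the load served, in kW."""
    if grid_up:
        # Grid-tied: the full local load is served.
        return ("grid-tied", total_kw)
    # Islanded: shed non-essential loads; essentials run on PV + storage.
    served = min(essential_kw, pv_kw + battery_kw)
    return ("islanded", served)

# Example: an outage with 15 kW of PV output and 20 kW of battery inverter
# capacity, against 25 kW of essential load out of a 60-kW total.
mode, served = dispatch(False, 15, 20, 25, 60)
# -> mode == "islanded", served == 25
```

In a real deployment this decision would live in the communications and control layer the authors mention, coordinated across the 70 transformer groups rather than computed for a single bus.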
Besides the modeling and simulation, our research group has been working with several communities in Puerto Rico that are interested in developing local microgrids and distributed-energy resources. We have helped one community secure funding to install ten 2-kW rooftop PV systems, which they eventually hope to connect into a community microgrid based on our design.
Other communities in central Puerto Rico have installed similar systems since the hurricane. The largest of these consists of 28 small PV systems in Toro Negro, a town in the municipality of Ciales. Most are rooftop PV systems serving a single household, but a few serve two or three houses, which share the resources.
Another project at the University of Puerto Rico Mayagüez built five stand-alone PV kiosks, which were deployed in rural locations that had no electricity for months after Maria. University staff, students, and faculty all contributed to this effort. The kiosks address the simple fact that rural and otherwise isolated communities are usually the last to be reconnected to the power grid after blackouts caused by natural disasters.
Taking this idea one step further, a member of our group, Marcel J. Castro-Sitiriche, recently proposed that the 200,000 households that were the last to be reconnected to the grid following the hurricane should receive rooftop PV and battery storage systems, to be paid for out of grid-reconstruction funds. If those households had had such systems and thus been able to weather the storm with no interruption in service, the blackout would have lasted for 6 months instead of a year. The cost of materials and installation for a 2-kW PV system with 10 kWh of batteries comes to about $7,000, assuming $3 per watt for the PV systems and $100/kWh for lead-acid batteries. Many households and small businesses spent nearly that much on diesel fuel to power generators during the months they had no grid connection.
To outfit all 200,000 of those households would come to $1.4 billion, a sizable sum. But it’s just a fraction of what the Puerto Rico government has proposed spending on an enhanced central grid. Rather than merely rebuilding PREPA’s grid, Castro-Sitiriche argues, the government should focus its attention on protecting those most vulnerable to any future natural disaster.
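The cost figures in the proposal above follow directly from the stated unit prices:

```python
# Checking the cost arithmetic for the rooftop PV-plus-storage proposal above.

pv_watts = 2_000           # 2-kW PV system
battery_kwh = 10           # 10 kWh of lead-acid storage
cost_per_watt = 3.0        # $/W installed PV
cost_per_kwh = 100.0       # $/kWh of battery

per_household = pv_watts * cost_per_watt + battery_kwh * cost_per_kwh
households = 200_000       # the last households reconnected after Maria

print(per_household)               # 7000.0 -> about $7,000 per household
print(per_household * households)  # 1400000000.0 -> $1.4 billion total
```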
As engineers, we’re of course interested in the details of distributed-energy resources and microgrid technology. But our fieldwork has taught us the importance of considering the social implications and the end users.
One big advantage of the distributed-microgrid approach is that it’s centered on Puerto Rico’s most reliable social structures: families, friends, and local community. When all else failed after Hurricane Maria, those were the networks that rose to the many challenges Puerto Ricans faced. We think it makes sense to build a resilient electricity grid around this key resource. With proper training, local residents and businesspeople could learn to operate and maintain their community microgrid.
A move toward community microgrids would be more than a technical solution—it would be a socioeconomic development strategy. That’s because a greater reliance on distributed energy would favor small and medium-size businesses, which tend to invest in their communities, pay taxes locally, and generate jobs.
There is a precedent for this model: Over 200 communities in Puerto Rico extract and treat their own potable water, through arrangements known as acueductos comunitarios, or community aqueducts. A key component to this arrangement is having a solid governance agreement among community members. Our social-science colleagues at the university have studied how community aqueducts are managed, and from them we have learned some best practices that have influenced the design of our community microgrid concept. Perhaps most important is that the community agrees to manage electricity demand in a flexible way. This can help minimize the amount of battery storage needed and therefore the overall cost of the microgrid.
During outages and emergencies, for instance, when the microgrid is running in islanded mode, users would be expected to be conservative and flexible about their electricity usage. They might have to agree to run their washing machines only on sunny days. For less conscientious users, sensors monitoring their energy usage could trigger a signal to their cellphones, reminding them to curtail their consumption. That strategy has already been successfully implemented as part of demand-response programs elsewhere in the world.
Readers living in the mainland United States or Western Europe, accustomed to reliable, round-the-clock electricity, might consider such measures highly inconvenient. But the residents of Puerto Rico, we believe, would be more accepting. Overnight, we went from being a fully electrified, modern society to having no electricity at all. The memory is still raw. A community microgrid that compels people to occasionally cut their electricity consumption and to take greater responsibility for the local electricity infrastructure would be far preferable.
This model is applicable beyond Puerto Rico—it could benefit other islands in the tropics and subtropics, as well as polar regions and other areas that have weak or no grid connections. For those locales, it no longer makes sense to invest millions or billions of dollars to extend and maintain a centralized electric system. Thanks to the advance of solar, power electronics, control, and energy-storage technologies, community-based, distributed-energy initiatives are already challenging the dominant centralized energy model in many parts of the world. More than two years after Hurricane Maria, it’s finally time for Puerto Rico to see the light.
The idea is simple: Send kites or tethered drones hundreds of meters up in the sky to generate electricity from the persistent winds aloft. With such technologies, it might even be possible to produce wind energy around the clock. However, the engineering required to realize this vision is still very much a work in progress.
Dozens of companies and researchers devoted to developing technologies that produce wind power while adrift high in the sky gathered at a conference in Glasgow, Scotland last week. They presented studies, experiments, field tests, and simulations describing the efficiency and cost-effectiveness of various technologies collectively described as airborne wind energy (AWE).
In August, Alameda, Calif.-based Makani Technologies ran demonstration flights of its airborne wind turbines—which the company calls energy kites—in the North Sea, some 10 kilometers off the coast of Norway. According to Makani CEO Fort Felker, the North Sea tests consisted of a launch and “landing” test for the flyer followed by a flight test, in which the kite stayed aloft for an hour in “robust crosswind(s).” The flights were the first offshore tests of the company’s kite-and-buoy setup. The company has, however, been conducting onshore flights of various incarnations of their energy kites in California and Hawaii.
Wind turbines have certainly grown up. When the Danish firm Vestas began the trend toward gigantism, in 1981, its three-blade machines were capable of a mere 55 kilowatts. That figure rose to 500 kW in 1995, reached 2 MW in 1999, and today stands at 5.6 MW. In 2021, MHI Vestas Offshore Wind’s V164 will rise 105 meters high at the hub, swing 80-meter blades, and generate up to 10 MW, making it the first commercially available turbine rated in double-digit megawatts. Not to be left behind, GE Renewable Energy is developing a 12-MW machine with a 260-meter tower and 107-meter blades, also rolling out by 2021.
That is clearly pushing the envelope, although it must be noted that still larger designs have been considered. In 2011, the UpWind project released what it called a predesign of a 20-MW offshore machine with a rotor diameter of 252 meters (three times the wingspan of an Airbus A380) and a hub diameter of 6 meters. So far, the limit of the largest conceptual designs stands at 50 MW, with height exceeding 300 meters and with 200-meter blades that could flex (much like palm fronds) in furious winds.
To imply, as an enthusiastic promoter did, that building such a structure would pose no fundamental technical problems because it stands no higher than the Eiffel Tower, constructed 130 years ago, is to choose an inappropriate comparison. If the constructible height of an artifact were the determinant of wind-turbine design, then we might as well refer to the Burj Khalifa in Dubai, a skyscraper that topped 800 meters in 2010, or to the Jeddah Tower, which will reach 1,000 meters in 2021. Erecting a tall tower is no great problem; it’s quite another proposition, however, to engineer a tall tower that can support a massive nacelle and rotating blades for many years of safe operation.
Larger turbines must face the inescapable effects of scaling. Turbine power increases with the square of the radius swept by its blades: A turbine with blades twice as long would, theoretically, be four times as powerful. But the expansion of the surface swept by the rotor puts a greater strain on the entire assembly, and because blade mass should (at first glance) increase as the cube of blade length, larger designs should be extraordinarily heavy. In reality, designs using lightweight synthetic materials and balsa can keep the actual exponent to as little as 2.3.
Even so, the mass (and hence the cost) adds up. Each of the three blades of Vestas’s 10-MW machine will weigh 35 metric tons, and the nacelle will come to nearly 400 tons. GE’s record-breaking design will have blades of 55 tons, a nacelle of 600 tons, and a tower of 2,550 tons. Merely transporting such long and massive blades is an unusual challenge, although it could be made easier by using a segmented design.
Exploring likely limits of commercial capacity is more useful than forecasting specific maxima for given dates. Available wind turbine power [PDF] is equal to half the density of the air (which is 1.23 kilograms per cubic meter) times the area swept by the blades (pi times the radius squared) times the cube of wind velocity. Assuming a wind velocity of 12 meters per second and an energy-conversion coefficient of 0.4, then a 100-MW turbine would require rotors nearly 550 meters in diameter.
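Inverting that formula gives the rotor size quoted above; this sketch uses only the stated assumptions (air density 1.23 kg/m³, 12 m/s wind, conversion coefficient 0.4).

```python
from math import pi, sqrt

# Rotor diameter required for a hypothetical 100-MW turbine, from the
# wind-power formula: P = 0.5 * rho * A * v**3 * cp.

rho = 1.23        # air density, kg/m^3
v = 12.0          # wind speed, m/s
cp = 0.4          # energy-conversion coefficient
target_w = 100e6  # 100 MW

area = target_w / (0.5 * rho * v**3 * cp)  # swept area, m^2
diameter = 2 * sqrt(area / pi)

print(round(diameter))  # ~547 -> a rotor nearly 550 meters in diameter
```

Note the cubic dependence on wind speed: a slightly stronger wind assumption shrinks the required rotor dramatically, which is why such estimates are sensitive to siting.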
To predict when we’ll get such a machine, just answer this question: When will we be able to produce 275-meter blades of plastic composites and balsa, figure out their transport and their coupling to nacelles hanging 300 meters above the ground, ensure their survival in cyclonic winds, and guarantee their reliable operation for at least 15 or 20 years? Not soon.
This article appears in the November 2019 print issue as “Wind Turbines: How Big?”
The world’s first floating nuclear power plant (FNPP) docked at Pevek, Chukotka, in Russia’s remote Far East on 14 September. It completed a journey of some 9,000 kilometers from where it was constructed in a St. Petersburg shipyard. First, it was towed to the city of Murmansk, where its nuclear fuel was loaded, and from there took the North Sea Route to the other side of Russia’s Arctic coast.
The co-generation plant, named the Akademik Lomonosov, consists of a non-motorized barge, two pressurized-water KLT-40S reactors similar to those powering Russian nuclear icebreakers, and two steam turbine plants.
The FNPP can generate up to 70 megawatts (MW) of electricity and 50 gigacalories of heat an hour. That is sufficient to power the electric grids of the resource-rich region—where some 50,000 people live and work—and also deliver steam heat to the supply lines of Pevek city. The plant will manage this second feat by using steam extracted from the turbines to heat its intermediate circuit water system, which circulates between the reactor units and the coastal facilities, from 70 to 130 °C.
Construction of the floating reactor began in 2007 and had to overcome a messy financial situation, including the threat of bankruptcy in 2011. The venture is based on the small modular reactor (SMR) design: a type of nuclear fission reactor that is smaller than conventional reactors. Such reactors can be built from start to finish at a plant and then shipped—fully assembled, tested, and ready to operate—to remote sites where normal construction would be difficult to manage.
Andrey Zolotkov, head of the Murmansk, Russia office of Bellona Foundation, an environmental organization based in Oslo, Norway, acknowledges the practicability of the SMR design. But he is one of many who question its necessity in this particular case.
“The same plant could be built on the ground there (in Chukotka) without resorting to creating a floating structure,” says Zolotkov. “After all, the [nuclear power plant] presently in use was built on land there and has been operating for decades.”
The floating design has raised both environmental and safety concerns, given that the plant will operate in the pristine Arctic and must endure its harsh winters and choppy seas. Greenpeace has dubbed it a “floating Chernobyl,” and “a nuclear Titanic.”
Coastal structures, dams, and breakwaters have been built to protect the vessel against tsunamis and icebergs.
The plant employs a number of active and passive safety systems, including an electrically driven automated system and a passive system that uses gravity to insert control rods into the reactor core, keeping the reactor subcritical in emergencies. The reactors also use low-enriched uranium, with a uranium-235 concentration below 20 percent, which makes the fuel unsuitable for producing nuclear weapons.
Given such safety measures, Rosatom says on its site that a peer-reviewed probabilistic safety assessment of possible damage finds the chances of a serious accident at the FNPP “are less than one hundred thousandth of a percent.”
Zolotkov, who worked in various capacities—including radiation safety officer—for 35 years in Russia’s civilian nuclear fleet, also notes that there have been no serious incidents on such ships since 1975. “In the event of an accident in the FNPP, the consequences, I believe, would be localized within its structure, so the release of radioactive substances will be minimal,” he says.
The plant’s nuclear fuel has to be replaced every three years. The unloaded fuel is held in onboard storage pools, and later in dry containers also kept on board. Every 10 to 12 years during its 40-year life cycle (possibly extendable to 50 years), the FNPP will be towed to a special facility for maintenance.
After decommissioning, the plant will be towed to a deconstruction and recycling facility. Rosatom says on its site, “No spent nuclear fuel or radioactive waste is planned to be left in the Arctic—spent fuel will be taken to the special storage facilities in mainland Russia.”
Rosatom has not disclosed the cost of the venture, calling it a pilot project. It is currently working on a next-generation version that will use two RITM-200M reactors, each rated at 50 MW. Improvement targets include a more compact design, longer periods between refueling, flexible load-following capabilities, and multipurpose uses that include water desalination and district heating.
Provided Rosatom receives sufficient orders, it says it aims to compete in price with plants based on fossil fuels and renewable energy.
The company, however, may face challenges other than marketing and operating its novel design. “These FNPPs will eventually carry spent nuclear fuel and are not yet recognized by international maritime law,” says Zolotkov. “So Rosatom may face problems obtaining permits and insurance when it comes to towing them along certain sea routes.”
The suit has a bit of a spy novel twist in that Celgard alleges in its complaint that one of its senior scientists left the company in October 2016 and moved to China to join Senior, after which he changed his name to cover up his identity. This scientist is alleged to be the source through which Senior acquired Celgard’s intellectual property.
Lucas Joppa thinks big. Even while gazing down into his cup of tea in his modest office on Microsoft’s campus in Redmond, Washington, he seems to see the entire planet bobbing in there like a spherical tea bag.
As Microsoft’s first chief environmental officer, Joppa came up with the company’s AI for Earth program, a five-year effort that’s spending US $50 million on AI-powered solutions to global environmental challenges.
The program is not just about specific deliverables, though. It’s also about mindset, Joppa told IEEE Spectrum in an interview in July. “It’s a plea for people to think about the Earth in the same way they think about the technologies they’re developing,” he says. “You start with an objective. So what’s our objective function for Earth?” (In computer science, an objective function describes the parameter or parameters you are trying to maximize or minimize for optimal results.)
AI for Earth launched in December 2017, and Joppa’s team has since given grants to more than 400 organizations around the world. In addition to receiving funding, some grantees get help from Microsoft’s data scientists and access to the company’s computing resources.
In a wide-ranging interview about the program, Joppa described his vision of the “ultimate optimization problem”—figuring out which parts of the planet should be used for farming, cities, wilderness reserves, energy production, and so on.
Every square meter of land and water on Earth has an infinite number of possible utility functions. It’s the job of Homo sapiens to describe our overall objective for the Earth. Then it’s the job of computers to produce optimization results that are aligned with the human-defined objective.
I don’t think we’re close at all to being able to do this. I think we’re closer from a technology perspective—being able to run the model—than we are from a social perspective—being able to make decisions about what the objective should be. What do we want to do with the Earth’s surface?