All posts by Vaclav Smil

Cranes Lift More Than Their Weight in the World of Shipping and Construction

Post Syndicated from Vaclav Smil original https://spectrum.ieee.org/tech-history/heroic-failures/cranes-lift-more-than-their-weight-in-the-world-of-shipping-and-construction

A crane can lift a burden, move it sideways, and lower it. These operations, which the Greeks employed when building marble temples 25 centuries ago, are now performed every second somewhere around the world as tall ship-to-shore gantry cranes empty and reload container vessels. The two fundamental differences between ancient cranes and the modern versions involve the materials that make up those cranes and the energies that power them.

The cranes of early Greek antiquity were just wooden jibs with a simple pulley whose mechanical advantage came from winding the ropes by winches. By the third century B.C.E., compound pulleys had come into use; the Roman trispastos provided a nearly threefold mechanical advantage—nearly, because there were losses to friction. Those losses became prohibitive with the addition of more than the five ropes of the pentaspastos.
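
How quickly friction erodes the gains from extra ropes can be shown with a toy model; the per-sheave efficiency below is an assumed value, not a measured one, and the calculation is a minimal sketch rather than a reconstruction of ancient hardware.

```python
# Toy model of a compound pulley (tackle) with friction.
# Assumption: each sheave passes rope tension on with efficiency ETA (< 1),
# so the successive rope segments carry T, ETA*T, ETA^2*T, ... of the pull T.
# The supported load is the sum of those tensions, a geometric series.

ETA = 0.8  # assumed per-sheave efficiency; with ETA = 0.9 the trispastos
           # comes closer to the ideal threefold advantage

def actual_advantage(ropes: int, eta: float = ETA) -> float:
    """Effective mechanical advantage of a tackle with `ropes` supporting ropes."""
    return sum(eta**i for i in range(ropes))  # = (1 - eta**ropes) / (1 - eta)

for ropes, name in [(3, "trispastos"), (5, "pentaspastos"), (8, "eight ropes")]:
    print(f"{name:12s}: ideal {ropes}x, with friction ~{actual_advantage(ropes):.1f}x")

# However many ropes are added, the advantage can never exceed 1/(1 - ETA),
# which is why piling on sheaves beyond the pentaspastos bought little.
```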

The most powerful cranes in the Roman and later the medieval periods were powered by men treading inside wheels or by animals turning windlasses in tight circles. Their lifting capacities were generally between 1 and 5 metric tons. Major advances came only during the 19th century.

William Fairbairn’s largest harbor crane, which he developed in the 1850s, was powered by four men turning winches. Its performance was further enhanced by fixed and movable pulleys, which gave it more than a 600-fold mechanical advantage, enabling it to lift weights of up to 60 metric tons and move them over a circle with a 32-meter diameter. And by the early 1860s, William Armstrong’s company was producing annually more than 100 hydraulic cranes (transmitting forces through a liquid under pressure) for English docks. Steam-powered cranes of the latter half of the 19th century were able to lift more than 100 metric tons in steel mills; some of these cranes hung above the factory floor and moved on roof-mounted rails. During the 1890s, steam engines were supplanted by electric motors.

The next fundamental advance came after World War II, with Hans Liebherr’s invention of a tower crane that could swing its loads horizontally and could be quickly assembled at a construction site. Its tower top is the fulcrum of a lever whose lifting (jib) arm is balanced by a counterweight, and the crane’s capacity is enhanced by pulleys. Tower cranes were first deployed to reconstruct bombed-out German cities; they diffused rapidly and are now seen on construction projects around the world. Typical lifting capacities are between 12 and 20 metric tons, with the record held by the K10000, made by the Danish firm Krøll Cranes. It can lift up to 120 metric tons at the maximum radius of 100 meters.

Cranes are also mainstays of global commerce: Without gantry quay cranes capable of hoisting from 40 to 80 metric tons, it would take weeks to unload the 20,000 standard-size steel containers carried by one of today’s enormous container ships. But when several cranes work together in a tightly coordinated operation involving straddle carriers and trucks, the job takes less than 72 hours. In June 2017, 20,568 containers were unloaded from the Madrid Maersk in Antwerp in a record 59 hours.
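
A rough check of what such a record turnaround implies per crane; the number of cranes assigned to the ship is a hypothetical figure for illustration, not one reported in the text.

```python
# Rough throughput arithmetic for the Madrid Maersk call cited above.
containers = 20_568   # containers handled (figure from the text)
hours = 59            # record turnaround reported in the text
cranes = 9            # hypothetical number of quay cranes working the ship

moves_per_hour = containers / hours
per_crane = moves_per_hour / cranes

print(f"ship-wide rate: {moves_per_hour:.0f} containers per hour")  # ~350
print(f"per crane     : {per_crane:.0f} moves per hour")            # ~40
# Roughly 40 moves per hour per crane if nine cranes work the vessel at
# once -- near the practical limit of a ship-to-shore gantry crane, which
# is why the coordination with straddle carriers and trucks matters.
```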

Without these giant cranes, every piece of clothing, every pair of shoes, every TV, every mobile phone imported from Asia to North America or Europe would take longer to arrive and cost more to buy. As for the lifting capacity records, Liebherr now makes a truly gargantuan mobile crane that sits on an 18-wheeler truck and can support 1,200 metric tons. And, not surprisingly given China’s dominance of the industry, the Taisun, the most powerful shipbuilding gantry crane, can hoist 20,000 metric tons. That’s about half again as heavy as the Brooklyn Bridge.

This article appears in the August 2020 print issue as “Cranes (The Machines, Not Birds).”

Air Conditioning Wasn’t Invented to Provide Comfort to Human Beings

Post Syndicated from Vaclav Smil original https://spectrum.ieee.org/green-tech/buildings/air-conditioning-wasnt-invented-to-provide-comfort-to-human-beings

Air conditioning was devised not for comfort but for industry, specifically to control temperature and humidity in a color printing factory in Brooklyn. The process required feeding paper into the presses a number of times, once for each of the component colors, and the slightest misalignment caused by changes in humidity produced defective copies that had to be thrown away.

In 1902 the factory’s operators asked Willis Haviland Carrier (1876–1950) to help, and he deployed coils [PDF] supplying 211 kilowatts of cooling to maintain a temperature of 27 °C and a relative humidity of 55 percent. Half the cooling was furnished by cold well water, half from mechanical refrigeration based on evaporative cooling—the principle of which had been worked out in the 18th century by several scientists, notably Benjamin Franklin. The system was not a complete success, but Carrier continued his work, eventually publishing the first chart relating temperature, humidity, and the way people perceived these qualities—part of an engineering discipline now known as psychrometrics. In 1906, he obtained his first patent (U.S. Patent 808,897) for the “Apparatus for Treating Air,” and in 1911 he presented his paper, “Rational Psychrometric Formulae,” a lasting foundation for designing efficient air-conditioning systems.

At first, air conditioning was mainly used in the textile, printing, film, and food-processing industries. In the 1920s department stores and movie theaters began to offer it. In 1932 came a limited run of the first individual window units, a type that Philco-York brought to market in 1938—though nonessential A/C installations were temporarily banned during the war.

It took almost another three decades before half the households in the United States had A/C. By 1990, that rate reached 70 percent; now it exceeds 90 percent.

Large-scale migration from the Snowbelt to the Sunbelt drove the adoption, and afterwards household air conditioning began its northerly diffusion. Electricity-generating companies saw their peak production move from the winter months to the summer. Today, U.S. generation at utility-scale facilities is significantly higher in July than in January. In 2018 the difference was about 10 percent [PDF], and in 2019 it was 15 percent.

By 2019 there were more than 1.6 billion units in operation around the world, more than half of them in China and the United States. But this was just the beginning: In 2018 the International Energy Agency predicted a coming “cold crunch” driven by a combination of three key factors: Incomes will continue to rise in the emerging economies, where one of the first spending choices is for a window A/C unit. Global warming will gradually raise summer temperature peaks and increase the number of hot spells. Finally, populations will rise, with more than 90 percent of the increase taking place in the hotter places, such as Africa, the Middle East, and southern Asia.

There’s a lot of growth potential here. Fewer than 10 percent of the nearly 3 billion people who live in the warmest places now have air conditioning in their homes, compared with 90 percent in the United States and Japan. If air conditioning were provided to the more than 200 million inhabitants of Uttar Pradesh, a single state in India whose average summer temperature is far higher than that in Florida, this would require at least twice as much electricity as the cooling demand in the United States, with its 330 million people. And because coal-fired generation provides a large share of the electricity in India, China, and Indonesia, the spread of air conditioning will greatly increase Asia’s emissions of carbon dioxide.

Rising temperatures, rising incomes, and growing populations make the rapid growth of air conditioning unstoppable. All we can do is to moderate that growth rate by better urban planning, smarter building design, and enforcement of strict efficiency standards for new A/C units.

This article appears in the July 2020 print issue as “The Year is 118 A.A.C. (After Air Conditioning).”

How Many People Did It Take to Build the Great Pyramid?

Post Syndicated from Vaclav Smil original https://spectrum.ieee.org/tech-history/heroic-failures/how-many-people-did-it-take-to-build-the-great-pyramid

Given that some 4,600 years have elapsed since the completion of the Great Pyramid of Giza, the structure stands remarkably intact. It is a polyhedron with a regular polygon base, its volume is about 2.6 million cubic meters, and its original height was 146.6 meters, including the lost pyramidion, or capstone. We may never know exactly how the pyramid was built, but even so, we can say with some confidence how many people were required to build it.

We must start with the time constraint of roughly 20 years, the length of the reign of Khufu, the pharaoh who commissioned the construction (he died around 2530 B.C.E.). Herodotus, writing more than 21 centuries after the pyramid’s completion, was told that labor gangs totaling 100,000 men worked in three-month spells a year to finish the structure in 20 years. In 1974, Kurt Mendelssohn, a German-born British physicist, put the labor force at 70,000 seasonal workers and up to 10,000 permanent masons.

These are large overestimates; we can do better by appealing to simple physics. The potential energy of the pyramid—the energy needed to lift the mass above ground level—is simply the product of acceleration due to gravity, mass, and the center of mass, which in a pyramid is one-quarter of its height. The mass cannot be pinpointed because it depends on the specific densities of the Tura limestone and mortar that were used to build the structure; I am assuming a mean of 2.6 metric tons per cubic meter, hence a total mass of about 6.75 million metric tons. That means the pyramid’s potential energy is about 2.4 trillion joules.

To maintain his basal metabolic rate, a 70-kilogram (154-pound) man requires some 7.5 megajoules a day; steady exertion will raise that figure by at least 30 percent. About 20 percent of that increase will be converted into useful work, which amounts to about 450 kilojoules a day (different assumptions are possible, but they would make no fundamental difference). Dividing the potential energy of the pyramid by 450 kJ implies that it took 5.3 million man-days to raise the pyramid. If a work year consists of 300 days, that would mean almost 18,000 man-years, which, spread over 20 years, implies a workforce of about 900 men.

A similar number of workers might be needed to place the stones in the rising structure and then to smooth the cladding blocks (many interior blocks were just rough-cut). And in order to cut 2.6 million cubic meters of stone in 20 years, the project would have required about 1,500 quarrymen working 300 days a year and producing 0.25 cubic meter of stone per man per day. The grand total of the construction labor would then be some 3,300 workers. Even if we were to double that number to account for designers, organizers, and overseers and for labor needed for transport, tool repair, the building and maintenance of on-site housing, and cooking and laundry work, the total would still be less than 7,000 workers.
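
The whole estimate can be reproduced in a few lines of arithmetic, using only the assumptions stated above; this is a sketch, and small differences from the quoted totals are rounding.

```python
# Reproducing the workforce estimate from the assumptions stated above.
g = 9.81              # m/s^2
volume = 2.6e6        # m^3, pyramid volume
density = 2.6e3       # kg/m^3, assumed mean for limestone plus mortar
height = 146.6        # m, original height

mass = volume * density                     # ~6.76e9 kg
potential_energy = mass * g * height / 4    # center of mass at one-quarter height
print(f"potential energy ~ {potential_energy:.2e} J")        # ~2.4e12 J

useful_work_per_day = 450e3                 # J of useful work per laborer per day
man_days = potential_energy / useful_work_per_day            # ~5.3 million
lifters = man_days / 300 / 20               # 300-day years over a 20-year reign
print(f"lifting crew ~ {lifters:.0f} men")                    # ~900

placers = lifters                           # a similar crew to place and smooth
quarrymen = volume / (0.25 * 300 * 20)      # 0.25 m^3 per man per day
print(f"quarrymen    ~ {quarrymen:.0f} men")                  # ~1,700 (text: ~1,500)

print(f"construction total ~ {lifters + placers + quarrymen:.0f} men")  # ~3,300-3,500
```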

During the time of the pyramid’s construction, the total population of the late Old Kingdom was 1.5 million to 1.6 million people, and hence such a labor force would not have been an extraordinary imposition on the country’s economy. The challenge was to organize the labor, plan an uninterrupted supply of building stones, and provide housing, clothing, and food for labor gangs on the Giza site.

In the 1990s, archaeologists uncovered a cemetery for workers and the foundations of a settlement used to house the builders of the two later pyramids at the site, indicating that no more than 20,000 people lived there. That an additional two pyramids were built in rapid succession at the Giza site (for Khafre, Khufu’s son, starting around 2520 B.C.E., and for Menkaure, starting around 2490 B.C.E.) shows how quickly early Egyptians mastered the building of pyramids: The erection of those massive structures became just another series of construction projects for the Old Kingdom’s designers, managers, and workers. If you build things, it becomes easier to build things—a useful lesson for those who worry about the sorry state of our infrastructure.

This article appears in the June 2020 print issue as “Building the Great Pyramid.”

Peak Oil, a Specimen Case of Apocalyptic Thinking

Post Syndicated from Vaclav Smil original https://spectrum.ieee.org/energy/fossil-fuels/peak-oil-specimen-case-apocalypic-thinking

Predictions of when we would run out of oil have been around for a century, but the idea that peak production was rapidly approaching and that it would be followed by ruinous decline gained wider acceptance thanks to the work of M. King Hubbert, an American geologist who worked for Shell in Houston.

In 1956, he predicted that U.S. oil output would top out during the late 1960s; in 1969 he placed it in the first half of the 1970s. In 1970, when the actual peak came—or appeared to come—it was nearly 20 percent above Hubbert’s prediction. But few paid attention to the miss—the timing was enough to make his name as a prophet and give credence to his notion that the production curve of a mineral resource was symmetrical, with the decline being a mirror image of the ascent.

But reality does not follow perfect models, and by the year 2000, actual U.S. oil output was 2.3 times as much as indicated by Hubbert’s symmetrically declining forecast. Similarly, his forecasts of global peak oil (either in 1990 or in 2000) soon unraveled. But that made no difference to a group of geologists, most notably Colin Campbell, Kenneth Deffeyes, L.F. Ivanhoe, and Jean Laherrère, who saw global oil peaking in 2000 or 2003. Deffeyes set it, with ridiculous accuracy, on Thanksgiving Day, 24 November 2005.

These analysts then predicted unprecedented economic disruptions. Ivanhoe went so far as to speak of “the inevitable doomsday” followed by “economic implosion” that would make “many of the world’s developed societies look more like today’s Russia than the United States.” Richard C. Duncan, an electrical engineer, proffered his “Olduvai theory,” which held that declining oil extraction would plunge humanity into a life comparable to that of the hunter-gatherers who lived near the famous Tanzanian gorge some 2.5 million years ago.

In 2006, I reacted [PDF] to these prophecies by noting that “the recent obsession with an imminent peak of oil extraction has all the marks of a catastrophist apocalyptic cult.” I concluded that “conventional oil will become relatively a less important part of the world’s primary energy supply. But this spells no imminent end of the oil era as very large volumes of the fuel, both from traditional and nonconventional sources, will remain on the world market during the first half of the 21st century.”

And so they have. With the exception of a tiny (0.2 percent) dip in 2007 and a larger (2.5 percent) decline in 2009 (following the economic downturn), global crude oil extraction has set new records every year. In 2018, at nearly 4.5 billion metric tons, it was nearly 14 percent higher than in 2005.

A large part of this gain has been due to expansion in the United States, where the combination of horizontal drilling and hydraulic fracturing made the country, once again, the world’s largest producer of crude oil, about 16 percent ahead of Saudi Arabia and 19 percent ahead of Russia. Instead of following a perfect bell-shaped curve, since 2009 the trajectory of the U.S. crude oil extraction has been on the rise, and it is now surpassing the record set in 1970.

As for the global economic product, in 2019 it was 82 percent higher, in current prices, than in 2005, a rise enabled by the adequate supply of energy in general and crude oil in particular. I’d like to think that there are many lessons to be learned from this peak oil–mongering, but I have no illusions: Those who put models ahead of reality are bound to make the same false calls again and again.

This article appears in the May 2020 print issue as “Peak Oil: A Retrospective.”

Wine Is Going Out of Style—in France

Post Syndicated from Vaclav Smil original https://spectrum.ieee.org/biomedical/ethics/wine-is-going-out-of-stylein-france

France and wine—what an iconic link, and for centuries, how immutable! Wine was introduced by the Greeks before the Romans conquered Gaul. Production greatly expanded during the Middle Ages, and since then the very names of the regions—Bordeaux, Bourgogne, Champagne—have become symbols of quality everywhere. Thus has the French culture of wine long been a key signifier of national identity.

Statistics for French wine consumption begin in 1850 with a high mean of 121 liters per capita per year, which is nearly two glasses per day. By 1890, a Phylloxera infestation had cut the country’s grape harvest by nearly 70 percent from its 1875 peak, and French vineyards had to be reconstituted by grafting on resistant rootstocks from the United States. Although annual consumption of wine did fluctuate, rising imports prevented any steep decline in the total supply. Vineyard recovery brought the per capita consumption to a pre-World War I peak of 125 L in 1909, equaled again only in 1924. The all-time record of 136 L was set in 1926, after which the rate fell only slightly to 124 liters per capita in 1950.

Postwar, the French standard of living remained surprisingly low: According to the 1954 census, only 25 percent of homes had an indoor toilet. But rapidly rising incomes during the 1960s brought dietary shifts, notably a decline in wine drinking per capita. It fell to about 95 L in 1980, to 71 L in 1990, and then to 58 L in 2000—about half what it had been a century before. The latest available data shows the mean at just 40 L.

France’s wine consumption survey of 2015 shows deep gender and generational divides that explain the falling trend. Forty years ago, more than half of French adults drank wine nearly every day; now it’s just 16 percent, with 23 percent among men and only 11 percent among women. Among people over 65, the rate is 38 percent; for people 25 to 34 years of age, it is 5 percent, and for 15- to 24-year-olds, it’s only 1 percent. The same divides apply to all alcoholic drinks, as beer, liquors, and cider have also seen gradual consumption declines, while the beverages with the highest average per capita gains include mineral and spring water, roughly doubling since 1990, as well as fruit juices and carbonated soft drinks.

Alcoholic beverages are thus fast disappearing from French culture. And although no other traditional wine-drinking country has seen greater declines in absolute or relative terms, Italy comes close, and wine consumption has also decreased in Spain and Greece.

Only one upward trend persists: French exports of wine set a new record, at about €9.7 billion, in 2018. Premium prices and exports to the United States and China are the key factors. American drinkers have been the largest importers of French wines, and demand by newly rich Chinese has also claimed a growing share of sales. But in the country that gave the world countless vins ordinaires as well as exorbitantly priced Grand Crus Classés, the clinking of stemmed glasses and wishes of santé have become an endangered habit.

This article appears in the April 2020 print issue as “(Not) Drinking Wine.”

A Hard Look at Concrete

Post Syndicated from Vaclav Smil original https://spectrum.ieee.org/energy/fossil-fuels/hard-look-at-concrete

The ancient Romans were the first to mix sand and gravel with water and a bonding agent to make concrete. Although they called it opus cementitium, the bonding agent differed from that used in modern cement: It was a mixture of gypsum, quicklime, and pozzolana, a volcanic sand from Puteoli, near Mount Vesuvius, that made an outstanding material fit for massive vaults. Rome’s Pantheon, completed in 126 C.E., still spans a greater distance than any other structure made of nonreinforced concrete.

The modern cement industry began in 1824, when Joseph Aspdin, of England, patented his firing of limestone and clay at high temperatures. Lime, silica, and alumina are the dominant constituents of modern cement; adding water, sand and gravel produces a slurry that hardens into concrete as it cures. The typical ratios are 7 to 15 percent cement, 14 to 21 percent water, and 60 to 75 percent sand and gravel.

Concrete is remarkably strong under compression. Today’s formulations can resist a crushing pressure of more than 100 megapascals (14,500 pounds per square inch)—about the weight of an African bull elephant balanced on a coin. However, a pulling force of just 2 to 5 MPa can tear concrete apart; human skin [PDF] is far stronger in this respect.
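
The elephant-on-a-coin comparison is easy to verify; a quick sketch, assuming a 5-metric-ton animal and a coin about 25 millimeters across.

```python
# Pressure exerted by a bull elephant standing on a single coin.
import math

mass = 5_000            # kg, assumed mass of an African bull elephant
g = 9.81                # m/s^2
coin_diameter = 0.025   # m, assumed coin about 25 mm across

area = math.pi * (coin_diameter / 2) ** 2     # ~4.9e-4 m^2
pressure = mass * g / area                    # Pa

print(f"pressure ~ {pressure / 1e6:.0f} MPa")  # ~100 MPa, the crushing
                                               # strength quoted above
```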

This tensile weakness can be offset by reinforcement. This technique was first used in iron-reinforced troughs for plants built by Joseph Monier, a French gardener, during the 1860s. Before the end of the 19th century, steel reinforcement was common in construction. In 1903 the Ingalls Building, in Cincinnati, became the world’s first reinforced-concrete skyscraper. Eventually engineers began pouring concrete into forms containing steel wires or bars that were tensioned just before or after the concrete was cast. Such pre- or poststressing further enhances the material’s tensile strength.

Today concrete is everywhere. It can be seen in the Burj Khalifa Tower in Dubai, the world’s tallest building, and in the sail-like Sydney Opera House, perhaps the most visually striking application. Reinforced concrete has made it possible to build massive hydroelectric dams, long bridges, and gigantic offshore drilling platforms, as well as to pave roads, freeways, parking lots, and airport runways.

From 1900 to 1928, the U.S. consumption of cement (recall that cement makes up no more than 15 percent of concrete) rose tenfold, to 30 million metric tons. The postwar economic expansion, including the construction of the Interstate Highway System, raised consumption to a peak of about 128 million tons by 2005; recent rates are around 100 million tons a year. China became the world’s largest producer in 1985, and its output of cement—above 2.3 billion metric tons in 2018—now accounts for nearly 60 percent of the global total. In 2017 and 2018 China made slightly more cement (about 4.7 billion tons) than the United States had made throughout the entire 20th century.

But concrete does not last forever, the Pantheon’s extraordinary longevity constituting a rare exception. Concrete deteriorates in all climates in a process that is accelerated by acid deposition, vibration, structural overloading, and salt-induced corrosion of the reinforcing steel. As a result, the concretization of the world has produced tens of billions of tons of material that will soon have to be replaced, destroyed, or simply abandoned.

The environmental impact of concrete is another worry. The industry burns low-quality coal and petroleum coke, producing roughly a ton of carbon dioxide per ton of cement, which works out to about 5 percent of global carbon emissions from fossil fuels. This carbon footprint can be reduced by recycling concrete, by using blast-furnace slag and fly ash captured in power plants, or by adopting one of the several new low-carbon or no-carbon processes. But these improvements would make only a small dent in a business whose global output now surpasses 4 billion metric tons.

This article appears in the March 2020 print issue as “Concrete Facts.”

Electricity: It’s Wonderfully Affordable, But It’s No Longer Getting Any Cheaper

Post Syndicated from Vaclav Smil original https://spectrum.ieee.org/energy/policy/electricity-its-wonderfully-affordable-but-its-no-longer-getting-any-cheaper

The generations-old trend toward lower electricity prices now appears to have ended. In many affluent countries, prices tilted upward at the turn of the century, and they continue to rise, even after adjusting for inflation.

Even so, the price we pay for electricity is an extraordinary bargain, and that’s why this form of power has become ubiquitous in the modern world. When expressed in constant 2019 dollars, the average price of electricity in the United States fell from $4.79 per kilowatt-hour in 1902 (the first year for which the national mean is available) to 32 cents in 1950. The price had dropped to 10 cents by 2000, and in late 2019 it was just marginally higher, at about 11 cents per kilowatt-hour. This represents a relative decline of more than 97 percent. A dollar now buys nearly 44 times as much electricity as it did in 1902.

Because average inflation-adjusted manufacturing wages have quadrupled since 1902, blue-collar households now find electricity about 175 times as affordable as it was nearly 120 years ago. And it gets better: We buy electricity in order to convert it into light, kinetic energy, or heat, and the improved efficiency of such conversions has made the end uses of electricity an even greater bargain.

Lighting offers the most impressive gain: In 1902, a lightbulb with a tantalum filament produced 7 lumens per watt; in 2019 a dimmable LED light delivered 89 lm/W (see “Luminous Efficacy,” IEEE Spectrum, April 2019). That means a lumen of electric light is now three orders of magnitude (approximately 2,220 times) more affordable for a working-class household than it was in the early 20th century. Lower but still impressive reductions in end-use costs apply in the case of electric motors that run kitchen appliances and force hot air into the ducts to heat houses using natural-gas furnaces.
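
Both affordability figures follow directly from the numbers quoted above, as this short sketch of the arithmetic shows.

```python
# Affordability of electricity and of electric light, 1902 vs. 2019,
# using only the figures quoted above (constant 2019 dollars).
price_1902 = 4.79    # $/kWh
price_2019 = 0.11    # $/kWh
wage_multiple = 4    # real manufacturing wages roughly quadrupled

price_ratio = price_1902 / price_2019
print(f"a dollar buys ~{price_ratio:.0f} times as much electricity")       # ~44x

household = price_ratio * wage_multiple
print(f"electricity is ~{household:.0f} times as affordable")              # ~175x

efficacy_1902 = 7    # lumens per watt, tantalum-filament bulb
efficacy_2019 = 89   # lumens per watt, dimmable LED
lumen = household * efficacy_2019 / efficacy_1902
print(f"a lumen of light is ~{lumen:.0f} times as affordable")             # ~2,200x
```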

An international perspective shows some surprising differences. The United States has cheaper residential electricity than other affluent nations, with the exception of Canada and Norway, which derive high shares of their power from hydroelectric generation (60 percent and 95 percent, respectively).

When using prevailing exchange rates, the U.S. residential price is about 45 percent of the European Union’s mean, about half the Japanese average, and about a third of the German rate. Electricity prices in India, Mexico, Turkey, and South Africa are lower than in the United States in terms of the official exchange rates, but they are considerably higher in terms of purchasing power parity—more than twice the U.S. level in India and nearly three times as much in Turkey.

A naive observer, reading the reports of falling prices for photovoltaic cells and wind turbines, might conclude that the rising shares of solar and wind power will bring a new era of falling electricity prices. Just the opposite has been true.

Before the year 2000, when Germany embarked on its large-scale and expensive Energiewende (energy transition), that country’s residential electricity prices were low and declining, bottoming at less than €0.14/kWh ($0.13/kWh, using the prevailing exchange rate) in 2000.

By 2015, Germany’s combined solar and wind capacity of nearly 84 gigawatts had surpassed the total installed in fossil-fueled plants, and by March 2019 more than 20 percent of all electricity came from the new renewables. However, over an 18-year period (2000 to 2018) electricity prices more than doubled, to €0.31/kWh. The E.U.’s largest economy thus has the continent’s highest electricity prices, followed by heavily wind-dependent Denmark, at €0.30/kWh.

A similar contrast can be seen in the United States. In California, where the new renewables have taken an increasing share, electricity prices have been rising five times as fast as the national mean and are now nearly 60 percent higher than the countrywide average.

This article appears in the February 2020 print issue as “Electricity Prices: A Changing Bargain.”

How Chicken Beat Beef in America

Post Syndicated from Vaclav Smil original https://spectrum.ieee.org/biomedical/ethics/how-chicken-beat-beef-in-america

For generations, beef was the United States’ dominant meat, followed by pork. When annual beef consumption peaked in 1976 at about 40 kilograms (boneless weight) per capita, it accounted for nearly half of all meat. Chicken had just a 20 percent share. But chicken caught up by 2010, and by 2018 its share came to 36 percent of the total, nearly 20 percentage points higher than beef’s. The average American now eats 30 kg of boneless chicken every year, bought overwhelmingly as cut-up or processed parts (from boneless breast to Chicken McNuggets).

The United States’ constant obsession with diet, in this case the fear of dietary cholesterol and saturated fat in red meat, has been a factor in the shift. The differences, however, are not striking: 100 grams of lean beef has 1.5 grams of saturated fat, compared with 1 gram in skinless chicken breast—which actually has more cholesterol. But the main reason for chicken’s ascendance has been its lower price, which reflects its metabolic advantage: No other domesticated land animal can convert feed to meat as efficiently as broilers. Modern breeding advances have had a lot to do with this efficiency.

During the 1930s, the average feeding efficiency for broilers (at about 5 units of feed per unit of live weight) was no better than for pigs. That rate was halved by the mid-1980s, and the latest U.S. Department of Agriculture feed-to-meat ratios show that it now takes only about 1.7 units of feed (standardized in terms of feed corn) to produce a unit of broiler live weight, compared with nearly 5 units of feed for hogs and almost 12 units for cattle.

Because edible weight as a share of live weight differs substantially among the leading meat species (about 60 percent for chicken, 53 percent for pork, and only about 40 percent for beef), recalculations in terms of feeding efficiencies per unit of edible meat are even more revealing. Recent ratios have been 3 to 4 units of feed per unit of edible meat for broilers, 9 to 10 for pork, and 20 to 30 for beef. These ratios correspond to average feed-to-meat conversion efficiencies of, respectively, 15, 10, and 4 percent.
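
The edible-weight ratios follow directly from the live-weight ratios and edible shares given above; a minimal sketch of that recalculation:

```python
# Feed needed per unit of edible meat, derived from the figures above:
# (feed units per unit of live weight, edible share of live weight)
animals = {
    "broiler": (1.7, 0.60),
    "hog":     (5.0, 0.53),
    "cattle":  (12.0, 0.40),
}

for name, (feed_per_live, edible_share) in animals.items():
    feed_per_edible = feed_per_live / edible_share
    print(f"{name:8s}: {feed_per_edible:4.1f} units of feed per unit of edible meat")

# broiler ~2.8, hog ~9.4, cattle ~30 -- consistent with the quoted ranges
# of 3 to 4, 9 to 10, and 20 to 30 once rounding and older, less favorable
# ratios are allowed for.
```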

In addition, broilers have been bred to mature faster and to put on an unprecedented amount of weight. Traditional free-running birds were slaughtered at the age of one year, when they weighed only about 1 kg. The average weight of American broilers rose from 1.1 kg in 1925 to nearly 2.7 kg in 2018, while the typical feeding span was cut from 112 days in 1925 to just 47 days in 2018.

Consumers benefit while the birds suffer. The birds gain weight so rapidly because they can eat as much as they want while being kept in darkness and in strict confinement. Because consumers prefer lean breast meat, the selection for excessively large breasts shifts the bird’s center of gravity forward, impairs its natural movement, and puts stress on its legs and heart. But the bird cannot move anyway: According to the National Chicken Council, a broiler is allotted just 560 to 650 square centimeters, an area only slightly larger than a sheet of standard A4 paper. Because long periods of darkness improve growth, broilers mature under light intensities resembling twilight. This condition disrupts their normal circadian and behavioral rhythms.

On one side, you have shortened lives (less than seven weeks for a bird whose normal life span is up to eight years) in malformed bodies and dark confinement; on the other, late-2019 retail prices of about US $2.94 per pound ($6.47 per kilogram) for boneless breast, compared with $4.98/lb. for round beef roast and $8.22/lb. for choice sirloin steak.

But chicken’s rule hasn’t yet gone global: Thanks to its dominance in China and in Europe, pork is still about 10 percent ahead worldwide. Still, broilers mass-produced in confinement will, almost certainly, come out on top within a decade or two.

This article appears in the January 2020 print issue as “Why Chicken Rules.”

Gas Turbines Have Become by Far the Best Choice for Add-on Generating Power

Post Syndicated from Vaclav Smil original https://spectrum.ieee.org/energy/fossil-fuels/gas-turbines-have-become-by-far-the-best-choice-for-addon-generating-power

Eighty years ago, the world’s first industrial gas turbine began to generate electricity in a municipal power station in Neuchâtel, Switzerland. The machine, installed by Brown Boveri, vented the exhaust without making use of its heat, and the turbine’s compressor consumed nearly three-quarters of the generated power. The result was an efficiency of just 17 percent and an output of about 4 MW.

The disruption of World War II and the economic difficulties that followed made the Neuchâtel turbine a pioneering exception until 1949, when Westinghouse and General Electric introduced their first low-capacity designs. There was no rush to install them, as the generation market was dominated by large coal-fired plants. By 1960 the most powerful gas turbine reached 20 MW, still an order of magnitude smaller than the output of most steam turbo generators.

In November 1965, the great power blackout in the U.S. Northeast changed many minds: Gas turbines could operate at full load within minutes. But rising oil and gas prices and a slowing demand for electricity prevented any rapid expansion of the new technology.

The shift came only during the late 1980s. By 1990 almost half of all new installed U.S. capacity was in gas turbines of increasing power, reliability, and efficiency. But even efficiencies in excess of 40 percent—matching today’s best steam turbo generators—produce exhaust gases of about 600 °C, hot enough to generate steam in an attached steam turbine. These combined-cycle gas turbines (CCGTs) arrived during the late 1960s, and their best efficiencies now top 60 percent. No other prime mover is less wasteful.
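
The gain from adding the bottoming steam cycle can be illustrated with an idealized formula; the component efficiencies below are assumptions for illustration, not the ratings of any particular machine.

```python
# Idealized combined-cycle efficiency: the bottoming steam cycle recovers
# work from the heat rejected in the gas turbine's exhaust.
#   eta_cc = eta_gt + (1 - eta_gt) * eta_bottoming
eta_gt = 0.40         # assumed simple-cycle gas-turbine efficiency
eta_bottoming = 0.35  # assumed fraction of exhaust heat converted by the steam cycle

eta_cc = eta_gt + (1 - eta_gt) * eta_bottoming
print(f"combined-cycle efficiency ~ {eta_cc:.0%}")   # ~61 percent
```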

Gas turbines have since become much more powerful. Siemens now offers a CCGT for utility generation rated at 593 MW, nearly 40 times as powerful as the Neuchâtel machine and operating at 63 percent efficiency. GE’s 9HA delivers 571 MW in simple-cycle generation and 661 MW (63.5 percent efficiency) in combined-cycle operation.

Their near-instant availability makes gas turbines the ideal suppliers of peak power and the best backups for new intermittent wind and solar generation. In the United States they are now by far the most affordable choice for new generating capacities. The levelized cost of electricity—a measure of the lifetime cost of an energy project—for new generation entering service in 2023 is forecast to be about US $60 per megawatt-hour for coal-fired steam turbo generators with partial carbon capture, $48/MWh for solar photovoltaics, and $40/MWh for onshore wind—but less than $30/MWh for conventional gas turbines and less than $10/MWh for CCGTs.
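
For readers unfamiliar with the metric: levelized cost is discounted lifetime cost divided by discounted lifetime generation. The sketch below shows the calculation with hypothetical inputs; it does not reproduce the forecasts quoted above.

```python
# Levelized cost of electricity: discounted lifetime costs divided by
# discounted lifetime generation. All inputs below are hypothetical.

def lcoe(capex, annual_costs, annual_mwh, years, rate):
    """Return $/MWh for a plant with the given capital and yearly figures."""
    discount = [(1 + rate) ** -t for t in range(1, years + 1)]
    total_costs = capex + sum(annual_costs * d for d in discount)
    total_energy = sum(annual_mwh * d for d in discount)
    return total_costs / total_energy

# A hypothetical 100-MW gas-fired unit: $900/kW, 60 percent capacity factor,
# $14 million a year in fuel and operations, 25-year life, 6 percent discount rate.
capex = 900 * 100_000                 # dollars
annual_mwh = 100 * 8760 * 0.60        # MWh generated per year
print(f"LCOE ~ ${lcoe(capex, 14e6, annual_mwh, 25, 0.06):.0f}/MWh")
```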

Gas turbines are also used for the combined production of electricity and heat, which is required in many industries and is used to energize central heating systems in many large European cities. These turbines have even been used to heat and light extensive Dutch greenhouses, which also use the generated carbon dioxide to speed up the growth of vegetables. Gas turbines also run compressors in many industrial enterprises and in the pumping stations of long-distance pipelines. The verdict is clear: No other combustion machines combine so many advantages as do modern gas turbines. They’re compact, easy to transport and install, relatively silent, affordable, and efficient, offering nearly instant-on power and able to operate without water cooling. All this makes them the unrivaled stationary prime mover.

And their longevity? The Neuchâtel turbine was decommissioned in 2002, after 63 years of operation—not due to any failure in the machine but because of a damaged generator.

This article appears in the December 2019 print issue as “Superefficient Gas Turbines.”

Wind Turbines Just Keep Getting Bigger, But There’s a Limit

Post Syndicated from Vaclav Smil original https://spectrum.ieee.org/energy/renewables/wind-turbines-just-keep-getting-bigger-but-theres-a-limit

Wind turbines have certainly grown up. When the Danish firm Vestas began the trend toward gigantism, in 1981, its three-blade machines were capable of a mere 55 kilowatts. That figure rose to 500 kW in 1995, reached 2 MW in 1999, and today stands at 5.6 MW. In 2021, MHI Vestas Offshore Wind’s V164 will rise 105 meters high at the hub, swing 80-meter blades, and generate up to 10 MW, making it the first commercially available turbine rated in double-digit megawatts. Not to be left behind, GE Renewable Energy is developing a 12-MW machine with a 260-meter tower and 107-meter blades, also rolling out by 2021.

That is clearly pushing the envelope, although it must be noted that still larger designs have been considered. In 2011, the UpWind project released what it called a predesign of a 20-MW offshore machine with a rotor diameter of 252 meters (three times the wingspan of an Airbus A380) and a hub diameter of 6 meters. So far, the limit of the largest conceptual designs stands at 50 MW, with height exceeding 300 meters and with 200-meter blades that could flex (much like palm fronds) in furious winds.

To imply, as an enthusiastic promoter did, that building such a structure would pose no fundamental technical problems because it stands no higher than the Eiffel Tower, constructed 130 years ago, is to choose an inappropriate comparison. If the constructible height of an artifact were the determinant of wind-turbine design, then we might as well refer to the Burj Khalifa in Dubai, a skyscraper that topped 800 meters in 2010, or to the Jeddah Tower, which will reach 1,000 meters in 2021. Erecting a tall tower is no great problem; it’s quite another proposition, however, to engineer a tall tower that can support a massive nacelle and rotating blades for many years of safe operation.

Larger turbines must face the inescapable effects of scaling. Turbine power increases with the square of the radius swept by its blades: A turbine with blades twice as long would, theoretically, be four times as powerful. But the expansion of the surface swept by the rotor puts a greater strain on the entire assembly, and because blade mass should (at first glance) increase as a cube of blade length, larger designs should be extraordinarily heavy. In reality, designs using lightweight synthetic materials and balsa can keep the actual exponent to as little as 2.3.
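
A short sketch of these scaling relations, using the exponents just cited:

```python
# Scaling of power and blade mass with blade length (rotor radius).
scale = 2.0   # make the blades twice as long

power_factor = scale ** 2        # power scales with swept area
mass_ideal = scale ** 3          # mass for pure geometric scaling
mass_actual = scale ** 2.3       # empirical exponent cited above

print(f"power        x{power_factor:.1f}")   # x4
print(f"mass (r^3)   x{mass_ideal:.1f}")     # x8
print(f"mass (r^2.3) x{mass_actual:.1f}")    # ~x4.9
# Doubling blade length quadruples the power but, even with the lightest
# materials, nearly quintuples the blade mass -- and cost tracks mass.
```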

Even so, the mass (and hence the cost) adds up. Each of the three blades of Vestas’s 10-MW machine will weigh 35 metric tons, and the nacelle will come to nearly 400 tons. GE’s record-breaking design will have blades of 55 tons, a nacelle of 600 tons, and a tower of 2,550 tons. Merely transporting such long and massive blades is an unusual challenge, although it could be made easier by using a segmented design.

Exploring likely limits of commercial capacity is more useful than forecasting specific maxima for given dates. Available wind turbine power [PDF] is equal to half the density of the air (which is 1.23 kilograms per cubic meter) times the area swept by the blades (pi times the radius squared) times the cube of wind velocity. Assuming a wind velocity of 12 meters per second and an energy-conversion coefficient of 0.4, then a 100-MW turbine would require rotors nearly 550 meters in diameter.
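
That rotor-size estimate can be checked directly from the formula, using only the figures given in this paragraph.

```python
# Rotor diameter needed for a 100-MW turbine, from
#   P = 0.5 * rho * (pi * r^2) * v^3 * Cp
import math

P = 100e6     # W, target power
rho = 1.23    # kg/m^3, air density
v = 12.0      # m/s, assumed wind speed
Cp = 0.4      # assumed energy-conversion coefficient

r = math.sqrt(P / (0.5 * rho * math.pi * v**3 * Cp))
print(f"rotor radius ~ {r:.0f} m, diameter ~ {2 * r:.0f} m")   # ~274 m, ~547 m
# Hence the roughly 275-meter blades mentioned below.
```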

To predict when we’ll get such a machine, just answer this question: When will we be able to produce 275-meter blades of plastic composites and balsa, figure out their transport and their coupling to nacelles hanging 300 meters above the ground, ensure their survival in cyclonic winds, and guarantee their reliable operation for at least 15 or 20 years? Not soon.

This article appears in the November 2019 print issue as “Wind Turbines: How Big?”

Carbon Emissions Rising Despite European and North American Efforts to Reduce Them

Post Syndicated from Vaclav Smil original https://spectrum.ieee.org/energy/fossil-fuels/carbon-emissions-rising-despite-european-and-north-american-efforts-to-reduce-them

In 1896, Svante Arrhenius, of Sweden, became the first scientist to quantify the effects [PDF] of man-made carbon dioxide on global temperatures. He calculated that doubling the atmospheric level of the gas from its concentration in his time would raise the average midlatitude temperature by 5 to 6 degrees Celsius. That’s not too far from the latest results, obtained by computer models running more than 200,000 lines of code.

The United Nations adopted its Framework Convention on Climate Change in 1992, and it was followed by a series of meetings and climate treaties. But the global emissions of carbon dioxide have been rising steadily just the same.

At the beginning of the 19th century, when the United Kingdom was the only major coal producer, global emissions of carbon from fossil fuel combustion were minuscule, at less than 10 million metric tons a year. (To express them in terms of carbon dioxide, just multiply by 3.66.) By century’s end, emissions surpassed half a billion metric tons of carbon. By 1950, they had topped 1.5 billion metric tons. The postwar economic expansion in Europe, North America, the U.S.S.R., and Japan, along with the post-1980 economic rise of China, quadrupled emissions thereafter, to about 6.5 billion metric tons of carbon by the year 2000.
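
The conversion factor is simply the mass ratio of carbon dioxide to carbon (44/12 ≈ 3.66); here it is applied to the totals just cited.

```python
# Converting emissions expressed as carbon into carbon dioxide:
# the factor is the mass ratio CO2/C = 44.01/12.01 ~ 3.66.
C_TO_CO2 = 44.01 / 12.01

for year, gigatons_c in [(1900, 0.5), (1950, 1.5), (2000, 6.5)]:
    print(f"{year}: {gigatons_c:4.1f} Gt C ~ {gigatons_c * C_TO_CO2:5.1f} Gt CO2")
```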

The new century has seen a significant divergence. By 2017, emissions had declined by about 15 percent in the European Union, with its slower economic growth and aging population, and also in the United States, thanks largely to the increasing use of natural gas instead of coal. However, all these gains were more than offset by Chinese carbon emissions, which rose from about 1 billion to about 3 billion metric tons—enough to increase the worldwide total by nearly 45 percent, to 10.1 billion metric tons.

By burning huge stocks of carbon that fossilized ages ago, human beings have pushed carbon dioxide concentrations to levels not seen for about 3 million years. The sampling of air locked in tiny bubbles in cores drilled into Antarctic and Greenland ice has enabled us to reconstruct carbon dioxide concentrations going back some 800,000 years. Back then the atmospheric levels of the gas fluctuated between 180 and 280 parts per million (that is, from 0.018 to 0.028 percent). During the past millennium, the concentrations remained fairly stable, ranging from 275 ppm in the early 1600s to about 285 ppm before the end of the 19th century. Continuous measurements of the gas began near the top of Mauna Loa, in Hawaii, in 1958: The 1959 mean was 316 ppm, the 2015 average reached 400 ppm, and 415 ppm was first recorded in May 2019.

Emissions will continue to decline in affluent countries, and the rate at which they grow in China has begun to slow down. However, it is speeding up in India and Africa, and hence it is unlikely that we will see any substantial global declines anytime soon.

The Paris agreement of 2015 was lauded as the first accord containing specific national commitments to reduce future emissions, but even if all its targets were met by 2030, carbon emissions would still rise to nearly 50 percent above the 2017 level. According to a 2018 report by the Intergovernmental Panel on Climate Change, the only way to keep the average world temperature rise to no more than 1.5 °C would be to put emissions almost immediately into a decline steep enough to bring them to zero by 2050.

That is not impossible—but it is very unlikely. The contrast between the expressed concerns about global warming and the continued releases of record volumes of carbon could not be starker.

This article appears in the October 2019 print issue as “The Carbon Century.”

The Diversity and Unification of the Engineered World

Post Syndicated from Vaclav Smil original https://spectrum.ieee.org/consumer-electronics/portable-devices/the-diversity-and-unification-of-the-engineered-world

Inventions can be classified, like animals and plants, into kingdoms, phyla, and species

Our count of living species remains incomplete. In the 250-plus years since Carl Linnaeus set up the modern taxonomic system, we have classified about 1.25 million species, about three-quarters of them animals. Another 17 percent are plants, and the remainder are fungi and microbes. And that’s the official count—the number of still-unrecognized species could be several times higher.

The diversity of man-made objects is easily as rich. Although my comparisons involve not just those proverbial apples and oranges but apples and automobiles, they still reveal what we have wrought.

I begin constructing my parallel taxonomy with the domain of all man-made objects. This domain is equivalent to the Eukarya, all organisms having nuclei in their cells. It contains the kingdom of complex, multicomponent designs, equivalent to Animalia. Within that kingdom we have the phylum of designs powered by electricity, equivalent to the Chordata, creatures with a dorsal nerve cord. Within that phylum is a major class of portable designs, equivalent to Mammalia. Within that class is the order of communications artifacts, equivalent to Cetacea—the whales, dolphins, and porpoises—and it contains the family of phones, equivalent to Delphinidae, the oceanic dolphins.

Families contain genera, such as Delphinus (common dolphin), Orcinus (orcas), and Tursiops (bottlenose dolphins) in the ocean. And, according to GSM Arena, which monitors the industry, in early 2019 there were more than 110 mobile-phone genera (brands). Some genera contain a single species—for instance, Orcinus contains only Orcinus orca, the killer whale. Other genera are species-rich. In the mobile-phone realm none is richer than Samsung, which now includes nearly 1,200 devices. It is followed by LG, with more than 600, and Motorola and Nokia, each with nearly 500 designs. Altogether, in early 2019 there were some 9,500 different mobile “species”—and that total is considerably larger than the known diversity of mammals (fewer than 5,500 species).

Even if we were to concede that mobile phones are just varieties of a single species (like the Bengal, Siberian, and Sumatran tigers), there are many other numbers that illustrate how species-rich our designs are. The World Steel Association lists about 3,500 grades of steel, more than all known species of rodents. Screws are another supercategory: Add up the combinations based on screw materials (aluminum to titanium), screw types (from cap to drywall, from machine to sheet metal), screw heads (from washer-faced to countersunk), screw drives (from slot to hex, from Phillips to Robertson), screw shanks and tips (from die point to cone point), and screw dimensions (in metric and other units), and you end up with many millions of possible screw “species.”
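
The combinatorial point can be made with a toy enumeration; every category count below is a hypothetical placeholder chosen only to show how fast the product grows.

```python
# Toy estimate of screw "species": the number of distinct combinations is
# the product of the option counts. Every count below is a hypothetical
# placeholder, not a figure from the text.
from math import prod

categories = {
    "materials":  10,   # aluminum ... titanium
    "types":      20,   # cap, drywall, machine, sheet metal, ...
    "heads":       8,   # washer-faced ... countersunk
    "drives":     10,   # slot, hex, Phillips, Robertson, ...
    "tips":        6,   # die point ... cone point
    "dimensions": 100,  # metric and other diameter/length combinations
}

print(f"possible combinations: {prod(categories.values()):,}")   # 9,600,000
```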

Taking a different tack, we have also surpassed nature in the range of mass. The smallest land mammal, the Etruscan shrew, weighs as little as 1.3 grams, whereas the largest, the African elephant, averages about 5 metric tons. That’s a range of six orders of magnitude. Mass-produced mobile-phone vibrator motors match the shrew’s weight, while the largest centrifugal compressors driven by electric motors weigh around 50 metric tons, for a range of seven orders of magnitude.

The smallest bird, the bee hummingbird, weighs about 2 grams, whereas the largest flying bird, the Andean condor, can reach 15 kilograms, for a range of nearly four orders of magnitude. Today’s miniature drones weigh as little as 5 grams versus a fully loaded Airbus A380, which weighs 570 metric tons—a difference of eight orders of magnitude.

And our designs have a key functional advantage: They can work and survive pretty much on their own, unlike our bodies (and those of all animals), which depend on a well-functioning microbiome: There are at least as many bacterial cells in our guts as there are cells in our organs. That’s life for you.

This article appears in the August 2019 print issue as “Animals vs. Artifacts: Which are more diverse?”

Racing Toward Yottabyte Information

Post Syndicated from Vaclav Smil original https://spectrum.ieee.org/semiconductors/memory/racing-toward-yottabyte-information

Yotta, yotta, yotta—that’s the Greek prefix we’ll soon need to describe the vastness of our data archives

Once upon a time, information was deposited only inside human brains, and ancient bards could spend hours retelling stories of conflicts and conquests. Then external data storage was invented. 

Small clay cylinders and tablets, invented in Sumer some 5,000 years ago, often contained just a dozen cuneiform characters, equivalent to a few hundred bytes (10² B). The Oresteia, a trilogy of Greek tragedies by Aeschylus (fifth century BCE), amounts to about 300,000 B (10⁵ B). Some rich senators in imperial Rome had libraries housing hundreds of scrolls, with one large collection holding at least 10⁸ B (100 megabytes).

A radical shift came with Johannes Gutenberg’s printing press, using movable type. By 1500, less than half a century after printing’s introduction, European printers had released more than 11,000 new book editions. The extraordinary rise of printing was joined by other forms of stored information. First came engraved and woodcut music scores, illustrations, and maps. Then, in the 19th century, came photographs, sound recordings, and movies. Reference works and other regularly published statistical compendia during the 20th century came on the backs of new storage modes, notably magnetic tapes and long-playing records.

Beginning in the 1960s, computers expanded the scope of digitization to medical imaging (a digital mammogram [PDF] is 50 MB), animated movies (2–3 gigabytes), intercontinental financial transfers, and eventually the mass emailing of spam (more than 100 million messages sent every minute). Such digitally stored information rapidly surpassed all printed materials. Shakespeare’s plays and poems amount to 5 MB, the equivalent of just a single high-resolution photograph, or of 30 seconds of high-fidelity sound, or of eight seconds of streamed high-definition video.

Printed materials have thus been reduced to a marginal component of overall global information storage. By the year 2000, all books in the Library of Congress were on the order of 10¹³ B (more than 10 terabytes), but that was less than 1 percent of the total collection (10¹⁵ B, about 3 petabytes) once all photographs, maps, movies, and audio recordings were added.

And in the 21st century this information is being generated ever faster. In its latest survey of data generated per minute in 2018, Domo [PDF], a cloud service, listed more than 97,000 hours of video streamed by Netflix users, nearly 4.5 million videos watched on YouTube, just over 18 million forecast requests on the Weather Channel, and more than 3 quadrillion bytes (3.1 petabytes) of other Internet data used in the United States alone. By 2016, the annual global data-creation rate surpassed 16 ZB (a zettabyte is 10²¹ B), and by 2025, it is expected to rise by another order of magnitude—that is, to about 160 ZB, or 10²³ B. And according to Domo, by 2020, 1.7 MB of data will be generated every second for every one of the world’s nearly 8 billion people.

These quantities lead to some obvious questions. Only a fraction of the data flood could be stored, but which part should that be? The challenges of storage are obvious even if less than 1 percent of this flow gets preserved. And for whatever we decide to store, the next question is how long the data should be preserved. No storage need last forever, but what is the optimal span?

The highest prefix in the international system of units is yotta, Y = 10²⁴. We’ll have that many bytes within a decade. And once we start creating more than 50 trillion bytes of information per person per year, will there be any real chance of making effective use of it? It is easier to find new prefixes for large databases than to decide how large is large enough. After all, there are fundamental differences between accumulated data, useful information, and insightful knowledge.
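
The per-person figure cited above already implies yottabyte-scale totals, as a quick sketch shows; a world population of 8 billion is the approximation used in the text.

```python
# From 1.7 MB generated per person per second to global yearly totals.
per_person_rate = 1.7e6              # bytes per second, the figure cited above
seconds_per_year = 365 * 24 * 3600
population = 8e9                     # "nearly 8 billion people"

per_person_year = per_person_rate * seconds_per_year
global_year = per_person_year * population

print(f"per person: {per_person_year:.1e} B/year")   # ~5.4e13 B, i.e. >50 trillion
print(f"worldwide : {global_year:.1e} B/year")       # ~4.3e23 B, ~0.4 yottabyte
```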

This article appears in the July 2019 print issue as “Data World: Racing Toward Yotta.”

June 1878: Muybridge Photographs a Galloping Horse

Post Syndicated from Vaclav Smil original https://spectrum.ieee.org/consumer-electronics/audiovideo/june-1878-muybridge-photographs-a-galloping-horse

Shutter speed rose from a thousandth of a second in 1878 to a millionth of a billionth of a second in the ’90s. Today, that’s considered slow

Eadweard Muybridge (1830–1904), an English photographer, established his American fame in 1867 by taking a mobile studio to Yosemite Valley and producing large silver prints of its stunning vistas. Five years later he was hired by Leland Stanford, then the president of the Central Pacific Railroad, formerly the governor of California and later the founder of the eponymous university in Palo Alto. Stanford—who was also a horse breeder—challenged Muybridge to settle the old dispute about whether all four of a horse’s legs are off the ground at one time during a gallop.

Muybridge found it difficult to prove the point. In 1872 he took (and then lost) a single image of a trotting horse with all hooves aloft. But he persevered, and his eventual solution was to capture moving objects with cameras capable of a shutter speed as brief as 1/1,000 of a second.

The conclusive experiment took place 141 years ago, on 19 June 1878, at Stanford’s Palo Alto farm. Muybridge lined up thread-triggered glass-plate cameras along the track, used a white-sheet background for the best contrast, and copied the resulting images as simple silhouettes on a disc rotating in a zoopraxiscope, a device he invented in order to display a rapid series of stills to convey motion. Sallie Gardner, the horse Stanford had provided for the test, clearly had all four hooves off the ground. But the airborne moment did not take place as portrayed in famous paintings, perhaps most notably Théodore Géricault’s 1821 Derby at Epsom, now hanging in the Louvre, which shows the animal’s legs extended, away from its body. Instead, it occurred when the horse’s legs were beneath its body, just prior to the moment the horse pushed off with its hind legs.

This work led to Muybridge’s magnum opus, which he prepared for the University of Pennsylvania. Starting in 1883, he made an extensive series depicting animal and human locomotion. Its creation relied on 24 cameras fixed in parallel to the 36-meter-long track, plus two portable batteries of 12 cameras each at either end. The track had a marked background, and animals or people activated the shutters by breaking stretched strings.

The final product was a book with 781 plates, published in 1887. This compendium showed not only running domestic animals (dogs and cats, cows and pigs) but also a bison, a deer, an elephant, and a tiger, as well as a running ostrich and a flying parrot. Human sequences depicted various runs and also ascents, descents, lifts, throws, wrestling, dances, a child crawling, and a woman pouring a bucket of water over another woman.

Muybridge’s thousandth-of-a-second exposures soon gave way to ten-thousandths. By 1940, the patented design of a rotating-mirror camera raised the rate to 1 million frames per second. In 1999, Ahmed Zewail got a Nobel Prize in Chemistry for developing a spectrograph that could capture the transition states of chemical reactions on a scale of femtoseconds—that is, 10⁻¹⁵ second, or one-millionth of one-billionth of a second.

Today, we can use intense, ultrafast laser pulses to capture events separated by mere attoseconds, or 10⁻¹⁸ second. This time resolution makes it possible to see what has been until recently hidden from any direct experimental access: the motions of electrons on the atomic scale.

Many examples can be given to illustrate the extraordinary scientific and engineering progress we have made during the past 141 years, but this contrast is as impressive as any other advance I can think of—from settling the dispute about airborne horse hooves to observing flitting electrons.

This article appears in the June 2019 print issue as “June 1878: Muybridge’s Galloping Horse.”

Is Life Expectancy Finally Topping Out?

Post Syndicated from Vaclav Smil original https://spectrum.ieee.org/biomedical/ethics/is-life-expectancy-finally-topping-out

A slowing rate of improvement hints at a looming asymptote, at least on a population-wide basis

Ray Kurzweil, Google’s chief futurist, says that if you can just hang on until 2029, medical advances will start to “add one additional year, every year, to your life expectancy. By that I don’t mean life expectancy based on your birth date but rather your remaining life expectancy.” Curious readers can calculate what this trend would do to the growth of the global population, but I will limit myself here to a brief review of survival realities.

In 1850, the combined life expectancies of men and women stood at around 40 years in the United States, Canada, Japan and much of Europe. Since then the values have followed an impressive, almost perfectly linear increase that nearly doubled them, to almost 80 years. Women live longer in all societies, with the current maximum at just above 87 years in Japan.

The trend may well continue for a few decades, given that life expectancies of elderly people in affluent countries rose almost linearly from 1950 to 2000 at a combined rate of about 34 days per year. But absent fundamental discoveries that change the way we age, this trend to longer life must weaken and finally end. The long-term trajectory of Japanese female life expectancies—from 81.91 years in 1990 to 87.26 years in 2017—fits a symmetrical logistic curve that is already close to its asymptote of about 90 years. The trajectories for other affluent countries also show the approaching ceiling. Available records show two distinct periods of rising longevity: Faster linear gains (about 20 years in half a century) prevailed until 1950, followed by slower gains.

If we are still far from the limit to the human life-span, then the largest survival gains should be recorded among the oldest people. This was indeed the case for studies conducted in France, Japan, the United States, and the United Kingdom from the 1970s to the early 1990s. Since then, however, the gains have leveled off.

There may be no specific genetically programmed limit to life-span—much as there is no genetic program that limits us to a specific running speed. But life-span is a bodily characteristic that arises from the interaction of genes with the environment. Genes may themselves introduce biophysical limits, and so can environmental effects, such as smoking.

The world record life-span is the 122 years claimed for Jeanne Calment, a Frenchwoman who died in 1997. Strangely, after more than two decades, she still remains the oldest survivor ever, and by a substantial margin. (Indeed, the margin is so big as to be suspicious: Her age and even her identity are in question.) The second oldest supercentenarian died at 119, in 1999, and since that time there have been no survivors beyond the 117th year.

And if you think that you have a high chance to make it to 100 because some of your ancestors lived that long, you should know that the estimated heritability of life-span is modest, just between 15 and 30 percent. Given that people tend to marry others like themselves, a phenomenon known as assortative mating, the true heritability of human longevity is probably even lower than that.

Of course, as with all complex matters, there is always room for different interpretations of published statistical analyses. Kurzweil hopes that dietary interventions and other tricks will extend his own life until such time as major scientific advances can preserve him forever. It is true that there are ideas on how such preservation might be achieved, among them the rejuvenation of human cells by extending their telomeres, the nucleotide sequences at the ends of a chromosome that fray with age. If it works, maybe it can lift the realistic maximum well above 125 years.

But in 2019 the best advice I can give to all but a few remarkably precocious readers of these essays is to plan ahead—but not as far ahead as the 22nd century.

This article appears in the May 2019 print issue as “Life-Span and Life Expectancy.”