Tag Archives: tech-history

How the IBM PC Won, Then Lost, the Personal Computer Market

Post Syndicated from James W. Cortada original https://spectrum.ieee.org/tech-history/silicon-revolution/how-the-ibm-pc-won-then-lost-the-personal-computer-market

On 12 August 1981, at the Waldorf Astoria Hotel in midtown Manhattan, IBM unveiled the company’s entrant into the nascent personal computer market: the IBM PC. With that, the preeminent U.S. computer maker launched another revolution in computing, though few realized it at the time. Press coverage of the announcement was lukewarm.

Soon, though, the world began embracing little computers by the millions, with IBM dominating those sales. The personal computer vastly expanded the number of people and organizations that used computers. Other companies, including Apple and Tandy Corp., were already making personal computers, but no other machine carried the revered IBM name. IBM’s essential contributions were to position the technology as suitable for wide use and to set a technology standard. Rivals were compelled to meet a demand that they had all grossly underestimated. As such, IBM had a greater effect on the PC’s acceptance than did Apple, Compaq, Dell, and even Microsoft.

Despite this initial dominance, by 1986 the IBM PC was becoming an also-ran. And in 2005, the Chinese computer maker Lenovo Group purchased IBM’s PC business.

What occurred between IBM’s wildly successful entry into the personal computer business and its inglorious exit nearly a quarter century later? From IBM’s perspective, a new and vast market quickly turned into an ugly battleground with many rivals. The company stumbled badly, its bureaucratic approach to product development no match for a fast-moving field. Over time, it became clear that the sad story of the IBM PC mirrored the decline of the company.

At the outset, though, things looked rosy.

How the personal computer revolution was launched

IBM did not invent the desktop computer. Most historians agree that the personal computer revolution began in April 1977 at the first West Coast Computer Faire. Here, Steve Jobs introduced the Apple II, with a price tag of US $1,298 (about $5,800 today), while rival Commodore unveiled its PET. Both machines were designed for consumers, not just hobbyists or the technically skilled. In August, Tandy launched its TRS-80, which came with games. Indeed, software for these new machines was largely limited to games and a few programming tools.

IBM’s large commercial customers faced the implications of this emerging technology: Who would maintain the equipment and its software? How secure was the data in these machines? And what was IBM’s position: Should personal computers be taken seriously or not? By 1980, customers in many industries were telling their IBM contacts to enter the fray. At IBM plants in San Diego, Endicott, N.Y., and Poughkeepsie, N.Y., engineers were forming hobby clubs to learn about the new machines.

The logical place to build a small computer was inside IBM’s General Products Division, which focused on minicomputers and the successful typewriter business. But the division had no budget or people to allocate to another machine. IBM CEO Frank T. Cary decided to fund the PC’s development out of his own budget. He turned to William “Bill” Lowe, who had given some thought to the design of such a machine. Lowe reported directly to Cary, bypassing IBM’s complex product-development bureaucracy, which had grown massively during the creation of the System/360 and S/370. The normal process to get a new product to market took four or five years, but the incipient PC market was moving too quickly for that.

Cary asked Lowe to come back in several months with a plan for developing a machine within a year and to find 40 people from across IBM and relocate them to Boca Raton, Fla.

Lowe’s plan for the PC called for buying existing components and software and bolting them together into a package aimed at the consumer market. There would be no homegrown operating system or IBM-made chips. The product also had to attract corporate customers, although it was unclear how many of those there would be. Mainframe salesmen could be expected to ignore or oppose the PC, so the project was kept reasonably secret.

A friend of Lowe’s, Jack Sams, was a software engineer who vaguely knew Bill Gates, and he reached out to the 24-year-old Gates to see if he had an operating system that might work for the new PC. Gates had dropped out of Harvard to get into the microcomputer business, and he ran a 31-person company called Microsoft. While he thought of programming as an intellectual exercise, Gates also had a sharp eye for business.

In July 1980, the IBMers met with Gates but were not greatly impressed, so they turned instead to Gary Kildall, president of Digital Research, the most recognized microcomputer software company at the time. Kildall then made what may have been the business error of the century. He blew off the blue-suiters so that he could fly his airplane, leaving his wife—a lawyer—to deal with them. The meeting went nowhere, with too much haggling over nondisclosure agreements, and the IBMers left. Gates was now their only option, and he took the IBMers seriously.

That August, Lowe presented his plan to Cary and the rest of the management committee at IBM headquarters in Armonk, N.Y. The idea of putting together a PC outside of IBM’s development process disturbed some committee members. The committee knew that IBM had previously failed with its own tiny machines—specifically the Datamaster and the 5110—but Lowe was offering an alternative strategy and already had Cary’s support. They approved Lowe’s plan.

Lowe negotiated terms, volumes, and delivery dates with suppliers, including Gates. To meet IBM’s deadline, Gates concluded that Microsoft could not write an operating system from scratch, so he acquired one called QDOS (“quick and dirty operating system”) that could be adapted. IBM wanted Microsoft, not the team in Boca Raton, to have responsibility for making the operating system work. That meant Microsoft retained the rights to the operating system. Microsoft paid $75,000 for QDOS. By the early 1990s, that investment had boosted the firm’s worth to $27 billion. IBM’s strategic error in not retaining rights to the operating system went far beyond that $27 billion; it meant that Microsoft would set the standards for the PC operating system. In fairness to IBM, nobody thought the PC business would become so big. Gates said later that he had been “lucky.”

Back at Boca Raton, the pieces started coming together. The team designed the new product, lined up suppliers, and was ready to introduce the IBM Personal Computer just a year after gaining the management committee’s approval. How was IBM able to do this?

Much credit goes to Philip Donald Estridge. An engineering manager known for bucking company norms, Estridge turned out to be the perfect choice to ram this project through. He wouldn’t show up at product-development review meetings or return phone calls. He made decisions quickly and told Lowe and Cary about them later. He staffed up with like-minded rebels, later nicknamed the “Dirty Dozen.” In the fall of 1980, Lowe moved on to a new job at IBM, so Estridge was now in charge. He obtained 8088 microprocessors from Intel, made sure Microsoft kept the development of DOS secret, and quashed rumors that IBM was building a system. The Boca Raton team put in long hours and built a beautiful machine.

The IBM PC was a near-instant success

The big day came on 12 August 1981. Estridge wondered if anyone would show up at the Waldorf Astoria. After all, the PC was a small product, not in IBM’s traditional space. Some 100 people crowded into the hotel. Estridge described the PC, had one there to demonstrate, and answered a few questions.

Meanwhile, IBM salesmen had received packets of materials the previous day. On 12 August, branch managers introduced the PC to employees and then met with customers to do the same. Salesmen weren’t given sample machines. Along with their customers, they collectively scratched their heads, wondering how they could use the new computer. For most customers and IBMers, it was a new world.

Nobody predicted what would happen next. The first shipments began in October 1981, and in its first year, the IBM PC generated $1 billion in revenue, far exceeding company projections. IBM’s original manufacturing forecasts called for 1 million machines over three years, with 200,000 the first year. In reality, customers were buying 200,000 PCs per month by the second year.

Those who ordered the first PCs got what looked to be something pretty clever. It could run various software packages and a nice collection of commercial and consumer tools, including the accessible BASIC programming language. Whimsical ads for the PC starred Charlie Chaplin’s Little Tramp and carried the tag line “A Tool for Modern Times.” People could buy the machines at ComputerLand, a popular retail chain in the United States. For some corporate customers, the fact that IBM now had a personal computing product meant that these little machines were not some crazy geek-hippie fad but in fact a new class of serious computing. Corporate users who did not want to rely on their company’s centralized data centers began turning to these new machines.

Estridge and his team were busy acquiring games and business software for the PC. They lined up Lotus Development Corp. to provide its 1-2-3 spreadsheet package; other software products followed from multiple suppliers. As developers began writing software for the IBM PC, they embraced DOS as the industry standard. IBM’s competitors, too, increasingly had to use DOS and Intel chips. And Cary’s decision to avoid the product-development bureaucracy had paid off handsomely.

IBM couldn’t keep up with rivals in the PC market

Encouraged by their success, the IBMers in Boca Raton released a sequel to the PC in early 1983, called the XT. In 1984 came the XT’s successor, the AT. That machine would be the last PC designed outside IBM’s development process. John Opel, who had succeeded Cary as CEO in January 1981, endorsed reining in the PC business. During his tenure, Opel remained out of touch with the PC and did not fully understand the significance of the technology.

We could conclude that Opel did not need to know much about the PC because business overall was outstanding. IBM’s revenue reached $29 billion in 1981 and climbed to $46 billion in 1984. The company was routinely ranked as one of the best run. IBM’s stock more than doubled, making IBM the most valuable company in the world.

The media only wanted to talk about the PC. On its 3 January 1983 cover, Time featured the personal computer, rather than its usual Man of the Year. IBM customers, too, were falling in love with the new machines, ignoring IBM’s other lines of business—mainframes, minicomputers, and typewriters.

On 1 August 1983, Estridge’s skunkworks was redesignated the Entry Systems Division (ESD), which meant that the PC business was now ensnared in the bureaucracy that Cary had bypassed. Estridge’s 4,000-person group mushroomed to 10,000. He protested that Corporate had transferred thousands of programmers to him who knew nothing about PCs. PC programmers needed the same kind of machine-software knowledge that mainframe programmers in the 1950s had; both had to figure out how to cram software into small memories to do useful work. By the 1970s, mainframe programmers could not think small enough.

Estridge faced incessant calls from Armonk to report on his activities, diverting his attention away from the PC business and slowing the development of new products even as rivals began to speed up the introduction of their own offerings. Nevertheless, in August 1984, his group managed to release the AT, which had been designed before the reorganization.

But IBM blundered with its first product for the home computing market: the PCjr (pronounced “PC junior”). The company had no experience with this audience, and as soon as IBM salesmen and prospective customers got a glimpse of the machine, they knew something had gone terribly wrong.

Unlike the original PC, the XT, and the AT, the PCjr was the sorry product of IBM’s multilayered development and review process. Rumors inside IBM suggested that the company had spent $250 million to develop it. The computer’s tiny keyboard was scornfully nicknamed the “Chiclet keyboard.” Much of the PCjr’s software, peripheral equipment, memory boards, and other extensions were incompatible with other IBM PCs. Salesmen ignored it, not wanting to make a bad recommendation to customers. IBM lowered the PCjr’s price, added functions, and tried to persuade dealers to promote it, to no avail. ESD even offered the machines to employees as potential Christmas presents for a few hundred dollars, but that ploy also failed.

IBM’s relations with its two most important vendors, Intel and Microsoft, remained contentious. Both Microsoft and Intel made a fortune selling IBM’s competitors the same products they sold to IBM. Rivals figured out that IBM had set the de facto technical standards for PCs, so they developed compatible versions they could bring to market more quickly and sell for less. Vendors like AT&T, Digital Equipment Corp., and Wang Laboratories failed to appreciate that insight about standards, and they suffered. (The notable exception was Apple, which set its own standards and retained its small market share for years.) As the prices of PC clones kept falling, the machines grew more powerful—Moore’s Law at work. By the mid-1980s, IBM was reacting to the market rather than setting the pace.

Estridge was not getting along with senior executives at IBM, particularly those on the mainframe side of the house. In early 1985, Opel made Bill Lowe head of the PC business.

Then disaster struck. On 2 August 1985, Estridge, his wife, Mary Ann, and a handful of IBM salesmen from Los Angeles boarded Delta Flight 191 headed to Dallas. At 700 feet above the Dallas airport, a strong downdraft slammed the plane into the ground, killing 137 people, including the Estridges and all but one of the other IBM employees. IBMers were in shock. Despite his troubles with senior management, Estridge had been popular and highly respected. Not since the death of Thomas J. Watson Sr. nearly 30 years earlier had employees been so stunned by a death within IBM. Hundreds of employees attended the Estridges’ funeral. The magic of the PC may have died before the airplane crash, but the tragedy at Dallas confirmed it.

More missteps doomed the IBM PC and its OS/2 operating system

While IBM continued to sell millions of personal computers, over time the profit on its PC business declined. IBM’s share of the PC market shrank from roughly 80 percent in 1982–1983 to 20 percent a decade later.

Meanwhile, IBM was collaborating with Microsoft on a new operating system, OS/2, even as Microsoft was working on Windows, its replacement for DOS. The two companies haggled over royalty payments and how to work on OS/2. By 1987, IBM had over a thousand programmers assigned to the project and to developing telecommunications, costing an estimated $125 million a year.

OS/2 finally came out in late 1987, priced at $340, plus $2,000 for additional memory to run it. By then, Windows had been on the market for two years and was proving hugely popular. Application software for OS/2 took another year to come to market, and even then the new operating system didn’t catch on. As the business writer Paul Carroll put it, OS/2 began to acquire “the smell of failure.”

Known to few outside of IBM and Microsoft, Gates had offered to sell IBM a portion of his company in mid-1986. It was already clear that Microsoft was going to become one of the most successful firms in the industry. But Lowe declined the offer, making what was perhaps the second-biggest mistake in IBM’s history up to then, following his first one of not insisting on proprietary rights to Microsoft’s DOS or the Intel chip used in the PC. The purchase price probably would have been around $100 million in 1986, an amount that by 1993 would have yielded a return of $3 billion and in subsequent decades orders of magnitude more.

In fairness to Lowe, he was nervous that such an acquisition might trigger antitrust concerns at the U.S. Department of Justice. But the Reagan administration was not inclined to tamper with the affairs of large multinational corporations.

More to the point, Lowe, Opel, and other senior executives did not understand the PC market. Lowe believed that PCs, and especially their software, should undergo the same rigorous testing as the rest of the company’s products. That meant not introducing software until it was as close to bugproof as possible. All other PC software developers valued speed to market over quality—better to get something out sooner that worked pretty well, let users identify problems, and then fix them quickly. Lowe was aghast at that strategy.

Salesmen came forward with proposals to sell PCs in bulk at discounted prices but got pushback. The sales team I managed arranged to sell 6,000 PCs to American Standard, a maker of bathroom fixtures. But it took more than a year and scores of meetings for IBM’s contract and legal teams to authorize the terms.

Lowe’s team was also slow to embrace the faster chips that Intel was producing, most notably the 80386. The new Intel chip had just the right speed and functionality for the next generation of computers. Even as rivals moved to the 386, IBM remained wedded to the slower 286 chip.

As the PC market matured, the gold rush of the late 1970s and early 1980s gave way to a more stable market. A large software industry grew up. Customers found the PC clones, software, and networking tools to be just as good as IBM’s products. The cost of performing a calculation on a PC dropped so much that it was often significantly cheaper to use a little machine than a mainframe. Corporate customers were beginning to understand that economic reality.

Opel retired in 1986, and John F. Akers inherited the company’s sagging fortunes. Akers recognized that the mainframe business had entered a long, slow decline, the PC business had gone into a more rapid fall, and the move to billable services was just beginning. He decided to trim the ranks by offering an early retirement program. But too many employees took the buyout, including too many of the company’s best and brightest.

In 1995, IBM CEO Louis V. Gerstner Jr. finally pulled the plug on OS/2. It did not matter that Microsoft’s software was notorious for having bugs or that IBM’s was far cleaner. As Gerstner noted in his 2002 book, “What my colleagues seemed unwilling or unable to accept was that the war was already over and was a resounding defeat—90 percent market share for Windows to OS/2’s 5 percent or 6 percent.”

The end of the IBM PC

IBM soldiered on with the PC until Samuel J. Palmisano, who once worked in the PC organization, became CEO in 2002. IBM was still the third-largest producer of personal computers, including laptops, but PCs had become a commodity business, and the company struggled to turn a profit from those products. Palmisano and his senior executives had the courage to set aside any emotional attachments to their “Tool for Modern Times” and end it.

In December 2004, IBM announced it was selling its PC business to Lenovo for $1.75 billion. As the New York Times explained, the sale “signals a recognition by IBM, the prototypical American multinational, that its own future lies even further up the economic ladder, in technology services and consulting, in software and in the larger computers that power corporate networks and the Internet. All are businesses far more profitable for IBM than its personal computer unit.”

Under the deal, IBM acquired a 19 percent stake in Lenovo, which it would retain for three years, with an option to acquire more shares. The head of Lenovo’s PC business would be IBM senior vice president Stephen M. Ward Jr., while his new boss would be Lenovo’s chairman, Yang Yuanqing. Lenovo got a five-year license to use the IBM brand on the popular ThinkPad laptops and PCs, and to hire IBM employees to support existing customers in the West, where Lenovo was virtually unknown. IBM would continue to design new laptops for Lenovo in Raleigh, N.C. Some 4,000 IBMers already working in China would switch to Lenovo, along with 6,000 in the United States.

The deal ensured that IBM’s global customers had familiar support while providing a stable flow of maintenance revenue to IBM for five years. For Lenovo, the deal provided a high-profile partner. Palmisano wanted to expand IBM’s IT services business to Chinese corporations and government agencies. Now the company was partnered with China’s largest computer manufacturer, which controlled 27 percent of the Chinese PC market. The deal was one of the most creative in IBM’s history. And yet it remained for many IBMers a sad close to the quarter-century chapter of the PC.

This article is based on excerpts from IBM: The Rise and Fall and Reinvention of a Global Icon (MIT Press, 2019).


An abridged version of this article appears in the August 2021 print issue as “A Tool for Modern Times.”

A Century Ago, the Optophone Allowed Blind People to Hear the Printed Word

Post Syndicated from Edd Thomas original https://spectrum.ieee.org/tech-history/dawn-of-electronics/a-century-ago-the-optophone-allowed-blind-people-to-hear-the-printed-word

On 25 June 1912, the Irish writer, inventor, and physicist Edmund Edward Fournier d’Albe demonstrated a curious machine at the Optical Society Convention in London. He called it an “exploring optophone,” and his remarkable claim was that it allowed people who were completely blind to “hear” light.

The optophone’s sensing apparatus—a cell that relied on the photoelectric properties of selenium—was housed in a long, slender wooden box, to which was attached a pair of headphones. While holding the box, the user would listen for modulations in tone as the cell detected light; the device was surprisingly good at distinguishing between light and dark spaces and even the flickering of a match. Fournier d’Albe pitched it as an important new mobility tool that would allow people who were blind to safely explore their environments. In a newspaper interview, the inventor went as far as to hail it as “the first stage in making the eye dispensable.”

News of the exploring optophone spread rapidly among blind communities as well as scientists. Fournier d’Albe exulted in the praise—until, that is, he received a note from a well-known solicitor named Washington Ranger. Ranger was blind, and his critique was blunt: “The blind problem is not to find lights or windows, but how to earn your living.”

Chastened, Fournier d’Albe went back to the drawing board. This time, his goal was to design a machine that would translate text from ordinary books and newspapers into sounds that the user could interpret as words. He realized that printed letters each have a unique visual ratio of white to black on the page, and that this ratio could be picked up by a photosensitive cell and translated into a series of corresponding sounds. By learning the audible alphabet, a user would be able to read a book, albeit one letter at a time.

Fournier d’Albe demonstrated a crude prototype of the reading optophone in September 1913. Once again, his work was greeted by media acclaim. The optophone would enthrall and frustrate him for the rest of his life. Although it never saw commercial success, we would be wrong to label it a failure or a technological dead end. Other inventors continued to adapt and refine the technology, and the concept of automatically scanning text helped open the door to optical character recognition.

Fournier d’Albe was born in London in 1868 and educated in Germany. Although he is largely forgotten today, during his own lifetime he attained fame and notoriety in a number of unrelated areas, including the physical sciences, spiritualism, linguistics, and pan-Celtic unification. In a 2017 article, historian Ian B. Stewart noted that Fournier d’Albe’s theory of a hierarchical universe later influenced Benoit Mandelbrot’s work on fractals. Fournier d’Albe was also a pioneer in the nascent field of television and the first to transmit an image wirelessly—a 600-dot photo of King George V on Empire Day in 1923, which took 20 minutes to send. Stewart pointed out that Fournier d’Albe, as a member of the fin-de-siècle generation, saw no problem with unifying his diverse interests through his embrace of the social sciences. But the work he was most proud of was the optophone.

He came to his invention by a circuitous route. In 1893, at the age of 25, Fournier d’Albe took a job writing book abstracts for The Electrician magazine and then for the Physical Society. This work exposed him to a world of cutting-edge scientific discoveries and exceptional thinkers. During his lifetime, he would befriend H.G. Wells, W.B. Yeats, the chemists Wilhelm Ostwald and William Ramsay, physicist George Johnstone Stoney, television pioneer A.A. Campbell-Swinton, and the magician and spiritualist Harry Houdini.

In 1907, Fournier d’Albe decided to pursue a career in physics. Through his impeccable contacts, he secured a post as an assistant lecturer at the University of Birmingham under the renowned physicist Oliver Lodge. Lodge recommended that Fournier d’Albe focus his doctoral research on selenium.

Selenium’s unusual photoelectric properties first surfaced in experiments at the Telegraph Construction and Maintenance Co. in 1873, which showed that the metal’s resistance changed according to the intensity of light falling on it. The resistance was highest when the sample was enclosed in a dark box. Removing the box’s cover caused the conductivity to jump. Selenium (named for the Greek goddess of the moon, Selene) would soon become known as a wonder material, and several inventors attempted to exploit it. Most famously, Alexander Graham Bell used the metal in his photophone of 1880, a telecommunications device that relied on modulated light to transmit a wireless signal.

In 1897 or thereabouts, a Polish ophthalmologist named Kazimierz Noiszewski invented the electroftalm (from the Greek for “electric eye”), a device intended to help blind people “hear” their surroundings. Conceptually, Noiszewski’s invention was remarkably similar to Fournier d’Albe’s exploring optophone, which came later. Given his extensive research into selenium, Fournier d’Albe almost certainly would have known about the electroftalm. That may explain why he was so willing to drop the exploring optophone in favor of developing the reading optophone.

Although Fournier d’Albe excelled in many things, he was not an engineer. And so his reading optophone, while based on sound theory, took eight years and much support to move from prototype to product.

The reading optophone worked by scanning a tiny portion of the page at a time. A small rotating disk that spun at 30 rpm would break up an artificial light source into a line of five beams, each with a different frequency. When the beams were reflected onto a selenium cell, the fluctuations in light intensity would be mirrored by variations in the conductivity of the selenium. To turn the changes in conductivity into an audible signal, Fournier d’Albe used a telephone receiver from S.G. Brown Ltd that could detect fluctuations in current down to a millionth of an ampere.

The notes C, D, F, G, and B represented the frequencies of the five light beams and would mix to create different chords. As Fournier d’Albe recounted in an article for The Electrician (which he quoted in his 1924 book The Moon Element), “The two vertical strokes of [the letters] H and M give a chaos of notes, the middle stroke of N gives a falling gamut, the three horizontal strokes of E give a chord, and the curved lines of O and S give characteristic flourishes of sound.” But some letters with similar visual characteristics—such as lowercase u and n—resulted in similar audio patterns that were difficult for the listener to tell apart.

Fournier d’Albe called his earliest attempt the “white sounding optophone” because he was only able to get the selenium to react to the white of the page and not the black of the letters. So the poor listener had to interpret the sounds generated by the space around each letter rather than the sounds generated by the letters themselves. With such a system, Fournier d’Albe estimated it would take around 8 hours to learn the audible alphabet and 10 to 20 lessons to discern basic words.

The white sounding optophone’s deficiencies were eventually overcome in 1918, when the Scottish scientific instrument maker Barr & Stroud offered to tidy up the machine in preparation for its commercial rollout. Adding a second selenium cell, called a balancing cell, enabled the machine to read black text. The telephone receiver would pick up the signals from both cells and measure the difference in electrical output between them. The white signals canceled each other out so that only the black signal was magnified. The revamped machine used the notes G, E, D, C, and lower G.
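To make the scanning scheme concrete, here is a minimal sketch in Python of how a five-beam column scan might map ink to a chord of notes. The letter bitmap, the pitch values, and every name in the code are illustrative assumptions, not a reconstruction of the Barr & Stroud circuitry.

```python
# Illustrative sketch of the black-reading optophone's letter-to-chord mapping.
# The five beams, their pitches, and the 5x4 letter bitmap are assumptions for
# demonstration only, not measurements of the original machine.

# Approximate pitches (Hz) for the notes the revamped machine reportedly used:
# G, E, D, C, and lower G, assigned here from the top beam to the bottom beam.
BEAM_NOTES = [("G", 392.0), ("E", 329.6), ("D", 293.7), ("C", 261.6), ("low G", 196.0)]

# A crude 5-row bitmap of a letter (1 = black ink, 0 = white page), scanned
# column by column as the tracer moves along the line of text.
LETTER_L = [
    [1, 0, 0, 0],
    [1, 0, 0, 0],
    [1, 0, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
]

def chords_for_letter(bitmap):
    """Return, for each scanned column, the notes sounded where ink is present.

    The balancing cell cancels the steady "white" signal, so only beams that
    fall on black ink contribute a tone.
    """
    chords = []
    for col in range(len(bitmap[0])):
        notes = [name for row, (name, _freq) in enumerate(BEAM_NOTES) if bitmap[row][col]]
        chords.append(notes)
    return chords

for i, chord in enumerate(chords_for_letter(LETTER_L)):
    print(f"column {i}: {' + '.join(chord) if chord else '(silence)'}")
```

Run on this crude “L,” the single column of the vertical stroke sounds all five notes at once, while the horizontal stroke at the base sustains only the lowest note across the remaining columns, roughly the kind of signature Fournier d’Albe described for vertical and horizontal strokes.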

The Barr & Stroud machine became known as the black sounding optophone. Other design modifications included a magnifying lens for reading different sizes of text, as well as a worm thread that let the user slow the reading from 5 seconds per line to as long as 5 minutes. (Even the most experienced reader never achieved the fastest speed, as I’ll discuss in a bit.) A final improvement held the book or newspaper stationary on a frame over the reading mechanism, and the reading head, or tracer, pivoted on an axis to read the line. On the white sounding optophone, the user had to keep carefully repositioning the book—a tricky task for someone who was blind.

Upon the optophone’s relaunch in 1920, an exuberant Fournier d’Albe proclaimed in a letter in Nature, “It is therefore safe to say that the problem of opening the world’s literature to the blind is now definitely solved.”

Once again, Fournier d’Albe was overreaching. There was certainly a need for a tool like the optophone, considering the hundreds of thousands of World War I servicemen who had been blinded by gas or shells. And yet this innovative and potentially life-changing machine failed to find a foothold in the market. By the time Fournier d’Albe died in 1933, only a tiny number of optophones (perhaps as few as a dozen) had been sold.

What can account for the optophone’s commercial failure? Fournier d’Albe’s reputation might have contributed to the problem. Like many of his generation, he was an ardent spiritualist. It’s easy for us to mock this fascination with séances and the desire to establish a connection with the dead, but we should remember the scientific breakthroughs of that era. The invisible world of electromagnetic waves and the discovery of electrons were tearing up the scientific rule book.

Against that backdrop, it was not a huge leap to believe that a human soul might also be stored somehow within invisible energy forces. Indeed, many hinted at this connection, but Fournier d’Albe went further, publishing a book in 1908 that made the case for the existence of “psychomeres,” or soul-particles, which he said reside in human cells. He even assigned them a weight: 50 milligrams. His book prompted a backlash from the establishment. The New York Times, in a blistering critique, denounced Fournier d’Albe as a “crack-brained pseudo-scientist.” He would later tone down his views on spiritualism, but some in the scientific and political mainstream continued to view him with skepticism.

For the optophone to succeed, Fournier d’Albe knew he needed the backing of the National Institute for the Blind, a powerful association that in the United Kingdom acted as the unofficial gatekeeper to his intended audience. The NIB was led by the strong-willed Arthur Pearson. Pearson had been a celebrated newspaper magnate at the start of the century, but as he slowly lost his sight due to glaucoma, he transferred his energy to supporting people who were blind. Pearson had invested heavily in providing Braille resources throughout the country, so a new machine that threatened to make Braille obsolete was never going to receive his backing.

In April 1917, the NIB agreed to send a delegation to Fournier d’Albe’s lab to view a demonstration of the optophone. By the inventor’s account, everything went off without a hitch, and he used the machine to accurately read a random sample from the daily newspaper at a rate of four words per minute. But when Pearson and the committee released their opinion in an open letter to the London Times a few days later, they could not have been more scathing. They concluded that the machine was little more than an interesting scientific toy—hard to learn and far too slow for any practical use. Pearson closed the letter by telling Fournier d’Albe that he should leave such inventions to “those who have the interests of the blind at heart.”

Compounding the NIB’s lack of support was the optophone’s price. In 1917, the white sounding optophone was being offered for sale for £35 (equivalent to about US $3,500 today). When the black sounding optophone was released three years later, the price had tripled. Too expensive for the average household, it was still affordable for medical institutions; without the backing of the NIB, however, such purchases were unlikely to happen. In a twist of fate, the National Institute for the Blind was finally pressured into buying a single black sounding optophone in 1920 after King George V and Queen Mary saw one at an exhibition and gave it a glowing review. One of the few optophones still known to exist is in the collection of the charity Blind Veterans UK, which was founded by the man who so opposed the technology—Arthur Pearson.

One of the NIB reviewers’ biggest concerns about the optophone was the amount of time required to achieve proficiency. They had a point. Fournier d’Albe had suggested that a user could learn the basics in just 10 to 20 lessons, but he was clearly overoptimistic. Most people who tried using it were able to read only a handful of words per minute—a frustratingly slow speed.

One of the machine’s defenders was the renowned engineer A.A. Campbell-Swinton. He pointed out that language was a skill that people acquire slowly from infancy, so judging how quickly an adult could learn the optophone was unfair. He helped acquire several optophones to conduct a long-term assessment with a group of children, but I’ve found no evidence that those results were ever formally published.

By far the most successful case study began in 1918, when the inventor enlisted the help of the 18-year-old twins Mary and Margaret Jameson and taught them to use the white sounding optophone. Both already read Braille. Mary seemed more proficient with the optophone, and she ended up accompanying Fournier d’Albe on many of his public demonstrations, where she was a hit with spectators.

By 1920, Mary was up to a reading rate of 25 words per minute. She continued to use the machine for the rest of her life, reaching 60 words per minute by 1972. Sighted people can read 200 to 300 words per minute. Nevertheless, when asked in 1966 about her experience with the machine, Mary seemed untroubled by its speed and only wished that it was a bit quieter and that the selenium cells were more responsive.

Although Fournier d’Albe’s beloved optophone slid into obscurity, his approach to mechanical reading inspired others. One of the first was a professor at Iowa State University named F. C. Browne, who in 1915 improved on Fournier d’Albe’s idea with a device he called a phonoptikon. It used individual crystals of selenium (instead of a preparation) and a handheld wand to read the page. Although it garnered favorable press coverage, it seems to have never gone into production.

Other inventions followed. At the October 1929 Exhibition of Inventions, held in London, J. Butler Burke exhibited a device, called an optograph, that converted text into Braille. Two years later, Robert E. Naumburg of Cambridge, Mass., invented the printing visagraph, which automatically read text and embossed it on aluminum foil. The visagraph reportedly could also handle images and maps. But it was the size of a desk and took about 6 minutes to create a page. A similar machine came out in 1932. Called the photoelectrograph, it read text and embossed it onto a sheet. There was no shortage of inventors who hoped to succeed Fournier d’Albe in this new field.

Perhaps the most important legacy of the optophone came from the electronic television pioneer Vladimir Zworykin and his team at the Radio Corporation of America. During the 1910s, Zworykin had visited Fournier d’Albe to learn more about the optophone, and the visit clearly left an impression. Decades later, Zworykin would draw on Fournier d’Albe’s principles to help produce a reading machine, called simply the A-2. His prototype incorporated a handheld wand and phototubes (instead of selenium) for the sensor. As in Fournier d’Albe’s day, there were many thousands of injured veterans in the post–World War II era who needed tools to help them read. This time, however, the U.S. Veterans Administration (now the Department of Veterans Affairs) was keen to develop the technology, supporting RCA’s efforts to refine Zworykin’s design. Overseen by Leslie E. Flory and Winthrop S. Pike, RCA’s new iteration, introduced in 1949, became the first machine for blind people that not only scanned text automatically but also spoke the letters and words it read.

RCA’s reading machine was introduced just as electronic computers were beginning to take off. Computer scientists soon realized that the technology presented a way to speed up data processing. Indeed, many considered the RCA machine of 1949 to be the first practical optical character-recognition machine in the world. But it was Fournier d’Albe who demonstrated that such a thing was even possible. And by the 1950s, his basic approach to optical scanning was being integrated into computers around the world.

This article appears in the July 2021 print issue as “Turning Letters Into Tones.”

About the Author

Edd Thomas is a writer and antiques dealer who lives with his family in Wiltshire, England.

Meet Catfish Charlie, the CIA’s Robotic Spy

Post Syndicated from Allison Marsh original https://spectrum.ieee.org/tech-history/silicon-revolution/meet-catfish-charlie-the-cias-robotic-spy

In 1961, Tom Rogers of the Leo Burnett Agency created Charlie the Tuna, a jive-talking cartoon mascot and spokesfish for the StarKist brand. The popular ad campaign ran for several decades, and its catchphrase “Sorry, Charlie” quickly hooked itself into the American lexicon.

When the CIA’s Office of Advanced Technologies and Programs started conducting some fish-focused research in the 1990s, Charlie must have seemed like the perfect code name. Except that the CIA’s Charlie was a catfish. And it was a robot.

More precisely, Charlie was an unmanned underwater vehicle (UUV) designed to surreptitiously collect water samples. Its handler controlled the fish via a line-of-sight radio handset. Not much has been revealed about the fish’s construction except that its body contained a pressure hull, ballast system, and communications system, while its tail housed the propulsion. At 61 centimeters long, Charlie wouldn’t set any biggest-fish records. (Some species of catfish can grow to 2 meters.) Whether Charlie reeled in any useful intel is unknown, as details of its missions are still classified.

For exploring watery environments, nothing beats a robot

The CIA was far from alone in its pursuit of UUVs, nor was it the first agency to do so. In the United States, such research began in earnest in the 1950s, with the U.S. Navy’s funding of technology for deep-sea rescue and salvage operations. Other projects looked at sea drones for surveillance and scientific data collection.

Aaron Marburg, a principal electrical and computer engineer who works on UUVs at the University of Washington’s Applied Physics Laboratory, notes that the world’s oceans are largely off-limits to crewed vessels. “The nature of the oceans is that we can only go there with robots,” he told me in a recent Zoom call. To explore those uncharted regions, he said, “we are forced to solve the technical problems and make the robots work.”

One of the earliest UUVs happens to sit in the hall outside Marburg’s office: the Self-Propelled Underwater Research Vehicle, or SPURV, developed at the applied physics lab beginning in the late ’50s. SPURV’s original purpose was to gather data on the physical properties of the sea, in particular temperature and sound velocity. Unlike Charlie, with its fishy exterior, SPURV had a utilitarian torpedo shape that was more in line with its mission. Just over 3 meters long, it could dive to 3,600 meters, had a top speed of 2.5 m/s, and operated for 5.5 hours on a battery pack. Data was recorded to magnetic tape and later transferred to a photosensitive paper strip recorder or other computer-compatible media and then plotted using an IBM 1130.

Over time, SPURV’s instrumentation grew more capable, and the scope of the project expanded. In one study, for example, SPURV carried a fluorometer to measure the dispersion of dye in the water, to support wake studies. The project was so successful that additional SPURVs were developed, eventually completing nearly 400 missions by the time it ended in 1979.

Working on underwater robots, Marburg says, means balancing technical risks and mission objectives against constraints on funding and other resources. Support for purely speculative research in this area is rare. The goal, then, is to build UUVs that are simple, effective, and reliable. “No one wants to write a report to their funders saying, ‘Sorry, the batteries died, and we lost our million-dollar robot fish in a current,’ ” Marburg says.

A robot fish called SoFi

Since SPURV, there have been many other unmanned underwater vehicles, of various shapes and sizes and for various missions, developed in the United States and elsewhere. UUVs and their autonomous cousins, AUVs, are now routinely used for scientific research, education, and surveillance.

At least a few of these robots have been fish-inspired. In the mid-1990s, for instance, engineers at MIT worked on a RoboTuna, also nicknamed Charlie. Modeled loosely on a bluefin tuna, it had a propulsion system that mimicked the tail fin of a real fish. This was a big departure from the screws or propellers used on UUVs like SPURV. But this Charlie never swam on its own; it was always tethered to a bank of instruments. The MIT group’s next effort, a RoboPike called Wanda, overcame this limitation and swam freely, but never learned to avoid running into the sides of its tank.

Fast-forward 25 years, and a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) unveiled SoFi, a decidedly more fishy robot designed to swim next to real fish without disturbing them. Controlled by a retrofitted Super Nintendo handset, SoFi could dive more than 15 meters, control its own buoyancy, and swim around for up to 40 minutes between battery charges. Noting that SoFi’s creators tested their robot fish in the gorgeous waters off Fiji, IEEE Spectrum’s Evan Ackerman wrote, “Part of me is convinced that roboticists take on projects like these…because it’s a great way to justify a trip somewhere exotic.”

SoFi, Wanda, and both Charlies are all examples of biomimetics, a term coined in 1974 to describe the study of biological mechanisms, processes, structures, and substances. Biomimetics looks to nature to inspire design.

Sometimes, the resulting technology proves to be more efficient than its natural counterpart, as Richard James Clapham discovered while researching robotic fish for his Ph.D. at the University of Essex, in England. Under the supervision of robotics expert Huosheng Hu, Clapham studied the swimming motion of Cyprinus carpio, the common carp. He then developed four robots that incorporated carplike swimming, the most capable of which was iSplash-II. When tested under ideal conditions—that is, a tank 5 meters long, 2 meters wide, and 1.5 meters deep—iSplash-II reached a maximum velocity of 11.6 body lengths per second (or about 3.7 m/s). That’s faster than a real carp, which averages a top velocity of 10 body lengths per second. But iSplash-II fell short of the peak performance of a fish darting quickly to avoid a predator.
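As a quick sanity check on those figures, the short sketch below back-derives the robot’s implied body length from the two speeds quoted above and converts the carp’s benchmark to the same units. The variable names and the derived length are inferences from the numbers in the text, not published specifications.

```python
# Speed figures quoted for iSplash-II (from the text above); the body length
# is back-derived here for illustration, not taken from a specification sheet.
max_speed_bl_per_s = 11.6   # body lengths per second
max_speed_m_per_s = 3.7     # meters per second

# v [m/s] = v [BL/s] * L [m]  =>  L = 3.7 / 11.6, roughly 0.32 m
body_length_m = max_speed_m_per_s / max_speed_bl_per_s
print(f"implied body length: {body_length_m:.2f} m")

# A real carp of the same size averaging 10 body lengths per second:
carp_speed_m_per_s = 10 * body_length_m
print(f"equivalent carp speed: {carp_speed_m_per_s:.1f} m/s")  # about 3.2 m/s
```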

Of course, swimming in a test pool or placid lake is one thing; surviving the rough and tumble of a breaking wave is another matter. The latter is something that roboticist Kathryn Daltorio has explored in depth.

Daltorio, an assistant professor at Case Western Reserve University and codirector of the Center for Biologically Inspired Robotics Research there, has studied the movements of cockroaches, earthworms, and crabs for clues on how to build better robots. After watching a crab navigate from the sandy beach to shallow water without being thrown off course by a wave, she was inspired to create an amphibious robot with tapered, curved feet that could dig into the sand. This design allowed her robot to withstand forces up to 138 percent of its body weight.

In her designs, Daltorio is following architect Louis Sullivan’s famous maxim: Form follows function. She isn’t trying to imitate the aesthetics of nature—her robot bears only a passing resemblance to a crab—but rather the best functionality. She looks at how animals interact with their environments and steals evolution’s best ideas.

And yet, Daltorio admits, there is also a place for realistic-looking robotic fish, because they can capture the imagination and spark interest in robotics as well as nature. And unlike a hyperrealistic humanoid, a robotic fish is unlikely to fall into the creepiness of the uncanny valley.

In writing this column, I was delighted to come across plenty of recent examples of such robotic fish. Ryomei Engineering, a subsidiary of Mitsubishi Heavy Industries, has developed several: a robo-coelacanth, a robotic gold koi, and a robotic carp. The coelacanth was designed as an educational tool for aquariums, to present a lifelike specimen of a rarely seen fish that is often only known by its fossil record. Meanwhile, engineers at the University of Kitakyushu in Japan created Tai-robot-kun, a credible-looking sea bream. And a team at Evologics, based in Berlin, came up with the BOSS manta ray.

Whatever their official purpose, these nature-inspired robocreatures can inspire us in return. UUVs that open up new and wondrous vistas on the world’s oceans can extend humankind’s ability to explore. We create them, and they enhance us, and that strikes me as a very fair and worthy exchange.

This article appears in the March 2021 print issue as “Catfish, Robot, Swimmer, Spy.”

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

Cybernetics, Computer Design, and a Meeting of the Minds

Post Syndicated from David C. Brock original https://spectrum.ieee.org/tech-talk/tech-history/space-age/cybernetics-computer-design-and-a-meeting-of-the-minds

In Paris. Exactly seventy years ago.

He likely arrived in Paris from Darmstadt, Germany, by train that January. At 53, a full professor of applied mathematics and the founding director of the Institut für Praktische Mathematik at the Technische Hochschule Darmstadt, Alwin Walther was among Germany’s leading figures in computation. The calculational prowess of Walther and his institute—employing all manner of manual, mechanical, and electromechanical approaches—had attracted the attention, and the support, of the Nazi regime during the Second World War. But as Walther made his way to the academic heart of liberated Paris, the Latin Quarter, few passersby would have guessed at this background.

Walther’s institute had been a major source of the calculational support needed for Wernher von Braun’s rocketry efforts, most notably the V-2, for the German Army. Indeed, during the war, Walther secured funding from the German Army to create an advanced electromechanical analog computer: a differential analyzer to rival the one created earlier by Vannevar Bush at MIT. (The German system, the IPM-Ott DGM, was developed throughout the war, but only delivered to Walther’s institute in 1948.)

This work with von Braun and the German Army was not the only effort that connected Alwin Walther to slave labor, such as that of concentration camp prisoners exploited in the V-2 rocket factories. More directly, Walther was evidently involved in a scheme with the Nazi SS to enslave Jewish scientists held in the Sachsenhausen concentration camp to perform manual calculations. Whether the scheme was actually put in place is unclear; in any case, it was soon upended, along with the other activities of Walther’s institute, when its facilities, the vast majority of the buildings of the Technische Hochschule Darmstadt, and much of the rest of the city were destroyed during Allied air raids on 11 and 12 September 1944.

In new facilities created after the war, Walther and his institute had their eyes firmly set on electronic digital computing. During the war, members of the institute and its machine shop had provided direct support for Konrad Zuse’s famed electromechanical Z4 computer. (The Z4 was a pioneering digital computer, originally intended for use in the German aircraft industry.)

Walther himself developed wartime plans for a large electromechanical computer along the lines of Howard Aiken’s Mark I, but those were abandoned. As Walther packed his belongings for his January 1951 trip to Paris, he and his institute had just embarked on a major new effort to create a stored-program, electronic digital computer. This machine, the Darmstädter Elektronischer Rechenautomat, or DERA, would become operational in 1957 (more on that later).

Surely, Walther must have had with him in Paris a particular bundle of papers: his copies of the announcement, printed schedule, registrant list, and both French and English sets of presentation abstracts for the international conference he was attending in Paris from the afternoon of Monday, 8 January 1951, through the afternoon of Saturday, 13 January.

The rise of the computing view of human thought

The conference was organized by the Institut Blaise Pascal, the computing center founded in 1946 by the French scientific and engineering research establishment, the Centre National de la Recherche Scientifique (CNRS). The French institute was composed of two laboratories, one devoted to analog computation and the other, led by Louis Couffignal, to digital computation. It was Couffignal who was the principal figure behind the January 1951 international conference.

Couffignal had completed a doctorate in mathematics on the theory of computation at the University of Paris in 1938, with the ambition to create a new binary-based calculating machine. During the wartime occupation of the city by the Germans, Couffignal had met intensely and often with perhaps France’s leading physiologist, Louis Lapicque.

Lapicque, famed for his integrate-and-fire model of the neuron, and Couffignal found a common vision, seeing deep analogies and connections between the processes and physiology of human thought and the processes and componentry of calculating machines. Imprisoned by the German Gestapo for aiding the resistance, Lapicque nevertheless managed to write his book La Machine Nerveuse, which was published in 1943. In it, Lapicque gave voice to this vision shared with Couffignal: “The regular periodic organization of cerebellar elements made cerebellum close to artificial machines. Some of its processes may be understood by comparison with calculating machine or automatic telephone relays.”

Like many who came before them, Couffignal and Lapicque were using human-made tools of great current fascination as analogical tools for understanding human bodies and minds. Where earlier thinkers had been entranced with clockworks, seeing themselves in the gears and windings, Couffignal, Lapicque, and others of the 1940s looked at electromechanical and electronic calculating machines and saw human bodies and minds mirrored therein.

Working for the CNRS during the war positioned Couffignal to take up leadership of the digital computing laboratory of the Institut Blaise Pascal in 1946. At the time, the institute was equipped only with desktop electromechanical calculators confiscated from the Germans, but Couffignal was fired with ambition to make a new, French, binary computer.

In the first year of his new directorship, he traveled to the United States to deepen his understanding of American developments in both electronic computing and what we might call the “computing-view of human thinking.” In Philadelphia, he familiarized himself with ENIAC. (ENIAC was a newly completed, all-electronic digital computer, and a milestone in the history of computing.) In Princeton, Couffignal learned about the development of the Institute for Advanced Study computer by John von Neumann. (The IAS computer was a “stored program” machine, holding both software and data in its memory. The power of this approach made the computer a model for many early digital computers around the world.) At Harvard, Couffignal visited Howard Aiken and saw his electromechanical computers.

It was also in Cambridge that Couffignal found the chance to pursue his interest in the computing-view of human thought, when he met with the mathematician Norbert Wiener at MIT. Deep into the work that would lead him to publish his book Cybernetics: Or Control and Communication in the Animal and the Machine in 1948, Wiener evidently found a kindred spirit in Couffignal. In 1947, Wiener returned the visit, meeting with Couffignal and Lapicque in Paris.

The French computer: Couffignal’s pilot-machine

The same sort of analogic thinking that led Couffignal, Lapicque, and Wiener to see bodies and minds as servomechanisms (an automatic control based on feedback), electromechanical relays, and electronic circuits had, by 1951, also led Couffignal to steer France’s main effort to develop a large-scale digital electronic computer in a very particular direction. As early as his 1938 doctoral thesis, Couffignal was already engaged in analogic thinking about calculating machines that connected with physiology. He wrote, “To give a full account of the development (of calculating machines) we must create for machinery the analogues of comparative anatomy and physiology.” What Couffignal concluded, in short, was that this physiology of calculating machines displayed an evolution in which increased calculating power was achieved by increasing complexity.

In 1947, when Couffignal had control of France’s major effort to build a large-scale digital computer, his evolutionary conclusion of 1938 now carried real weight. France’s computer would depart from the designs he saw in America, which in his “comparative anatomy” of computer design embraced a simplicity of calculation logic, thereby emphasizing the need for large memories. This was, for Couffignal, a kind of devolution, a retreat from complexity and therefore from progress. France’s computer, in contrast, would embrace large, parallel units of complex calculating logic, and minimize, even eliminate, memory. In this, Couffignal explained, “the problem of organizing a calculation is essentially the same as that of organizing a factory assembly line.”

By January 1951, a “pilot-machine” embodying Couffignal’s strikingly different approach to electronic digital computing was in operation within his laboratory of the Institut Blaise Pascal. It was the result of four years of effort by the principal contractor, the Logabax Company, and the expenditure of millions of francs by the CNRS. Whatever else one might say of it, it could actually compute, able to calculate square roots and sine functions.

Cyberneticists and others converge on Paris

Under the auspices of the CNRS, with some additional funding from the Rockefeller Foundation, and with his pilot-machine ready to demonstrate, Couffignal organized an ambitious international conference that combined his two greatest passions: large-scale digital electronic computers and the computing-view of human thought. The conference gathered together some of the leading lights of digital computer development from across the world, as well as an international coterie of researchers inspired by the view of the mind as a machine. After 1948, when Wiener's book spun the term into circulation, this latter group would be known as devotees of cybernetics.

The conference was titled “Les Machines a Calculer et la Pensée Humaine” (Computing Machines and Human Thought) and was held in the meeting rooms of the Centre National de Documentation Pédagogique—the French state’s educational publishing arm—located just a three-minute stroll from Couffignal’s laboratory in the Institut Henri Poincaré, around the corner at 29 Rue d’Ulm.

A distributed "Liste Alphabetique des Membres du Colloque" presented a roster of attendees, along with their affiliations. The roster became data for Alwin Walther to analyze, perhaps on his way from Darmstadt to Paris. Pencil in hand, he annotated each page of the list, counting the distribution of attendees by country. What motivated his survey of the conference's geographic diversity is unclear, but what is undoubted is that he noted he was the single German to attend. "1 Allemande," he scribbled. His final count: 184 from France; 40 from the UK; 8 from the Netherlands; 6 from Belgium; 5 each from Spain and the United States; 4 each from Italy and Sweden; and 1 each from Germany, Switzerland, and Brazil. The 259 attendees were occupationally diverse as well, hailing from universities, telecommunications firms, medical institutions, military organizations, computer manufacturers, academic and professional societies, journalism, diplomacy, industry, the UN, and museums.

The six days of the conference were broken into three two-day parts. The opening part focused on “Progrès Récents dans la Technique des Grosses Machines a Calculer,” new developments in the approach to large-scale digital computers. Quantum physics pioneer, Nobel Laureate, and Secretary of the Academie des Sciences Louis de Broglie delivered the opening welcome and was followed by a series of presentations on the latest electronic digital computers. Harvard’s Howard Aiken spoke about his Mark II, III, and IV machines.

Birkbeck College's Andrew Booth described his experimental SEC and his new APEXC computer. Eduard Stiefel, head of applied mathematics at ETH Zurich, spoke of Konrad Zuse's Z4 computer, now under Stiefel's care. E.W. Cannon, a leader in mathematics at the U.S. National Bureau of Standards, surveyed the organization's history with computing, including SEAC and SWAC. F.C. Colebrook of the UK's National Physical Laboratory described recent success with the Pilot ACE computer, originally designed by Alan Turing. At the close of the first day, Couffignal demonstrated his laboratory's pilot-machine, its world debut.

A chess-playing machine and other computational curiosities

The second day continued with presentations about additional new computing work, such as Freddie Williams’ review of computer work at the University of Manchester, and discussions of various digital and analog computing approaches.

The middle part of the conference was devoted to the kinds of mathematical and scientific problems that were appropriate applications for the large-scale digital computers reviewed in the conference’s first part. Linear and differential equations made numerous appearances. Douglas Hartree and Maurice Wilkes both spoke about their experiences programming the University of Cambridge’s new EDSAC computer.

For the concluding part of the conference, Couffignal turned to the colleague with whom he had perhaps most discussed the vision of the mind as machine: Louis Lapicque. The final topic was “Les Grosses Machines, La Logique et la Physiologie du Système Nerveux,” large digital computers and the logic and physiology of the nervous system.

The morning of Friday, 12 January, was dominated by a presentation about Leonardo Torres y Quevedo and demonstrations of a set of devices he had made. Torres y Quevedo, a wildly inventive and celebrated Spanish engineer, was born in 1852 and had died in 1936. He had been, as well, a kind of intellectual companion to Couffignal’s dissertation advisor and mentor, Maurice d’Ocagne.

At Couffignal's conference, Torres y Quevedo's son Gonzalo introduced, or reintroduced, his father's works to this elite international audience. Torres y Quevedo embraced a vision of what his son called "automism," the possibility for creating ever more sophisticated autonomous machinery that could encompass actions and behaviors previously the sole domain of humans. On exhibit for the 1951 conferees were an electromagnetic chess-playing machine from the 1920s, a system called Telekino that used radio waves and servomechanisms for the remote guidance of boats, and a complex mechanical device called the "endless spindle" for calculating logarithms.

Following Torres y Quevedo's morning presentation, the afternoon was given over to names that would, in time, become indelibly linked to cybernetics. W. Ross Ashby, a British psychiatrist and researcher, presented his subsequently famous Homeostat, an electronic system designed to adapt to its environment and described in his 1948 paper, "Design for a Brain."

Ashby was followed by W. Grey Walter, a British neurophysiologist, who demonstrated his own subsequently famous light-sensitive robot “tortoises.” None other than Norbert Wiener himself followed, with a rather remarkable talk in which he speculated about the possibilities for computing machines to afford humanity new kinds of sensory perceptions of form—“gestalt”—“which are not in our nervous system.” New realms of perception might be opened to a new kind of body combining mind and machine.

Other speakers that Friday afternoon included Albert Uttley, a researcher at the UK’s National Physical Laboratory and a central figure in the Ratio Club, a dining club of British cyberneticists that met in the basement of a London neurological hospital. Appearing also was the Chicago neurophysiologist Warren McCulloch, who with the astonishing talent of Walter Pitts had pioneered the concept of “nervous nets”—our neural nets—in 1943, with ideas for using such nets to perform logic and other calculations. At the Paris conference, McCulloch quipped, “Brains are calculating machines, but man-made calculating machines are not yet brains.”

Computers as models for the human brain

If the notion of McCulloch and Pitts was to create machines on the model of minds—well, at least, brains—the closing remarks for the entire conference marched in a contrary direction. The final speech fell, surely by design, to Couffignal himself. Late in the morning of Saturday, 13 January, he spoke of “Several New Analogies Between the Structures of Calculating Machines and Brain Structures.” Taking the brain to be “a machine where thoughts are elaborated and the [sic] logic as the working-method of that apparatus,” he proposed that a body of knowledge could be constructed about the actual “working processes” of the brain and that these actualities could be directly compared to the “ideal logic performed with computing machines.”

Couffignal's aim was not to make computers think like humans, but for humans to think like computers. Why? He believed it would be good for people: "On individual scale, an enlargement of social strength of intelligence can be expected," perhaps by breaking through the "few steady ideas" inherited by our "civilization," on which Couffignal thought humans based their actual reasoning. He concluded, "and, on human scale, an encreasement [sic] of the intellectual potential." Together, by thinking like computers, humanity would create a new and improved version of itself.

While Couffignal dreamt of reasoning more like a computer, he assuredly practiced eating like a Parisian. After his conclusion to the conference, the conferees made their way across the city to the 17th Arrondissement and the Ecole Hotelière. Opened in the 1930s, the school trained hundreds of aspiring Parisian hotel chefs. One of its restaurants boasted a replica of a dining room aboard the luxury ocean liner SS Normandie. The 1951 conference's "Banquet de Cloture" was no doubt delicious, but the announcement of it showed the dyspeptic truth that only men had attended the conference proper. Any women who accompanied them, however, were invited to the banquet: "Les dames accompagnant les membres du Colloque sont admises au banquet" (Ladies accompanying members of the colloquium are admitted to the banquet).

A French project falters, a German one succeeds

After this remarkable conference, Couffignal’s career did not rise to greater heights. The Logabax Company, which was finishing work on Couffignal’s pilot-machine and had started on his ambitious, memory-minimizing, parallel-design large computer, went bankrupt in 1952 and closed. Couffignal’s large machine remained forever unbuilt. He and his laboratory struggled along, eventually buying an Elliott 402 computer, a rather standard vacuum-tube, stored-program computer with a magnetic drum memory. For Couffignal, the purchase must have felt, at least somewhere within him, like humiliation. The Elliott was the epitome of everything his unbuilt design was not. In 1959, he was fired.

Through a series of books published in the 1950s and 1960s on cybernetics, Couffignal earned himself a twin legacy: Today, he is generally seen as having given French digital computer manufacture something of an initial disadvantage, while at the same time he is included as an important early figure within French cybernetics.

Alwin Walther, for his part, left Paris and returned to Darmstadt, where he continued the physical reconstruction and technological evolution of his Institut für Praktische Mathematik. Among other things, he began his own effort to create an electronic digital computer. It would take him until the end of the decade, but he succeeded in creating a vacuum tube-based machine for his institute, the DERA. A 1963 video shows Walther and his associates demonstrating DERA.

Walther must have found his Parisian experience to have been of great value, for in 1955 he and his institute organized and hosted their own large international conference on digital computing. Less fleetingly, he built up a considerable library of the available literature on computing at his institute, turning it into a key resource for the developing German computing community.

For this library, Walther had bound together the various documents he had brought home with him from Couffignal’s 1951 Paris conference: the four-page printed program; the three-page typescript conference announcement; the nine-page list of attendees; and over 100 pages of abstracts for the presentations, both in English and in French. This bound volume sat on the shelves of the institute’s library as item number B8807 for years.

As for the others in attendance at Couffignal's 1951 conference, their full comments were presented in French in a publication by the CNRS in 1953, under the same title as the conference. Running to over 560 pages, this record ironically remains—at least after an ardent search by this author—unavailable for free on the Web. Nevertheless, many of the attendees have become the perennial subjects of the history of science and technology, especially computing.

With time, Walther’s bound volume of his 1951 conference materials was deaccessioned by his institute’s library. From there, the volume attracted the discerning eye of Jeremy M. Norman, a noted collector and dealer in rare books and manuscripts in the history of science and technology. Norman, in turn, used the volume in his and Diana H. Hook’s 2002 documentary history, The Origins of Cyberspace: A Library on the History of Computing, Networking, and Telecommunications (Norman Publishing).

More recently, Norman included Walther’s bound volume in a lot of rare books he donated to the collection of the Computer History Museum. The volume now resides in the Museum’s archival facility in the San Francisco Bay Area, while a new PDF scan of it resides in a computer somewhere in the Microsoft Azure infrastructure, and might reside on your computer or phone should you download it here.

Another chapter in the long history of artificial intelligence

How in the end should we think about Couffignal’s 1951 conference? What do we make of it? In a 2017 piece, the computer scientist turned computing historian Herbert Bruderer considers whether the conference marked the “birthplace of artificial intelligence,” rather than the more famous Dartmouth Conference of 1956. He writes, “This well-documented event could also be regarded as the first major conference of artificial intelligence.”

This view hinges on just how one takes the phrase “artificial intelligence.” My own perspective is that the 1951 conference was certainly part of the building of a community: a community of people thinking about and building computers and cybernetic machines. It was a community animated by thinking about minds and machines together, also considering—some even pursuing—machines as minds. In this, the Paris conference is evidence of the long story of what we could today call “artificial intelligence” and how it weaves throughout computer history.

Editor’s note: This post originally appeared as “Thinking About Machines and Thinking” on the blog of the Computer History Museum.

About the Author 

David C. Brock is an historian of technology and director of the Computer History Museum’s Software History Center.

Sources  

German computer pioneer Alwin Walther’s bound papers from the 1951 conference “Les Machines a Calculer et la Pensee Humaine” are in the collection of the Computer History Museum and are available here.

For more on Walther, see Wilfried de Beauclair’s “Alwin Walther, IPM, and the Development of Calculator/Computer Technology in Germany, 1930-1945,” in Annals of the History of Computing, v. 8, no. 4, October 1986, pp. 334-350. Herbert Bruderer discusses Alwin Walther’s work on DERA, in Milestones in Analog and Digital Computing, Third Edition, Springer, 2020. For more on DERA, see Klaus Biener’s article “Alwin Walther—Pionier der Praktischen Mathematik,” August 1999 [in German]. See also the “Alliance with the Nazi Regime” timeline at the Technische Universität Darmstadt.

For more on Louis Couffignal, see Girolamo Ramunni’s “Louis Couffignal, 1902-1966: Informatics Pioneer in France?,” Annals of the History of Computing, v. 11, n. 4, 1989, pp. 247-256, and Pierre E. Mounier-Kuhn’s “The Institut Blaise-Pascal (1946-1969) from Couffignal’s Machine to Artificial Intelligence,” Annals of the History of Computing, v. 11, n.4, 1989, pp. 257-261.

For more on Louis Lapicque, see Jean-Gaël Barbara’s “French Neurophysiology between East and West: Polemics on Pavlovian heritage and Reception of Cybernetics,” in Franco-Russian Relations in the Neurosciences, Hermann, 2011. Lapicque’s papers are at the University of Texas.

Norbert Wiener’s papers are held at the MIT Libraries. For more on Wiener and the evolution of cybernetics, see “Cybernetics and Information Theory in the United States, France and the Soviet Union,” by David Mindell, Jérôme Segal, and Slava Gerovitch, in Science and Ideology: A Comparative History, Routledge, 2003, pp. 66-95.

For more on Leonardo Torres y Quevedo, see Brian Randell’s “From Analytical Engine to Electronic Digital Computer: The Contributions of Ludgate, Torres, and Bush,” Annals of the History of Computing, v. 4, no. 4, October 1981, pp. 327-341.

W. Grey Walter and his cybernetic tortoises are discussed in Andrew Pickering’s The Cybernetic Brain: Sketches of Another Future, University of Chicago Press, 2009. See also “The Ratio Club: a melting pot for British cybernetics,” by Olivia Solon, wired.co.uk, 21 June 2012, and Phil Husbands and Owen Holland’s “The Ratio Club: A Hub of British Cybernetics,” from The Mechanical Mind in History, MIT Press, 2008, pp. 91-148.

For more on Walter Pitts, see Amanda Gefter’s “The Man Who Tried to Redeem the World with Logic,” Nautilus, 5 February 2015.

Herbert Bruderer’s 2017 essay “The Birthplace of Artificial Intelligence?” considers the significance of the 1951 Paris conference organized by Couffignal.

Decoding the Innovation Delusion, Nurturing the Maintenance Mindset

Post Syndicated from David C. Brock original https://spectrum.ieee.org/tech-talk/tech-history/silicon-revolution/decoding-the-innovation-delusion-nurturing-the-maintenance-mindset

Without doubt, technological innovations are tremendously important. We hear the term "innovation" seemingly everywhere: in books, magazines, white papers, blogs, classrooms, offices, factories, government hearings, podcasts, and more. But for all this discussion, have we really gotten much clarity? For Lee Vinsel and Andrew L. Russell, coauthors of the new book The Innovation Delusion: How Our Obsession with the New Has Disrupted the Work That Matters Most, the answer is "Not exactly." On 2 December 2020, they joined Jean Kumagai, senior editor at IEEE Spectrum, for a Computer History Museum Live event to help decode innovation for the public. They shared what their experiences as historians of technology have taught them innovation is, and is not, and why they believe that an overemphasis on innovation detracts from two pursuits of great importance to our present and our future: maintenance and care.

The conversation began with a discussion of some of the key terms used in the book: “innovation,” “care,” and what the coauthors call “innovation speak.”

For Vinsel and Russell, their advocacy of expanded attention to, and practice of, maintenance and care is far from a turn away from technology. Rather, they argue, it is a call to pay attention to important issues that have always been present in technology.

Maintenance, for Vinsel and Russell, is anything but simple. There is a diversity of approaches to maintenance, some with great advantages over others. One of the starkest of these contrasts is between deferred maintenance and preventive maintenance.

Maintenance itself is not always a permanent solution. Vinsel and Russell discuss the inescapable question of when to cease maintenance and embrace retirement.

Maintenance, like all things, has costs, both direct and indirect. The greater indirect costs, however, arise from a lack of maintenance, and they exacerbate injustice.

At even further extremes, the neglect of maintenance and care over time can constitute a disaster, not necessarily as suddenly as a storm, but rather as a “slow disaster.” Conversely, continued investments in developing and maintaining infrastructures can become platforms for true innovations.

Kumagai challenged Vinsel and Russell to consider the possible opportunity costs of increasing investment in maintenance at the expense of innovation.

Further, she challenged them to consider what a member of the general public could or should do to foster this maintenance mindset.

For Vinsel and Russell, the adoption of a maintenance mindset changes the way in which one views technologies. For example, it frames the question of expanding technological systems or the adoption of new technologies as taking on increased “technological debt.”

Just as maintenance is not simple in Vinsel and Russell's account, neither is it dull or static. Indeed, the pair see maintenance as essential to creative moves toward sustainability in the face of the climate crisis.

Kumagai brought the conversation to a close by asking Vinsel and Russell to participate in CHM’s “one word” initiative, with each sharing their one word of advice for a young person starting their career.

Lee Vinsel and Andrew L. Russell's book is The Innovation Delusion: How Our Obsession with the New Has Disrupted the Work That Matters Most (Currency, 2020). The Maintainers website has more information about the group they cofounded.

Editor’s note: This post originally appeared on the blog of the Computer History Museum.

About the Author

David C. Brock is an historian of technology and director of the Computer History Museum’s Software History Center.

The Inventions That Made Heart Disease Less Deadly

Post Syndicated from Allison Marsh original https://spectrum.ieee.org/tech-history/silicon-revolution/the-inventions-that-made-heart-disease-less-deadly

Cardiac arrhythmia—an irregular heartbeat—can develop in an instant and then quickly, and sometimes fatally, spiral out of control. In the 1960s, physician L. Julian Haywood sought a way to continuously monitor the heart for such rhythm changes and alert nurses and doctors when an anomaly was detected. It would be one of many innovations that Haywood and his associates at Los Angeles County General Hospital implemented to improve the quality of care for patients with heart disease.

Haywood had arrived at the hospital in 1956 as an eager second-year resident in internal medicine. Having already completed a residency at the University of Virginia and Howard University, followed by two years as a medical officer in the U.S. Navy, he set his sights on specializing in cardiology. 

Haywood was thus deeply disappointed to discover that the hospital had no formal teaching in the area, no clinical cardiology rounds, and no cardiology-related review conferences where doctors and students would gather to discuss patients and interesting developments. This despite the fact that the hospital’s mortality rate among heart attack patients was 35 percent. 

Haywood was persistent. He soon learned that Dr. William Paul Thompson presided over a review conference on electrocardiography—the measurement of electrical activity in the heart. These Thursday morning sessions, held in the hospital's main auditorium, drew faculty from the medical schools at the University of Southern California (USC), the College of Medical Evangelists, and the California College of Osteopathy. The weekly conference set the stage for Haywood's lifelong investigation into how technology and specialized care could be used to help patients with heart disease.

How heartbeats came to be measured

An electrocardiogram, also known as an ECG or EKG, is a graph of the heart's electrical activity, measured in millivolts. Today, an ECG is an easy, safe, and painless test that doctors rely on to check for signs of heart disease. In a conventional ECG, a technician places 10 stick-on electrodes on the patient's chest and limbs. These days you don't even need to go to a doctor to get such a test: ECG-enabled smartwatches that can measure your blood pressure, heart rate, and blood oxygen saturation in real time have hit the market. In the next few years, the same features may be available through your earbuds.

Back in the late 18th century, though, scientists investigating the heart's electrical activity would puncture the heart muscle with wires. The technology for noninvasively capturing ECGs took decades to develop. British physiologist Augustus Waller is usually credited with recording the first human ECG in 1887, using a device called a capillary electrometer. Invented in the early 1870s by a doctoral student at Heidelberg University named Gabriel Lippmann, the capillary electrometer was able to measure tiny electrical changes when a voltage was applied.

In 1893, Dutch physiologist Willem Einthoven refined the capillary electrometer to show the voltage graph of a heartbeat cycle and the cycle’s five distinct deflections. He named these points P, Q, R, S, and T, a convention that persists to this day. Einthoven went on to develop an even more sensitive string galvanometer, and he won the 1924 Nobel Prize in Physiology or Medicine for his “discovery of the mechanism of the electrocardiogram.” (Einthoven’s string galvanometer was recently approved as an IEEE Milestone, awarded to significant achievements in electrotechnology.)

In the early 20th century, ECGs were large, fixed instruments that could weigh up to 600 pounds (272 kilograms). One misconception was that they were overly sensitive to vibration, so early installations tended to be placed in basements on concrete floors. Electrostatic charges could interfere with readings, so ECG machines were often enclosed in Faraday cages.

The first “portable” instrument, introduced in the late 1920s, was a General Electric model that used amplifier tubes instead of a string galvanometer. Although it weighed about 80 pounds (32 kg), it could be loaded on a cart and wheeled into a patient’s room, rather than the patient having to be transported to the equipment. In the 1930s, the Sanborn Co. and Cambridge Instrument Co. came out with the first truly portable ECG systems.

The rise of cardiology as a specialty

Despite this long history of invention, cardiology still did not exist as a specialty when Haywood was completing his residency at L.A. County General. He continued to look for ways to expand his educational opportunities. When he learned that the U.S. Public Health Service provided funding to create special units to care for heart attack patients, Haywood applied, and in 1966 the hospital opened a four-bed coronary care unit. He also secured funding from the Los Angeles chapter of the American Heart Association to start a nurse training program in cardiology.

At the time, the hospital’s standard treatment for acute myocardial infarction (heart attack) included four to six weeks of hospitalization. The goal of the new coronary care unit was to reduce the hospital’s 35 percent mortality rate among heart attack patients.

About 40 percent of those deaths were likely due to arrhythmias, which occurred even when patients were being closely observed. Haywood and his associates knew they needed a reliable way to continuously monitor the heart for rhythm changes. They developed the prototype digital heart monitor shown at top and began using it in the coronary care unit in 1969. The automated system detected heart-rhythm abnormalities and alerted nurses and doctors, either at the bedside or at a central monitoring station. The software for the monitor ran on computers supplied by Control Data Corp. and Raytheon. (Haywood donated the monitor to the Smithsonian National Museum of African American History and Culture in 2017.)

Haywood and his collaborators published widely about their work on the monitor, in Computers and Biomedical Research, Mathematical Biosciences, the Journal of the Association for the Advancement of Medical Instrumentation, and elsewhere. Haywood also presented their work at the Association for Computing Machinery's annual conference in 1972. The monitor influenced the design of commercial products, which were subsequently deployed at hospitals throughout the country and beyond.

The hospital’s coronary care unit and the cardiac-nurse training program were also successful, and mortality rates for cardiac patients declined significantly. Trainees went on to work at hospitals throughout Los Angeles County, and a number of them enjoyed successful careers in education and administration. As other hospitals in the region began creating their own coronary care units, friendly competition resulted in improved care for cardiac patients. 

L. Julian Haywood’s legacy of equitable health care for all

Over the course of his distinguished career, Haywood was keenly aware of the disparities in access to health care for racial minorities, as well as the difficulties that minorities faced in gaining acceptance by the medical profession. He was only the third Black internal medicine resident at L.A. County General (now known as Los Angeles County + University of Southern California Medical Center). Nearby, the USC School of Medicine (now the Keck School of Medicine of USC) had yet to graduate any Black medical students (although two were seniors at the time). When Haywood later joined the hospital’s teaching faculty, he was one of the few full-time faculty members who were minorities.

From the time he completed his residency, Haywood was an active member in the Charles R. Drew Medical Society, the Los Angeles affiliate of the National Medical Association. The NMA had formed in 1895 during a time of deep-seated racism and Jim Crow segregation, and its mission was to aid in the professional development of Black physicians, who were for many years excluded from membership and otherwise discriminated against by the American Medical Association.

In the 1960s, a major concern of the Drew Medical Society was the poor access to medical services in the Watts area of Los Angeles. Indeed, the lack of quality health care was one of the systemic injustices that fueled the Watts riots of 1965. In his memoir, Haywood recounts how white physicians tried to bar minority doctors from establishing practices there. Following the riots, the McCone Commission recommended the building of a hospital to serve the needs of South-Central L.A. residents, resulting in the Martin Luther King, Jr. Community Hospital, which opened in 1972.

In 2018, reflecting on his long career, Haywood published an article on the factors leading to a dramatic decline in the death rates for heart disease in Los Angeles County. He concluded that the development of coronary care units in the 1960s helped usher in a new focus on cardiology, leading to progress in angiography (medical imaging of blood vessels and organs), angioplasty (ballooning of arterial obstructions), bypass surgery, and pharmaceuticals to control blood pressure and cholesterol. Additionally, the success of cardiology and cardiothoracic surgery programs at universities spurred research into pacemakers, heart valve replacements, and other lifesaving technologies. 

Such progress is impressive and hard won, but we shouldn’t forget: Heart disease is the leading cause of death in the United States as well as the world. The statistics are particularly grim for African Americans, who are more likely to suffer from conditions like high blood pressure, obesity, and diabetes that increase the risk of heart disease. As Haywood noted in his 2018 essay, “The assault on heart disease and high blood pressure continues, as it must.”

 

Editor’s Note: L. Julian Haywood died of Covid-19 on 24 December 2020. He was 93.

An abridged version of this article appears in the February 2021 print issue as “The Measured Heart.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

 

How Counting Calories Became a Science

Post Syndicated from Allison Marsh original https://spectrum.ieee.org/tech-history/dawn-of-electronics/how-counting-calories-became-a-science

A new year means another attempt to pay more attention to what I eat in the never-ending hope of trimming my waistline. Naturally my thoughts turn to counting calories and a renewed fascination with Wilbur O. Atwater. It was Atwater who introduced American audiences to the Calorie as a unit of energy for food. (More on the distinction between “Calorie” and “calorie” in a bit.)

Atwater was a professor of chemistry at Wesleyan University in Connecticut from 1873 to 1907. His interest in nutrition and metabolism evolved over time, especially after he traveled to Munich and learned of German techniques for analyzing the nutritional content of food. He investigated the correlation between the chemical energy from food and manual labor, because he wanted to make sure that workers had an appropriate diet.

In 1887, Atwater published an article called "The Potential Energy of Food" [PDF], in which he defined the Calorie as the amount of heat that would raise the temperature of a kilogram of water one degree centigrade (or a pound of water 4 degrees Fahrenheit). He wanted to show that a unit of heat could be a unit of mechanical energy, and so he also defined a Calorie as 1.53 foot-tons—that is, the energy needed to lift a ton by one foot.
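As a rough check on that figure (a sketch, assuming a short ton of 2,000 pounds, about 1.36 joules per foot-pound, and the modern value of roughly 4,184 joules per kilogram-calorie):

\[
1\ \text{foot-ton} = 2000\ \text{ft}\cdot\text{lbf} \approx 2712\ \text{J}, \qquad
\frac{4184\ \text{J per Calorie}}{2712\ \text{J per foot-ton}} \approx 1.54\ \text{foot-tons per Calorie},
\]

which agrees closely with Atwater's figure of 1.53.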

In his article, Atwater listed the Calorie counts of various foods by the pound, such as beef, round, rather lean (807); butter (3,691); cows' milk, both regular (308) and skimmed (176); oatmeal (1,830); and turnips (139). His figures were based on his estimates of the amounts of nutrients (protein, fat, and carbohydrates) in each food, plus some experiments. Although modern calorie counts for these foods differ slightly from Atwater's, we still use his estimates that a gram of protein contains 4.1 Calories and a gram of lipids 9.3 Calories.
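To make that arithmetic concrete, here is a minimal sketch, in Python, of Calorie counting with Atwater-style factors. The protein (4.1) and fat (9.3) values are the ones cited above; the carbohydrate value (4.1) and the example milk figures are assumptions added here for illustration, not figures from Atwater's article.

# Calories (kcal) per gram, using the factors discussed above.
ATWATER_FACTORS = {
    "protein": 4.1,
    "fat": 9.3,
    "carbohydrate": 4.1,  # assumed companion value, not cited in the article
}

def calories(grams_by_nutrient):
    """Estimate a food's Calorie (kcal) content from its macronutrient grams."""
    return sum(ATWATER_FACTORS[nutrient] * grams
               for nutrient, grams in grams_by_nutrient.items())

# Example, with approximate figures for a cup of whole milk: about 156 Calories.
print(calories({"protein": 8.0, "fat": 8.0, "carbohydrate": 12.0}))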

One instrument Atwater relied on in his experiments was a bomb calorimeter. Already in widespread use at the time, it measures the heat given off during a reaction. A sample is placed in a steel reaction vessel called a bomb, which is then immersed in water. An electric current ignites the sample, and the water bath absorbs the resulting heat. The temperature of the water is recorded at defined intervals.

Atwater, along with Wesleyan physicist Edward Rosa and chemist Francis Benedict, also developed a respiration calorimeter. It let the scientists estimate the calories consumed by a human subject, by measuring the person's intake of oxygen, output of carbon dioxide, and the resulting quantity of heat produced. The calorimeter was a copper box 6 feet high, 4 feet wide, and 7 feet deep (1.8 by 1.2 by 2.1 meters) that was encased in wood and zinc to help maintain a constant temperature. The subject would remain in the box for up to 12 days, doing various assigned tasks, from lying at rest to exercising. These studies became the foundation for understanding metabolic rates.

And that’s how we came to count calories. Except that it’s not.

Defining the calorie took decades—and then the joule came along

Atwater did not coin the term Calorie. That distinction usually goes to Nicolas Clément, a professor of chemistry at the Conservatoire des Arts et Métiers in Paris. In 1819 Clément was teaching a course on industrial chemistry, and he needed a unit of heat for a discussion of how steam engines convert heat into work. He arrived at the calorie, which he defined as the quantity of heat needed to raise the temperature of 1 kg of water by 1 °C—the same definition Atwater later used. Clément, though, was more precise in specifying that the measurement was taken from 0 to 1 °C. Scientists accepted Clément’s definition, and the calorie entered into French physics textbooks.

Among those texts were two by the French physicist Adolphe Ganot that were translated into multiple languages. Universities in Europe and the United States used the popular textbooks into the early 20th century. And so Clément’s calorie entered the English language.

Meanwhile, though, another definition of the calorie was circulating. In 1852, the French chemist Pierre Favre and the French physicist Johann Silbermann defined their calorie as the amount of energy needed to raise the temperature of one gram of water by one degree centigrade—a difference in scale of 1,000! Favre and Silbermann published widely, and German scientists adopted their definition.

By the 1870s, the competing definitions of the calorie pushed French chemist Marcellin Berthelot to make a distinction. He defined the calorie (with a lowercase c) as a gram-calorie à la Favre and Silbermann, and the Calorie (capitalized) as the kilogram-calorie à la Clément. The calorie in turn became known as the “small calorie,” while the Calorie became known as the “large calorie.” In 1894, U.S. physician Joseph Raymond, in his classic textbook A Manual of Human Physiology, proposed calling the large calorie the kilocalorie, but the term didn’t catch on until some years later.

Meanwhile, the British Association for the Advancement of Science was working on an entirely different energy unit: the joule. In 1882 William Siemens proposed the joule during his inauguration speech as chairman of the BAAS. Confounded by the calorie, Siemens argued: “The inconvenience of a unit so entirely arbitrary is sufficiently apparent to justify the introduction of one based on the electro-magnetic system.” He defined a joule as the energy dissipated as heat when an electric current of one ampere passes through a resistance of one ohm for one second.
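Set side by side, the competing units relate as follows (the joule equivalents are approximate modern values, given here only for reference):

\[
1\ \text{J} = 1\ \text{A}^2 \cdot \Omega \cdot \text{s} = 1\ \text{W}\cdot\text{s}, \qquad
1\ \text{calorie (small)} \approx 4.184\ \text{J}, \qquad
1\ \text{Calorie (large)} = 1\ \text{kcal} = 1000\ \text{calories} \approx 4184\ \text{J}.
\]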

And so when Atwater was conducting his nutritional investigations of food, he had his choice of units for heat energy. He would have read about Clément’s calorie in Ganot’s translated textbooks. He would have come across Favre and Silbermann’s calorie during his postdoctoral training in Germany. And as a man of science, he likely would have heard about the proposed joule, although Siemens’s definition wasn’t adopted until 1889, at the second International Electrical Congress.

James L. Hargrove of the University of Georgia has investigated the history of the calorie and offers a few suggestions as to why Atwater chose the Calorie. For one, it was the only unit of energy listed in American dictionaries. Perhaps more importantly, Hargrove suggests, the Calorie was of a manageable scale around which Atwater could create a recommended daily intake of 2,000 Calories. A daily intake of 2 million calories, on the other hand, would have seemed onerous.

U.S. nutritionists followed Atwater’s lead, bolstered by tables that Atwater and his workers prepared for the U.S. Department of Agriculture listing the Calorie counts of over 500 foods. Atwater’s daughter Helen assisted in his lab for a decade. After her father’s death in 1907, she went to work for the USDA’s Bureau of Home Economics.

Both the Calorie and the calorie were officially rendered obsolete in 1948 when the international scientific community adopted the joule as the standard unit of energy. As Siemens had noted, it was just too confusing to have two different definitions distinguished only by capitalization and orders of magnitude. To this day, though, U.S. nutrition labels continue to report Calories, while other countries give values in both kcals and joules.

Calorimeters guided the design of steam generators

Nutritionists weren’t the only ones interested in calories. At the turn of the last century, demand for electricity was booming, and municipalities in many countries were building new power plants. With the invention of the steam turbine, generators became more complex and boilers operated at higher temperatures and pressures. Engineers desperately needed data about their steam equipment, yet they lacked internationally standardized values for the properties of water and steam. And so they turned to calorimeters that went back to Clément’s original intent: measuring the work done by steam.

Beginning in 1921 and continuing for almost two decades, Nathan Osborne, Harold Stimson, and Defoe Ginnings worked at the U.S. National Bureau of Standards (now the National Institute of Standards and Technology) on this precise problem. The team developed the elegant calorimeter pictured at top to study the heat capacity and heat of vaporization of water at temperatures up to 100 °C.

The instrument, which has been cut away to show the interior, worked similarly to the bomb calorimeters that Atwater used. The spherical inner shell held the water sample. Energy was added by an electric current, and the scientists observed the change of state. Their data guided the design and evaluation of steam power equipment into the 1960s.

As Osborne noted in his 1925 report “Calorimetry of a Fluid” [PDF], the adoption and refinement of electric heaters, resistance thermometers, and thermocouples allowed calorimeters to become a reliably accurate means of measurement in thermal research.

And so, whether calculating the energy in food or the heat capacity of water, calorimeters have been valuable instruments for chemists, physicists, and engineers for over two centuries. As we enter a new year, and as many of us take a renewed interest in calories, it seems only fitting to pay homage to the instruments that count them.

An abridged version of this article appears in the January 2021 print issue as “Counting Calories.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

When a Giant Mylar Balloon Was the Coolest Thing in Space

Post Syndicated from Allison Marsh original https://spectrum.ieee.org/tech-history/space-age/when-a-giant-mylar-balloon-was-the-coolest-thing-in-space

During the past few months, complaints have been mounting about delays and out-of-date equipment at the U.S. Postal Service. Such complaints were also familiar to Arthur Summerfield, who served as U.S. Postmaster General from 1953 to 1961. Summerfield had inherited a system that was still largely processing the mail by hand, and he set out to automate and mechanize the work. But he also had big dreams of the next frontier in mail: space. 

How do you deliver mail by satellite?

In 1960 Summerfield partnered with NASA to use the space agency’s Echo 1 satellite for Speed Mail, a service that would allow customers to send letters rapidly across the country. Echo 1 was an early experiment in satellite communication. Launched inside a metal sphere [a spare of which is shown above], it inflated in low Earth orbit into a giant Mylar balloon, 100 feet (30.5 meters) across. Project personnel dubbed it a “satelloon.” It circled the globe every 2 hours, reflecting radio, telephone, and TV signals on two channels—960 megahertz and 2390 megahertz—between ground stations. 

How did Speed Mail work? Patrons would compose their Speed Mail missives on special stationery, similar to the Victory Mail forms used to expedite letters to U.S. soldiers during World War II. The sender would take the Speed Mail form to a designated post office. In accordance with privacy laws and expectations, postal employees would never see the contents of the sealed letter. Instead, a special machine would automatically open the letter, scan it, and then beam the contents via Echo 1 to the destination post office, where it would be printed, sealed, and delivered.

On 9 November 1960, the postmaster sent the first Speed Mail letter. Addressed to “Mr. and Mrs. America,” the letter urged people to post their holiday cards and presents early. That remains sound advice even in the Internet age and certainly during a global pandemic.

Summerfield’s message originated at a postal substation in the Federal Center Building in Washington, D.C., from which it was sent by telephoto circuit—what we would call a fax machine—to the Naval Research Laboratory at Stump Neck, Md. From there it was transmitted to Echo, which dutifully reflected the contents back down to Earth. The Bell Telephone Laboratories station in Holmdel, N.J., picked up the signal and converted it back into a telephoto message. It then traveled over phone lines to nearby Newark, and a facsimile receiver reprinted the original message. Quite a journey for a single letter.

 

Summerfield envisioned a network of 71 Speed Mail stations set up in post offices throughout the United States. He said that each Speed Mail machine would be able to handle up to 15 letters per hour, simultaneously sending and receiving messages. He expected Speed Mail to become popular with the public, estimating that three quarters of the volume would be civilian mail, with communications among defense agencies making up the rest.

Speed Mail was not Summerfield's first foray into experimental delivery methods. A year earlier, he had launched Missile Mail, with a payload of 3,000 letters tucked inside a Regulus I missile. Shortly before noon on 8 June 1959, the U.S.S. Barbero fired the missile from international waters in the Atlantic Ocean. Twenty-two minutes later, the missile successfully landed at the naval station at Mayport, near Jacksonville, Fla.

“This peacetime employment of a guided missile for the important and practical purpose of carrying mail, is the first known official use of missiles by any Post Office Department of any nation,” announced an enthusiastic Summerfield. He speculated that “before man reaches the moon, mail will be delivered within hours from New York to California, to Britain, to India or Australia by guided missiles.”

Of course that didn’t happen. Missile Mail was a one-hit wonder. Similarly, Congress never approved funding to expand Speed Mail beyond the initial tests. In January 1961, newly inaugurated President John F. Kennedy appointed J. Edward Day as Postmaster General to replace Summerfield. Day’s approach to mail processing was far more down-to-earth. Americans can thank him for introducing 5-digit postal codes, known as Zone Improvement Plan, or ZIP, codes, that sorted the mail into geographic regions.

Project Echo’s rich legacy: satellite spotting, the big bang, and trippy public art

Meanwhile, Echo 1 continued in its orbit for much longer than its expected 1-year life span and helped collect a wealth of information about Earth’s atmosphere. The satelloon didn’t have any fancy instrumentation. It carried only temperature and pressure gauges, plus two 107.9-MHz transmitting beacons powered by 70 solar cells and five batteries that scientists could use to track the satellite. Solar pressure would gradually nudge Echo off orbit, and scientists measured the change to quantify the effect. They also used it for triangulation, to create more accurate maps. Perhaps most important, Echo proved that the ionosphere didn’t cause undue electrical interference for signals reflected from space.

Project Echo was a joint collaboration of NASA, the Jet Propulsion Lab in Pasadena, Calif., and the U.S. Naval Research Laboratory in Washington, D.C. Bell Labs developed the instruments to transmit and receive the signals. In the Mojave Desert, at JPL’s Goldstone Tracking Station, the signal passed through a pair of 85-foot (26-meter) parabolic dishes. On the other side of the country, in Holmdel, N.J., a Bell Labs transmitter and a steerable, 15-meter-long “horn” receiver picked up the reflected signals. The horn was plagued by faint static, so radio astronomers Arno Penzias and Robert Wilson spent significant effort investigating the cause, eventually concluding in 1965 that it was detecting microwave radiation left over from the creation of the universe. For this confirmation of the big bang, Penzias and Wilson shared the Nobel Prize in Physics in 1978.

In 1964, NASA launched Echo 1’s slightly larger successor, Echo 2, into a near-polar orbit. Both satellites were easily visible from the ground without the aid of binoculars or telescopes. Echo-watching became a popular pastime, with newspapers publishing timetables of when the satellites would pass overhead. Echo 1’s orbit eventually decayed, and the satellite burned up in the atmosphere on 24 May 1968. Echo 2 followed a year later, ending its mission on 7 June 1969.

Meanwhile, the company that manufactured the Mylar balloons, G.T. Schjeldahl Co., went on to create a trippy spherical mirror for the 1970 World’s Fair. [See “When Artists, Engineers, and PepsiCo Collaborated, Then Clashed at the 1970 World’s Fair.”]

At the beginning, Project Echo was something out of science fiction—literally, it was described in an essay by J.J. Coupling titled "Don't Write: Telegraph!" in the March 1952 issue of Astounding Science Fiction, in which the author envisioned a world enriched by interplanetary communication. "J.J. Coupling" was the pseudonym of John R. Pierce, Bell Labs' director of electronics research, and in his day job he worked to make that world a reality. Echo proved the feasibility of commercial satellite communications, but it was just the beginning. Eventually, active communications satellites that could amplify and rebroadcast messages proved more useful than passive satelloons that simply reflected an echo.

An abridged version of this article appears in the December 2020 print issue as “Special Delivery by Satellite.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

When X-Rays Were All the Rage, a Trip to the Shoe Store Was Dangerously Illuminating

Post Syndicated from Allison Marsh original https://spectrum.ieee.org/tech-history/heroic-failures/when-xrays-were-all-the-rage-a-trip-to-the-shoe-store-was-dangerously-illuminating

How do those shoes fit? Too tight in the toes? Too wide in the heel? Step right up to the Foot-O-Scope to eliminate the guesswork and take a scientific approach to proper shoe fitting!

When the German engineer and physicist Wilhelm Röntgen accidentally discovered a mysterious light that would pass through most substances and leave behind a ghostly image of an object’s interior, I doubt he had shoes in mind. Indeed, he didn’t even know what the light was, so he called it “X-rays,” the “X” standing for the unknown. That name stuck for English speakers, although in many languages they’re known as Röntgen rays. 8 November marks the 125th anniversary of his discovery.

Röntgen published his findings on 28 December 1895, and within a month, “On a New Kind of Rays” had been translated into English and published in Nature. Three weeks after that, Science reprinted it. Word also spread quickly in the popular press about this wondrous light that allowed you to see inside the human body. Similar to Marie and Pierre Curie, Röntgen refused to take out any patents so that humanity could benefit from this new method for querying nature. Scientists, engineers, and medical doctors dove into X-ray research headlong.

Experimenters quickly realized that X-rays could produce still images, called radiographs, as well as moving images. The object of interest was placed between an X-ray beam and a fluorescent screen. Röntgen had been experimenting with cathode rays and Crookes tubes when he first saw the glow on a screen coated with barium platinocyanide. It took a few weeks of experimenting to capture clear images on a photographic plate. His first X-ray image was of his wife’s hand, distinctly showing the bones and a ring.

Viewing a moving image was simpler: You just looked directly at the fluorescent screen. Thomas Edison, an early X-ray enthusiast, coined the term fluoroscopy for this new technique, which was developed simultaneously in February 1896 in Italy and the United States.

Less than a year after Röntgen’s discovery, William Morton, a medical doctor, and Edwin W. Hammer, an electrical engineer, rushed to publish The X-Ray; or Photography of the Invisible and Its Value in Surgery, which described the necessary apparatus and techniques to produce radiographs. Among the book’s numerous illustrations was a radiograph of a woman’s foot inside a boot. Morton and Hammer’s textbook became popular among surgeons, doctors, and dentists eager to apply this new technology.

From early on, feet in shoes were a popular X-ray subject

A push from the military during World War I helped establish the fluoroscope for shoe fitting. In his highly regarded 1914 publication A Textbook of Military Hygiene and Sanitation, for instance, Frank Keefer included radiographs of feet in boots to highlight proper and ill-fitting footwear. But Keefer stopped short of recommending that every soldier’s foot be imaged to check for fit, as Jacalyn Duffin and Charles R. R. Hayter (both historians and medical doctors) detail in their article “Baring the Sole: The Rise and Fall of the Shoe-Fitting Fluoroscope” (Isis, June 2000).

Jacob J. Lowe, a doctor in Boston, used fluoroscopy to examine the feet of wounded soldiers without removing their boots. When the war ended, Lowe adapted the technology for shoe shops, and he filed for a U.S. patent in 1919, although it wasn’t granted until 1927. He named his device the Foot-O-Scope. Across the Atlantic, inventors in England applied for a British patent in 1924, which was awarded in 1926. Meanwhile, Matthew B. Adrian, inventor of the shoe fitter shown at top, filed a patent claim in 1921, and it was granted in 1927.

Before long, two companies emerged as the leading producers of shoe-fitting fluoroscopes: the Pedoscope Co. in England and X-Ray Shoe Fitter Inc. in the United States. The basic design included a large wooden cabinet with an X-ray tube in its base and a slot where customers would place their shoe-clad feet. When the sales clerk flipped the switch to activate the X-ray stream, the customer could view the image on a fluorescent screen, showing the bones of the feet and the outline of the shoes. The devices usually had three eyepieces so that the clerk, customer, and a third curious onlooker (parent, spouse, sibling) could all view the image simultaneously.

The machines were heralded as providing a more “scientific” method of fitting shoes. Duffin and Hayter argue, however, that shoe-fitting fluoroscopy was first and foremost an elaborate marketing scheme to sell shoes. If so, it definitely worked. My mother fondly remembers her childhood trips to Wenton’s on Bergen Avenue in Jersey City to buy saddle shoes. Not only did she get to view her feet with the fancy technology, but she was given a shoe horn, balloon, and lollipop. Retailers banked on children begging their parents for new shoes.

Radiation risks from shoe-fitting fluoroscopes were largely ignored

Although the fluoroscope appeared to bring scientific rigor to the shoe-fitting process, there was nothing medically necessary about it. My mother grudgingly acknowledges that the fluoroscope didn’t help her bunions in the least. Worse, the unregulated radiation exposure put countless customers and clerks at risk for ailments including dermatitis, cataracts, and, with prolonged exposure, cancer.

The amount of radiation exposure depended on several things, including the person’s proximity to the machine, the amount of protective shielding, and the exposure time. A typical fitting lasted 20 seconds, and of course some customers would have several fittings before settling on just the right pair. The first machines were unregulated. In fact, the roentgen (R) didn’t become the internationally accepted unit of radiation until 1928, and the first systematic survey of the machines wasn’t undertaken until 20 years later. That 1948 study of 43 machines in Detroit showed ranges from 16 to 75 roentgens per minute. In 1946, the American Standards Association had issued a safety code for industrial use of X-rays, limiting exposure to 0.1 R per day.
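A rough comparison, based only on the figures above, shows how far out of line those machines were (the Detroit survey measured dose rates at the feet, while the 1946 code governed occupational whole-body exposure, so this is illustrative rather than exact):

\[
20\ \text{s} \times \frac{16\ \text{to}\ 75\ \text{R}}{60\ \text{s}} \approx 5\ \text{to}\ 25\ \text{R per fitting},
\]

or roughly 50 to 250 times the 0.1 R that the industrial safety code allowed per day.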

But some experts had warned about the dangers of X-rays early on. Edison was one. He was already an established inventor when Röntgen made his discovery, and for several years, Edison’s lab worked nonstop on X-ray experiments. That work came to a halt with the decline and eventual death of Clarence M. Dally.

Dally, a technician in Edison’s lab, ran numerous tests with the fluoroscope, regularly exposing himself to radiation for hours on end. By 1900 he had developed lesions on his hands. His hair began to fall out, and his face grew wrinkled. In 1902, his left arm had to be amputated, and the following year his right arm. He died in 1904 at the age of 39 from metastatic skin cancer. The New York Times called him “a martyr to science.” Edison famously stated, “Don’t talk to me about X-rays. I am afraid of them.”

Clarence Dally may have been the first American to die of radiation sickness, but by 1908 the American Roentgen Ray Society reported 47 fatalities due to radiation. In 1915 the Roentgen Society of Great Britain issued guidelines to protect workers from overexposure to radiation. These were incorporated into recommendations made in 1921 by the British X-Ray and Radium Protection Committee, a group with a similar mission. Comparable guidelines were established in the United States in 1922.

For those concerned about radiation exposure, the shoe-fitting fluoroscope seemed a dangerous machine. Christina Jordan was the wife of Alfred Jordan, a pioneer in radiographic disease detection, and in 1925, she wrote a letter to The Times of London decrying the dangerous levels of X-ray radiation to which store clerks were being exposed. Jordan noted that while a scientist who dies of radiation sickness is celebrated as “a martyr to science,” a “‘martyr to commerce’ stands on a different footing.”

Charles H. Baber, a merchant on Regent Street who claimed to be the first shoe retailer to use X-rays, replied with a letter the next day. Having used the machine since 1921, he wrote, he saw no harm to himself or his employees. The Times also ran a letter from J. Edward Seager of X-Rays Limited (as the Pedoscope’s manufacturer was then called), noting that the machine had been tested and certified by the National Physical Laboratory. This fact, he wrote, “should be conclusive evidence that there is no danger whatever to either assistants or users of the pedoscope.”

And that, seemingly, was that. The shoe-fitting fluoroscope flourished in the retail landscape with virtually no oversight. By the early 1950s, an estimated 10,000 machines were operating in the United States, 3,000 in the United Kingdom, and 1,000 in Canada.

After World War II and the dropping of the atomic bombs, though, Americans began to pull back from their love of all things irradiating. The shoe-fitting fluoroscope did not escape notice. As mentioned, the American Standards Association issued guidance on the technology in 1946, and reports published in the Journal of the American Medical Association and the New England Journal of Medicine also raised the alarm. States began passing legislation requiring that the machines be operated only by licensed physicians, and in 1957, Pennsylvania banned them entirely. But as late as 1970, 17 states still allowed them. Eventually, a few specimens made their way into museum collections; the one at top is from the Health Physics Historical Instrumentation Museum Collection at the Oak Ridge Associated Universities.

A video produced by the U.S. Food and Drug Administration nicely captures how regulators finally caught up with the machine.

The shoe-fitting fluoroscope is a curious technology. It seemed scientific but it wasn’t. Its makers claimed it wasn’t dangerous, but it was. In the end, it proved utterly superfluous—a competent salesperson could fit a shoe just as easily and with less fuss. And yet I understand the allure. I’ve been scanned for insoles to help my overpronated feet. I’ve been videotaped on a treadmill to help me select running shoes. Was that science? Did it help? I can only hope. I’m pretty sure at least that it did no harm.

An abridged version of this article appears in the November 2020 print issue as “If the X-Ray Fits.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

The 11 Greatest Vacuum Tubes You’ve Never Heard Of

Post Syndicated from Carter M. Armstrong original https://spectrum.ieee.org/tech-history/space-age/the-11-greatest-vacuum-tubes-youve-never-heard-of

In an age propped up by quintillions of solid-state devices, should you even care about vacuum tubes? You definitely should! For richness, drama, and sheer brilliance, few technological timelines can match the 116-year (and counting) history of the vacuum tube. To prove it, I’ve assembled a list of vacuum devices that over the past 60 or 70 years inarguably changed the world.

And just for good measure, you’ll also find here a few tubes that are too unique, cool, or weird to languish in obscurity.

Of course, anytime anyone offers up a list of anything—the comfiest trail-running shoes, the most authentic Italian restaurants in Cleveland, movies that are better than the book they’re based on—someone else is bound to weigh in and either object or amplify. So, to state the obvious: This is my list of vacuum tubes. But I’d love to read yours. Feel free to add it in the comments section at the end of this article.

My list isn’t meant to be comprehensive. Here you’ll find no gas-filled glassware like Nixie tubes or thyratrons, no “uber high” pulsed-power microwave devices, no cathode-ray display tubes. I intentionally left out well-known tubes, such as satellite traveling-wave tubes and microwave-oven magnetrons. And I’ve pretty much stuck with radio-frequency tubes, so I’m ignoring the vast panoply of audio-frequency tubes—with one notable exception.

But even within the parameters I’ve chosen, there are so many amazing devices that it was rather hard to pick just eleven of them. So here’s my take, in no particular order, on some tubes that made a difference.


Medical Magnetron

When it comes to efficiently generating coherent radio-frequency power in a compact package, you can’t beat the magnetron.

The magnetron first rose to glory in World War II, to power British radar. While the magnetron’s use in radar began to wane in the 1970s, the tube found new life in industrial, scientific, and medical applications, a role that continues today.

It is for this last use that the medical magnetron shines. In a linear accelerator, it creates a high-energy electron beam. When electrons in the beam are deflected by the nuclei in a target—consisting of a material having a high atomic number, such as tungsten—copious X-rays are produced, which can then be directed to kill cancer cells in tumors. The first clinical accelerator for radiotherapy was installed at London’s Hammersmith Hospital in 1952. A 2-megawatt magnetron powered the 3-meter-long accelerator.

High-power magnetrons continue to be developed to meet the demands of radiation oncology. The medical magnetron shown here, manufactured by e2v Technologies (now Teledyne e2v), generates a peak power of 2.6 MW, with an average power of 3 kilowatts and an efficiency of more than 50 percent. Just 37 centimeters long and weighing about 8 kilograms, it’s small and light enough to fit the rotating arm of a radiotherapy machine.
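Those peak and average figures also tell you how the tube is operated: it fires in short pulses and idles in between. Here is a quick sketch of my own arithmetic, using only the numbers quoted above.

```python
# Duty cycle implied by the quoted medical-magnetron figures:
# 2.6 MW peak power, 3 kW average power.
peak_w = 2.6e6
average_w = 3e3

duty_cycle = average_w / peak_w
print(f"Duty cycle: {duty_cycle:.2%}")  # ~0.12%: the tube is "on" for only
                                        # about a millisecond out of each second
```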


Gyrotron

Conceived in the 1960s in the Soviet Union, the gyrotron is a high-power vacuum device used primarily for heating plasmas in nuclear-fusion experiments, such as ITER, now under construction in southern France. These experimental reactors can require temperatures of up to 150 million °C.

So how does a megawatt-class gyrotron work? The name provides a clue: It uses beams of energetic electrons rotating or gyrating in a strong magnetic field inside a cavity. (We tube folks love our -trons and -trodes.) The interaction between the gyrating electrons and the cavity’s electromagnetic field generates high-frequency radio waves, which are directed into the plasma. The high-frequency waves accelerate the electrons within the plasma, heating the plasma in the process.

A tube that produces 1 MW of average power is not going to be small. Fusion gyrotrons typically stand around 2 to 2.5 meters tall and weigh around a metric ton, including a 6- or 7-tesla superconducting magnet.

In addition to heating fusion plasmas, gyrotrons are used in material processing and nuclear magnetic resonance spectroscopy. They have also been explored for nonlethal crowd control, in the U.S. military’s Active Denial System. This system projects a relatively wide millimeter-wave beam, perhaps a meter and a half in diameter. The beam is designed to heat the surface of a person’s skin, creating a burning sensation but without penetrating into or damaging the tissue below.


Mini Traveling-Wave Tube

As its name suggests, a traveling-wave tube (TWT) amplifies signals through the interaction between an electric field of a traveling, or propagating, electromagnetic wave in a circuit and a streaming electron beam. [For a more detailed description of how a TWT works, see “The Quest for the Ultimate Vacuum Tube,” IEEE Spectrum, December 2015.]

Most TWTs of the 20th century were designed for extremely high power gain, with amplification ratios of 100,000 or more. But you don’t always need that much gain. Enter the mini TWT, shown here in an example from L3Harris Electron Devices. With a gain of around 1,000 (or 30 decibels), a mini TWT is meant for applications where you need output power in the 40- to 200-watt range, and where small size and lower voltage are desirable. A 40-W mini TWT operating at 14 gigahertz, for example, fits in the palm of your hand and weighs less than half a kilogram.
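As an aside, the ratio-to-decibel conversion used here is just the standard power-gain formula; a minimal sketch:

```python
import math

def power_gain_db(ratio):
    """Power gain expressed in decibels: 10 * log10(Pout / Pin)."""
    return 10 * math.log10(ratio)

print(f"{power_gain_db(1_000):.0f} dB")    # 30 dB -- a mini TWT
print(f"{power_gain_db(100_000):.0f} dB")  # 50 dB -- a conventional high-gain TWT
```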

As it turns out, military services have a great need for mini TWTs. Soon after their introduction in the 1980s, mini TWTs were adopted in electronic warfare systems on planes and ships for protection against radar-guided missiles. In the early 1990s, device designers began integrating mini TWTs with a compact high-voltage power supply to energize the device and a solid-state amplifier to drive it. The combination created what is known as a microwave power module, or MPM. Due to their small size, low weight, and high efficiency, MPM amplifiers found immediate use in radar and communications transmitters aboard military drones, such as the Predator and Global Hawk, as well as in electronic countermeasures.


Accelerator Klystron

The klystron helped usher in the era of big science in high-energy physics. Klystrons convert the kinetic energy of an electron beam into radio-frequency energy. The device has much greater output power than does a traveling-wave tube or a magnetron. The brothers Russell and Sigurd Varian invented the klystron in the 1930s and, with others, founded Varian Associates to market it. These days, Varian’s tube business lives on at Communications and Power Industries.

Inside a klystron, electrons emitted by a cathode accelerate toward an anode to form an electron beam. A magnetic field keeps the beam from expanding as it travels through an aperture in the anode to a beam collector. In between the anode and collector are hollow structures called cavity resonators. A high-frequency signal is applied to the resonator nearest the cathode, setting up an electromagnetic field inside the cavity. That field modulates the electron beam as it passes through the resonator, causing the speed of the electrons to vary and the electrons to bunch as they move toward the other cavity resonators downstream. Most of the electrons decelerate as they traverse the final resonator, giving up their kinetic energy to the electromagnetic field oscillating in that cavity. The result is an output signal that is much greater than the input signal.

In the 1960s, engineers developed a klystron to serve as the RF source for a new 3.2-kilometer linear particle accelerator being built at Stanford University. Operating at 2.856 gigahertz and using a 250-kilovolt electron beam, the SLAC klystron produced a peak power of 24 MW. More than 240 of them were needed to attain particle energies of up to 50 billion electron volts.

The SLAC klystrons paved the way for the widespread use of vacuum tubes as RF sources for advanced particle physics and X-ray light-source facilities. A 65-MW version of the SLAC klystron is still in production. Klystrons are also used for cargo screening, food sterilization, and radiation oncology.


Ring-Bar Traveling-Wave Tube

One Cold War tube that is still going strong is the huge ring-bar traveling-wave tube. This high-power tube stands over 3 meters from cathode to collector, making it the world’s largest TWT. There are 128 ring-bar TWTs providing the radio-frequency oomph for an exceedingly powerful phased-array radar at the Cavalier Air Force Station in North Dakota. Called the Perimeter Acquisition Radar Attack Characterization System (PARCS), this 440-megahertz radar looks for ballistic missiles launched toward North America. It also monitors space launches and orbiting objects as part of the Space Surveillance Network. Built by GE in 1972, PARCS tracks more than half of all Earth-orbiting objects, and it’s said to be able to identify a basketball-size object at a range of 2,000 miles (3,218 km).

An even higher-frequency version of the ring-bar tube is used in a phased-array radar on remote Shemya Island, about 1,900 km off the coast of Alaska. Known as Cobra Dane, the radar monitors non-U.S. ballistic missile launches. It also collects surveillance data on space launches and satellites in low Earth orbit.

The circuit used in this behemoth is known as a ring bar, which consists of circular rings connected by alternating strips, or bars, repeated along its length. This setup provides a higher field intensity across the tube’s electron beam than does a garden-variety TWT, in which the radio-frequency waves propagate along a helix-shaped wire. The ring-bar tube’s higher field intensity results in higher power gain and good efficiency. The tube shown here was developed by Raytheon in the early 1970s; it is now manufactured by L3Harris Electron Devices.


Ubitron

Fifteen years before the term “free-electron laser” was coined, there was a vacuum tube that worked on the same basic principle—the ubitron, which sort of stands for “undulating beam interaction.”

The 1957 invention of the ubitron came about by accident. Robert Phillips, an engineer at the General Electric Microwave Lab in Palo Alto, Calif., was trying to explain why one of the lab’s traveling-wave tubes oscillated and another didn’t. Comparing the two tubes, he noticed variations in their magnetic focusing, which caused the beam in one tube to wiggle. He figured that this undulation could result in a periodic interaction with an electromagnetic wave in a waveguide. That, in turn, could be useful for creating exceedingly high levels of peak radio-frequency power. Thus, the ubitron was born.

From 1957 to 1964, Phillips and colleagues built and tested a variety of ubitrons. The 1963 photo shown here is of GE colleague Charles Enderby holding a ubitron without its wiggler magnet. Operating at 70,000 volts, this tube produced a peak power of 150 kW at 54 GHz, a record power level that stood for well over a decade. But the U.S. Army, which funded the ubitron work, halted R&D in 1964 because there were no antennas or waveguides that could handle power levels that high.

Today’s free-electron lasers employ the same basic principle as the ubitron. In fact, in recognition of his pioneering work on the ubitron, Phillips received the Free-Electron Laser Prize in 1992. The FELs now installed in the large light and X-ray sources at particle accelerators produce powerful electromagnetic radiation, which is used to explore the dynamics of chemical bonds, to understand photosynthesis, to analyze how drugs bind with targets, and even to create warm, dense matter to study how gas planets form.


Carcinotron

The French tube called the carcinotron is another fascinating example born of the Cold War. Related to the magnetron, it was conceived by Bernard Epsztein in 1951 at Compagnie Générale de Télégraphie Sans Fil (CSF, now part of Thales).

Like the ubitron, the carcinotron grew out of an attempt to resolve an oscillation problem on a conventional tube. In this case, the source of the oscillation was traced to a radio-frequency circuit’s power flowing backward, in the opposite direction of the tube’s electron beam. Epsztein discovered that the oscillation frequency could be varied with voltage, which led to a patent for a voltage-tunable “backward wave” tube.

For about 20 years, electronic jammers in the United States and Europe employed carcinotrons as their source of RF power. The tube shown here was one of the first manufactured by CSF in 1952. It delivered 200 W of RF power in the S band, which extends from 2 to 4 GHz.

Considering the level of power they can handle, carcinotrons are fairly compact. Including its permanent focusing magnet, a 500-W model weighs just 8 kg and measures 24 by 17 by 15 cm, a shade smaller than a shoebox.

And the strange name? Philippe Thouvenin, a vacuum electronics scientist at Thales Electron Devices, told me that it comes from a Greek word, karkinos, which means crayfish. And crayfish, of course, swim backwards.


Dual-Mode Traveling-Wave Tube

The dual-mode TWT was an oddball microwave tube developed in the United States in the 1970s and ’80s for electronic countermeasures against radar. Capable of both low-power continuous-wave and high-power pulsed operation, this tube followed the old adage that two is better than one: It had two beams, two circuits, two electron guns, two focusing magnets, and two collectors, all enclosed in a single vacuum envelope.

The tube’s main selling point was versatility: a countermeasure system, for example, could operate in both continuous-wave and pulsed-power modes with a single transmitter and a simple antenna feed. A control grid in the electron gun in the shorter, pulsed-power section could quickly switch the tube from pulsed to continuous wave, or vice versa. Talk about packing a lot of capability into a small package. Of course, if the vacuum leaked, you’d lose both tube functions.

The tube shown here was developed by Raytheon’s Power Tube Division, which was acquired by Litton Electron Devices in 1993. Raytheon/Litton as well as Northrop Grumman manufactured the dual-mode TWT, but it was notoriously hard to produce in volume and was discontinued in the early 2000s.


Multi-Beam Klystron

Power, as many of us learned as youngsters, equals voltage times current. To get more power out of a vacuum tube, you can increase the voltage of the tube’s electron beam, but that calls for a bigger tube and a more complex power supply. Or you can raise the beam’s current, but that can be problematic too. For that, you need to ensure the device can support the higher current and that the required magnetic field can transport the electron beam safely through the tube’s circuit—that is, the part of the tube that interacts with the electron beam.

Adding to the challenge, a tube’s efficiency generally falls as the beam’s current rises because the bunching of the electrons required for power conversion suffers.

All these caveats apply if you’re talking about a conventional vacuum tube with a single electron beam and a single circuit. But what if you employ multiple beams, originating from multiple cathodes and traveling through a common circuit? Even if the individual beam currents are moderate, the total current will be high, while the device’s overall efficiency is unaffected.

Such a multiple-beam device was studied in the 1960s in the United States, the Soviet Union, and elsewhere. The U.S. work petered out, but activity in the USSR continued, leading to the successful deployment of the multi-beam klystron, or MBK. The Soviets fielded many of these tubes for radar and other uses.

A modern example of an MBK is shown above, produced in 2011 by the French firm Thomson Tubes Electroniques (now part of Thales). This MBK was developed for the German Electron Synchrotron facility (DESY). A later version is used at the European X-Ray Free Electron Laser facility. The tube has seven beams providing a total current of 137 amperes, with a peak power of 10 MW and average power of 150 kW; its efficiency is greater than 63 percent. By contrast, a single-beam klystron developed by Thomson provides 5 MW peak and 100 kW average power, with an efficiency of 40 percent. So, in terms of output power, one MBK is equivalent to two conventional klystrons.
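The article doesn’t quote the MBK’s beam voltage, but the power-equals-voltage-times-current relationship mentioned at the top of this entry lets us make a rough, purely illustrative estimate from the stated figures. Treat the result as back-of-envelope arithmetic, not a data-sheet value.

```python
# Illustrative estimate of the MBK's beam voltage from the quoted figures:
# 10 MW peak RF output, 137 A total beam current, efficiency just over 63%.
peak_rf_w = 10e6
total_current_a = 137
efficiency = 0.63                      # lower bound quoted in the article

beam_power_w = peak_rf_w / efficiency  # DC beam power needed, ~15.9 MW
beam_voltage_v = beam_power_w / total_current_a
print(f"Estimated beam voltage: ~{beam_voltage_v / 1e3:.0f} kV")  # ~116 kV
```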


Coaxitron

All the tubes I’ve described so far are what specialists call beam-wave devices (or stream-wave in the case of the magnetron). But before those devices came along, tubes had grids, which are transparent screenlike metal electrodes inserted between the tube’s cathode and anode to control or modulate the flow of electrons. Depending on how many grids the tube has, it is called a diode (no grids), a triode (one grid), a tetrode (two grids), and so on. Low-power tubes were referred to as “receiving tubes,” because they were typically used in radio receivers, or as switches. (Here I should note that what I’ve been referring to as a “tube” is known to the British as a “valve.”)

There were, of course, higher-power grid tubes. Transmitting tubes were used in—you guessed it—radio transmitters. Later on, high-power grid tubes found their way into a wide array of interesting industrial, scientific, and military applications.

Triodes and higher-order grid tubes all included a cathode, a current-control grid, and an anode or collector (or plate). Most of these tubes were cylindrical, with a central cathode, usually a filament, surrounded by electrodes.

The coaxitron, developed by RCA beginning in the 1960s, is a unique permutation of the cylindrical design. The electrons flow radially from the cylindrical coaxial cathode to the anode. But rather than having a single electron emitter, the coaxitron’s cathode is segmented along its circumference, with numerous heated filaments serving as the electron source. Each filament forms its own little beamlet of electrons. Because the beamlet flows radially to the anode, no magnetic field (or magnet) is required to confine the electrons. The coaxitron is thus very compact, considering its remarkable power level of around a megawatt.

A 1-MW, 425-MHz coaxitron weighed 130 pounds (59 kg) and stood 24 inches (61 cm) tall. While the gain was modest (10 to 15 dB), it was still a tour de force as a compact ultrahigh-frequency power booster. RCA envisioned the coaxitron as a source for driving RF accelerators, but it ultimately found a home in high-power UHF radar. Although coaxitrons were recently overtaken by solid-state devices, some are still in service in legacy radar systems.


Telefunken Audio Tube

An important conventional tube with grids resides at the opposite end of the power/frequency spectrum from megawatt beasts like the klystron and the gyrotron. Revered by audio engineers and recording artists, the Telefunken VF14M was employed as an amplifier in the legendary Neumann U47 and U48 microphones favored by Frank Sinatra and by the Beatles’ producer Sir George Martin. Fun fact: There’s a Neumann U47 microphone on display at the Abbey Road Studio in London. The “M” in the VF14M tube designation indicates it’s suitable for microphone use and was only awarded to tubes that passed screening at Neumann.

The VF14 is a pentode, meaning it has five electrodes, including three grids. When used in a microphone, however, it operates as a triode, with two of its grids strapped together and connected to the anode. This was done to exploit the supposedly superior sonic qualities of a triode. The VF14’s heater circuit, which warms the cathode so that it emits electrons, runs at 55 V. That voltage was chosen so that two tubes could be wired in series across a 110-V main to reduce power-supply costs, which was important in postwar Germany.

Nowadays, you can buy a solid-state replacement for the VF14M that even simulates the tube’s 55-V heater circuit. But can it replicate that warm, lovely tube sound? On that one, audio snobs will never agree.

This article appears in the November 2020 print issue as “The 9 Greatest Vacuum Tubes You’ve Never Heard Of.”

Pandemic Memories and Mortalities

Post Syndicated from Vaclav Smil original https://spectrum.ieee.org/tech-history/heroic-failures/pandemic-memories-and-mortalities

When SARS-CoV-2, a new coronavirus, began to spread outside China in the early months of 2020, both the news media and scientific publications looked back to the most lethal pandemic in modern history. It was called the Spanish flu, though it had nothing whatever to do with Spain.

That pandemic began early in 1918, its third and final wave was spent only a year later, and we will never know the exact death toll. Published estimates range from about 17 million to 100 million, with 50 million being perhaps the most likely total. If we divide that number by the 1.8 billion people who were alive in the world at the time, we get a global mortality rate of about 2.8 percent.

What I find strange is that the unfolding COVID-19 event has prompted relatively few references to the three latest pandemics, for which we do have good numbers. The first event, caused by the H2N2 virus, began to spread from China in February 1957 and ended in April 1958. The second, also beginning in China, came in May 1968, when the H3N2 virus surfaced; the first wave peaked before the year’s end, and in some countries the effects persisted until April 1970. Finally, there was the H1N1 virus, originating in Mexico and declared to be a pandemic by the World Health Organization on 11 June 2009; it stopped spreading before the end of the year.

The best reconstructions estimate that excess deaths—those presumably resulting from pandemics—ranged from 1.5 ­million to 4 million in the first of these three pandemics, from 1.1 million to 4 million in the second, and from 150,000 to 575,000 deaths in the third. The world’s population grew throughout these years, and adjusting for that changing number yields excess death rates of about 52 per 100,000 from 1957 to 1958, 30 per 100,000 from 1968 to 1970, and 2.3 to 5.2 per 100,000 in 2009.

In comparison, the worldwide death toll attributable to SARS-CoV-2 was about 865,000 by the end of August 2020. Given the global population of about 7.8 billion, this translates to an interim pandemic mortality of about 11 deaths per 100,000 people. Even if the total number of deaths were to triple, the mortality rate would be comparable to that of the 1968 pandemic, and it would be about two-thirds of the 1957 rate.
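The headline rates in the last few paragraphs follow directly from the stated inputs; here is a short sketch for anyone who wants to reproduce the arithmetic.

```python
# Reproducing the mortality figures from the numbers quoted in the text.
def per_100k(deaths, population):
    return deaths / population * 100_000

# 1918 pandemic: ~50 million deaths out of ~1.8 billion people
print(f"1918: {50e6 / 1.8e9:.1%} of the world's population")                # ~2.8%

# COVID-19, interim figure: ~865,000 deaths, ~7.8 billion people
print(f"COVID-19 (Aug 2020): ~{per_100k(865_000, 7.8e9):.0f} per 100,000")  # ~11
```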

Yet it is remarkable that these more virulent pandemics had such evanescent economic consequences. The United Nations’ World Economic and Social Surveys from the late 1950s contain no references to a pandemic or a virus. Nor did the pandemics leave any deep, traumatic traces in memories. Even if one very conservatively assumes that lasting memories start only at 10 years of age, some 350 million of the people alive today ought to remember the three previous pandemics, and a billion people ought to remember the last two.

But I have yet to come across anybody who has vivid memories of the pandemics of 1957 or 1968. Countries did not resort to any mass-scale economic lockdowns, enforce any long-lasting school closures, ban sports events, or cut flight schedules deeply.

Today’s pandemic has led to a deep (50 to 90 percent) reduction in flights, but during the earlier pandemics, aviation was marked by notable advances. On 17 October 1958, half a year after the end of the second pandemic wave in the West and about a year before the pandemic ended (in Chile, the last holdout), PanAm inaugurated its Boeing 707 jet service to Europe. And the Boeing 747, the first wide-body jetliner, entered scheduled service months before the last wave of the contemporary pandemic ended, in March 1970.

Why were things so different back then? Was it because we had no ­fear-reinforcing 24/7 cable news, no Twitter, and no incessant and instant case-and-death tickers on all our electronic screens? Or is it we ourselves who have changed, by valuing recurrent but infrequent risks differently?

Discovering Computer Legend Dennis Ritchie’s Lost Dissertation

Post Syndicated from David C. Brock original https://spectrum.ieee.org/tech-talk/tech-history/silicon-revolution/discovering-computer-legend-dennis-ritchies-lost-dissertation

Many of you, dear readers, will have heard of Dennis Ritchie. In the late 1960s, he left graduate studies in applied mathematics at Harvard for a position at the Bell Telephone Laboratories, where he spent the entirety of his career.

Not long after joining the Labs, Ritchie linked arms with Ken Thompson in efforts that would create a fundamental dyad of the digital world that followed: the operating system Unix and the programming language C. Thompson led the development of the system, while Ritchie led the creation of C, in which Thompson rewrote Unix. In time, Unix became the basis for most of the operating systems on which our digital world is built, while C became—and remains—one of the most popular languages for creating the software that animates this world.

On Ritchie’s personal web pages at the Labs (still maintained by Nokia, the current owner), he writes with characteristic dry deprecation of his educational journey into computing:

I . . . received Bachelor’s and advanced degrees from Harvard University, where as an undergraduate I concentrated in Physics and as a graduate student in Applied Mathematics . . . The subject of my 1968 doctoral thesis was subrecursive hierarchies of functions. My undergraduate experience convinced me that I was not smart enough to be a physicist, and that computers were quite neat. My graduate school experience convinced me that I was not smart enough to be an expert in the theory of algorithms and also that I liked procedural languages better than functional ones.

Whatever the actual merits of these self-evaluations, his path certainly did lead him into a field and an environment in which he made extraordinary contributions.

It may come as some surprise to learn that until just this moment, despite Ritchie’s much-deserved computing fame, his dissertation—the intellectual and biographical fork-in-the-road separating an academic career in computer science from the one at Bell Labs leading to C and Unix—was lost. Lost? Yes, very much so: it was both unpublished and absent from any public collection; not even an entry for it can be found in Harvard’s library catalog or in dissertation databases.

After Dennis Ritchie’s death in 2011, his sister Lynn very caringly searched for an official copy and for any records from Harvard. There were none, but she did uncover a copy from the widow of Ritchie’s former advisor. Until very recently then, across a half-century perhaps fewer than a dozen people had ever had the opportunity to read Ritchie’s dissertation. Why?

In Ritchie’s description of his educational path, you will notice that he does not explicitly say that he earned a Ph.D. based on his 1968 dissertation. This is because he did not. Why not? The reason seems to be his failure to take the necessary steps to officially deposit his completed dissertation in Harvard’s libraries. Professor Albert Meyer of MIT, who was in Dennis Ritchie’s graduate school cohort, recalls the story in a recent oral history interview with the Computer History Museum:

So the story as I heard it from Pat Fischer [Ritchie and Meyer’s Harvard advisor] . . . was that it was definitely true at the time that the Harvard rules were that you needed to submit a bound copy of your thesis to the Harvard—you needed the certificate from the library in order to get your Ph.D. And as Pat tells the story, Dennis had submitted his thesis. It had been approved by his thesis committee, he had a typed manuscript of the thesis that he was ready to submit when he heard the library wanted to have it bound and given to them. And the binding fee was something noticeable at the time . . . not an impossible, but a nontrivial sum. And as Pat said, Dennis’ attitude was, ‘If the Harvard library wants a bound copy for them to keep, they should pay for the book, because I’m not going to!’ And apparently, he didn’t give on that. And as a result, never got a Ph.D. So he was more than ‘everything but thesis.’ He was ‘everything but bound copy.’

While Lynn Ritchie’s inquiries confirmed that Dennis Ritchie never did submit the bound copy of his dissertation, and did not then leave Harvard with his Ph.D., his brother John feels that there was something else going on with Dennis Ritchie’s actions beyond a fit of pique about fees: He already had a coveted job as a researcher at Bell Labs, and “never really loved taking care of the details of living.” We will never really know the reason, and perhaps it was never entirely clear to Ritchie himself. But what we can know with certainty is that Dennis Ritchie’s dissertation was lost for a half-century, until now.

Within the Dennis Ritchie collection, recently donated by Ritchie’s siblings to the Computer History Museum, lay several historical gems that we have identified to date. One is a collection of the earliest Unix source code dating from 1970–71. Another is a fading and stained photocopy of Ritchie’s doctoral dissertation, Program Structure and Computational Complexity. The Computer History Museum is delighted to now make a digital copy of Ritchie’s own dissertation manuscript (as well as a more legible digital scan of a copy of the manuscript owned by Albert Meyer) available publicly for the first time.

Recovering a copy of Ritchie’s lost dissertation and making it available is one thing; understanding it is another. To grasp what Ritchie’s dissertation is all about, we need to leap back to the early 20th century to a period of creative ferment in which mathematicians, philosophers, and logicians struggled over the ultimate foundations of mathematics. For centuries preceding this ferment, the particular qualities of mathematical knowledge—its exactitude and certitude—gave it a special, sometimes divine, status. While philosophical speculation about the source or foundation for these qualities stretches back to Pythagoras and Plato at least, in the early 20th century influential mathematicians and philosophers looked to formal logic—in which rules and procedures for reasoning are expressed in symbolic systems—as this foundation for mathematics.

Across the 1920s, the German mathematician David Hilbert was incredibly influential in this attempt to secure the basis of mathematics in formal logic. In particular, Hilbert believed that one could establish certain qualities of mathematics—for example, that mathematics was free of contradictions and that any mathematical assertion could be shown to be true or to be false—by certain kinds of proofs in formal logic. In particular, the kinds of proofs that Hilbert advocated, called “finitist,” relied on applying simple, explicit, almost mechanical rules to the manipulation of the expressive symbols of formal logic. These would be proofs based on rigid creation of strings of symbols, line by line, from one another.

In the 1930s, it was in the pursuit of such rules for logical manipulation of symbols that mathematicians and philosophers made a connection to computation, and the step-by-step rigid processes by which human “computers” and mechanical calculators performed mathematical operations. Kurt Gödel provided a proof of just the sort that Hilbert advocated, but distressingly showed the opposite of what Hilbert and others had hoped. Rather than showing that logic ensured that everything that was true in mathematics could be proven, Gödel’s logic revealed mathematics to be the opposite, to be incomplete. For this stunning result, Gödel’s proof rested on arguments about certain kinds of mathematical objects called primitive recursive functions. What’s important about recursive functions for Gödel is that they were eminently computable—that is, they relied on “finite procedures.” Just the kind of simple, almost mechanical rules for which Hilbert had called.

Quickly following Gödel, in the United States, Alonzo Church used similar arguments about computability to formulate a logical proof that showed also that mathematics was not always decidable—that is, that there were some statements about mathematics for which it is not possible to determine if they are true or are false. Church’s proof is based on a notion of “effectively calculable functions,” grounded in Gödel’s recursive functions. At almost the same time, and independently in the UK, Alan Turing constructed a proof of the very same result, but based on a notion of “computability” defined by the operation of an abstract “computing machine.” This abstract Turing machine, capable of any computation or calculation, would later become an absolutely critical basis for theoretical computer science.

In the decades that followed, and before the emergence of computer science as a recognized discipline, mathematicians, philosophers, and others began to explore the nature of computation in its own right, increasingly divorced from connections to the foundation of mathematics. As Albert Meyer explains in his interview:

In the 1930s and 1940s, the notion of what was and wasn’t computable was very extensively worked on, was understood. There were logical limits due to Gödel and Turing about what could be computed and what couldn’t be computed. But the new idea [in the early 1960s] was ‘Let’s try to understand what you can do with computation, that was when the idea of computational complexity came into being . . . there were . . . all sorts of things you could do with computation, but not all of it was easy . . . How well could it be computed?

With the rise of electronic digital computing, then, for many of these researchers the question was less what logical arguments about computability could teach about the nature of mathematics than what those arguments could reveal about the limits of computability itself. As those limits came to be well understood, the interests of these researchers shifted to the nature of computability within those limits. What could be proven about the realm of possible computations?

One of the few places where these new investigations were taking place in the mid-1960s, when Dennis Ritchie and Albert Meyer both entered their graduate studies at Harvard, was in certain corners of departments of applied mathematics. These departments were also, frequently, where the practice of electronic digital computing first took root on academic campuses. As Meyer recalls, “Applied Mathematics was a huge subject in which this kind of theory of computation was a tiny, new part.”

Both Ritchie and Meyer gravitated into Harvard’s applied mathematics department from their undergraduate studies in mathematics at the university, although Meyer does not recall having known Ritchie as an undergraduate. In their graduate studies, both became increasingly interested in the theory of computation, and thus alighted on Patrick Fischer as their advisor. Fischer at the time was a freshly minted Ph.D. who was at Harvard only for the critical first years of Ritchie and Meyer’s studies, before moving to Cornell in 1965. (Later, in 1982, Fischer was one of the Unabomber’s targets.) As Meyer recalls:

Patrick was very much interested in this notion of understanding the nature of computation, what made things hard, what made things easy, and they were approached in various ways . . . What kinds of things could different kinds of programs do?

After their first year of graduate study, unbeknownst to Meyer, at least, Fischer independently hired both Ritchie and Meyer as summer research assistants. Meyer’s assignment? Work on a particular “open problem” in the theory of computation that Fischer had identified, and report back at the end of the summer. Fischer, for his part, would be away. Meyer spent a miserable summer working alone on the problem, reporting to Fischer at the end that he had only accomplished minor results. Soon after, walking to Fischer’s graduate seminar, Meyer was shocked as he realized a solution to the summer problem. Excitedly reporting his breakthrough to Fischer, Meyer was “surprised and a little disappointed to hear that Pat said that Dennis had also solved the problem.” Fischer had set Ritchie and Meyer the very same problem that summer but had not told them!

Fischer’s summer problem was a take on the large question of computational complexity, about the relative ease or time it takes to compute one kind of thing versus another. Recall that Gödel had used primitive recursive functions to exemplify computability by finite procedures, key to his famous work. In the 1950s, the Polish mathematician Andrzej Grzegorczyk defined a hierarchy of these same recursive functions based on how fast or slow the functions grow. Fischer’s summer question, then, was for Meyer and Ritchie to explore how this hierarchy of functions related to computational complexity.

To his great credit, Meyer’s disappointment at summer’s end gave way to a great appreciation for Ritchie’s solution to Fischer’s problem: loop programs. Meyer recalls “. . . this concept of loop programs, which was Dennis’s invention . . . was so beautiful and so important and such a terrific expository mechanism as well as an intellectual one to clarify what the subject was about, that I didn’t care whether he solved the problem.”

Ritchie’s loop program solution to Fischer’s summer problem was the core of his 1968 dissertation. Loop programs are essentially very small, limited computer programs that would be familiar to anyone who ever used the FOR command for programming loops in BASIC. In loop programs, one can set a variable to zero, add 1 to a variable, or move the value of one variable to another. That’s it. The only control available in loop programs is . . . a simple loop, in which an instruction sequence is repeated a certain number of times. Importantly, loops can be “nested,” that is, loops within loops.
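To make the idea concrete, here is a minimal sketch of my own, written in Python rather than Ritchie’s notation, that sticks to the loop-program primitives just described: variables may only be zeroed, incremented, or copied, and the only control structure is a loop repeated a fixed number of times. Addition gets by with a loop depth of one, while multiplication needs nested loops of depth two.

```python
# A sketch of loop programs, mimicked in Python (illustrative only, not
# Ritchie's notation). Allowed moves: set a variable to 0, add 1 to a
# variable, copy a variable, and repeat a block a fixed number of times.

def add(x, y):
    z = 0
    for _ in range(x):   # copy x into z, one increment at a time
        z = z + 1
    for _ in range(y):   # a depth-1 loop: y more increments
        z = z + 1
    return z

def multiply(x, y):
    z = 0
    for _ in range(y):       # depth-2 (nested) loops:
        for _ in range(x):   # add x to z, y times over
            z = z + 1
    return z

print(add(3, 4), multiply(3, 4))   # 7 12
```

The depth of nesting required grows with the difficulty of the function being computed, and that depth is precisely the handle on complexity that Ritchie exploits.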

What Ritchie shows in his dissertation is that these loop programs are exactly what is needed to produce Gödel’s primitive recursive functions, and only those functions: precisely the functions of the Grzegorczyk hierarchy. Gödel held out his recursive functions as eminently computable, and Ritchie showed that loop programs were just the right tools for that job. Ritchie’s dissertation shows that the degree of “nestedness” of loop programs—the depth of loops within loops—is a measure of computational complexity for them, as well as a gauge for how much time is required for their computation. Further, he shows that classifying loop programs by their depth of loops is exactly equivalent to Grzegorczyk’s hierarchy. The rate of growth of primitive recursive functions is indeed related to their computational complexity; in fact, the two measures coincide.

As Meyer recalls:

Loop programs made into a very simple model that any computer scientist could understand instantly, something that the traditional formulation…in terms of primitive recursive hierarchies…with very elaborate logician’s notation for complicated syntax and so on that would make anybody’s eyes glaze over. Suddenly you had a three-line, four-line computer science description of loop programs.

While, as we have seen, Ritchie’s development of this loop program approach to computer science never made it out into the world through his dissertation, it did nevertheless make it into the literature in a joint publication with Albert Meyer in 1967.

Meyer explains:

[Dennis] was a very sweet, easy going, unpretentious guy. Clearly very smart, but also kind of taciturn . . . So we talked a little, and we talked about this paper that we wrote together, which I wrote, I believe. I don’t think he wrote it at all, but he read it . . . he read and made comments . . . and he explained loop programs to me.

The paper, “The Complexity of Loop Programs” [subscription required], was published by the ACM in 1967, and was an important step in the launch of Meyer’s highly productive and highly regarded career in theoretical computer science. But it was a point of departure for and with Ritchie. As Meyer recalls:

It was a disappointment. I would have loved to collaborate with him, because he seemed like a smart, nice guy who’d be fun to work with, but yeah, you know, he was already doing other things. He was staying up all night playing Spacewar!

At the start of this essay, we noted that in his biographical statement on his website, Ritchie quipped, “My graduate school experience convinced me that I was not smart enough to be an expert in the theory of algorithms and also that I liked procedural languages better than functional ones.” While his predilection for procedural languages is without question, our exploration of his lost dissertation puts the lie to his self-assessment that he was not smart enough for theoretical computer science. More likely, Ritchie’s graduate school experience was one in which the lure of the theoretical gave way to the enchantments of implementation, of building new systems and new languages as a way to explore the bounds, nature, and possibilities of computing.

Editor’s note: This post originally appeared on the blog of the Computer History Museum.

About the Author

David C. Brock is an historian of technology and director of the Computer History Museum’s Software History Center.

The SCR-536 Handie-Talkie Was the Modern Walkie-Talkie’s Finicky Ancestor

Post Syndicated from Richard Brewster original https://spectrum.ieee.org/tech-history/dawn-of-electronics/the-scr536-handietalkie-was-the-modern-walkietalkies-finicky-ancestor

Worldwide, militaries have always sought improved communication. Late in the 18th century, the French began sending messages via a national network of optical telegraph stations. Wired telegraphy proved valuable during the U.S. Civil War. Radio communication saw its first big military deployment during the Russo-Japanese War in 1904, with steady improvements during World War I, between the wars, and into World War II.

The first radio built for soldiers in the field

But the ordinary foot soldier didn’t get his own radio until the introduction of the SCR-536 Handie-Talkie in 1942. Built by Galvin Manufacturing (which later became Motorola), the SCR-536 weighed just 2.3 kilograms and was “designed for operation under battle conditions,” according to its technical manual. At the time, there was no other radio like it, and eventually 130,000 units were manufactured for the Allies during the war.

The handheld radio set first saw action in the Allied invasion of North Africa in November 1942. An ad for the radio featured a soldier’s endorsement: “Just like having a house telephone at your fingertips. We’re never alone. We feel safer, stronger, because we’re always in touch with our command post!”

Real-life experience with the radio was a little different. Some years ago, I acquired a 536 and a technical manual. I also got in touch with a retired Army officer, George H. Goldstone, who told me about using the 536 in the field, starting with Operation Torch, an amphibious landing in North Africa in November 1942.

In a December 1990 letter, Goldstone explained that the 536 was “intended as a radio for infantry company commanders to talk to battalion headquarters—and in that usage, it was inadequate from day one…. Ultimately it was used within infantry companies to communicate down to platoon leaders—a very short-range task.” According to the 536’s manual, it could operate over distances of 1 mile on land and 3 miles over salt water. But there were caveats: Hills, foliage, atmospheric conditions, and ground moisture could shrink the distance, as could the battery’s age and internal dirt and moisture.

Even at the shorter range, Goldstone wrote, “It had no end of problems.” During Operation Torch, he said, “it leaked water—and salt water ruined the sliding switch contacts. Then it required a special battery—and the entire supply of batteries for General Patton’s Western Task Force was on one ship, which the Germans torpedoed off Casablanca harbor. Fortunately for us, I had squirrelled away several boxes of batteries on every jeep in my Radio Section…. We had a supply of batteries for a while!”

Further complicating the radio’s use was its single frequency, which could not be changed on the fly. “This had to be done in our division radio shop, where we had a test rig set up,” Goldstone explained. “It could not be done at regimental or battalion level.”

Soldiers could be hard on the 536, he added. “One set came into our shop…with all the tubes dead. Some infantryman had used it to pound in tent pegs!”

How to camouflage your radio: Put a sock on it

After studying the manual—the TM11-235 War Department Technical Manual, to be specific—and the 536 unit I’d acquired, I gained a little more insight into how the radio worked and why it struggled to perform in the field.

The 536 used five “miniature” vacuum tubes that had been developed for civilian portable radios. To transmit, it used four of the tubes. To receive, it used all five. The unit contained two batteries: a 1.5-volt filament battery and a tall and narrow plate battery that supplied 103.5 volts. The designers of the Handie-Talkie were severely limited by the high power requirements of the unit’s vacuum tubes. The radio could operate for less than 24 hours on a set of batteries.

The 536 was simple to use. According to the manual, it was turned on by extending the antenna. “When thus connected to its internal dry-battery power supply, the radio set functions as a receiver. Pressing the press-to-talk switch converts the receiver circuit to a transmitter circuit.”

The radio could operate in any one of 50 channels between 3.5 and 6.0 megahertz. But changing channels required coils and crystals to be replaced in a maintenance facility—in Goldstone’s case, the division’s radio shop. There was no volume control.

The 536’s use of low-frequency AM transmission, not far above the broadcast band, required an enormous antenna. The unit’s 40-inch (102 centimeter) whip antenna was woefully inadequate at those frequencies. Worse, the long antenna became a target for enemy sharpshooters because it had to be held in a vertical position for best transmission.
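How inadequate? A quick calculation of my own, using nothing more than the standard wavelength formula and the frequencies and antenna length quoted above, makes the mismatch plain.

```python
# Why the 40-inch (1.02 m) whip struggled at 3.5-6.0 MHz: a quarter-wave
# antenna at those frequencies would need to be 12 to 21 meters long.
C = 3e8          # speed of light, meters per second
whip_m = 1.02    # the SCR-536's whip antenna

for f_mhz in (3.5, 6.0):
    quarter_wave_m = C / (f_mhz * 1e6) / 4
    print(f"{f_mhz} MHz: quarter wave = {quarter_wave_m:.1f} m; "
          f"the whip is about {whip_m / quarter_wave_m:.0%} of that length")
```

At the low end of the band, the whip was only about 5 percent of a quarter wavelength, which goes a long way toward explaining the radio’s short, unreliable range.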

Given the radio’s prominent antenna, I was puzzled at the manual’s lengthy discussion of how to camouflage the unit. “Knowledge of how to camouflage the radio set is as important as knowing how to operate it,” the manual explained. “A poorly concealed radio set will draw enemy fire, regardless of how well the operator may be hidden or covered.” Although various methods of camouflage are suggested, the manual states that “the most satisfactory way…is to insert in an issue drab sock.”

A better but bigger radio: the SCR-300 Walkie-Talkie

The Handie-Talkie saw wide use, but by 1943, the Allied Forces had a superior radio: the Galvin-designed SCR-300 Walkie-Talkie. It allowed the operator to select up to 41 frequency channels, versus the 536’s single channel. It also used VHF and FM, which meant less interference and reduced noise. And it had a much greater range—up to 8 miles (13 km).

But there were drawbacks. The SCR-300 was carried in a backpack and weighed a hefty 35 pounds (16 kg), about seven times as much as the handheld 536, and its batteries did not last more than 40 hours. Despite its size and weight, the SCR-300’s technical manual indicates that it was “primarily intended as a walkie-talkie for foot combat troops.”

But it would be a stretch to call the 300 a replacement for the 536.

The Axis powers apparently never had the equivalent of the SCR-536. But the Germans did field a radio set much like the SCR-300 that used rechargeable batteries, while the Japanese had one that relied on a hand-cranked generator.

The SCR-536 gave rise to modern walkie-talkies

After World War II, Galvin went back to producing civilian radios under the Motorola brand, and the company went on to become a major producer of vehicular radios for police and fire departments. These radios were technically quite similar to the SCR-300. The main improvements over the years were the eventual replacement of vacuum tubes with solid-state devices and the use of higher frequencies. These shifts allowed for the handheld two-way radios that are so common today, even as toys.

In 1983, Motorola introduced the DynaTAC 8000X, a handheld cellular phone that for a time was the ultimate in personal communications. In its first incarnation, the 8000X was an analog unit. The system later became digital.

Today, the need for dedicated walkie-talkies has shrunk dramatically with the ubiquity of cellphones descended from the 8000X. Ironically, the range of a typical cellphone is no greater than that of the 536—the large number of geographically dispersed cell towers is what allows your cellphone to work. And, of course, battery life remains an issue.

An abridged version of this article appears in the October 2020 print issue as “Built for Battle.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Richard Brewster, a retired nuclear power engineer, served until recently as project engineer on the hospital ship Global Mercy, operated by Mercy Ships. He previously wrote for IEEE Spectrum about re-creating the first flip-flop.

We’ve Been Killing Deadly Germs With UV Light for More Than a Century

Post Syndicated from Allison Marsh original https://spectrum.ieee.org/tech-history/dawn-of-electronics/weve-been-killing-deadly-germs-with-uv-light-for-more-than-a-century

As researchers race to find successful treatments and an eventual cure for COVID-19, everyone is getting a real-time glimpse into the messiness of scientific discovery. We’re all impatient for solid recommendations based on rigorous testing and established facts, but in a fast-moving field, that’s rarely possible. And someone always has to be the guinea pig.  This was just as true 130 years ago when Niels Ryberg Finsen began experimenting with treating disease with UV light. He started by testing on himself.

The germ-killing properties of the sun

Finsen was far from the first to consider the disease-fighting effects of light. By the end of the 19th century, the germ theory of disease had taken root, and doctors and scientists eagerly sought treatments that had antimicrobial properties. One of the earliest sources of hope was simple sunlight.

In the 12 July 1877 issue of Nature, public health experts Arthur Downes and Thomas P. Blunt published a short note called “The Influence of Light Upon the Development of Bacteria” [subscription required]. Their claim: Sunlight is inimical to bacteria. The authors recognized that it had long been known that light was not essential for the development of germs, but they thought they were the first to systematically study the effects and prove that sunlight destroyed bacteria.

Downes and Blunt were clear in their stated intentions: They wanted to link observed facts to the underlying chemical processes, and then to extend their observations to other phenomena. They were also clear on the limits of their inquiry. Although they hoped to report on the influence of refracted sunlight, they reached no definitive conclusions.

Their report touched off a flurry of scientific inquiry. John Tyndall, a physicist and Fellow of the Royal Society, attempted to reproduce Downes and Blunt’s experiments, albeit under slightly different circumstances. Downes and Blunt had used thin glass test tubes containing a sterilized Pasteur salt solution, which they monitored under direct and indirect sunlight in suburban London. Tyndall conducted his experiments in the Alps with hermetically sealed flasks of infusions of cucumber and turnip.

Tyndall’s initial findings aligned with those of Downes and Blunt: Flasks exposed to strong sunlight remained free of bacteria. But when Tyndall moved them to a warm room, they developed bacteria. This led him to conclude that sunlight merely suppressed the growth of bacteria rather than killing them outright.

Tyndall published his results in the 19 December 1878 issue of the Proceedings of the Royal Society of London, ending his report almost apologetically: “I do not wish to offer these results as antagonistic to those so clearly described by Dr. Arthur Downes and Mr. Thomas Blunt…. Their observations are so definite that it is hardly possible to doubt their accuracy. But they noticed anomalies which it is desirable to clear up…. Such irregularities, coupled with the results above recorded, will, I trust, induce them to repeat their experiments, with the view of determining the true limits of the important action which those experiments reveal.”

It was a polite but firm takedown. More work was needed to elucidate the science behind the germ-killing effects of sunlight.

Over the next two decades, the messiness that is the process of discovery continued to play out as investigators nailed down details in different experiments. Downes and Blunt went on to demonstrate that the effectiveness of light to neutralize bacteria depended on the light’s intensity, duration, and wavelength. Later researchers repeating Tyndall’s experiments determined that the high particle concentration in his flasks shielded some of the bacteria from the sun’s damaging rays. Numerous scientists questioned, confirmed, extended, and complicated these initial claims, as Philip Hockberger, a physiologist at Northwestern University, has outlined in his review of the history of ultraviolet photobiology.

The Finsen lamp treated tuberculosis of the skin

Niels Ryberg Finsen entered into this fray in the 1890s. Despite an unimpressive school record, he had studied medicine at the University of Copenhagen. Following his graduation in 1890, he got a job preparing dissections for anatomy classes, but he quit after three years to focus on his research.

Finsen’s personal experience with illness prompted his investigations into the medicinal qualities of light. While in his 20s, he developed what is now known as Niemann-Pick disease, a rare genetic disorder that affects the body’s ability to metabolize fats. His initial symptoms included anemia and fatigue; by the time he died at age 44, he was in a wheelchair. Because Finsen lived in a north-facing house, he wondered if more exposure to sunlight would help. But being a medical man, he also thought it would be inappropriate to apply a theory if it were not built upon scientific fact. He decided to undertake formal studies of both sunlight and artificial light.

Finsen began with the effects of sunlight on insects and amphibians, publishing papers in 1893 and 1894, but he soon turned to the effectiveness of artificial ultraviolet light. Light rays outside of the visible spectrum and adjacent to its blue-violet end were then known as “chemical rays,” because of their ability to stimulate chemical reactions, such as oxidizing the metallic salts used in early photography. Chemical rays were distinct from the “heat rays” adjacent to the red-orange end of the visible spectrum, now known as the infrared.

There were many conditions that Finsen thought might benefit from UV light, including smallpox, typhoid, and anthrax. But he had to start somewhere, and he made his choice almost at random. In 1895 he arranged to use the laboratory of the Copenhagen Electric Light Works. One of its engineers, Niels Mogensen, suffered from lupus vulgaris, a skin condition that results in disfiguring lesions on the neck and face, caused by the same bacterium responsible for tuberculosis in the lungs. Having already tried medication and surgery, Mogensen was willing to be Finsen’s guinea pig. After only four days of treatment with artificial light, he showed improvement.

Finsen published the first report of his findings the next year, with details on 11 patients with lupus vulgaris. Critics claimed that the sample size was too small, but Finsen responded that because the treatment was localized—light directed at a specific spot for a fixed duration—the causality was clear. Score one for clinical tests.

For his treatments, Finsen developed an electric carbon arc light that became known as the Finsen lamp. Initially, he used regular glass in the lens, but soon replaced it with a fused quartz lens that provided more uniform transmission of UV rays. A single lamp could have four to eight tubes or arms, each of which looked like a telescope. The tubes could direct light to multiple patients at the same time.

A nurse would press a disk firmly against the patient’s skin to prevent blood from entering the treatment area and hindering the light’s penetration. The light was so intense that both patients and medical practitioners wore dark glasses to shield their eyes. Treatment occurred daily, with sessions lasting 1 to 2 hours. Mild cases could be cured in a matter of weeks, but the average treatment lasted about seven months, with severe cases taking over a year.

With the help of the mayor of Copenhagen and some philanthropic donors, Finsen established the Medical Light Institute to treat lupus vulgaris. On 12 August 1896, the institute welcomed its first two patients. By 1901 it had treated 804 patients and claimed an impressive cure rate of 83 percent. Patients came from across Denmark and the rest of Europe; Danish citizens paid the equivalent of US $18 per month, while foreigners paid $30. Danes without the means to pay had their costs defrayed by their home districts.

As word of Finsen’s success spread, Finsen Institutes began popping up across Europe and North America. In 1900, Princess Alexandra, wife of the future King Edward VII of the United Kingdom, was instrumental in bringing the Finsen lamp to the U.K. Alexandra was Danish and supported the work of her countrymen. She presented a lamp to the Royal London Hospital, one of the lenses of which is featured at top.

Other forms of light therapy, based on sunlight and artificial light, also grew in popularity, for treating TB, circulatory disorders, varicose veins, degenerative diseases, and delicate constitutions. Light therapy for pets was faddish for a time, too.

As might be expected, some of the treatments for both humans and animals didn’t hold up to scientific scrutiny. In 1928, for example, the medical researcher Dora Colebrook published in The Lancet a study of 237 children that showed no beneficial effects of therapy with sunlight or artificial light. She was swiftly denounced in both medical journals and popular newspapers. The public—and many practicing physicians—were convinced it worked, and treating patients and building the associated electrical equipment were big business.

The shift to using UV lights to disinfect

In 1903, Finsen was awarded the Nobel Prize in Medicine, although the award was not without controversy. Some critics argued that his contribution did not have sufficient theoretical backing, which Finsen didn’t deny. But he contended that if clinical experiments alone could lead to the treatment of disease, it served no purpose to wait until an explanation was found in a lab. Other critics saw his investigations as superfluous, considering that the bactericidal effects of UV light had been known for decades.

Meanwhile, supporters found his treatment innovative and in the spirit of Alfred Nobel because it benefited humanity. Those who suffered from lupus vulgaris faced discrimination and ostracism. In fact, Finsen’s out-of-town patients sometimes had trouble obtaining lodging due to their appearance. That predicament was remedied when two former patients established a boarding house for those undergoing treatment, which turned out to be a very lucrative investment. Finsen’s phototherapy transformed his patients’ appearances and allowed them to live normal, active, social lives.

Finsen was too sick to attend the Nobel Prize ceremony, and he died less than a year later. He donated a significant portion of his prize money to his institute, a gift that two other donors matched. He also gave a large sum to a sanatorium for heart and liver disease.

The use of phototherapy for lupus vulgaris eventually declined, especially after penicillin became widely available. The Royal London Hospital’s lamp was repurposed to treat rickets. In 1953, the hospital transferred the lamp to the Wellcome Institute for the History of Medicine, and it now resides in the collections of the Science Museum, London.

But even as Finsen lamps were being retired, research on UV light was taking off in a new direction: killing germs in the environment before they had a chance to infect anyone.

In the 1930s, William F. Wells, an instructor in sanitary science at the Harvard School of Public Health, introduced the idea of aerosolized infection—droplets carrying infectious microbes that can remain suspended in the air for hours. He postulated that a room could be rid of these airborne germs by an ultraviolet lamp and proper air circulation.

UV ventilation systems were soon being installed in schools and barracks with the hope of controlling the spread of tuberculosis and measles. Several later studies, though, were unable to reproduce Wells’s success. But Richard Riley, a medical student who worked in Wells’s lab, continued to believe and to research the effects of UV ventilation systems.

During the early 1970s, Riley and his colleagues published numerous papers that confirmed that aerosolized TB could be controlled through UV ventilation. They also showed that relative humidity, temperature, the placement of the lights, and airflow affected how well it worked. In the 1980s, the United States saw a spike in tuberculosis, some strains of which were resistant to antibiotics, and interest in Riley’s and Wells’s research was renewed.

UV lights were also found to disinfect drinking water and kill germs on surfaces. The first UV water treatment plant was prototyped in Marseilles, France, in 1910, but lamp technology had to improve before UV treatment could see widespread adoption. Commercial water treatment with UV began in the 1950s and really took off after research in 1998 showed that UV was effective at killing chlorine-resistant protozoa, such as giardia and cryptosporidium. Today, there are a whole host of consumer products that tout the benefits of UV sterilization. I recently bought my mom one of those pouches that have tiny UV lights for sterilizing your keys and cellphone.

These days, the germicidal benefits of both sunlight and artificial UV light are back in the spotlight as potential weapons against the coronavirus. Much is still unknown about the virus that causes COVID-19—how it spreads, whether immunity is short-lived or enduring, why the list of symptoms keeps growing, why it makes some people very sick and others not at all. As we watch this scientific process play out in real time, it might be helpful to remember the words of Finsen upon his acceptance of the Nobel Prize: “The supreme qualities of all science are honesty, reliability, and sober, healthy criticism.” Trial and error, proposal and critique, success and failure are all part of the messiness of science.

An abridged version of this article appears in the September 2020 print issue as “The Healing Light.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

The Rich Tapestry of Fiber Optics

Post Syndicated from Allison Marsh original https://spectrum.ieee.org/tech-history/cyberspace/the-rich-tapestry-of-fiber-optics

Whoopee! So wrote Donald Keck, a researcher at Corning Glass Works, in the 7 August 1970 entry of his lab notebook. The object of his exuberance was a 29-meter-long piece of highly purified, titanium-doped optical fiber, through which he had successfully passed a light signal with a measured loss of only 17 decibels per kilometer. A few years earlier, typical losses had been closer to 1,000 dB/km. Keck’s experiment was the first demonstration of low-loss optical fiber for telecommunications, and it paved the way for transmitting voice, data, and video over long distances.
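
To appreciate what those decibel figures mean, it helps to convert them into the fraction of light that survives a kilometer of fiber. The short sketch below uses only the standard definition of the decibel and the loss numbers already quoted here; everything else about it is illustrative.

```python
def surviving_fraction(loss_db_per_km: float, km: float = 1.0) -> float:
    """Fraction of optical power remaining after `km` kilometers,
    given attenuation in dB/km (P_out / P_in = 10 ** (-dB / 10))."""
    return 10 ** (-loss_db_per_km * km / 10)

# Loss figures quoted in this article:
print(surviving_fraction(1000))  # ~1e-100 -- essentially no light survives 1 km
print(surviving_fraction(17))    # ~0.02  -- about 2 percent survives 1 km
print(surviving_fraction(20))    # ~0.01  -- the 20 dB/km target mentioned later in this article
```

At 1,000 dB/km, effectively nothing emerges from even a single kilometer of fiber; at 17 dB/km, about 2 percent does—enough, at last, to be useful for telecommunications.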

As important as this achievement was, it was not an isolated event. Physicists and engineers had been working for decades to make optical telecommunications possible, developing not just fibers but waveguides, lasers, and other components. (More on that in a bit.) And if you take the long view, as historians like me tend to do, it’s part of a fascinating tapestry that also encompasses glass, weaving, art, and fashion.

Optical fiber creates stunning effects in art and fashion

Shown above is a sculpture called Crossform Pendant Lamp, by the New York–based textile artist Suzanne Tick. Tick is known for incorporating unusual materials into her weaving: recycled dry cleaner hangers, Mylar balloons washed up on the beach, documents from her divorce. For the lamp, she used industrial fiber-optic yarn.

The piece was part of a collaboration between Tick and industrial designer Harry Allen. Allen worked on mounting ideas and illuminators, while Tick experimented with techniques to weave the relatively stiff fiber-optic yarn. (Optical fiber is flexible compared with other types of glass, but inflexible compared to, say, wool.) The designers had to determine how the lamp would hang and how it would connect to a power and light source. The result is an artwork that glows from within.

Weaving is of course an ancient technology, as is glassmaking. The ability to draw, or pull, glass into consistent fibers, on the other hand, emerged only at the end of the 19th century. As soon as it did, designers attempted to weave with glass, creating hats, neckties, shawls, and other garments.

Perhaps the most famous of these went on display at the 1893 World’s Columbian Exposition in Chicago. The exhibit mounted by the Libbey Glass Company, of Toledo, Ohio, showcased a dress made from silk and glass fibers. The effect was enchanting. According to one account, it captured the light and shimmered “as crusted snow in sunlight.” One admirer was Princess Eulalia of Spain, a royal celebrity of the day, who requested a similar dress be made for her. Libbey Glass was happy to oblige—and receive the substantial international press.

The glass fabric was too brittle for practical wear, which may explain why few such garments emerged over the years. But the idea of an illuminated dress did not fade, awaiting just the right technology. Designer Zac Posen found a stunning combination when he crafted a dress of organza, optical fiber, LEDs, and 30 tiny battery packs, for the actor Claire Danes to wear to the 2016 Met Gala. The theme of that year’s fashion extravaganza was “Manus x Machina: Fashion in an Age of Technology,” and Danes’s dress stole the show. Princess Eulalia would have approved. 

Fiber optics has many founders

Of course, most of the work on fiber optics has occurred in the mainstream of science and engineering, with physicists and engineers experimenting with different ways to manipulate light and funnel it through glass fibers. Here, though, the history gets a bit tangled. Let’s consider the man credited with coining the term “fiber optics”: Narinder Singh Kapany.

Kapany was born in India in 1926, received his Ph.D. in optics from Imperial College London, and then moved to the United States, where he spent the bulk of his career as a businessman and entrepreneur. He began working with optical fibers during his graduate studies, trying to improve the quality of image transmission. He introduced the term and the field to a broader audience in the November 1960 issue of Scientific American with an article simply titled “Fiber Optics.”

As Kapany informed his readers, a fiber-optic thread is a cylindrical glass fiber having a high index of refraction surrounded by a thin coating of glass with a low index of refraction. Near-total internal reflection takes place between the two, thus keeping a light signal from escaping its conductor. He explained how light transmitted along bundles of flexible glass fibers could transport optical images along tortuous paths with useful outcomes. Kapany predicted that it would soon become routine for physicians to examine the inside of a patient’s body using a “fiberscope”—and indeed fiber-optic endoscopy is now commonplace.
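
Kapany’s description translates directly into a little geometry. As a minimal sketch—the refractive indices below are assumptions chosen for illustration, not values from his article—Snell’s law gives the critical angle beyond which light stays trapped in the core, along with the fiber’s acceptance cone:

```python
import math

def critical_angle_deg(n_core: float, n_clad: float) -> float:
    """Angle of incidence (from the normal to the core-cladding boundary)
    beyond which light is totally internally reflected."""
    return math.degrees(math.asin(n_clad / n_core))

def numerical_aperture(n_core: float, n_clad: float) -> float:
    """Sine of the half-angle of the acceptance cone at the fiber's end face."""
    return math.sqrt(n_core ** 2 - n_clad ** 2)

# Assumed, roughly typical indices for a step-index fiber (not from Kapany's article):
n_core, n_clad = 1.48, 1.46
print(critical_angle_deg(n_core, n_clad))   # ~80.6 degrees
print(numerical_aperture(n_core, n_clad))   # ~0.24
```

Because the two indices are so close, light need only graze the core-cladding boundary at a shallow angle to stay trapped—which is exactly what keeps a signal bouncing along the length of the fiber.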

Kapany’s article unintentionally introduced an extra loop into the historical thread of fiber optics. In an anecdote that leads off the article, Kapany relates that in the 1870s, Irish physicist John Tyndall demonstrated how light could travel along a curved path. His “light pipe” was formed by a stream of water emerging from a hole in the side of a tank. When Tyndall shone a light into the tank, the light followed the stream of water as it exited the tank and arced to the floor. This same effect is seen in illuminated fountains.

Kapany’s anecdote conjures a mental image that allows readers to begin to understand the concept of guiding light, and I always love when scientists evoke history. In this case, though, the history was wrong: Tyndall wasn’t the originator of the guided-light demonstration.

While researching his highly readable 1999 book City of Light: The Story of Fiber Optics, Jeff Hecht discovered that in fact Jean-Daniel Colladon deserves the credit. In 1841, the Swiss physicist performed the water-jet experiment in Geneva and published an account the following year in Comptes Rendus, the proceedings of the French Academy of Sciences. Hecht, a frequent contributor to IEEE Spectrum, concluded that Michael Faraday, Tyndall’s mentor, probably saw another Swiss physicist, Auguste de la Rive, demonstrate a water jet based on Colladon’s apparatus, and Faraday then encouraged Tyndall to attempt something similar back in London.

I forgive Kapany for not digging around in the archives, even if his anecdote did exaggerate Tyndall’s role in fiber optics. And sure, Tyndall should have credited Colladon, but then there is a long history of scientists not getting the credit they deserve. Indeed, Kapany himself is considered one of them. In 1999, Fortune magazine listed him among the “unsung heroes” of 20th-century business. That perception was reinforced in 2009, when Kapany did not share the Nobel Prize in Physics awarded to Charles Kao for achievements in the transmission of light in fibers for optical communication.

Whether or not Kapany should have shared the prize—and there are never any winners when it comes to debates over overlooked Nobelists—Kao certainly deserved what he got. In 1963 Kao joined a team at Standard Telecommunication Laboratories (STL) in England, the research center for Standard Telephones and Cables. Working with George Hockham, he spent the next three years researching how to use fiber optics for long-distance communication, both audio and video.

On 27 January 1966, Kao demonstrated a short-distance optical waveguide at a meeting of the Institution of Electrical Engineers (IEE).

According to a press release from STL, the waveguide had “the information-carrying capacity of one Gigacycle, which is equivalent to 200 television channels or over 200,000 telephone channels.” Once the technology was perfected, the press release went on, a single undersea cable would be capable of transmitting large amounts of data from the Americas to Europe.
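
The press release’s arithmetic is easy to check, if you assume analog channel bandwidths typical of the era—say, about 5 MHz for a television channel and 4 kHz for a telephone voice channel (both figures are my assumptions, not STL’s):

```python
bandwidth_hz = 1e9         # "one Gigacycle" of information-carrying capacity
tv_channel_hz = 5e6        # assumed bandwidth of an analog TV channel
phone_channel_hz = 4e3     # assumed bandwidth of an analog voice channel

print(bandwidth_hz / tv_channel_hz)     # 200.0 television channels
print(bandwidth_hz / phone_channel_hz)  # 250000.0 -- "over 200,000 telephone channels"
```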

In July, Kao and Hockham published their work [PDF] in the Proceedings of the IEE. They proposed that fiber-optic communication over long distances would be viable, but only if an attenuation of less than 20 dB/km could be achieved. That’s when Corning got involved.

Corning’s contribution brought long-distance optical communication closer to reality

In 1966, the head of a new group in fiber-optic communication at the Post Office Research Station in London mentioned to a visitor from Corning the need for low-loss glass fibers to realize Kao’s vision of long-distance communication. Corning already made fiber optics for medical and military use, but those short pieces of cable had losses of approximately 1,000 dB/km—not even close to Kao and Hockham’s threshold.

That visitor from Corning, William Shaver, told his colleague Robert Maurer about the British effort, and Maurer in turn recruited Keck, Peter Schultz, and Frank Zimar to work on a better way of drawing the glass fibers. The group eventually settled on a process involving a titanium-doped core. The testing of each new iteration of fiber could take several months, but by 1970 the Corning team thought they had a workable technology. On 11 May 1970, they filed for two patents. The first, US3659915A, covered a fused silica optical waveguide and was awarded to Maurer and Schultz; the second, US3711262A, covered a method of producing optical waveguide fibers and was awarded to Keck and Schultz.

Three months after the filing, Keck recorded the jubilant note in his lab notebook. Alas, it was after 5:00 pm on a Friday, and no one was around to join in his celebration. Keck verified the result with a second test on 21 August 1970. In 2012, the achievement was recognized with an IEEE Milestone as a significant event in electrotechnology.

Of course, there was still much work to be done to make long-distance optical communication commercially viable. Just like Princess Eulalia’s dress, Corning’s titanium-doped fibers weren’t strong enough for practical use. Eventually, the team discovered a better process with germanium-doped fibers, which remain the industry standard to this day. A half-century after the first successful low-loss transmission, fiber-optic cables encircle the globe, transmitting terabits of data every second.

An abridged version of this article appears in the August 2020 print issue as “Weaving Light.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

Cranes Lift More Than Their Weight in the World of Shipping and Construction

Post Syndicated from Vaclav Smil original https://spectrum.ieee.org/tech-history/heroic-failures/cranes-lift-more-than-their-weight-in-the-world-of-shipping-and-construction

A crane can lift a burden, move it sideways, and lower it. These operations, which the Greeks employed when building marble temples 25 centuries ago, are now performed every second somewhere around the world as tall ship-to-shore gantry cranes empty and reload container vessels. The two fundamental differences between ancient cranes and the modern versions involve the materials that make up those cranes and the energies that power them.

The cranes of early Greek antiquity were just wooden jibs with a simple pulley whose mechanical advantage came from winding the ropes by winches. By the third century B.C.E., compound pulleys had come into use; these Roman trispastos provided a nearly threefold mechanical advantage—nearly, because there were losses to friction. Those losses became prohibitive with the addition of more than the five ropes of the pentaspastos.

The most powerful cranes in the Roman and later the medieval periods were powered by men treading inside wheels or by animals turning windlasses in tight circles. Their lifting capacities were generally between 1 and 5 metric tons. Major advances came only during the 19th century.

William Fairbairn’s largest harbor crane, which he developed in the 1850s, was powered by four men turning winches. Its performance was further expanded by fixed and movable pulleys, which gave it more than a 600-fold mechanical advantage, enabling it to lift weights of up to 60 metric tons and move them over a circle with a 32-meter diameter. And by the early 1860s, William Armstrong’s company was producing annually more than 100 hydraulic cranes (transmitting forces through liquid under pressure) for English docks. Steam-powered cranes of the latter half of the 19th century were able to lift more than 100 metric tons in steel mills; some of these cranes hung above the factory floor and moved on roof-mounted rails. During the 1890s, steam engines were supplanted by electric motors.
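
A rough calculation shows what a 600-fold mechanical advantage meant for the men at the winches. The even split among the four workers and the neglect of friction are simplifying assumptions:

```python
G = 9.81                     # gravitational acceleration, m/s^2
load_kg = 60_000             # 60 metric tons, the crane's maximum lift
mechanical_advantage = 600   # figure quoted above
crew = 4                     # men turning the winches

total_input_force_n = load_kg * G / mechanical_advantage  # ~981 N at the winch handles
print(total_input_force_n / crew)  # ~245 N per man -- the weight of roughly 25 kg,
                                   # before accounting for friction losses
```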

The next fundamental advance came after World War II, with Hans Liebherr’s invention of a tower crane that could swing its loads horizontally and could be quickly assembled at a construction site. Its tower top is the fulcrum of a lever whose lifting (jib) arm is balanced by a counterweight, and the crane’s capacity is enhanced by pulleys. Tower cranes were first deployed to reconstruct bombed-out German cities; they diffused rapidly and are now seen on construction projects around the world. Typical lifting capacities are between 12 and 20 metric tons, with the record held by the K10000, made by the Danish firm Krøll Cranes. It can lift up to 120 metric tons at the maximum radius of 100 meters.

Cranes are also mainstays of global commerce: Without gantry quay cranes capable of hoisting from 40 to 80 metric tons, it would take weeks to unload the 20,000 standard-size steel containers that sit in one of today’s enormous container ships. But put cranes together in a tightly coordinated operation involving straddle carriers and trucks, and the job takes less than 72 hours. And 20,568 containers were unloaded from the Madrid Maersk in Antwerp in June 2017 in the record time of 59 hours.

Without these giant cranes, every piece of clothing, every pair of shoes, every TV, every mobile phone imported from Asia to North America or Europe would take longer to arrive and cost more to buy. As for the lifting capacity records, Liebherr now makes a truly gargantuan mobile crane that sits on an 18-wheeler truck and can support 1,200 metric tons. And, not surprisingly given China’s dominance of the industry, the Taisun, the most powerful shipbuilding gantry crane, can hoist 20,000 metric tons. That’s about half again as heavy as the Brooklyn Bridge.

This article appears in the August 2020 print issue as “Cranes (The Machines, Not Birds).”

Today’s Internet Still Relies on an ARPANET-Era Protocol: The Request for Comments

Post Syndicated from Steve Crocker original https://spectrum.ieee.org/tech-history/cyberspace/todays-internet-still-relies-on-an-arpanetera-protocol-the-request-for-comments

Each March, July, and November, we are reminded that the Internet is not quite the mature, stable technology that it seems to be. We rely on the Internet as an essential tool for our economic, social, educational, and political lives. But when the Internet Engineering Task Force meets every four months at an open conference that bounces from continent to continent, more than 1,000 people from around the world gather with change on their minds. Their vision of the global network that all humanity shares is dynamic, evolving, and continuously improving. Their efforts combine with the contributions of myriad others to ensure that the Internet always works but is never done, never complete.

The rapid yet orderly evolution of the Internet is all the more remarkable considering the highly unusual way it happens: without a company, a government, or a board of directors in charge. Nothing about digital communications technology suggests that it should be self-organizing or, for that matter, fundamentally reliable. We enjoy an Internet that is both of those at once because multiple generations of network developers have embraced a principle and a process that have been quite rare in the history of technology. The principle is that the protocols that govern how Internet-connected devices communicate should be open, expandable, and robust. And the process that invents and refines those protocols demands collaboration and a large degree of consensus among all who care to participate.

As someone who was part of the small team that very deliberately adopted a collaborative, consensus-based process to develop protocols for the ARPANET—predecessor to the Internet—I have been pleasantly surprised by how those ideas have persisted and succeeded, even as the physical network has evolved from 50-kilobit-per-second telephone lines in the mid-1960s to the fiber-optic, 5G, and satellite links we enjoy today. Though our team certainly never envisioned unforgeable “privacy passes” or unique identifiers for Internet-connected drones—two proposed protocols discussed at the task force meeting this past March—we did circulate our ideas for the ARPANET as technical memos among a far-flung group of computer scientists, collecting feedback and settling on solutions in much the same way as today, albeit at a much smaller scale.

We called each of those early memos a “Request for Comments” or RFC. Whatever networked device you use today, it almost certainly follows rules laid down in ARPANET RFCs written decades ago, probably including protocols for sending plain ASCII text (RFC 20, issued in 1969), audio or video data streams (RFC 768, 1980), and Post Office Protocol, or POP, email (RFC 918, 1984).

Of course, technology moves on. None of the computer or communication hardware used to build the ARPANET is a crucial part of the Internet today. But there is one technological system that has remained in constant use since 1969: the humble RFC, which we invented to manage change itself in those early days.

The ARPANET was far simpler than the Internet because it was a single network, not a network of networks. But in 1966, when the Pentagon’s Advanced Research Projects Agency (ARPA) started planning the idea of linking together completely different kinds of computers at a dozen or more research universities from California to Massachusetts, the project seemed quite ambitious.

It took two years to create the basic design, which was for an initial subnet that would exchange packets of data over dedicated telephone lines connecting computers at just four sites: the Santa Barbara and Los Angeles campuses of the University of California; Stanford Research Institute (SRI) in Menlo Park, Calif.; and the University of Utah in Salt Lake City. At each site, a router—we called them IMPs, for interface message processors—would chop outgoing blocks of bits into smaller packets. The IMPs would also reassemble incoming packets from distant computers into blocks that the local “host” computer could process.

In the tumultuous summer of 1968, I was a graduate student spending a few months in the computer science department at UCLA, where a close friend of mine from high school, Vint Cerf, was studying. Like many others in the field, I was much more interested in artificial intelligence and computer graphics than in networking. Indeed, some principal investigators outside of the first four sites initially viewed the ARPANET project as an intrusion rather than an opportunity. When ARPA invited each of the four pilot sites to send two people to a kickoff meeting in Santa Barbara at the end of August, Vint and I drove up from UCLA and discovered that all of the other attendees were also graduate students or staff members. No professors had come.

Almost none of us had met, let alone worked with, anyone from the other sites before. But all of us had worked on time-sharing systems, which doled out chunks of processing time on centralized mainframe computers to a series of remotely connected users, and so we all had a sense that interesting things could be done by connecting distant computers and getting their applications to interact with one another. In fact, we expected that general-purpose interconnection of computers would be so useful that it would eventually spread to include essentially every computer. But we certainly did not anticipate how that meeting would launch a collaborative process that would grow this little network into a critical piece of global infrastructure. And we had no inkling how dramatically our collaboration over the next few years would change our lives.

After getting to know each other in Santa Barbara, we organized follow-up meetings at each of the other sites so that we would all have a common view of what this eclectic network would look like. The SDS Sigma 7 computer at UCLA would be connecting to a DEC PDP-10 in Utah, an IBM System/360 in Santa Barbara, and an SDS 940 at SRI.

We would be a distributed team, writing software that would have to work on a diverse collection of machines and operating systems—some of which didn’t even use the same number of bits to represent characters. Co-opting the name of the ARPA-appointed committee of professors that had assigned us to this project, we called ourselves the Network Working Group.

We had only a few months during the autumn of 1968 and the winter of 1969 to complete our theoretical work on the general architecture of the protocols, while we waited for the IMPs to be built in Cambridge, Mass., by the R&D company Bolt, Beranek and Newman (BBN).

Our group was given no concrete requirements for what the network should do. No project manager asked us for regular status reports or set firm milestones. Other than a general assumption that users at each site should be able to remotely log on and transfer files to and from hosts at the other sites, it was up to us to create useful services.

Through our regular meetings, a broader vision emerged, shaped by three ideas. First, we saw the potential for lots of interesting network services. We imagined that different application programs could exchange messages across the network, for example, and even control one another remotely by executing each other’s subroutines. We wanted to explore that potential.

Second, we felt that the network services should be expandable. Time-sharing systems had demonstrated how you could offer a new service merely by writing a program and letting others use it. We felt that the network should have a similar capacity.

Finally, we recognized that the network would be most useful if it were agnostic about the hardware of its hosts. Whatever software we wrote ought to support any machine seamlessly, regardless of its word length, character set, instruction set, or architecture.

We couldn’t translate these ideas into software immediately because BBN had yet to release its specification for the interface to the IMP. But we wanted to get our thoughts down on paper. When the Network Working Group gathered in Utah in March 1969, we dealt out writing assignments to one another. Until we got the network running and created an email protocol, we would have to share our memos through the U.S. Mail. To make the process as easy and efficient as possible, I kept a numbered list of documents in circulation, and authors mailed copies of memos they wrote to everyone else.

Tentatively, but with building excitement, our group of grad students felt our way through the dark together. We didn’t even appoint a leader. Surely at some point the “real experts”—probably from some big-name institution in the Northeast or Washington, D.C.—would take charge. We didn’t want to step on anyone’s toes. So we certainly weren’t going to call our technical memos “standards” or “orders.” Even “proposals” seemed too strong—we just saw them as ideas that we were communicating without prior approval or coordination. Finally, we settled on a term suggested by Bill Duvall, a young programmer at SRI, that emphasized that these documents were part of an ongoing and often preliminary discussion: Request for Comments.

The first batch of RFCs arrived in April 1969. What was arguably one of our best initial ideas was not spelled out in these RFCs but only implicit in them: the agreement to structure protocols in layers, so that one protocol could build on another if desired and so that programmers could write software that tapped into whatever level of the protocol stack worked best for their needs.

We started with the bottom layer, the foundation. I wrote RFC 1, and Duvall wrote RFC 2. Together, these first two memos described basic streaming connections between hosts. We kept this layer simple—easy to define and easy to implement. Interactive terminal connections (like Telnet), file transfer mechanisms (like FTP), and other applications yet to be defined (like email) could then be built on top of it.

That was the plan, anyway. It turned out to be more challenging than expected. We wrestled with, among other things, how to establish connections, how to assign addresses that allowed for multiple connections, how to handle flow control, what to use as the common unit of transmission, and how to enable users to interrupt the remote system. Only after multiple iterations and many, many months of back-and-forth did we finally reach consensus on the details.

Some of the RFCs in that first batch were more administrative, laying out the minimalist conventions we wanted these memos to take, presenting software testing schedules, and tracking the growing mailing list.

Others laid out grand visions that nevertheless failed to gain traction. To my mind, RFC 5 was the most ambitious and interesting of the lot. In it, Jeff Rulifson, then at SRI, introduced a very powerful idea: downloading a small application at the beginning of an interactive session that could mediate the session and speed things up by handling “small” actions locally.

As one very simple example, the downloaded program could let you edit or auto-complete a command on the console before sending it to the remote host. The application would be written in a machine-agnostic language called Decode-Encode Language (DEL). For this to work, every host would have to be able to run a DEL interpreter. But we felt that the language could be kept simple enough for this to be feasible and that it might significantly improve responsiveness for users.

Aside from small bursts of experimentation with DEL, however, the idea didn’t catch on until many years later, when Microsoft released ActiveX and Sun Microsystems produced Java. Today, the technique is at the heart of every online app.

The handful of RFCs we circulated in early 1969 captured our ideas for network protocols, but our work really began in earnest that September and October, when the first IMPs arrived at UCLA and then SRI. Two were enough to start experimenting. Duvall at SRI and Charley Kline at UCLA (who worked in Leonard Kleinrock’s group) dashed off some software to allow a user on the UCLA machine to log on to the machine at SRI. On the evening of 29 October 1969, Charley tried unsuccessfully to do so. After a quick fix to a small glitch in the SRI software, a successful connection was made that evening. The software was adequate for connecting UCLA to SRI, but it wasn’t general enough for all of the machines that would eventually be connected to the ARPANET. More work was needed.

By February 1970, we had a basic host-to-host communication protocol working well enough to present it at that spring’s Joint Computer Conference in Atlantic City. Within a few more months, the protocol was solid enough that we could shift our attention up the stack to two application-layer protocols, Telnet and FTP.

Rather than writing monolithic programs to run on each computer, as some of our bosses had originally envisioned, we stuck to our principle that protocols should build on one another so that the system would remain open and extensible. Designing Telnet and FTP to communicate through the host-to-host protocol guaranteed that they could be updated independently of the base system.

By October 1971, we were ready to put the ARPANET through its paces. Gathering at MIT for a complete shakedown test—we called it “the bake-off”—we checked that each host could log on to every other host. It was a proud moment, as well as a milestone that the Network Working Group had set for itself.

And yet we knew there was still so much to do. The network had grown to connect 23 hosts at 15 sites. A year later, at a big communications conference in Washington, D.C., the ARPANET was demonstrated publicly in a hotel ballroom. Visitors were able to sit down at any of several terminals and log on to computers all over the United States.

Year after year, our group continued to produce RFCs with observations, suggested changes, and possible extensions to the ARPANET and its protocols. Email was among those early additions. It started as a specialized case of file transfer but was later reworked into a separate protocol (Simple Mail Transfer Protocol, or SMTP, RFC 788, issued in 1981). Somewhat to the bemusement of both us and our bosses, email became the dominant use of the ARPANET, the first “killer app.”

Email also affected our own work, of course, as it allowed our group to circulate RFCs faster and to a much wider group of collaborators. A virtuous cycle had begun: Each new feature enabled programmers to create other new features more easily.

Protocol development flourished. The TCP and IP protocols replaced and greatly enhanced the host-to-host protocol and laid the foundation for the Internet. The RFC process led to the adoption of the Domain Name System (DNS, RFC 1035, issued in 1987), the Simple Network Management Protocol (SNMP, RFC 1157, 1990), and the Hypertext Transfer Protocol (HTTP, RFC 1945, 1996).

In time, the development process evolved along with the technology and the growing importance of the Internet in international communication and commerce. In 1979, Vint Cerf, by then a program manager at DARPA, created the Internet Configuration Control Board, which eventually spawned the Internet Engineering Task Force. That task force continues the work that was originally done by the Network Working Group. Its members still discuss problems facing the network, modifications that might be necessary to existing protocols, and new protocols that may be of value. And they still publish protocol specifications as documents with the label “Request for Comments.”

And the core idea of continual improvement by consensus among a coalition of the willing still lives strong in Internet culture. Ideas for new protocols and changes to protocols are now circulated via email lists devoted to specific protocol topics, known as working groups. There are now about a hundred of these groups. When they meet at the triannual conferences, the organizers still don’t take votes: They ask participants to hum if they agree with an idea, then take the sense of the room. Formal decisions follow a subsequent exchange over email.

Drafts of protocol specifications are circulated as “Internet-Drafts,” which are intended for discussion leading to an RFC. One discussion that was recently begun on new network software to enable quantum Internet communication, for example, is recorded in an RFC-like Internet-Draft.

And in an ironic twist, the specification for this or any other new protocol will appear in a Request for Comments only after it has been approved for formal adoption and published. At that point, comments are no longer actually requested.

This article appears in the August 2020 print issue as “The Consensus Protocol.”

About the Author

Steve Crocker is the chief architect of the ARPANET’s Request for Comments (RFC) process and one of the founding members of the Network Working Group, the forerunner of the Internet Engineering Task Force. For many years, he was a board member of ICANN, serving as its vice chairman and then its chairman until 2017.

Solar Storms May Have Hindered SOS During Historic “Red Tent” Expedition

Post Syndicated from Nola Taylor Redd original https://spectrum.ieee.org/tech-talk/tech-history/dawn-of-electronics/solar-storms-sos-red-tent-expedition

In May 1928, a team of explorers returning from the North Pole via airship crashed on the frigid ice. Their attempts to use their portable radio transmitter to call for help failed; although they could hear broadcasts from Rome detailing attempts to rescue them, their calls could not reach a relatively nearby ship. Now, new research suggests that the communication problems may have been caused by radio “dead zones,” made worse by high solar activity that triggered massive solar storms.

High-frequency radios bounce signals off the ionosphere, the upper layers of Earth’s atmosphere most affected by solar radiation. Around the poles, the ionosphere is especially challenging for radio waves to travel through—a fact not yet realized in the 1920s, when scientists were just beginning to understand how radio waves move through the charged air.

“The peculiar morphology of the high latitude ionosphere, with the large variation of the electron density…may cause frequent problems to radio communication,” says Bruno Zolesi, a researcher at the Istituto Nazionale di Geofisica e Vulcanologia in Rome. Zolesi, who studies how the Earth’s atmosphere reacts to particles emitted by the sun, is lead author of a new study probing the tragedy of the airship Italia. He and his colleagues modeled the radio environment of the North Pole at the time of the expedition. They found that space weather—the disturbances that charged particles from the sun create in the environment around planets—likely plagued the expedition, delaying the rescue of the explorers by more than a week and perhaps costing the life of at least one of the team members.

“The space weather conditions are particularly intense in polar regions,” Zolesi says.

The ‘Red Tent’

On 24 May 1928, after just over 20 hours of flight, the ‘Dirigible Italia,’ captained by Italian designer Umberto Nobile, circled the North Pole. Nobile had flown on a 1926 Norwegian expedition aboard an airship he had designed; that was the first vehicle to reach the North Pole. Two years later, he had returned to stake a claim for his native country.

After a brief ceremony at the pole, with winds too strong to attempt a landing on the ice, the vehicle turned south to make the 360-kilometer return trip to the crew’s base on the Svalbard archipelago. But an unknown problem caused the airship to plunge to the Earth, slamming into the ice and shattering the cabin. The crash killed one of the explorers. The balloon, freed from the weight of the carriage, took to the air, carrying six more crew members away, never to be seen again. The nine survivors sheltered beneath a red tent that gave its name to the historical disaster.

Among the supplies left on the ice was the simple high-frequency radio transmitter intended to allow communications between the airship and explorers on the ground. The low-powered radio ran on batteries and transmitted on shortwave wavelengths of 30 to 50 meters (roughly 6 to 10 megahertz).

As the shipwrecked crew settled into their uncomfortable new quarters, radio operator Giuseppe Biagi began sending SOS messages. At the 55th minute of each odd hour, the prearranged time for the Italia to contact the Italian Navy’s ship, Citta di Milano, anchored in King’s Bay, he pled for help, then listened in vain for a response.

Amazingly, while the tiny antenna could not contact the ship, it could pick up radio broadcasts from Rome—with signals originating more than ten times farther away than the point where the navy ship was docked. The explorers listened as news of their disappearance and updates on the rescue operations were broadcast.

It took nine days for someone to finally hear their calls for help. On 3 June 1928, a Russian amateur radio enthusiast, Nicolaj Schmidt, picked up the SOS with his homemade radio receiver in a small village approximately 1900 km from the Red Tent. After nearly 50 days on the ice, the explorers were ultimately rescued, though 15 of the rescuers died in the attempt.

Blocked frequencies

Over the past 90 years, the crew of the Italia have been the subject of several books and articles, as well as a 1969 Soviet-Italian movie starring actor Sean Connery. The continued cultural interest in the event intrigued Zolesi and his colleagues; they hoped to combine their scientific knowledge with cultural history to explain some of the radio communication problems encountered by the survivors.

Along with the charting of previously unexplored regions of Earth’s surface, the first half of the twentieth century was marked by the exploration and investigation of Earth’s ionosphere. Systematic measurements of radio and telegraph transmissions provided the first realistic picture of the ionosphere and a generalized understanding of how radio waves move through charged regions. But in 1928, scientists were only beginning to understand the ionosphere and the space weather that affected how radio waves traveled.

Radio waves beamed skyward at an angle are bent back by the ionosphere and return to the ground some distance away from the transmitter; that distance is known as the skip distance. Between the limited reach of the signal’s ground wave and the point where the sky wave comes back down lies a dead zone, in which the signal cannot be heard. Dead zones, as we now know, vary based on the strength of the broadcast signal and the conditions of the ionosphere.

Zolesi and his colleagues relied on a standard international model of the ionosphere to provide a monthly average picture of the ionosphere over the North Pole. In 1928, however, the limited understanding of the Earth’s atmosphere nearly proved fatal to the team. The 8.9 MHz frequency relied on by the explorers would have fallen in the radio dead zone for locations to the north of the 66° N line of latitude. Both the Red Tent and the Citta di Milano sat at nearly 80° N, while Arkhangelsk, the closest city to Schmidt, sits at 64.5°.
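
A toy “flat mirror” model of the ionosphere shows how such a dead zone can swallow a nearby receiver while a distant one hears the signal clearly. In the sketch below, the reflection height and critical frequency are assumed values chosen purely for illustration; Zolesi’s team used a full international ionospheric model rather than anything this simple:

```python
import math

def skip_distance_km(f_mhz: float, critical_f_mhz: float, layer_height_km: float) -> float:
    """Minimum ground distance at which a sky wave of frequency f returns to Earth,
    using the flat-Earth secant-law approximation: d = 2h * sqrt((f/fc)^2 - 1)."""
    ratio = f_mhz / critical_f_mhz
    if ratio <= 1:
        return 0.0  # below the critical frequency, even a vertical signal is reflected
    return 2 * layer_height_km * math.sqrt(ratio ** 2 - 1)

f = 8.9     # MHz, the explorers' transmitting frequency (from the article)
fc = 4.0    # MHz, assumed vertical-incidence critical frequency of the reflecting layer
h = 300.0   # km, assumed reflection height

print(skip_distance_km(f, fc, h))  # ~1,190 km: the Citta di Milano, a few hundred km away,
                                   # sits inside the dead zone; Schmidt, ~1,900 km away, does not
```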

The researchers also studied sunspot drawings captured by the Mount Wilson Observatory in California between 25 and 31 May 1928. The drawings revealed a significant increase in the number of sunspot groups. Sunspots have been directly linked to increased solar radiation and electron density in the atmosphere, making them an important marker of the ionosphere’s behavior. The researchers also examined historical records from two magnetic observatories, in England and Scotland, to understand how Earth’s geomagnetic field could have played a role. They found that the planet underwent periods of magnetic fluctuations in mid- to late May 1928, peaking on 28 May.

“The space weather conditions were affected by a significant geomagnetic storm during the early days after the shipwreck,” Zolesi says. “These conditions might have severely affected the radio communications of the survivors during the tragedy.”

The researchers concluded that it may have been impossible for the stranded team to reach the Citta di Milano with their low-powered radio, particularly given their unstable antenna and the noisy radio environment courtesy of the local coal mining industry and news agencies. The addition of skip distance issues and space weather conditions not fully understood at the time made communication an even greater challenge.

As the researchers explained in their paper, which was published in the journal Space Weather, the increased activity of the sun would have released more charged particles, which would have interacted with the layers of the upper atmosphere, especially around the northern and southern poles, where the planet’s magnetic field funnels them. The activity would have made it more difficult for radio waves to pass through the atmosphere.

The lessons learned from the tragic Italia expedition could be particularly relevant as humans move to off-planet exploration. While space weather didn’t play a direct role in the airship’s crash, it played a crucial role in blocking the explorers’ calls for help and delaying their rescue. Similar problems could plague expeditions to other bodies in the solar system if the effects of space weather aren’t considered.

“For the moon and Mars, the problems could be completely different because there are different ways radio communication may be used,” Zolesi says. “But many other problems may occur due to the large number of space weather events [affecting those].”

How the Digital Camera Transformed Our Concept of History

Post Syndicated from Allison Marsh original https://spectrum.ieee.org/tech-history/silicon-revolution/how-the-digital-camera-transformed-our-concept-of-history

For an inventor, the main challenge might be technical, but sometimes it’s timing that determines success. Steven Sasson had the technical talent but developed his prototype for an all-digital camera a couple of decades too early.

A CCD from Fairchild was used in Kodak’s first digital camera prototype

It was 1974, and Sasson, a young electrical engineer at Eastman Kodak Co., in Rochester, N.Y., was looking for a use for Fairchild Semiconductor’s new type 201 charge-coupled device. His boss suggested that he try using the 100-by-100-pixel CCD to digitize an image. So Sasson built a digital camera to capture the photo, store it, and then play it back on another device.

Sasson’s camera was a kluge of components. He salvaged the lens and exposure mechanism from a Kodak XL55 movie camera to serve as his camera’s optical piece. The CCD would capture the image, which would then be run through a Motorola analog-to-digital converter, stored temporarily in a DRAM array of a dozen 4,096-bit chips, and then transferred to audio tape running on a portable Memodyne data cassette recorder. The camera weighed 3.6 kilograms, ran on 16 AA batteries, and was about the size of a toaster.
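
A quick back-of-the-envelope calculation shows how those components fit together. The 4-bit-per-pixel quantization below is my assumption—it isn’t stated here—but it makes the buffer and tape numbers hang together:

```python
pixels = 100 * 100            # resolution of the Fairchild type 201 CCD
dram_bits = 12 * 4096         # a dozen 4,096-bit chips = 49,152 bits of buffer
bits_per_pixel = 4            # assumed quantization of the digitized image

frame_bits = pixels * bits_per_pixel
print(frame_bits, dram_bits)  # 40000 49152 -- one frame fits in the DRAM buffer

# Writing that frame to the cassette in about 23 seconds implies a modest data rate:
print(frame_bits / 23)        # ~1,700 bits per second
```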

After working on his camera on and off for a year, Sasson decided on 12 December 1975 that he was ready to take his first picture. Lab technician Joy Marshall agreed to pose. The photo took about 23 seconds to record onto the audio tape. But when Sasson played it back on the lab computer, the image was a mess—although the camera could render shades that were clearly dark or light, anything in between appeared as static. So Marshall’s hair looked okay, but her face was missing. She took one look and said, “Needs work.”

Sasson continued to improve the camera, eventually capturing impressive images of different people and objects around the lab. He and his supervisor, Gareth Lloyd, received U.S. Patent No. 4,131,919 for an electronic still camera in 1978, but the project never went beyond the prototype stage. Sasson estimated that image resolution wouldn’t be competitive with chemical photography until sometime between 1990 and 1995, and that was enough for Kodak to mothball the project.

Digital photography took nearly two decades to take off

While Kodak chose to withdraw from digital photography, other companies, including Sony and Fuji, continued to move ahead. After Sony introduced the Mavica, an analog electronic camera, in 1981, Kodak decided to restart its digital camera effort. During the ’80s and into the ’90s, companies made incremental improvements, releasing products that sold for astronomical prices and found limited audiences. [For a recap of these early efforts, see Tekla S. Perry’s IEEE Spectrum article, “Digital Photography: The Power of Pixels.”]

Then, in 1994 Apple unveiled the QuickTake 100, the first digital camera for under US $1,000. Manufactured by Kodak for Apple, it had a maximum resolution of 640 by 480 pixels and could only store up to eight images at that resolution on its memory card, but it was considered the breakthrough to the consumer market. The following year saw the introduction of Apple’s QuickTake 150, with JPEG image compression, and Casio’s QV10, the first digital camera with a built-in LCD screen. It was also the year that Sasson’s original patent expired.

Digital photography really came into its own as a cultural phenomenon when the Kyocera VisualPhone VP-210, the first cellphone with an embedded camera, debuted in Japan in 1999. Three years later, camera phones were introduced in the United States. The first mobile-phone cameras lacked the resolution and quality of stand-alone digital cameras, often taking distorted, fish-eye photographs. Users didn’t seem to care. Suddenly, their phones were no longer just for talking or texting. They were for capturing and sharing images.

The rise of cameras in phones inevitably led to a decline in stand-alone digital cameras, the sales of which peaked in 2012. Sadly, Kodak’s early advantage in digital photography did not prevent the company’s eventual bankruptcy, as Mark Harris recounts in his 2014 Spectrum article “The Lowballing of Kodak’s Patent Portfolio.” Although there is still a market for professional and single-lens reflex cameras, most people now rely on their smartphones for taking photographs—and so much more.

How a technology can change the course of history

The transformational nature of Sasson’s invention can’t be overstated. Experts estimate that people will take more than 1.4 trillion photographs in 2020. Compare that to 1995, the year Sasson’s patent expired. That spring, a group of historians gathered to study the results of a survey of Americans’ feelings about the past. A quarter century on, two of the survey questions stand out:

  • During the last 12 months, have you looked at photographs with family or friends?

  • During the last 12 months, have you taken any photographs or videos to preserve memories?

In the nationwide survey of nearly 1,500 people, 91 percent of respondents said they’d looked at photographs with family or friends and 83 percent said they’d taken a photograph—in the past year. If the survey were repeated today, those numbers would almost certainly be even higher. I know I’ve snapped dozens of pictures in the last week alone, most of them of my ridiculously cute puppy. Thanks to the ubiquity of high-quality smartphone cameras, cheap digital storage, and social media, we’re all taking and sharing photos all the time—last night’s Instagram-worthy dessert; a selfie with your bestie; the spot where you parked your car.

So are all of these captured moments, these personal memories, a part of history? That depends on how you define history.

For Roy Rosenzweig and David Thelen, two of the historians who led the 1995 survey, the very idea of history was in flux. At the time, pundits were criticizing Americans’ ignorance of past events, and professional historians were wringing their hands about the public’s historical illiteracy.

Instead of focusing on what people didn’t know, Rosenzweig and Thelen set out to quantify how people thought about the past. They published their results in the 1998 book The Presence of the Past: Popular Uses of History in American Life (Columbia University Press). This groundbreaking study was heralded by historians, those working within academic settings as well as those working in museums and other public-facing institutions, because it helped them to think about the public’s understanding of their field.

Little did Rosenzweig and Thelen know that the entire discipline of history was about to be disrupted by a whole host of technologies. The digital camera was just the beginning.

For example, a little over a third of the survey’s respondents said they had researched their family history or worked on a family tree. That kind of activity got a whole lot easier the following year, when Paul Brent Allen and Dan Taggart launched Ancestry.com, which is now one of the largest online genealogical databases, with 3 million subscribers and approximately 10 billion records. Researching your family tree no longer means poring over documents in the local library.

Similarly, when the survey was conducted, the Human Genome Project was still years away from mapping our DNA. Today, at-home DNA kits make it simple for anyone to order up their genetic profile. In the process, family secrets and unknown branches on those family trees are revealed, complicating the histories that families might tell about themselves.

Finally, the survey asked whether respondents had watched a movie or television show about history in the last year; four-fifths responded that they had. The survey was conducted shortly before the 1 January 1995 launch of the History Channel, the cable channel that opened the floodgates on history-themed TV. These days, streaming services let people binge-watch historical documentaries and dramas on demand.

Today, people aren’t just watching history. They’re recording it and sharing it in real time. Recall that Sasson’s MacGyvered digital camera included parts from a movie camera. In the early 2000s, cellphones with digital video recording emerged in Japan and South Korea and then spread to the rest of the world. As with the early still cameras, the initial quality of the video was poor, and memory limits kept the video clips short. But by the mid-2000s, digital video had become a standard feature on cellphones.

As these technologies have become commonplace, digital photos and videos are revealing injustice and brutality in stark and powerful ways. In turn, they are rewriting the official narrative of history. A short video clip taken by a bystander with a mobile phone can now carry more authority than a government report.

Maybe the best way to think about Rosenzweig and Thelen’s survey is that it captured a snapshot of public habits, just as those habits were about to change irrevocably.

Digital cameras also changed how historians conduct their research

For professional historians, the advent of digital photography has had other important implications. Lately, there’s been a lot of discussion about how digital cameras in general, and smartphones in particular, have changed the practice of historical research. At the 2020 annual meeting of the American Historical Association, for instance, Ian Milligan, an associate professor at the University of Waterloo, in Canada, gave a talk in which he revealed that 96 percent of historians have no formal training in digital photography and yet the vast majority use digital photographs extensively in their work. About 40 percent said they took more than 2,000 digital photographs of archival material in their latest project. W. Patrick McCray of the University of California, Santa Barbara, told a writer with The Atlantic that he’d accumulated 77 gigabytes of digitized documents and imagery for his latest book project [an aspect of which he recently wrote about for Spectrum].

So let’s recap: In the last 45 years, Sasson took his first digital picture, digital cameras were brought into the mainstream and then embedded into another pivotal technology—the cellphone and then the smartphone—and people began taking photos with abandon, for any and every reason. And in the last 25 years, historians went from thinking that looking at a photograph within the past year was a significant marker of engagement with the past to themselves compiling gigabytes of archival images in pursuit of their research.

So are those 1.4 trillion digital photographs that we’ll collectively take this year a part of history? I think it helps to consider how they fit into the overall historical narrative. A century ago, nobody, not even a science fiction writer, predicted that someone would take a photo of a parking lot to remember where they’d left their car. A century from now, who knows if people will still be doing the same thing. In that sense, even the most mundane digital photograph can serve as both a personal memory and a piece of the historical record.

An abridged version of this article appears in the July 2020 print issue as “Born Digital.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

NASA’s Original Laptop: The GRiD Compass

Post Syndicated from Allison Marsh original https://spectrum.ieee.org/tech-history/silicon-revolution/nasas-original-laptop-the-grid-compass

The year 1982 was a notable one in personal computing. The BBC Micro was introduced in the United Kingdom, as was the Sinclair ZX Spectrum. The Commodore 64 came to market in the United States. And then there was the GRiD Compass.

The GRiD Compass Was the First Laptop to Feature a Clamshell Design

The Graphical Retrieval Information Display (GRiD) Compass had a unique clamshell design, in which the monitor folded down over the keyboard. Its 21.6-centimeter plasma screen could display 25 lines of up to 128 characters in a high-contrast amber that the company claimed could be “viewed from any angle and under any lighting conditions.”

By today’s standards, the GRiD was a bulky beast of a machine. About the size of a large three-ring binder, it weighed 4.5 kilograms (10 pounds). But compared with, say, the Osborne 1 or the Compaq Portable, both of which had a heavier CRT screen and tipped the scales at 10.7 kg and 13 kg, respectively, the Compass was feather light. Some people call the Compass the first truly portable laptop computer.

The computer had 384 kilobytes of nonvolatile bubble memory, a magnetic storage system that showed promise in the 1970s and ’80s. With no rotating disks or moving parts, solid-state bubble memory worked well in settings where a laptop might, say, take a tumble. Indeed, sales representatives claimed they would drop the computer in front of prospective buyers to show off its durability.

But bubble memory also tends to run hot, so the exterior case was made of a magnesium alloy that doubled as a heat sink. The metal case added to the laptop’s reputation for ruggedness. The Compass also included a 16-bit Intel 8086 microprocessor and up to 512 KB of RAM. Floppy drives and hard disks were available as peripherals.

With a price tag of US $8,150 (about $23,000 today), the Compass wasn’t intended for consumers but rather for business executives. Accordingly, it came preloaded with a text editor, a spreadsheet, a plotter, a terminal emulator, a database management system, and other business software. The built-in 1200-baud modem was designed to connect to a central computer at the GRiD Systems’ headquarters in Mountain View, Calif., from which additional applications could be downloaded.

The GRiD’s Sturdy Design Made It Ideal for Space

The rugged laptop soon found a home with NASA and the U.S. military, both of which valued its sturdy design and didn’t blink at the cost.

The first GRiD Compass launched into space on 28 November 1983 aboard the space shuttle Columbia. The hardware adaptations for microgravity were relatively minor: a new cord to plug into the shuttle’s power supply and a small fan to compensate for the lack of convective cooling in space.

The software modifications were more significant. Special graphical software displayed the orbiter’s position relative to Earth and the line of daylight/darkness. Astronauts used the feature to plan upcoming photo shoots of specific locations. The GRiD also featured a backup reentry program, just in case all of the IBMs at Mission Control failed.
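The actual SPOC code was never published, but the heart of a day/night display like the one described above can be illustrated with a short, purely hypothetical sketch. It uses a low-precision solar-position model to decide whether the point beneath the orbiter is sunlit; every function and constant here is an assumption for illustration, not the flight software.

    # Hypothetical sketch only -- not the GRiD/SPOC flight software, which was never published.
    # Decides whether the ground point beneath the orbiter is in daylight, using a
    # low-precision solar-position model (no equation of time, no atmospheric refraction).
    import math

    def subsolar_point(day_of_year: int, utc_hours: float) -> tuple:
        """Approximate latitude and longitude (degrees) where the Sun is directly overhead."""
        # Solar declination swings roughly sinusoidally between +/-23.44 degrees over the year.
        decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
        # The subsolar longitude sweeps 15 degrees per hour, sitting near 0 degrees at 12:00 UTC.
        lon = (12.0 - utc_hours) * 15.0
        return decl, ((lon + 180.0) % 360.0) - 180.0  # wrap longitude to [-180, 180]

    def in_daylight(lat_deg: float, lon_deg: float, day_of_year: int, utc_hours: float) -> bool:
        """True when the angular distance to the subsolar point is under 90 degrees."""
        s_lat, s_lon = subsolar_point(day_of_year, utc_hours)
        lat, lon, s_lat, s_lon = map(math.radians, (lat_deg, lon_deg, s_lat, s_lon))
        # Spherical law of cosines: cosine of the great-circle angle between the two points.
        cos_angle = (math.sin(lat) * math.sin(s_lat)
                     + math.cos(lat) * math.cos(s_lat) * math.cos(lon - s_lon))
        return cos_angle > 0.0

    # Example: is a ground-track point near Cape Canaveral (28.5 N, 80.6 W) sunlit
    # at 15:00 UTC on day 332 of the year (late November)?
    print(in_daylight(28.5, -80.6, 332, 15.0))  # prints True

Sweeping such a test across a grid of latitudes and longitudes yields the line of daylight/darkness that could be overlaid on the orbiter’s ground track.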

For its maiden voyage, the laptop received the code name SPOC (short for Shuttle Portable On-Board Computer). Neither NASA nor GRiD Systems officially connected the acronym to a certain pointy-eared Vulcan on Star Trek. Even so, the GRiD Compass became a Hollywood staple whenever a character needed to show off wealth and tech savvy, featuring prominently in Aliens, Wall Street, and Pulp Fiction.

The Compass/SPOC remained a regular on shuttle missions into the early 1990s. NASA’s trust in the computer was not misplaced: Reportedly, the GRiD flying aboard Challenger survived the shuttle’s breakup in January 1986.

The GRiDPad 1900 Was a First in Tablet Computing

John Ellenby and Glenn Edens, both from Xerox PARC, and David Paulson had founded GRiD Systems Corp. in 1979. The company went public in 1981, and the following year it launched the GRiD Compass.

Not a company to rest on its laurels, GRiD continued to be a pioneer in portable computers, especially thanks to the work of Jeff Hawkins. He joined the company in 1982, left for school in 1986, and returned as vice president of research. At GRiD, Hawkins led the development of a pen- or stylus-based computer. In 1989, this work culminated in the GRiDPad 1900, often regarded as the first commercially successful tablet computer. Hawkins went on to invent the PalmPilot and Treo, though not at GRiD.

As the personal computer industry rapidly consolidated, GRiD Systems was bought by Tandy Corp. in 1988 and run as a wholly owned subsidiary. Five years later, GRiD was bought again, this time by Irvine, Calif.–based AST Research, which was itself acquired by Samsung in 1996.

In 2006, the Computer History Museum sponsored a roundtable discussion with key members of the original GRiD engineering team: Glenn Edens, Carol Hankins, Craig Mathias, and Dave Paulson, moderated by New York Times journalist (and former GRiD employee) John Markoff.

How Do You Preserve an Old Computer?

Although the GRiD Compass’s tenure as a computer product was relatively short, its life as a historic artifact goes on. To be added to a museum collection, an object must be pioneering, iconic, or historic. The GRiD Compass is all three, which is how the computer found its way into the permanent holdings of not one, but two separate Smithsonian museums.

One Compass was acquired by the National Air and Space Museum in 1989. No surprise there, seeing as how the Compass was the first laptop used in space aboard a NASA mission. Seven years later, curators at the Cooper Hewitt, Smithsonian Design Museum added one to their collections in recognition of the innovative clamshell design.

Credit for the GRiD Compass’s iconic look and feel goes to the British designer Bill Moggridge. His firm was initially tasked with designing the exterior case for the new computer. After taking a prototype home and trying to use it, Moggridge realized he needed to create a design that unified the user, the object, and the software. It was a key moment in the development of computer-human interactive design. In 2010, Moggridge became the fourth director of the Cooper Hewitt and its first director without a background as a museum professional.

Considering the importance Moggridge placed on interactive design, it’s fitting that preservation of the GRiD laptop was overseen by the museum’s Digital Collection Materials Project. The project, launched in 2017, aims to develop standards, practices, and strategies for preserving digital materials, including personal electronics, computers, mobile devices, media players, and born-digital products.

Keeping an electronic device in working order can be extremely challenging in an age of planned obsolescence. Cooper Hewitt brought in Ben Fino-Radin, a media archeologist and digital conservator, to help resurrect its moribund GRiD. Fino-Radin in turn reached out to Ian Finder, a passionate collector of vintage computers with particular expertise in restoring GRiD Compass laptops. Using bubble memory from Finder’s personal collection, curators at Cooper Hewitt were able to boot the museum’s GRiD and to document the software for their research collections.

Even as museums strive to preserve their old GRiDs, new GRiDs are being born. Back in 1993, former GRiD employees in the United Kingdom formed GRiD Defence Systems during a management buyout. The London-based company continues the GRiD tradition of building rugged military computers. The company’s GRiDCASE 1510 Rugged Laptop, a portable device with a 14.2-cm backlit LED display, looks remarkably like a smaller version of the Compass circa 1982. I guess when you have a winning combination, you stick with it.

An abridged version of this article appears in the June 2020 print issue as “The First Laptop in Orbit.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.