Tag Archives: tech-history/silicon-revolution

How the IBM PC Won, Then Lost, the Personal Computer Market

Post Syndicated from James W. Cortada original https://spectrum.ieee.org/tech-history/silicon-revolution/how-the-ibm-pc-won-then-lost-the-personal-computer-market

On 12 August 1981, at the Waldorf Astoria Hotel in midtown Manhattan, IBM unveiled the company’s entrant into the nascent personal computer market: the IBM PC. With that, the preeminent U.S. computer maker launched another revolution in computing, though few realized it at the time. Press coverage of the announcement was lukewarm.

Soon, though, the world began embracing little computers by the millions, with IBM dominating those sales. The personal computer vastly expanded the number of people and organizations that used computers. Other companies, including Apple and Tandy Corp., were already making personal computers, but no other machine carried the revered IBM name. IBM’s essential contributions were to position the technology as suitable for wide use and to set a technology standard. Rivals were compelled to meet a demand that they had all grossly underestimated. As such, IBM had a greater effect on the PC’s acceptance than did Apple, Compaq, Dell, and even Microsoft.

Despite this initial dominance, by 1986 the IBM PC was becoming an also-ran. And in 2005, the Chinese computer maker Lenovo Group purchased IBM’s PC business.

What occurred between IBM’s wildly successful entry into the personal computer business and its inglorious exit nearly a quarter century later? From IBM’s perspective, a new and vast market quickly turned into an ugly battleground with many rivals. The company stumbled badly, its bureaucratic approach to product development no match for a fast-moving field. Over time, it became clear that the sad story of the IBM PC mirrored the decline of the company.

At the outset, though, things looked rosy.

How the personal computer revolution was launched

IBM did not invent the desktop computer. Most historians agree that the personal computer revolution began in April 1977 at the first West Coast Computer Faire. Here, Steve Jobs introduced the Apple II, with a price tag of US $1,298 (about $5,800 today), while rival Commodore unveiled its PET. Both machines were designed for consumers, not just hobbyists or the technically skilled. In August, Tandy launched its TRS-80, which came with games. Indeed, software for these new machines was largely limited to games and a few programming tools.

IBM’s large commercial customers faced the implications of this emerging technology: Who would maintain the equipment and its software? How secure was the data in these machines? And what was IBM’s position: Should personal computers be taken seriously or not? By 1980, customers in many industries were telling their IBM contacts to enter the fray. At IBM plants in San Diego, Endicott, N.Y., and Poughkeepsie, N.Y., engineers were forming hobby clubs to learn about the new machines.

The logical place to build a small computer was inside IBM’s General Products Division, which focused on minicomputers and the successful typewriter business. But the division had no budget or people to allocate to another machine. IBM CEO Frank T. Cary decided to fund the PC’s development out of his own budget. He turned to William “Bill” Lowe, who had given some thought to the design of such a machine. Lowe reported directly to Cary, bypassing IBM’s complex product-development bureaucracy, which had grown massively during the creation of the System/360 and S/370. The normal process to get a new product to market took four or five years, but the incipient PC market was moving too quickly for that.

Cary asked Lowe to come back in several months with a plan for developing a machine within a year and to find 40 people from across IBM and relocate them to Boca Raton, Fla.

Lowe’s plan for the PC called for buying existing components and software and bolting them together into a package aimed at the consumer market. There would be no homegrown operating system or IBM-made chips. The product also had to attract corporate customers, although it was unclear how many of those there would be. Mainframe salesmen could be expected to ignore or oppose the PC, so the project was kept reasonably secret.

A friend of Lowe’s, Jack Sams, was a software engineer who vaguely knew Bill Gates, and he reached out to the 24-year-old Gates to see if he had an operating system that might work for the new PC. Gates had dropped out of Harvard to get into the microcomputer business, and he ran a 31-person company called Microsoft. While he thought of programming as an intellectual exercise, Gates also had a sharp eye for business.

In July 1980, the IBMers met with Gates but were not greatly impressed, so they turned instead to Gary Kildall, president of Digital Research, the most recognized microcomputer software company at the time. Kildall then made what may have been the business error of the century. He blew off the blue-suiters so that he could fly his airplane, leaving his wife—a lawyer—to deal with them. The meeting went nowhere, with too much haggling over nondisclosure agreements, and the IBMers left. Gates was now their only option, and he took the IBMers seriously.

That August, Lowe presented his plan to Cary and the rest of the management committee at IBM headquarters in Armonk, N.Y. The idea of putting together a PC outside of IBM’s development process disturbed some committee members. The committee knew that IBM had previously failed with its own tiny machines—specifically the Datamaster and the 5110—but Lowe was offering an alternative strategy and already had Cary’s support. They approved Lowe’s plan.

Lowe negotiated terms, volumes, and delivery dates with suppliers, including Gates. To meet IBM’s deadline, Gates concluded that Microsoft could not write an operating system from scratch, so he acquired one called QDOS (“quick and dirty operating system”) that could be adapted. IBM wanted Microsoft, not the team in Boca Raton, to have responsibility for making the operating system work. That meant Microsoft retained the rights to the operating system. Microsoft paid $75,000 for QDOS. By the early 1990s, that investment had boosted the firm’s worth to $27 billion. IBM’s strategic error in not retaining rights to the operating system went far beyond that $27 billion; it meant that Microsoft would set the standards for the PC operating system. In fairness to IBM, nobody thought the PC business would become so big. Gates said later that he had been “lucky.”

Back at Boca Raton, the pieces started coming together. The team designed the new product, lined up suppliers, and were ready to introduce the IBM Personal Computer just a year after gaining the management committee’s approval. How was IBM able to do this?

Much credit goes to Philip Donald Estridge. An engineering manager known for bucking company norms, Estridge turned out to be the perfect choice to ram this project through. He wouldn’t show up at product-development review meetings or return phone calls. He made decisions quickly and told Lowe and Cary about them later. He staffed up with like-minded rebels, later nicknamed the “Dirty Dozen.” In the fall of 1980, Lowe moved on to a new job at IBM, so Estridge was now in charge. He obtained 8088 microprocessors from Intel, made sure Microsoft kept the development of DOS secret, and quashed rumors that IBM was building a system. The Boca Raton team put in long hours and built a beautiful machine.

The IBM PC was a near-instant success

The big day came on 12 August 1981. Estridge wondered if anyone would show up at the Waldorf Astoria. After all, the PC was a small product, not in IBM’s traditional space. Some 100 people crowded into the hotel. Estridge described the PC, had one there to demonstrate, and answered a few questions.

Meanwhile, IBM salesmen had received packets of materials the previous day. On 12 August, branch managers introduced the PC to employees and then met with customers to do the same. Salesmen weren’t given sample machines. Along with their customers, they collectively scratched their heads, wondering how they could use the new computer. For most customers and IBMers, it was a new world.

Nobody predicted what would happen next. The first shipments began in October 1981, and in its first year, the IBM PC generated $1 billion in revenue, far exceeding company projections. IBM’s original manufacturing forecasts called for 1 million machines over three years, with 200,000 the first year. In reality, customers were buying 200,000 PCs per month by the second year.

Those who ordered the first PCs got what looked to be something pretty clever. It could run various software packages and a nice collection of commercial and consumer tools, including the accessible BASIC programming language. Whimsical ads for the PC starred Charlie Chaplin’s Little Tramp and carried the tag line “A Tool for Modern Times.” People could buy the machines at ComputerLand, a popular retail chain in the United States. For some corporate customers, the fact that IBM now had a personal computing product meant that these little machines were not some crazy geek-hippie fad but in fact a new class of serious computing. Corporate users who did not want to rely on their company’s centralized data centers began turning to these new machines.

Estridge and his team were busy acquiring games and business software for the PC. They lined up Lotus Development Corp. to provide its 1-2-3 spreadsheet package; other software products followed from multiple suppliers. As developers began writing software for the IBM PC, they embraced DOS as the industry standard. IBM’s competitors, too, increasingly had to use DOS and Intel chips. And Cary’s decision to avoid the product-development bureaucracy had paid off handsomely.

IBM couldn’t keep up with rivals in the PC market

Encouraged by their success, the IBMers in Boca Raton released a sequel to the PC in early 1983, called the XT. In 1984 came the XT’s successor, the AT. That machine would be the last PC designed outside IBM’s development process. John Opel, who had succeeded Cary as CEO in January 1981, endorsed reining in the PC business. During his tenure, Opel remained out of touch with the PC and did not fully understand the significance of the technology.

We could conclude that Opel did not need to know much about the PC because business overall was outstanding. IBM’s revenue reached $29 billion in 1981 and climbed to $46 billion in 1984. The company was routinely ranked as one of the best run. IBM’s stock more than doubled, making IBM the most valuable company in the world.

The media only wanted to talk about the PC. On its 3 January 1983 cover, Time featured the personal computer, rather than its usual Man of the Year. IBM customers, too, were falling in love with the new machines, ignoring IBM’s other lines of business—mainframes, minicomputers, and typewriters.

On 1 August 1983, Estridge’s skunkworks was redesignated the Entry Systems Division (ESD), which meant that the PC business was now ensnared in the bureaucracy that Cary had bypassed. Estridge’s 4,000-person group mushroomed to 10,000. He protested that Corporate had transferred thousands of programmers to him who knew nothing about PCs. PC programmers needed the same kind of machine-software knowledge that mainframe programmers in the 1950s had; both had to figure out how to cram software into small memories to do useful work. By the 1970s, mainframe programmers could not think small enough.

Estridge faced incessant calls to report on his activities in Armonk, diverting his attention away from the PC business and slowing development of new products even as rivals began to speed up introduction of their own offerings. Nevertheless, in August 1984, his group managed to release the AT, which had been designed before the reorganization.

But IBM blundered with its first product for the home computing market: the PCjr (pronounced “PC junior”). The company had no experience with this audience, and as soon as IBM salesmen and prospective customers got a glimpse of the machine, they knew something had gone terribly wrong.

Unlike the original PC, the XT, and the AT, the PCjr was the sorry product of IBM’s multilayered development and review process. Rumors inside IBM suggested that the company had spent $250 million to develop it. The computer’s tiny keyboard was scornfully nicknamed the “Chiclet keyboard.” Much of the PCjr’s software, peripheral equipment, memory boards, and other extensions were incompatible with other IBM PCs. Salesmen ignored it, not wanting to make a bad recommendation to customers. IBM lowered the PCjr’s price, added functions, and tried to persuade dealers to promote it, to no avail. ESD even offered the machines to employees as potential Christmas presents for a few hundred dollars, but that ploy also failed.

IBM’s relations with its two most important vendors, Intel and Microsoft, remained contentious. Both Microsoft and Intel made a fortune selling IBM’s competitors the same products they sold to IBM. Rivals figured out that IBM had set the de facto technical standards for PCs, so they developed compatible versions they could bring to market more quickly and sell for less. Vendors like AT&T, Digital Equipment Corp., and Wang Laboratories failed to appreciate that insight about standards, and they suffered. (The notable exception was Apple, which set its own standards and retained its small market share for years.) As the prices of PC clones kept falling, the machines grew more powerful—Moore’s Law at work. By the mid-1980s, IBM was reacting to the market rather than setting the pace.

Estridge was not getting along with senior executives at IBM, particularly those on the mainframe side of the house. In early 1985, Opel made Bill Lowe head of the PC business.

Then disaster struck. On 2 August 1985, Estridge, his wife, Mary Ann, and a handful of IBM salesmen from Los Angeles boarded Delta Flight 191 headed to Dallas. Over the Dallas airport, 700 feet off the ground, a strong downdraft slammed the plane to the ground, killing 137 people including the Estridges and all but one of the other IBM employees. IBMers were in shock. Despite his troubles with senior management, Estridge had been popular and highly respected. Not since the death of Thomas J. Watson Sr. nearly 30 years earlier had employees been so stunned by a death within IBM. Hundreds of employees attended the Estridges’ funeral. The magic of the PC may have died before the airplane crash, but the tragedy at Dallas confirmed it.

More missteps doomed the IBM PC and its OS/2 operating system

While IBM continued to sell millions of personal computers, over time the profit on its PC business declined. IBM’s share of the PC market shrank from roughly 80 percent in 1982–1983 to 20 percent a decade later.

Meanwhile, IBM was collaborating with Microsoft on a new operating system, OS/2, even as Microsoft was working on Windows, its replacement for DOS. The two companies haggled over royalty payments and how to work on OS/2. By 1987, IBM had over a thousand programmers assigned to the project and to developing telecommunications, costing an estimated $125 million a year.

OS/2 finally came out in late 1987, priced at $340, plus $2,000 for additional memory to run it. By then, Windows had been on the market for two years and was proving hugely popular. Application software for OS/2 took another year to come to market, and even then the new operating system didn’t catch on. As the business writer Paul Carroll put it, OS/2 began to acquire “the smell of failure.”

Known to few outside of IBM and Microsoft, Gates had offered to sell IBM a portion of his company in mid-1986. It was already clear that Microsoft was going to become one of the most successful firms in the industry. But Lowe declined the offer, making what was perhaps the second-biggest mistake in IBM’s history up to then, following his first one of not insisting on proprietary rights to Microsoft’s DOS or the Intel chip used in the PC. The purchase price probably would have been around $100 million in 1986, an amount that by 1993 would have yielded a return of $3 billion and in subsequent decades orders of magnitude more.

In fairness to Lowe, he was nervous that such an acquisition might trigger antitrust concerns at the U.S. Department of Justice. But the Reagan administration was not inclined to tamper with the affairs of large multinational corporations.

More to the point, Lowe, Opel, and other senior executives did not understand the PC market. Lowe believed that PCs, and especially their software, should undergo the same rigorous testing as the rest of the company’s products. That meant not introducing software until it was as close to bugproof as possible. All other PC software developers valued speed to market over quality—better to get something out sooner that worked pretty well, let users identify problems, and then fix them quickly. Lowe was aghast at that strategy.

Salesmen came forward with proposals to sell PCs in bulk at discounted prices but got pushback. The sales team I managed arranged to sell 6,000 PCs to American Standard, a maker of bathroom fixtures. But it took more than a year and scores of meetings for IBM’s contract and legal teams to authorize the terms.

Lowe’s team was also slow to embrace the faster chips that Intel was producing, most notably the 80386. The new Intel chip had just the right speed and functionality for the next generation of computers. Even as rivals moved to the 386, IBM remained wedded to the slower 286 chip.

As the PC market matured, the gold rush of the late 1970s and early 1980s gave way to a more stable market. A large software industry grew up. Customers found the PC clones, software, and networking tools to be just as good as IBM’s products. The cost of performing a calculation on a PC dropped so much that it was often significantly cheaper to use a little machine than a mainframe. Corporate customers were beginning to understand that economic reality.

Opel retired in 1986, and John F. Akers inherited the company’s sagging fortunes. Akers recognized that the mainframe business had entered a long, slow decline, the PC business had gone into a more rapid fall, and the move to billable services was just beginning. He decided to trim the ranks by offering an early retirement program. But too many employees took the buyout, including too many of the company’s best and brightest.

In 1995, IBM CEO Louis V. Gerstner Jr. finally pulled the plug on OS/2. It did not matter that Microsoft’s software was notorious for having bugs or that IBM’s was far cleaner. As Gerstner noted in his 2002 book, “What my colleagues seemed unwilling or unable to accept was that the war was already over and was a resounding defeat—90 percent market share for Windows to OS/2’s 5 percent or 6 percent.”

The end of the IBM PC

IBM soldiered on with the PC until Samuel J. Palmisano, who once worked in the PC organization, became CEO in 2002. IBM was still the third-largest producer of personal computers, including laptops, but PCs had become a commodity business, and the company struggled to turn a profit from those products. Palmisano and his senior executives had the courage to set aside any emotional attachments to their “Tool for Modern Times” and end it.

In December 2004, IBM announced it was selling its PC business to Lenovo for $1.75 billion. As the New York Times explained, the sale “signals a recognition by IBM, the prototypical American multinational, that its own future lies even further up the economic ladder, in technology services and consulting, in software and in the larger computers that power corporate networks and the Internet. All are businesses far more profitable for IBM than its personal computer unit.”

IBM already owned 19 percent of Lenovo, a stake it would keep for three years under the deal, with an option to acquire more shares. The head of Lenovo’s PC business would be IBM senior vice president Stephen M. Ward Jr., while his new boss would be Lenovo’s chairman, Yang Yuanqing. Lenovo got a five-year license to use the IBM brand on the popular ThinkPad laptops and PCs, and to hire IBM employees to support existing customers in the West, where Lenovo was virtually unknown. IBM would continue to design new laptops for Lenovo in Raleigh, N.C. Some 4,000 IBMers already working in China would switch to Lenovo, along with 6,000 in the United States.

The deal ensured that IBM’s global customers had familiar support while providing a stable flow of maintenance revenue to IBM for five years. For Lenovo, the deal provided a high-profile partner. Palmisano wanted to expand IBM’s IT services business to Chinese corporations and government agencies. Now the company was partnered with China’s largest computer manufacturer, which controlled 27 percent of the Chinese PC market. The deal was one of the most creative in IBM’s history. And yet it remained for many IBMers a sad close to the quarter-century chapter of the PC.

This article is based on excerpts from IBM: The Rise and Fall and Reinvention of a Global Icon (MIT Press, 2019).

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the August 2021 print issue as “A Tool for Modern Times.”

Meet Catfish Charlie, the CIA’s Robotic Spy

Post Syndicated from Allison Marsh original https://spectrum.ieee.org/tech-history/silicon-revolution/meet-catfish-charlie-the-cias-robotic-spy

In 1961, Tom Rogers of the Leo Burnett Agency created Charlie the Tuna, a jive-talking cartoon mascot and spokesfish for the StarKist brand. The popular ad campaign ran for several decades, and its catchphrase “Sorry, Charlie” quickly hooked itself in the American lexicon.

When the CIA’s Office of Advanced Technologies and Programs started conducting some fish-focused research in the 1990s, Charlie must have seemed like the perfect code name. Except that the CIA’s Charlie was a catfish. And it was a robot.

More precisely, Charlie was an unmanned underwater vehicle (UUV) designed to surreptitiously collect water samples. Its handler controlled the fish via a line-of-sight radio handset. Not much has been revealed about the fish’s construction except that its body contained a pressure hull, ballast system, and communications system, while its tail housed the propulsion. At 61 centimeters long, Charlie wouldn’t set any biggest-fish records. (Some species of catfish can grow to 2 meters.) Whether Charlie reeled in any useful intel is unknown, as details of its missions are still classified.

For exploring watery environments, nothing beats a robot

The CIA was far from alone in its pursuit of UUVs, nor was it the first agency to pursue them. In the United States, such research began in earnest in the 1950s, with the U.S. Navy’s funding of technology for deep-sea rescue and salvage operations. Other projects looked at sea drones for surveillance and scientific data collection.

Aaron Marburg, a principal electrical and computer engineer who works on UUVs at the University of Washington’s Applied Physics Laboratory, notes that the world’s oceans are largely off-limits to crewed vessels. “The nature of the oceans is that we can only go there with robots,” he told me in a recent Zoom call. To explore those uncharted regions, he said, “we are forced to solve the technical problems and make the robots work.”

One of the earliest UUVs happens to sit in the hall outside Marburg’s office: the Self-Propelled Underwater Research Vehicle, or SPURV, developed at the applied physics lab beginning in the late ’50s. SPURV’s original purpose was to gather data on the physical properties of the sea, in particular temperature and sound velocity. Unlike Charlie, with its fishy exterior, SPURV had a utilitarian torpedo shape that was more in line with its mission. Just over 3 meters long, it could dive to 3,600 meters, had a top speed of 2.5 m/s, and operated for 5.5 hours on a battery pack. Data was recorded to magnetic tape and later transferred to a photosensitive paper strip recorder or other computer-compatible media and then plotted using an IBM 1130.

Over time, SPURV’s instrumentation grew more capable, and the scope of the project expanded. In one study, for example, SPURV carried a fluorometer to measure the dispersion of dye in the water, to support wake studies. The project was so successful that additional SPURVs were developed, eventually completing nearly 400 missions by the time it ended in 1979.

Working on underwater robots, Marburg says, means balancing technical risks and mission objectives against constraints on funding and other resources. Support for purely speculative research in this area is rare. The goal, then, is to build UUVs that are simple, effective, and reliable. “No one wants to write a report to their funders saying, ‘Sorry, the batteries died, and we lost our million-dollar robot fish in a current,’ ” Marburg says.

A robot fish called SoFi

Since SPURV, there have been many other unmanned underwater vehicles, of various shapes and sizes and for various missions, developed in the United States and elsewhere. UUVs and their autonomous cousins, AUVs, are now routinely used for scientific research, education, and surveillance.

At least a few of these robots have been fish-inspired. In the mid-1990s, for instance, engineers at MIT worked on a RoboTuna, also nicknamed Charlie. Modeled loosely on a bluefin tuna, it had a propulsion system that mimicked the tail fin of a real fish. This was a big departure from the screws or propellers used on UUVs like SPURV. But this Charlie never swam on its own; it was always tethered to a bank of instruments. The MIT group’s next effort, a RoboPike called Wanda, overcame this limitation and swam freely, but never learned to avoid running into the sides of its tank.

Fast-forward 25 years, and a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) unveiled SoFi, a decidedly more fishy robot designed to swim next to real fish without disturbing them. Controlled by a retrofitted Super Nintendo handset, SoFi could dive more than 15 meters, control its own buoyancy, and swim around for up to 40 minutes between battery charges. Noting that SoFi’s creators tested their robot fish in the gorgeous waters off Fiji, IEEE Spectrum’s Evan Ackerman wrote, “Part of me is convinced that roboticists take on projects like these…because it’s a great way to justify a trip somewhere exotic.”

SoFi, Wanda, and both Charlies are all examples of biomimetics, a term coined in 1974 to describe the study of biological mechanisms, processes, structures, and substances. Biomimetics looks to nature to inspire design.

Sometimes, the resulting technology proves to be more efficient than its natural counterpart, as Richard James Clapham discovered while researching robotic fish for his Ph.D. at the University of Essex, in England. Under the supervision of robotics expert Huosheng Hu, Clapham studied the swimming motion of Cyprinus carpio, the common carp. He then developed four robots that incorporated carplike swimming, the most capable of which was iSplash-II. When tested under ideal conditions—that is, a tank 5 meters long, 2 meters wide, and 1.5 meters deep—iSplash-II reached a maximum velocity of 11.6 body lengths per second (or about 3.7 m/s). That’s faster than a real carp, which averages a top velocity of 10 body lengths per second. But iSplash-II fell short of the peak performance of a fish darting quickly to avoid a predator.
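
A quick back-of-the-envelope check ties the two speed figures together. The robot’s body length is not stated here, so the sketch below (in Python, purely illustrative) infers it from the numbers quoted above rather than from any published specification:

```python
# Back-of-the-envelope check of the quoted iSplash-II speeds (illustrative only).
# The body length is inferred from the two figures above, not taken from a spec sheet.

speed_body_lengths_per_s = 11.6   # quoted maximum velocity
speed_m_per_s = 3.7               # the same velocity, quoted in meters per second

implied_body_length_m = speed_m_per_s / speed_body_lengths_per_s
print(f"Implied body length: {implied_body_length_m:.2f} m")             # ~0.32 m

# A real carp averaging a top speed of 10 body lengths per second, at the same size:
carp_top_speed_m_per_s = 10 * implied_body_length_m
print(f"Carp top speed at that size: {carp_top_speed_m_per_s:.1f} m/s")  # ~3.2 m/s
```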

Of course, swimming in a test pool or placid lake is one thing; surviving the rough and tumble of a breaking wave is another matter. The latter is something that roboticist Kathryn Daltorio has explored in depth.

Daltorio, an assistant professor at Case Western Reserve University and codirector of the Center for Biologically Inspired Robotics Research there, has studied the movements of cockroaches, earthworms, and crabs for clues on how to build better robots. After watching a crab navigate from the sandy beach to shallow water without being thrown off course by a wave, she was inspired to create an amphibious robot with tapered, curved feet that could dig into the sand. This design allowed her robot to withstand forces up to 138 percent of its body weight.

In her designs, Daltorio is following architect Louis Sullivan’s famous maxim: Form follows function. She isn’t trying to imitate the aesthetics of nature—her robot bears only a passing resemblance to a crab—but rather the best functionality. She looks at how animals interact with their environments and steals evolution’s best ideas.

And yet, Daltorio admits, there is also a place for realistic-looking robotic fish, because they can capture the imagination and spark interest in robotics as well as nature. And unlike a hyperrealistic humanoid, a robotic fish is unlikely to fall into the creepiness of the uncanny valley.

In writing this column, I was delighted to come across plenty of recent examples of such robotic fish. Ryomei Engineering, a subsidiary of Mitsubishi Heavy Industries, has developed several: a robo-coelacanth, a robotic gold koi, and a robotic carp. The coelacanth was designed as an educational tool for aquariums, to present a lifelike specimen of a rarely seen fish that is often only known by its fossil record. Meanwhile, engineers at the University of Kitakyushu in Japan created Tai-robot-kun, a credible-looking sea bream. And a team at Evologics, based in Berlin, came up with the BOSS manta ray.

Whatever their official purpose, these nature-inspired robocreatures can inspire us in return. UUVs that open up new and wondrous vistas on the world’s oceans can extend humankind’s ability to explore. We create them, and they enhance us, and that strikes me as a very fair and worthy exchange.

This article appears in the March 2021 print issue as “Catfish, Robot, Swimmer, Spy.”

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

Decoding the Innovation Delusion, Nurturing the Maintenance Mindset

Post Syndicated from David C. Brock original https://spectrum.ieee.org/tech-talk/tech-history/silicon-revolution/decoding-the-innovation-delusion-nurturing-the-maintenance-mindset

Without doubt, technological innovations are tremendously important. We hear the term “innovation” seemingly everywhere: in books, magazines, white papers, blogs, classrooms, offices, factories, government hearings, podcasts, and more. But for all this discussion, have we really gotten much clarity? For Lee Vinsel and Andrew L. Russell, coauthors of the new book The Innovation Delusion: How Our Obsession with the New Has Disrupted the Work That Matters Most, the answer is “Not exactly.” On 2 December 2020, they joined Jean Kumagai, senior editor at IEEE Spectrum, for a Computer History Museum Live event to help decode innovation for the public. They shared what their experiences as historians of technology have taught them innovation is, and is not, and why they believe that an overemphasis on innovation detracts from two pursuits they believe are of great importance to our present and our future: maintenance and care.

The conversation began with a discussion of some of the key terms used in the book: “innovation,” “care,” and what the coauthors call “innovation speak.”

For Vinsel and Russell, their advocacy of expanded attention to, and practice of, maintenance and care is far from a turn away from technology. Rather, they argue, it is a call to pay attention to important issues that have always been present in technology.

Maintenance, for Vinsel and Russell, is anything but simple. There is a diversity of approaches to maintenance, some with great advantages over others. One of the starkest of these contrasts is between deferred maintenance and preventive maintenance.

Maintenance itself is not always a permanent solution. Vinsel and Russell discuss the inescapable question of when to cease maintenance and embrace retirement.

Maintenance, as with all things, is not without its costs, both direct and indirect. For the latter, the greater costs arise from a lack of maintenance and exacerbate injustice.

At even further extremes, the neglect of maintenance and care over time can constitute a disaster, not necessarily as suddenly as a storm, but rather as a “slow disaster.” Conversely, continued investments in developing and maintaining infrastructures can become platforms for true innovations.

Kumagai challenged Vinsel and Russell to consider the possible opportunity costs of increasing investment in maintenance at the expense of innovation.

Further, she challenged them to consider what a member of the general public could or should do to foster this maintenance mindset.

For Vinsel and Russell, the adoption of a maintenance mindset changes the way in which one views technologies. For example, it frames the question of expanding technological systems or the adoption of new technologies as taking on increased “technological debt.”

Just as maintenance is not simple in Vinsel and Russell’s account, nor is it dull or static. Indeed, the pair see maintenance as essential to creative moves to sustainability in the face of the climate crisis.

Kumagai brought the conversation to a close by asking Vinsel and Russell to participate in CHM’s “one word” initiative, with each sharing their one word of advice for a young person starting their career.

Lee Vinsel and Andrew L. Russell’s book is The Innovation Delusion: How Our Obsession with the New Has Disrupted the Work That Matters Most (Currency, 2020). The Maintainers website has more information about the group they cofounded.

Editor’s note: This post originally appeared on the blog of the Computer History Museum.

About the Author

David C. Brock is an historian of technology and director of the Computer History Museum’s Software History Center.

The Inventions That Made Heart Disease Less Deadly

Post Syndicated from Allison Marsh original https://spectrum.ieee.org/tech-history/silicon-revolution/the-inventions-that-made-heart-disease-less-deadly

Cardiac arrhythmia—an irregular heartbeat—can develop in an instant and then quickly, and sometimes fatally, spiral out of control. In the 1960s, physician L. Julian Haywood sought a way to continuously monitor the heart for such rhythm changes and alert nurses and doctors when an anomaly was detected. It would be one of many innovations that Haywood and his associates at Los Angeles County General Hospital implemented to improve the quality of care for patients with heart disease.

Haywood had arrived at the hospital in 1956 as an eager second-year resident in internal medicine. Having already completed a residency at the University of Virginia and Howard University, followed by two years as a medical officer in the U.S. Navy, he set his sights on specializing in cardiology. 

Haywood was thus deeply disappointed to discover that the hospital had no formal teaching in the area, no clinical cardiology rounds, and no cardiology-related review conferences where doctors and students would gather to discuss patients and interesting developments. This despite the fact that the hospital’s mortality rate among heart attack patients was 35 percent. 

Haywood was persistent. He soon learned that Dr. William Paul Thompson presided over a review conference on electrocardiography—the measurement of electrical activity in the heart. These Thursday morning sessions, held in the hospital’s main auditorium, drew faculty from the medical schools at the University of Southern California (USC), the College of Medical Evangelists, and the California College of Osteopathy. The weekly conference set the stage for Haywood’s lifelong investigation into how technology and specialized care could be used to help patients with heart disease.

How heartbeats came to be measured

An electrocardiogram, also known as an ECG or EKG, is a graph of the heart’s electrical activity, measured in millivolts. Today, an ECG is an easy, safe, and painless test that doctors rely on to check for signs of heart disease. In a conventional ECG, a technician places 10 stick-on electrodes on the patient’s chest and limbs. These days you don’t even need to go to a doctor to get such a test: ECG-enabled smartwatches have hit the market that can measure your blood pressure, heart rate, and blood oxygen saturation, all in real time. In the next few years, the same features may be available through your earbuds.

Back in the late 18th century, though, scientists investigating the heart’s electrical activity would puncture the heart muscle with wires. The technology for noninvasively capturing ECGs took decades to develop. British physiologist Augustus Waller is usually credited with recording the first human ECG in 1887, using a device called a capillary electrometer. Invented in the early 1870s by a doctoral student at Heidelberg University named Gabriel Lippmann, the capillary electrometer was able to measure tiny electrical changes when a voltage was applied.

In 1893, Dutch physiologist Willem Einthoven refined the capillary electrometer to show the voltage graph of a heartbeat cycle and the cycle’s five distinct deflections. He named these points P, Q, R, S, and T, a convention that persists to this day. Einthoven went on to develop an even more sensitive string galvanometer, and he won the 1924 Nobel Prize in Physiology or Medicine for his “discovery of the mechanism of the electrocardiogram.” (Einthoven’s string galvanometer was recently approved as an IEEE Milestone, awarded to significant achievements in electrotechnology.)

In the early 20th century, ECGs were large, fixed instruments that could weigh up to 600 pounds (272 kilograms). One misconception was that they were overly sensitive to vibration, so early installations tended to be placed in basements on concrete floors. Electrostatic charges could interfere with readings, so ECG machines were often enclosed in Faraday cages.

The first “portable” instrument, introduced in the late 1920s, was a General Electric model that used amplifier tubes instead of a string galvanometer. Although it weighed about 80 pounds (32 kg), it could be loaded on a cart and wheeled into a patient’s room, rather than the patient having to be transported to the equipment. In the 1930s, the Sanborn Co. and Cambridge Instrument Co. came out with the first truly portable ECG systems.

The rise of cardiology as a specialty

Despite this long history of invention, cardiology still did not exist as a specialty when Haywood was completing his residency at L.A. County General. He continued to look for ways to expand his educational opportunities. When he learned that the U.S. Public Health Service provided funding to create special units to care for heart attack patients, Haywood applied, and in 1966 the hospital opened a four-bed coronary care unit. He also secured funding from the Los Angeles chapter of the American Heart Association to start a nurse training program in cardiology.

At the time, the hospital’s standard treatment for acute myocardial infarction (heart attack) included four to six weeks of hospitalization. The goal of the new coronary care unit was to reduce the hospital’s 35 percent mortality rate among heart attack patients.

About 40 percent of those deaths were likely due to arrhythmias, which occurred even when patients were being closely observed. Haywood and his associates knew they needed a reliable way to continuously monitor the heart for rhythm changes. They developed the prototype digital heart monitor shown at top and began using it in the coronary care unit in 1969. The automated system detected heart-rhythm abnormalities and alerted nurses and doctors, either at the bedside or at a central monitoring station. The software for the monitor ran on computers supplied by Control Data Corp. and Raytheon. (Haywood donated the monitor to the Smithsonian National Museum of African American History and Culture in 2017.)
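
The article does not describe the monitor’s detection logic, but the general idea behind automated rhythm monitoring can be sketched briefly: track the interval between successive beats and raise an alert when it strays from the patient’s recent baseline. The Python sketch below is a modern illustration built on that assumption, with an arbitrary tolerance; it is not the algorithm Haywood’s team implemented.

```python
# Illustrative sketch of interval-based rhythm monitoring. This is NOT the
# algorithm used in the 1969 monitor (the article does not describe it);
# the 20 percent tolerance is an arbitrary placeholder.

def check_rhythm(beat_times_s, tolerance=0.20):
    """Return alerts for beat-to-beat intervals that deviate from the
    running average of the preceding intervals by more than `tolerance`."""
    intervals = [t2 - t1 for t1, t2 in zip(beat_times_s, beat_times_s[1:])]
    alerts = []
    for i in range(1, len(intervals)):
        baseline = sum(intervals[:i]) / i  # average of the earlier intervals
        if abs(intervals[i] - baseline) > tolerance * baseline:
            alerts.append((i, intervals[i], baseline))
    return alerts

# Example: a steady rhythm of roughly 75 beats per minute with one premature beat.
beats = [0.0, 0.8, 1.6, 2.4, 2.9, 3.7, 4.5]
for idx, interval, baseline in check_rhythm(beats):
    print(f"Possible arrhythmia at beat {idx}: interval {interval:.2f} s "
          f"vs. baseline {baseline:.2f} s")
```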

Haywood and his collaborators published widely about their work on the monitor, in Computers and Biomedical Research, Mathematical Biosciences, Journal of the Association for the Advancement of Medical Instrumentation, and elsewhere. Haywood also presented their work at the Association for Computing Machinery’s annual conference in 1972. The monitor influenced the design of commercial products, which were subsequently deployed at hospitals throughout the country and beyond.

The hospital’s coronary care unit and the cardiac-nurse training program were also successful, and mortality rates for cardiac patients declined significantly. Trainees went on to work at hospitals throughout Los Angeles County, and a number of them enjoyed successful careers in education and administration. As other hospitals in the region began creating their own coronary care units, friendly competition resulted in improved care for cardiac patients. 

L. Julian Haywood’s legacy of equitable health care for all

Over the course of his distinguished career, Haywood was keenly aware of the disparities in access to health care for racial minorities, as well as the difficulties that minorities faced in gaining acceptance by the medical profession. He was only the third Black internal medicine resident at L.A. County General (now known as Los Angeles County + University of Southern California Medical Center). Nearby, the USC School of Medicine (now the Keck School of Medicine of USC) had yet to graduate any Black medical students (although two were seniors at the time). When Haywood later joined the hospital’s teaching faculty, he was one of the few full-time faculty members who were minorities.

From the time he completed his residency, Haywood was an active member in the Charles R. Drew Medical Society, the Los Angeles affiliate of the National Medical Association. The NMA had formed in 1895 during a time of deep-seated racism and Jim Crow segregation, and its mission was to aid in the professional development of Black physicians, who were for many years excluded from membership and otherwise discriminated against by the American Medical Association.

In the 1960s, a major concern of the Drew Medical Society was the poor access to medical services in the Watts area of Los Angeles. Indeed, the lack of quality health care was one of the systemic injustices that fueled the Watts riots of 1965. In his memoir, Haywood recounts how white physicians tried to bar minority doctors from establishing practices there. Following the riots, the McCone Commission recommended the building of a hospital to serve the needs of South-Central L.A. residents, resulting in the Martin Luther King, Jr. Community Hospital, which opened in 1972.

In 2018, reflecting on his long career, Haywood published an article on the factors leading to a dramatic decline in the death rates for heart disease in Los Angeles County. He concluded that the development of coronary care units in the 1960s helped usher in a new focus on cardiology, leading to progress in angiography (medical imaging of blood vessels and organs), angioplasty (ballooning of arterial obstructions), bypass surgery, and pharmaceuticals to control blood pressure and cholesterol. Additionally, the success of cardiology and cardiothoracic surgery programs at universities spurred research into pacemakers, heart valve replacements, and other lifesaving technologies. 

Such progress is impressive and hard won, but we shouldn’t forget: Heart disease is the leading cause of death in the United States as well as the world. The statistics are particularly grim for African Americans, who are more likely to suffer from conditions like high blood pressure, obesity, and diabetes that increase the risk of heart disease. As Haywood noted in his 2018 essay, “The assault on heart disease and high blood pressure continues, as it must.”

 

Editor’s Note: L. Julian Haywood died of Covid-19 on 24 December 2020. He was 93.

An abridged version of this article appears in the February 2021 print issue as “The Measured Heart.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

 

Discovering Computer Legend Dennis Ritchie’s Lost Dissertation

Post Syndicated from David C. Brock original https://spectrum.ieee.org/tech-talk/tech-history/silicon-revolution/discovering-computer-legend-dennis-ritchies-lost-dissertation

Many of you, dear readers, will have heard of Dennis Ritchie. In the late 1960s, he left graduate studies in applied mathematics at Harvard for a position at the Bell Telephone Laboratories, where he spent the entirety of his career.

Not long after joining the Labs, Ritchie linked arms with Ken Thompson in efforts that would create a fundamental dyad of the digital world that followed: the operating system Unix and the programming language C. Thompson led the development of the system, while Ritchie was lead in the creation of C, in which Thompson rewrote Unix. In time, Unix became the basis for most of the operating systems on which our digital world is built, while C became—and remains—one of the most popular languages for creating the software that animates this world.

On Ritchie’s personal web pages at the Labs (still maintained by Nokia, the current owner), he writes with characteristic dry deprecation of his educational journey into computing:

I . . . received Bachelor’s and advanced degrees from Harvard University, where as an undergraduate I concentrated in Physics and as a graduate student in Applied Mathematics . . . The subject of my 1968 doctoral thesis was subrecursive hierarchies of functions. My undergraduate experience convinced me that I was not smart enough to be a physicist, and that computers were quite neat. My graduate school experience convinced me that I was not smart enough to be an expert in the theory of algorithms and also that I liked procedural languages better than functional ones.

Whatever the actual merits of these self-evaluations, his path certainly did lead him into a field and an environment in which he made extraordinary contributions.

It may come as some surprise to learn that until just this moment, despite Ritchie’s much-deserved computing fame, his dissertation—the intellectual and biographical fork-in-the-road separating an academic career in computer science from the one at Bell Labs leading to C and Unix—was lost. Lost? Yes, very much so, in being both unpublished and absent from any public collection; not even an entry for it can be found in Harvard’s library catalog or in dissertation databases.

After Dennis Ritchie’s death in 2011, his sister Lynn very caringly searched for an official copy and for any records from Harvard. There were none, but she did uncover a copy from the widow of Ritchie’s former advisor. Until very recently then, across a half-century perhaps fewer than a dozen people had ever had the opportunity to read Ritchie’s dissertation. Why?

In Ritchie’s description of his educational path, you will notice that he does not explicitly say that he earned a Ph.D. based on his 1968 dissertation. This is because he did not. Why not? The reason seems to be his failure to take the necessary steps to officially deposit his completed dissertation in Harvard’s libraries. Professor Albert Meyer of MIT, who was in Dennis Ritchie’s graduate school cohort, recalls the story in a recent oral history interview with the Computer History Museum:

So the story as I heard it from Pat Fischer [Ritchie and Meyer’s Harvard advisor] . . . was that it was definitely true at the time that the Harvard rules were that you needed to submit a bound copy of your thesis to the Harvard—you needed the certificate from the library in order to get your Ph.D. And as Pat tells the story, Dennis had submitted his thesis. It had been approved by his thesis committee, he had a typed manuscript of the thesis that he was ready to submit when he heard the library wanted to have it bound and given to them. And the binding fee was something noticeable at the time . . . not an impossible, but a nontrivial sum. And as Pat said, Dennis’ attitude was, ‘If the Harvard library wants a bound copy for them to keep, they should pay for the book, because I’m not going to!’ And apparently, he didn’t give on that. And as a result, never got a Ph.D. So he was more than ‘everything but thesis.’ He was ‘everything but bound copy.’

While Lynn Ritchie’s inquiries confirmed that Dennis Ritchie never did submit the bound copy of his dissertation, and did not then leave Harvard with his Ph.D., his brother John feels that there was something else going on with Dennis Ritchie’s actions beyond a fit of pique about fees: He already had a coveted job as a researcher at Bell Labs, and “never really loved taking care of the details of living.” We will never really know the reason, and perhaps it was never entirely clear to Ritchie himself. But what we can know with certainty is that Dennis Ritchie’s dissertation was lost for a half-century, until now.

Within the Dennis Ritchie collection, recently donated by Ritchie’s siblings to the Computer History Museum, lay several historical gems that we have identified to date. One is a collection of the earliest Unix source code dating from 1970–71. Another is a fading and stained photocopy of Ritchie’s doctoral dissertation, Program Structure and Computational Complexity. The Computer History Museum is delighted to now make a digital copy of Ritchie’s own dissertation manuscript (as well as a more legible digital scan of a copy of the manuscript owned by Albert Meyer) available publicly for the first time.

Recovering a copy of Ritchie’s lost dissertation and making it available is one thing; understanding it is another. To grasp what Ritchie’s dissertation is all about, we need to leap back to the early 20th century to a period of creative ferment in which mathematicians, philosophers, and logicians struggled over the ultimate foundations of mathematics. For centuries preceding this ferment, the particular qualities of mathematical knowledge—its exactitude and certitude—gave it a special, sometimes divine, status. While philosophical speculation about the source or foundation for these qualities stretches back to Pythagoras and Plato at least, in the early 20th century influential mathematicians and philosophers looked to formal logic—in which rules and procedures for reasoning are expressed in symbolic systems—as this foundation for mathematics.

Across the 1920s, the German mathematician David Hilbert was incredibly influential in this attempt to secure the basis of mathematics in formal logic. In particular, Hilbert believed that one could establish certain qualities of mathematics—for example, that mathematics was free of contradictions and that any mathematical assertion could be shown to be true or to be false—by certain kinds of proofs in formal logic. In particular, the kinds of proofs that Hilbert advocated, called “finitist,” relied on applying simple, explicit, almost mechanical rules to the manipulation of the expressive symbols of formal logic. These would be proofs based on rigid creation of strings of symbols, line by line, from one another.

In the 1930s, it was in the pursuit of such rules for logical manipulation of symbols that mathematicians and philosophers made a connection to computation, and the step-by-step rigid processes by which human “computers” and mechanical calculators performed mathematical operations. Kurt Gödel provided a proof of just the sort that Hilbert advocated, but distressingly showed the opposite of what Hilbert and others had hoped. Rather than showing that logic ensured that everything that was true in mathematics could be proven, Gödel’s logic revealed mathematics to be the opposite, to be incomplete. For this stunning result, Gödel’s proof rested on arguments about certain kinds of mathematical objects called primitive recursive functions. What’s important about recursive functions for Gödel is that they were eminently computable—that is, they relied on “finite procedures.” Just the kind of simple, almost mechanical rules for which Hilbert had called.

Quickly following Gödel, in the United States, Alonzo Church used similar arguments about computability to formulate a logical proof that showed also that mathematics was not always decidable—that is, that there were some statements about mathematics for which it is not possible to determine if they are true or are false. Church’s proof is based on a notion of “effectively calculable functions,” grounded in Gödel’s recursive functions. At almost the same time, and independently in the UK, Alan Turing constructed a proof of the very same result, but based on a notion of “computability” defined by the operation of an abstract “computing machine.” This abstract Turing machine, capable of any computation or calculation, would later become an absolutely critical basis for theoretical computer science.

In the decades that followed, and before the emergence of computer science as a recognized discipline, mathematicians, philosophers, and others began to explore the nature of computation in its own right, increasingly divorced from connections to the foundation of mathematics. As Albert Meyer explains in his interview:

In the 1930s and 1940s, the notion of what was and wasn’t computable was very extensively worked on, was understood. There were logical limits due to Gödel and Turing about what could be computed and what couldn’t be computed. But the new idea [in the early 1960s] was ‘Let’s try to understand what you can do with computation.’ That was when the idea of computational complexity came into being . . . there were . . . all sorts of things you could do with computation, but not all of it was easy . . . How well could it be computed?

With the rise of electronic digital computing, then, for many of these researchers the question was less what logical arguments about computability could teach about the nature of mathematics than what these logical arguments could reveal about the limits of computability itself. As those limits came to be well understood, the interests of these researchers shifted to the nature of computability within these limits. What could be proven about the realm of possible computations?

One of few places where these new investigations were taking place in the mid 1960s, when Dennis Ritchie and Albert Meyer both entered their graduate studies at Harvard, was in certain corners of departments of applied mathematics. These departments were also, frequently, where the practice of electronic digital computing took root early on academic campuses. As Meyer recalls, “Applied Mathematics was a huge subject in which this kind of theory of computation was a tiny, new part.”

Both Ritchie and Meyer gravitated into Harvard’s applied mathematics department from their undergraduate studies in mathematics at the university, although Meyer does not recall having known Ritchie as an undergraduate. In their graduate studies, both became increasingly interested in the theory of computation, and thus alighted on Patrick Fischer as their advisor. Fischer at the time was a freshly minted Ph.D. who was only at Harvard for the critical first years of Ritchie and Meyer’s studies, before moving to Cornell in 1965. (Later, in 1982, Fischer was one of the Unabomber’s targets.) As Meyer recalls:

Patrick was very much interested in this notion of understanding the nature of computation, what made things hard, what made things easy, and they were approached in various ways . . . What kinds of things could different kinds of programs do?

After their first year of graduate study, unbeknownst to at least Meyer, Fischer independently hired both Ritchie and Meyer as summer research assistants. Meyer’s assignment? Work on a particular “open problem” in the theory of computation that Fischer had identified, and report back at the end of the summer. Fischer, for his part, would be away. Meyer spent a miserable summer working alone on the problem, reporting to Fischer at the end that he had only accomplished minor results. Soon after, walking to Fischer’s graduate seminar, Meyer was shocked as he realized a solution to the summer problem. Excitedly reporting his breakthrough to Fischer, Meyer was “surprised and a little disappointed to hear that Pat said that Dennis had also solved the problem.” Fischer had set Ritchie and Meyer the very same problem that summer but had not told them!

Fischer’s summer problem bore on the larger question of computational complexity: the relative ease, or the time it takes, to compute one kind of thing versus another. Recall that Gödel had used primitive recursive functions to exemplify computability by finite procedures, a key element of his famous work. In the 1950s, the Polish mathematician Andrzej Grzegorczyk defined a hierarchy of these same recursive functions based on how fast or slow the functions grow. Fischer’s summer question, then, asked Meyer and Ritchie to explore how this hierarchy of functions related to computational complexity.

To his great credit, Meyer let his disappointment at summer’s end give way to a deep appreciation of Ritchie’s solution to Fischer’s problem: loop programs. Meyer recalls “. . . this concept of loop programs, which was Dennis’s invention . . . was so beautiful and so important and such a terrific expository mechanism as well as an intellectual one to clarify what the subject was about, that I didn’t care whether he solved the problem.”

Ritchie’s loop-program solution to Fischer’s summer problem was the core of his 1968 dissertation. Loop programs are essentially very small, limited computer programs that would be familiar to anyone who has ever used the FOR command for programming loops in BASIC. In a loop program, one can set a variable to zero, add 1 to a variable, or move the value of one variable to another. That’s it. The only control available is . . . a simple loop, in which an instruction sequence is repeated a set number of times. Importantly, loops can be “nested,” that is, loops placed within loops.
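
To make the restriction concrete, here is a minimal sketch in Python (rather than in Ritchie’s own notation, which his dissertation defines formally) of what computing with only those primitives looks like; the function names and the use of Python’s range loop are illustrative choices, not anything drawn from the dissertation. Addition gets by with loops one level deep, while multiplication already needs a loop nested inside another, and that nesting depth is exactly the quantity Ritchie used to grade complexity, as the next paragraph explains.

    # Loop-program-style addition: only "add 1" inside simple loops (depth 1).
    def add(x, y):
        result = 0
        for _ in range(x):          # repeat x times
            result = result + 1
        for _ in range(y):          # repeat y times
            result = result + 1
        return result

    # Multiplication requires a loop nested inside a loop (depth 2).
    def multiply(x, y):
        result = 0
        for _ in range(x):
            for _ in range(y):
                result = result + 1
        return result

    print(add(3, 4), multiply(3, 4))    # prints: 7 12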

What Ritchie shows in his dissertation is that loop programs compute exactly Gödel’s primitive recursive functions, and only those functions, which are precisely the functions of the Grzegorczyk hierarchy. Gödel held out his recursive functions as eminently computable, and Ritchie showed that loop programs were just the right tools for that job. Ritchie’s dissertation also shows that the degree of “nestedness” of a loop program, the depth of loops within loops, is a measure of its computational complexity, as well as a gauge of how much time its computation requires. Further, he shows that classifying loop programs by their depth of nesting is exactly equivalent to Grzegorczyk’s hierarchy. The rate of growth of a primitive recursive function is indeed related to its computational complexity; in fact, the two classifications coincide.

As Meyer recalls:

Loop programs made [it] into a very simple model that any computer scientist could understand instantly, something that the traditional formulation . . . in terms of primitive recursive hierarchies . . . with very elaborate logician’s notation for complicated syntax and so on that would make anybody’s eyes glaze over. Suddenly you had a three-line, four-line computer science description of loop programs.

While, as we have seen, Ritchie’s development of this loop-program approach to computer science never made its way into the world through his dissertation, it did nevertheless enter the literature, in a joint publication with Albert Meyer in 1967.

Meyer explains:

[Dennis] was a very sweet, easy going, unpretentious guy. Clearly very smart, but also kind of taciturn . . . So we talked a little, and we talked about this paper that we wrote together, which I wrote, I believe. I don’t think he wrote it at all, but he read it . . . he read and made comments . . . and he explained loop programs to me.

The paper, “The Complexity of Loop Programs” [subscription required], was published by the ACM in 1967, and it was an important early step in the launch of Meyer’s highly productive and highly regarded career in theoretical computer science. But it also marked a parting of ways with Ritchie. As Meyer recalls:

It was a disappointment. I would have loved to collaborate with him, because he seemed like a smart, nice guy who’d be fun to work with, but yeah, you know, he was already doing other things. He was staying up all night playing Spacewar!

At the start of this essay, we noted that in his biographical statement on his website, Ritchie quipped, “My graduate school experience convinced me that I was not smart enough to be an expert in the theory of algorithms and also that I liked procedural languages better than functional ones.” While his predilection for procedural languages is without question, our exploration of his lost dissertation puts the lie to his self-assessment that he was not smart enough for theoretical computer science. More likely, Ritchie’s graduate school experience was one in which the lure of the theoretical gave way to the enchantments of implementation, of building new systems and new languages as a way to explore the bounds, nature, and possibilities of computing.

Editor’s note: This post originally appeared on the blog of the Computer History Museum.

About the Author

David C. Brock is an historian of technology and director of the Computer History Museum’s Software History Center.

How the Digital Camera Transformed Our Concept of History

Post Syndicated from Allison Marsh original https://spectrum.ieee.org/tech-history/silicon-revolution/how-the-digital-camera-transformed-our-concept-of-history

For an inventor, the main challenge might be technical, but sometimes it’s timing that determines success. Steven Sasson had the technical talent but developed his prototype for an all-digital camera a couple of decades too early.

A CCD from Fairchild was used in Kodak’s first digital camera prototype

It was 1974, and Sasson, a young electrical engineer at Eastman Kodak Co., in Rochester, N.Y., was looking for a use for Fairchild Semiconductor’s new type 201 charge-coupled device. His boss suggested that he try using the 100-by-100-pixel CCD to digitize an image. So Sasson built a digital camera to capture the photo, store it, and then play it back on another device.

Sasson’s camera was a kluge of components. He salvaged the lens and exposure mechanism from a Kodak XL55 movie camera to serve as his camera’s optical piece. The CCD would capture the image, which would then be run through a Motorola analog-to-digital converter, stored temporarily in a DRAM array of a dozen 4,096-bit chips, and then transferred to audio tape running on a portable Memodyne data cassette recorder. The camera weighed 3.6 kilograms, ran on 16 AA batteries, and was about the size of a toaster.
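
A quick back-of-the-envelope calculation suggests why a dozen 4,096-bit DRAM chips were enough to buffer a single frame before it was written out to tape. The 4-bits-per-pixel figure used here is an assumption (it is commonly cited for this prototype) rather than something stated above.

    # Rough check on the prototype's frame buffer.
    PIXELS = 100 * 100           # Fairchild type 201 CCD: 100 by 100 pixels
    BITS_PER_PIXEL = 4           # assumed digitization depth (not stated in the article)
    frame_bits = PIXELS * BITS_PER_PIXEL

    DRAM_CHIPS = 12              # "a dozen 4,096-bit chips"
    buffer_bits = DRAM_CHIPS * 4096

    print(frame_bits, buffer_bits, frame_bits <= buffer_bits)   # 40000 49152 True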

After working on his camera on and off for a year, Sasson decided on 12 December 1975 that he was ready to take his first picture. Lab technician Joy Marshall agreed to pose. The photo took about 23 seconds to record onto the audio tape. But when Sasson played it back on the lab computer, the image was a mess—although the camera could render shades that were clearly dark or light, anything in between appeared as static. So Marshall’s hair looked okay, but her face was missing. She took one look and said, “Needs work.”

Sasson continued to improve the camera, eventually capturing impressive images of different people and objects around the lab. He and his supervisor, Garreth Lloyd, received U.S. Patent No. 4,131,919 for an electronic still camera in 1978, but the project never went beyond the prototype stage. Sasson estimated that image resolution wouldn’t be competitive with chemical photography until sometime between 1990 and 1995, and that was enough for Kodak to mothball the project.

Digital photography took nearly two decades to take off

While Kodak chose to withdraw from digital photography, other companies, including Sony and Fuji, continued to move ahead. After Sony introduced the Mavica, an analog electronic camera, in 1981, Kodak decided to restart its digital camera effort. During the ’80s and into the ’90s, companies made incremental improvements, releasing products that sold for astronomical prices and found limited audiences. [For a recap of these early efforts, see Tekla S. Perry’s IEEE Spectrum article, “Digital Photography: The Power of Pixels.”]

Then, in 1994 Apple unveiled the QuickTake 100, the first digital camera for under US $1,000. Manufactured by Kodak for Apple, it had a maximum resolution of 640 by 480 pixels and could only store up to eight images at that resolution on its memory card, but it was considered the breakthrough to the consumer market. The following year saw the introduction of Apple’s QuickTake 150, with JPEG image compression, and Casio’s QV10, the first digital camera with a built-in LCD screen. It was also the year that Sasson’s original patent expired.

Digital photography really came into its own as a cultural phenomenon when the Kyocera VisualPhone VP-210, the first cellphone with an embedded camera, debuted in Japan in 1999. Three years later, camera phones were introduced in the United States. The first mobile-phone cameras lacked the resolution and quality of stand-alone digital cameras, often taking distorted, fish-eye photographs. Users didn’t seem to care. Suddenly, their phones were no longer just for talking or texting. They were for capturing and sharing images.

The rise of cameras in phones inevitably led to a decline in stand-alone digital cameras, the sales of which peaked in 2012. Sadly, Kodak’s early advantage in digital photography did not prevent the company’s eventual bankruptcy, as Mark Harris recounts in his 2014 Spectrum article “The Lowballing of Kodak’s Patent Portfolio.” Although there is still a market for professional and single-lens reflex cameras, most people now rely on their smartphones for taking photographs—and so much more.

How a technology can change the course of history

The transformational nature of Sasson’s invention can’t be overstated. Experts estimate that people will take more than 1.4 trillion photographs in 2020. Compare that to 1995, the year Sasson’s patent expired. That spring, a group of historians gathered to study the results of a survey of Americans’ feelings about the past. A quarter century on, two of the survey questions stand out:

  • During the last 12 months, have you looked at photographs with family or friends?

  • During the last 12 months, have you taken any photographs or videos to preserve memories?

In the nationwide survey of nearly 1,500 people, 91 percent of respondents said they’d looked at photographs with family or friends and 83 percent said they’d taken a photograph—in the past year. If the survey were repeated today, those numbers would almost certainly be even higher. I know I’ve snapped dozens of pictures in the last week alone, most of them of my ridiculously cute puppy. Thanks to the ubiquity of high-quality smartphone cameras, cheap digital storage, and social media, we’re all taking and sharing photos all the time—last night’s Instagram-worthy dessert; a selfie with your bestie; the spot where you parked your car.

So are all of these captured moments, these personal memories, a part of history? That depends on how you define history.

For Roy Rosenzweig and David Thelen, two of the historians who led the 1995 survey, the very idea of history was in flux. At the time, pundits were criticizing Americans’ ignorance of past events, and professional historians were wringing their hands about the public’s historical illiteracy.

Instead of focusing on what people didn’t know, Rosenzweig and Thelen set out to quantify how people thought about the past. They published their results in the 1998 book The Presence of the Past: Popular Uses of History in American Life (Columbia University Press). This groundbreaking study was heralded by historians, those working within academic settings as well as those working in museums and other public-facing institutions, because it helped them to think about the public’s understanding of their field.

Little did Rosenzweig and Thelen know that the entire discipline of history was about to be disrupted by a whole host of technologies. The digital camera was just the beginning.

For example, a little over a third of the survey’s respondents said they had researched their family history or worked on a family tree. That kind of activity got a whole lot easier the following year, when Paul Brent Allen and Dan Taggart launched Ancestry.com, which is now one of the largest online genealogical databases, with 3 million subscribers and approximately 10 billion records. Researching your family tree no longer means poring over documents in the local library.

Similarly, when the survey was conducted, the Human Genome Project was still years away from mapping our DNA. Today, at-home DNA kits make it simple for anyone to order up their genetic profile. In the process, family secrets and unknown branches on those family trees are revealed, complicating the histories that families might tell about themselves.

Finally, the survey asked whether respondents had watched a movie or television show about history in the last year; four-fifths responded that they had. The survey was conducted shortly before the 1 January 1995 launch of the History Channel, the cable channel that opened the floodgates on history-themed TV. These days, streaming services let people binge-watch historical documentaries and dramas on demand.

Today, people aren’t just watching history. They’re recording it and sharing it in real time. Recall that Sasson’s MacGyvered digital camera included parts from a movie camera. In the early 2000s, cellphones with digital video recording emerged in Japan and South Korea and then spread to the rest of the world. As with the early still cameras, the initial quality of the video was poor, and memory limits kept the video clips short. But by the mid-2000s, digital video had become a standard feature on cellphones.

As these technologies become commonplace, digital photos and video are revealing injustice and brutality in stark and powerful ways. In turn, they are rewriting the official narrative of history. A short video clip taken by a bystander with a mobile phone can now carry more authority than a government report.

Maybe the best way to think about Rosenzweig and Thelen’s survey is that it captured a snapshot of public habits, just as those habits were about to change irrevocably.

Digital cameras also changed how historians conduct their research

For professional historians, the advent of digital photography has had other important implications. Lately, there’s been a lot of discussion about how digital cameras in general, and smartphones in particular, have changed the practice of historical research. At the 2020 annual meeting of the American Historical Association, for instance, Ian Milligan, an associate professor at the University of Waterloo, in Canada, gave a talk in which he revealed that 96 percent of historians have no formal training in digital photography and yet the vast majority use digital photographs extensively in their work. About 40 percent said they took more than 2,000 digital photographs of archival material in their latest project. W. Patrick McCray of the University of California, Santa Barbara, told a writer with The Atlantic that he’d accumulated 77 gigabytes of digitized documents and imagery for his latest book project [an aspect of which he recently wrote about for Spectrum].

So let’s recap: In the last 45 years, Sasson took his first digital picture, digital cameras were brought into the mainstream and then embedded into another pivotal technology—the cellphone and then the smartphone—and people began taking photos with abandon, for any and every reason. And in the last 25 years, historians went from thinking that looking at a photograph within the past year was a significant marker of engagement with the past to themselves compiling gigabytes of archival images in pursuit of their research.

So are those 1.4 trillion digital photographs that we’ll collectively take this year a part of history? I think it helps to consider how they fit into the overall historical narrative. A century ago, nobody, not even a science fiction writer, predicted that someone would take a photo of a parking lot to remember where they’d left their car. A century from now, who knows if people will still be doing the same thing. In that sense, even the most mundane digital photograph can serve as both a personal memory and a piece of the historical record.

An abridged version of this article appears in the July 2020 print issue as “Born Digital.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

NASA’s Original Laptop: The GRiD Compass

Post Syndicated from Allison Marsh original https://spectrum.ieee.org/tech-history/silicon-revolution/nasas-original-laptop-the-grid-compass

The year 1982 was a notable one in personal computing. The BBC Micro was introduced in the United Kingdom, as was the Sinclair ZX Spectrum. The Commodore 64 came to market in the United States. And then there was the GRiD Compass.

The GRiD Compass Was the First Laptop to Feature a Clamshell Design

The Graphical Retrieval Information Display (GRiD) Compass had a unique clamshell design, in which the monitor folded down over the keyboard. Its 21.6-centimeter plasma screen could display 25 lines of up to 128 characters in a high-contrast amber that the company claimed could be “viewed from any angle and under any lighting conditions.”

By today’s standards, the GRiD was a bulky beast of a machine. About the size of a large three-ring binder, it weighed 4.5 kilograms (10 pounds). But compared with, say, the Osborne 1 or the Compaq Portable, both of which had a heavier CRT screen and tipped the scales at 10.7 kg and 13 kg, respectively, the Compass was feather light. Some people call the Compass the first truly portable laptop computer.

The computer had 384 kilobytes of nonvolatile bubble memory, a magnetic storage system that showed promise in the 1970s and ’80s. With no rotating disks or moving parts, solid-state bubble memory worked well in settings where a laptop might, say, take a tumble. Indeed, sales representatives claimed they would drop the computer in front of prospective buyers to show off its durability.

But bubble memory tends to run hot, so the exterior case was made of a magnesium alloy that doubled as a heat sink. The metal case added to the laptop’s reputation for ruggedness. The Compass also included a 16-bit 8086 microprocessor and up to 512 KB of RAM. Floppy drives and hard disks were available as peripherals.

With a price tag of US $8,150 (about $23,000 today), the Compass wasn’t intended for consumers but rather for business executives. Accordingly, it came preloaded with a text editor, a spreadsheet, a plotter, a terminal emulator, a database management system, and other business software. The built-in 1200-baud modem was designed to connect to a central computer at the GRiD Systems’ headquarters in Mountain View, Calif., from which additional applications could be downloaded.

The GRiD’s Sturdy Design Made It Ideal for Space

The rugged laptop soon found a home with NASA and the U.S. military, both of which valued its sturdy design and didn’t blink at the cost.

The first GRiD Compass launched into space on 28 November 1983 aboard the space shuttle Columbia. The hardware adaptations for microgravity were relatively minor: a new cord to plug into the shuttle’s power supply and a small fan to compensate for the lack of convective cooling in space.

The software modifications were more significant. Special graphical software displayed the orbiter’s position relative to Earth and the line of daylight/darkness. Astronauts used the feature to plan upcoming photo shoots of specific locations. The GRiD also featured a backup reentry program, just in case all of the IBMs at Mission Control failed.

For its maiden voyage, the laptop received the code name SPOC (short for Shuttle Portable On-Board Computer). Neither NASA nor GRiD Systems officially connected the acronym to a certain pointy-eared Vulcan on Star Trek, but the GRiD Compass became a Hollywood staple whenever a character had to show off wealth and tech savviness. The Compass featured prominently in Aliens, Wall Street, and Pulp Fiction.

The Compass/SPOC remained a regular on shuttle missions into the early 1990s. NASA’s trust in the computer was not misplaced: Reportedly, the GRiD flying aboard Challenger survived the January 1986 disaster.

The GRiDPad 1900 Was a First in Tablet Computing

John Ellenby and Glenn Edens, both from Xerox PARC, and David Paulson had founded GRiD Systems Corp. in 1979. The company went public in 1981, and the following year it launched the GRiD Compass.

Not a company to rest on its laurels, GRiD continued to be a pioneer in portable computers, especially thanks to the work of Jeff Hawkins. He joined the company in 1982, left for school in 1986, and returned as vice president of research. At GRiD, Hawkins led the development of a pen- or stylus-based computer. In 1989, this work culminated in the GRiDPad 1900, often regarded as the first commercially successful tablet computer. Hawkins went on to invent the PalmPilot and Treo, though not at GRiD.

Amid the rapidly consolidating personal computer industry, GRiD Systems was bought by Tandy Corp. in 1988 as a wholly owned subsidiary. Five years later, GRiD was bought again, by Irvine, Calif.–based AST Research, which was itself acquired by Samsung in 1996.

In 2006 the Computer History Museum sponsored a roundtable discussion with key members of the original GRiD engineering team: Glenn Edens, Carol Hankins, Craig Mathias, and Dave Paulson, moderated by New York Times journalist (and former GRiD employee) John Markoff.

How Do You Preserve an Old Computer?

Although the GRiD Compass’s tenure as a computer product was relatively short, its life as a historic artifact goes on. To be added to a museum collection, an object must be pioneering, iconic, or historic. The GRiD Compass is all three, which is how the computer found its way into the permanent holdings of not one, but two separate Smithsonian museums.

One Compass was acquired by the National Air and Space Museum in 1989. No surprise there, seeing as how the Compass was the first laptop used in space aboard a NASA mission. Seven years later, curators at the Cooper Hewitt, Smithsonian Design Museum added one to their collections in recognition of the innovative clamshell design.

Credit for the GRiD Compass’s iconic look and feel goes to the British designer Bill Moggridge. His firm was initially tasked with designing the exterior case for the new computer. After taking a prototype home and trying to use it, Moggridge realized he needed to create a design that unified the user, the object, and the software. It was a key moment in the development of computer-human interactive design. In 2010, Moggridge became the fourth director of the Cooper Hewitt and its first director without a background as a museum professional.

Considering the importance Moggridge placed on interactive design, it’s fitting that preservation of the GRiD laptop was overseen by the museum’s Digital Collection Materials Project. The project, launched in 2017, aims to develop standards, practices, and strategies for preserving digital materials, including personal electronics, computers, mobile devices, media players, and born-digital products.

Keeping an electronic device in working order can be extremely challenging in an age of planned obsolescence. Cooper Hewitt brought in Ben Fino-Radin, a media archeologist and digital conservator, to help resurrect their moribund GRiD. Fino-Radin in turn reached out to Ian Finder, a passionate collector and restorer of vintage computers who has a particular expertise in restoring GRiD Compass laptops. Using bubble memory from Finder’s personal collection, curators at Cooper Hewitt were able to boot the museum’s GRiD and to document the software for their research collections.

Even as museums strive to preserve their old GRiDs, new GRiDs are being born. Back in 1993, former GRiD employees in the United Kingdom formed GRiD Defence Systems during a management buyout. The London-based company continues the GRiD tradition of building rugged military computers. The company’s GRiDCASE 1510 Rugged Laptop, a portable device with a 14.2-cm backlit LED display, looks remarkably like a smaller version of the Compass circa 1982. I guess when you have a winning combination, you stick with it.

An abridged version of this article appears in the June 2020 print issue as “The First Laptop in Orbit.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

Al Alcorn, Creator of Pong, Explains How Early Home Computers Owe Their Color Graphics to This One Cheap, Sleazy Trick

Post Syndicated from Stephen Cass original https://spectrum.ieee.org/tech-talk/tech-history/silicon-revolution/al-alcorn-creator-of-pong-explains-how-early-home-computers-owe-their-color-to-this-one-cheap-sleazy-trick

In March, we published a Hands On article about Matt Sarnoff’s modern homebrew computer that uses a very old hack: NTSC artifact color. This hack allows digital systems without specialized graphics hardware to produce color images by exploiting quirks in how TVs decode analog video signals.

NTSC artifact color was most notably employed by the Apple II in 1977, where Steve “Woz” Wozniak’s use of the hack brought it to wide attention; it was later used in the IBM PC and TRS-80 Color Computers. But it was unclear where the idea had originally come from, so we were thrilled to see that video game and electrical engineering legend Allan Alcorn left a comment on the article with an answer: the first color computer graphics that many people ever saw owe their origin to a cheap test tool used in a Californian TV repair shop in the 1960s. IEEE Spectrum talked to Alcorn to find out more:

Stephen Cass: Analog NTSC televisions generate color by looking at the phase of a signal relative to a reference frequency. So how did you come across this color test tool, and how did it work?

Al Alcorn: When I was 13, 14, my neighbor across the street had a television repair shop. I would go down there and at the same time, I had my father sign me up for an RCA correspondence course on radio and television repair. So, by the time I got to Berkeley, I was a journeyman TV repairman and actually paid my way through college through television. In one repair shop, there was a real cheap, sleazy color bar generator [for testing televisions]. And instead of doing color properly by synthesizing the phases and stuff like that, it simply used a crystal that was 3.58 megahertz [the carrier frequency for the color signal] minus 15.750 kilohertz, which was the horizontal scan frequency. So it slipped one phase, 360 degrees, every scan line. You put that signal on the screen and you’ve got a color bar from left to right. It really was the cheapest, sleaziest way of doing it!
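
To make the arithmetic of that trick concrete, here is a minimal sketch using the rounded 3.58 MHz and 15.750 kHz figures Alcorn quotes; the hue_at helper is purely illustrative and assumes hue maps linearly to phase.

    # Offsetting the crystal by exactly one line rate makes the color phase slip
    # through a full 360 degrees each scan line, painting hue bars from left to right.
    SUBCARRIER_HZ = 3_580_000     # nominal NTSC color subcarrier ("3.58 megahertz")
    LINE_RATE_HZ = 15_750         # horizontal scan frequency
    crystal_hz = SUBCARRIER_HZ - LINE_RATE_HZ

    # Phase slip per scan line, relative to the TV's own 3.58 MHz reference:
    phase_slip_deg = 360 * (SUBCARRIER_HZ - crystal_hz) / LINE_RATE_HZ
    print(phase_slip_deg)         # 360.0: one full turn of color phase per line

    # Hue therefore tracks horizontal position, producing vertical color bars.
    def hue_at(fraction_across_line):
        return (fraction_across_line * 360) % 360

    print([hue_at(x / 8) for x in range(8)])   # eight evenly spaced hues across a line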

SC: How did that idea of not doing NTSC “by the book” enter into your own designs?

AA: So, I learned the cheap, sleazy way doing repair work. But then I got a job at Ampex [a leader in audio/visual technology at the time]. Initially, I wanted to be an analog engineer, digital was not as interesting. At Ampex, it was the first time I saw video being done by digital circuits; they had gotten fast enough, and that opened my eyes up. [Then I went to Atari]. Nolan [Bushnell, co-founder of Atari] decided we wanted to be in the home consumer electronics space. We had done this [monochrome] arcade game [1972’s Pong] which got us going, but he always wanted to be in the consumer space. I worked with another engineer and we reduced the entire logic of the Pong game down to a single N-channel silicon chip. Anyway, part of the way into the design, Nolan said, “Oh, by the way, it has to be colored.” But I knew he was going to pull this stunt, so I’d already chosen the crystal [that drove the chip] to be 3.58 MHz, minus 15.750 kilohertz.

SC: Why did you suspect he was going to do that?

AA: Because there never was a plan. We had no outline or business plan, [it was just Nolan]. I’m sure you’ve heard that the whole idea behind the original arcade Pong was that it was a test for me just to practice, building the simplest possible game. But Nolan lied to me and said it was going to be a home product. Well, at the end it was kind of sad, a failure, because I had like 70 ICs in it, and that was [too expensive] for a home game. But [then Nolan decided] it would work for an arcade game! And near the end of making [arcade] Pong, Nolan said, “Well, where’s the sound?” I said “What do you mean, sound?” I didn’t want to add in any more parts. He said “I want the roar of the crowd of thousands applauding.” And [Ted] Dabney, the other owner said “I want boos and hisses.” I said to them “Okay, I’ll be right back.” I just went in with a little probe, looking for around the vertical sync circuit for frequencies that [happened to be in the audible range]. I found a place and used a 555 timer [to briefly connect the circuit to a loudspeaker to make blip sounds when triggered]. I said “There you go Nolan, if you don’t like it, you do it.” And he said “Okay.” Subsequently, I’ve seen articles about how brilliant the sound was!  The whole idea is to get the maximum functionality for the minimum circuitry. It worked. We had $500 in the bank. We had nothing and so we put it out there. Time is of the essence.

SC: So, in the home version of Pong, the graphics would simply change color from one side of the screen to the other?

AA: Right, the whole goal for doing this was just to put on the box: “Color!” Funny story—home Pong becomes a hit. This is like in 1974, 75. It’s a big hit. And they’re creating advertisements for television. Trying to record the Pong signal onto videotape. I get a call from some studio somewhere, saying, “We can’t get it to play on the videotape recorder, why?”  I say, “Well, it’s not really video! There’s no interlace… Treat it as though it’s PAL, just run up through a standard converter.”

SC: How does Wozniak get wind of this?

AA: In those days, in Silicon Valley, we didn’t keep secrets. I hired Steve Jobs on a fluke, and he’s not an engineer. His buddy Woz was working at HP, but we were a far more fun place to hang out. We had a production floor with about 30 to 50 arcade video games being shipped, and they were on the floor being burnt in. Jobs didn’t get along with the other guys very well, so he’d work at night. Woz would come in and play while Jobs did his work, or got Woz to do it for him. And I enjoyed Woz. I mean, this guy is a genius, I mean, a savant. It’s just like, “Oh my God.”

When the Apple II was being done, I helped them. I mean, I actually loaned them my oscilloscope, I had a 465 Tektronix scope, which I still have, and they designed the Apple II with it. I designed Pong with it. I did some work, I think, on the cassette storage. And then I remember showing Woz the trick for the hi-res color, explaining, sitting him down and saying, “Okay, this is how NTSC is supposed to work.” And then I said, “Okay. Now the reality is that if you do everything at this clock [frequency] and you do this with a pulse of square waves…” And basically explained the trick. And he ran with it.  That was the tradition. I mean, it was cool. I was kind of showing off!

SC: When people today are encouraged to tinker and experiment with electronics, it’s typically using things like the Arduino, which are heavily focused on digital circuits. Do you think analog engineering has been neglected?

AA: Well, it certainly is. There was a time, I think it was in the ’90s, where it got so absurd that there just weren’t any good analog engineers out there. And you really need analog engineers on certain things. A good analog engineer at that time was highly paid. Made a lot of money because they’re just rare. So, yeah. But most kids want to be—well, just want to get rich. And the path is through programming something on an iPhone. And that’s it. You get rich and go. But there is a lot of value in analog engineering. 

When Artists, Engineers, and PepsiCo Collaborated, Then Clashed at the 1970 World’s Fair

Post Syndicated from W. Patrick McCray original https://spectrum.ieee.org/tech-history/silicon-revolution/when-artists-engineers-and-pepsico-collaborated-then-clashed-at-the-1970-worlds-fair

On 18 March 1970, a former Japanese princess stood at the center of a cavernous domed structure on the outskirts of Osaka. With a small crowd of dignitaries, artists, engineers, and business executives looking on, she gracefully cut a ribbon that tethered a large red balloon to a ceremonial Shinto altar. Rumbles of thunder rolled out from speakers hidden in the ceiling. As the balloon slowly floated upward, it appeared to meet itself in midair, reflecting off the massive spherical mirror that covered the walls and ceiling.

With that, one of the world’s most extravagant and expensive multimedia installations officially opened, and the attendees turned to congratulate one another on this collaborative melding of art, science, and technology. Underwritten by PepsiCo, the installation was the beverage company’s signal contribution to Expo ’70, the first international exposition to be held in an Asian country.

A year and a half in the making, the Pepsi Pavilion drew eager crowds and elicited effusive reviews. And no wonder: The pavilion was the creation of Experiments in Art and Technology—E.A.T.—an influential collective of artists, engineers, technicians, and scientists based in New York City. Led by Johan Wilhelm “Billy” Klüver, an electrical engineer at Bell Telephone Laboratories, E.A.T. at its peak had more than a thousand members and enjoyed generous support from corporate donors and philanthropic foundations. Starting in the mid-1960s and continuing into the ’70s, the group mounted performances and installations that blended electronics, lasers, telecommunications, and computers with artistic interpretations of current events, the natural world, and the human condition.

E.A.T. members saw their activities transcending the making of art. Artist–engineer collaborations were understood as creative experiments that would benefit not just the art world but also industry and academia. For engineers, subject to vociferous attacks about their complicity in the arms race, the Vietnam War, environmental destruction, and other global ills, the art-and-technology movement presented an opportunity to humanize their work.

Accordingly, Klüver and the scores of E.A.T. members in the United States and Japan who designed and built the pavilion considered it an “experiment in the scientific sense,” as the 1972 book Pavilion: Experiments in Art and Technology stated. Klüver pitched the installation as a “piece of hardware” that engineers and artists would program with “software” (that is, live performances) to create an immersive visual, audio, and tactile experience. As with other E.A.T. projects, the goal was not about the product but the process.

Pepsi executives, unsurprisingly, viewed their pavilion on somewhat different terms. These were the years of the Pepsi Generation, the company’s mildly countercultural branding. For them, the pavilion would be at once an advertisement, a striking visual statement, and a chance to burnish the company’s global reputation. To that end, Pepsi directed close to US $2 million (over $13 million today) to E.A.T. to create the biggest, most elaborate, and most expensive art project of its time.

Perhaps it was inevitable, but over the 18 months it took E.A.T. to execute the project, Pepsi executives grew increasingly concerned about the group’s vision. Just a month after the opening, the partnership collapsed amidst a flurry of recriminating letters and legal threats. And yet, despite this inglorious end, the participants considered the pavilion a triumph.

The pavilion was born during a backyard conversation in the fall of 1968 between David Thomas, vice president in charge of Pepsi’s marketing, and his neighbor, Robert Breer, a sculptor and filmmaker who belonged to the E.A.T. collective. Pepsi had planned to contract with Disney to build its Expo ’70 exhibition, as it had done for the 1964 World’s Fair in New York City. Some Pepsi executives were, however, concerned that the conservative entertainment company wouldn’t produce something hip enough for the burgeoning youth market, and they had memories of the 1964 project, when Disney ran well over its already considerable budget. Breer put Thomas in touch with Klüver, productive dialogue ensued, and the company hired E.A.T. in December 1968.

Klüver was a master at straddling the two worlds of art and science. Born in Monaco in 1927 and raised in Stockholm, he developed a deep appreciation for cinema as a teen, an interest he maintained while studying with future Nobel physicist Hannes Alfvén. After earning a Ph.D. in electrical engineering at the University of California, Berkeley, in 1957, he accepted a coveted research position at Bell Labs in Murray Hill, N.J.

While keeping up a busy research program, Klüver made time to explore performances and gallery openings in downtown Manhattan and to seek out artists. He soon began collaborating with artists such as Yvonne Rainer, Andy Warhol, Jasper Johns, and Robert Rauschenberg, contributing his technical expertise and helping to organize exhibitions and shows. His collaboration with Jean Tinguely on a self-destructing sculpture, called Homage to New York, appeared on the April 1969 cover of IEEE Spectrum. Klüver emerged as the era’s most visible and vocal spokesperson for the merger of art and technology in the United States. Life magazine called him the “Edison-Tesla-Steinmetz-Marconi-Leonardo da Vinci of the American avant-garde.”

Klüver’s supervisor, John R. Pierce, was tolerant and even encouraging of his activities. Pierce had his own creative bent, writing science fiction in his spare time and collaborating with fellow Bell engineer Max Mathews to create computer-generated music. Meanwhile, Bell Labs, buoyed by the economic prosperity of the 1960s, supported a small coterie of artists-in-residence, including Nam June Paik, Lillian Schwartz, and Stan VanDerBeek.

In time, Klüver devised more ambitious projects. For his 1966 orchestration of 9 Evenings: Theatre and Engineering, nearly three dozen engineering colleagues worked with artists to build wireless radio transmitters, carts that floated on cushions of air, an infrared television system, and other electronics. Held at New York City’s 69th Regiment Armory—which in 1913 had hosted a pathbreaking exhibition of modern art—9 Evenings expressed a new creative culture in which artists and engineers collaborated.

In the midst of organizing 9 Evenings, Klüver, along with artists Rauschenberg and Robert Whitman and Bell Labs engineer Fred Waldhauer, founded Experiments in Art and Technology. By the end of 1967, more than a thousand artists and technical experts had joined. And a year later, E.A.T. had scored the commission to create the Pepsi Pavilion.

From the start, E.A.T. envisioned the pavilion as a multimedia environment that would offer a flexible, personalized experience for each visitor and that would express irreverent, uncommercial, and antiauthoritarian values.

But reaching consensus on how to realize that vision took months of debate and argument. Breer wanted to include his slow-moving cybernetic “floats”—large, rounded, self-driving sculptures powered by car batteries. Whitman was becoming intrigued with lasers and visual perception, and felt there should be a place for that. Forrest “Frosty” Myers argued for an outdoor light installation using searchlights, his focus at the time. Experimental composer David Tudor imagined a sophisticated sound system that would transform the Pepsi Pavilion into both recording studio and instrument.

“We’re all painters,” Klüver recalled Rauschenberg saying, “so let’s do something nonpainterly.” Rauschenberg’s attempt to break the stalemate prompted a further flood of suggestions. How about creating areas where the temperature changed? Or pods that functioned as anechoic chambers—small spaces of total silence? Maybe the floor could have rear-screen projections that gave visitors the impression of walking over flames, clouds, or swimming fish. Perhaps wind tunnels and waterfalls could surround the entrances.

Eventually, Klüver herded his fellow E.A.T. members into agreeing to an eclectic set of tech-driven pieces. The pavilion building itself was a white, elongated geodesic dome, which E.A.T. detested and did its best to obscure. And so a visitor approaching the finished pavilion encountered not the building but a veil of artificial fog that completely enshrouded the structure. At night, the fog was dramatically lit and framed by high-intensity xenon lights designed by Myers.

On the outdoor terrace, Breer’s white floats rolled about autonomously like large bubbles, emitting soft sounds—speech, music, the sound of sawing wood—and gently reversing themselves when they bumped into something. Steps led downward into a darkened tunnel, where visitors were greeted by a Japanese hostess wearing a futuristic red dress and bell-shaped hat and handed a clear plastic wireless handset. Stepping farther into the tunnel, they would be showered with red, green, yellow, and blue light patterns from a krypton laser system, courtesy of Whitman.

Ascending into the main pavilion, the visitors’ attention would be drawn immediately upward, where their reflections off the huge spherical mirror made it appear that they were floating in space. The dome also created auditory illusions, as echoes and reverberations toyed with people’s sense of acoustic reality. The floors of the circular room sloped gently upward to the center, where a glass insert in the floor allowed visitors to peer down into the entrance tunnel with its laser lights. Other parts of the floor were covered in different materials and textures—stone, wood, carpet. As the visitor moved around, the handset delivered a changing array of sounds. While a viewer stood on the patch of plastic grass, for example, loop antennas embedded in the floor might trigger the sound of birds or a lawn mower.

The experience was deeply personal: You could wander about at your own pace, in any direction, and compose your own trippy sensory experience.

To pull off such a feat of techno-art required an extraordinary amount of engineering. The mirror dome alone took months to design and build. E.A.T. viewed the mirror as, in Frosty Myers’s words, the “key to the whole Pavilion,” and it dictated much of what was planned for the interior. The research and testing for the mirror largely fell to members of E.A.T.’s Los Angeles chapter, led by Elsa Garmire. The physicist had done her graduate work at MIT with laser pioneer Charles Townes and then accepted a postdoc in electrical engineering at Caltech. But Garmire found the environment for women at Caltech unsatisfying, and she began to consider the melding of art and engineering as an alternate career path.

After experimenting with different ideas, Garmire and her colleagues designed a mirror modeled after the Mylar balloon satellites launched by NASA. A vacuum would hold the mirror’s Mylar lining in place, while a rigid outer shell held in the vacuum. E.A.T. unveiled a full-scale prototype of the mirror in September 1969 in a hangar at a Marine Corps airbase. It was built by G.T. Schjeldahl Co., the Minnesota-based company responsible for NASA’s Echo and PAGEOS [PDF] balloon satellites. Gene Youngblood, a columnist for an underground newspaper, found himself mesmerized when he ventured inside the “giant womb-mirror” for the first time. “I’ve never seen anything so spectacular, so transcendentally surrealistic.… The effect is mind-shattering,” he wrote. What you saw depended on the ambient lighting and where you were standing, and so the dome fulfilled E.A.T.’s goal of providing each visitor with a unique, interactive experience. Such effects didn’t come cheap: By the time Expo ’70 started, the cost of the pavilion’s silver lining came to almost $250,000.

An even more visually striking feature of the pavilion was its exterior fog. Ethereal in appearance, it required considerable real-world engineering to execute. This effort was led by Japanese artist Fujiko Nakaya, who had met Klüver in 1966 in New York City, where she was then working. Born in 1933 on the northern island of Hokkaido, she was the daughter of Ukichiro Nakaya, a Japanese physicist famous for his studies of snow crystals. When E.A.T. got the Pepsi commission, Klüver asked Fujiko to explore options for enshrouding the pavilion in clouds.

Nakaya’s aim was to produce a “dense, bubbling fog,” as she wrote in 1972, for a person “to walk in, to feel and smell, and disappear in.” She set up meteorological instruments at the pavilion site to collect baseline temperature, wind, and humidity data. She also discussed several ways of generating fog with scientists in Japan. One idea they considered was dry ice. Solid chunks of carbon dioxide mixed with water or steam could indeed make a thick mist. But the expo’s health officials ruled out the plan, claiming the massive release of CO2 would attract mosquitoes.

Eventually, Nakaya decided that her fog would be generated out of pure water. For help, she turned to Thomas R. Mee, a physicist in the Pasadena area whom Elsa Garmire knew. Mee had just started his own company to make instruments for weather monitoring. He had never heard of Klüver or E.A.T., but he knew of Nakaya’s father’s pioneering research on snow.

Mee and Nakaya figured out how to create fog by spraying the water under high pressure through copper lines fitted with very narrow nozzles. The lines hugged the edges of the geodesic structure, and the 2,500 or so nozzles atomized some 41,600 liters of water an hour. The pure white fog spilled over the structure’s angled and faceted roof and drifted gently over the fairground. Breer compared it to the clouds found in Edo-period Japanese landscape paintings.

While the fog and mirrored dome were the pavilion’s most obvious features, hidden away in a control room sat an elaborate computerized sound system.

Designed by Tudor, the system could accept signal inputs from 32 sources, which could be modified, amplified, and toggled among 37 speakers. The sources could be set to one of three modes: “line sound,” in which the sound switched rapidly from speaker to speaker in a particular pattern; “point sound,” in which the sound emanated from one speaker; and “immersion” or “environmental” mode, where the sound seemed to come from all directions. “The listener would have the impression that the sound was somehow embodied in a vehicle that was flying about him at varying speeds,” Tudor explained.
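
As a rough illustration of those three modes, here is a sketch under my own assumptions; the data structures and function names are invented for clarity and bear no relation to Tudor’s actual control hardware.

    # Illustrative routing of one sound source among the pavilion's 37 speakers.
    NUM_SPEAKERS = 37

    def point_sound(speaker_index):
        """The source emanates from a single speaker."""
        return {speaker_index}

    def line_sound(step, path):
        """The source hops rapidly from speaker to speaker along a chosen pattern."""
        return {path[step % len(path)]}

    def immersion():
        """The source plays through every speaker, seeming to come from all directions."""
        return set(range(NUM_SPEAKERS))

    # Example: a source sweeping around a ring of eight speakers, one hop per step.
    ring = [0, 5, 10, 15, 20, 25, 30, 35]
    print([sorted(line_sound(t, ring)) for t in range(4)])   # [[0], [5], [10], [15]]
    print(sorted(point_sound(12)), len(immersion()))         # [12] 37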

The audio system also served as an experimental lab. Much as researchers might book time on a particle accelerator or a telescope, E.A.T. invited “resident programmers” to apply to spend several weeks in Osaka exploring the pavilion’s potential as an artistic instrument. The programmers would have access to a library of several hundred “natural environmental sounds” as well as longer recordings that Tudor and his colleagues had prepared. These included bird calls, whale songs, heartbeats, traffic noises, foghorns, tugboats, and ocean liners. Applicants were encouraged to create “experiences that tend toward the real rather than the philosophical.” Perhaps in deference to its patron’s conservatism, E.A.T. specified it was “not interested in political or social comment.”

In sharp contrast to E.A.T.’s sensibilities, Pepsi executives didn’t view the pavilion as an experiment or even a work of art but rather as a product they had paid for. Eventually, they decided that they were not well pleased by what E.A.T. had delivered. On 20 April 1970, little more than a month after the pavilion opened to the public, Pepsi informed Klüver that E.A.T.’s services were no longer needed. E.A.T. staff who had remained in Osaka to operate the pavilion smuggled the audio tapes out, leaving Pepsi to play a repetitive and banal soundtrack inside its avant-garde building for the remaining months of the expo.

Despite E.A.T.’s abrupt ouster, many critics responded favorably to the pavilion. A Newsweek critic called it “an electronic cathedral in the shape of a geodesic dome,” neither “fine art nor engineering but a true synthesis.” Another critic christened the pavilion a “total work of art”—a Gesamtkunstwerk—in which the aesthetic and technological, human and organic, and mechanical and electric were united.

In hindsight, the Pepsi Pavilion was really the apogee for the art-and-technology movement that burst forth in the mid-1960s. This first wave did not last. Some critics contended that in creating corporate-sponsored large-scale collaborations like the pavilion, artists compromised themselves aesthetically and ethically—“freeload[ing] at the trough of that techno-fascism that had inspired them,” as one incensed observer wrote. By the mid-1970s, such expensive and elaborate projects had become as discredited and out of fashion as moon landings.

Nonetheless, for many E.A.T. members, the Pepsi Pavilion left a lasting mark. Elsa Garmire’s artistic experimentation with lasers led to her cofounding a company, Laser Images, which built equipment for laser light shows. Riffing on the popularity of planetarium shows, the company named its product the “laserium,” which soon became a pop-culture fixture.

Meanwhile, Garmire shifted her professional energies back to science. After leaving Caltech for the University of Southern California, she went on to have an exceptionally successful career in laser physics. She served as engineering dean at Dartmouth College and president of the Optical Society of America. Years later, Garmire said that working with artists influenced her interactions with students, especially when it came to cultivating a sense of play.

After Expo ’70 ended, Mee filed for a U.S. patent to cover an “Environmental Control Method and Apparatus” derived from his pavilion work. As his company, Mee Industries, grew, he continued his collaborations with Nakaya. Even after Mee’s death in 1998, his company contributed hardware to installations Nakaya designed for the Guggenheim Museum in Bilbao, Spain. More recently, her Fog Bridge [PDF] was integrated into the Exploratorium building in San Francisco.

Billy Klüver insisted that the success of his organization would ultimately be judged by the degree to which it became redundant. By that measure, E.A.T. was indeed a success, even if events didn’t unfold quite the way he imagined. At universities in the United States and Europe, dozens of programs now explore the intersections of art, technology, engineering, and design. It’s common these days to find tech-infused art in museum collections and adorning public spaces. Events like Burning Man and its many imitators continue to explore the experimental edges of art and technology—and to emphasize the process over the product.

And that may be the legacy of the pavilion and of E.A.T.: They revealed that engineers and artists could forge a common creative culture. Far from being worlds apart, their communities share values of entrepreneurship, adaptability, and above all, the collective desire to make something beautiful.

This article appears in the March 2020 print issue as “Big in Japan.”

The Crazy Story of How Soviet Russia Bugged an American Embassy’s Typewriters

Post Syndicated from Robert W. Lucky original https://spectrum.ieee.org/tech-history/silicon-revolution/the-crazy-story-of-how-soviet-russia-bugged-an-american-embassys-typewriters

Every engineer has stories of bugs that they discovered through clever detective work. But such exploits are seldom of interest to other engineers, let alone the general public. Nonetheless, a recent book authored by Eric Haseltine, titled The Spy in Moscow Station (Macmillan, 2019), is a true story of bug hunting that should be of interest to all. It recounts a lengthy struggle by Charles Gandy, an electrical engineer at the United States’ National Security Agency, to uncover an elaborate and ingenious scheme by Soviet engineers to intercept communications in the American embassy in Moscow. (I should say that, by coincidence, both Haseltine and Gandy are friends of mine.)

This was during the Cold War in the late 1970s. American spies were being arrested, and how they were being identified was a matter of great concern to U.S. intelligence. The first break came with the accidental discovery of a false chimney cavity at the Moscow embassy. Inside the chimney was an unusual Yagi-style antenna that could be raised and lowered with pulleys. The antenna had three active elements, each tuned to a different wavelength. What was the purpose of this antenna, and what transmitters was it listening to?

Gandy pursued these questions for years, not only baffled by the technology, but buffeted by interagency disputes and hampered by the Soviet KGB. At one point he was issued a “cease and desist” letter by the CIA, which, along with the State Department, had authority over security at the embassy. These agencies were not persuaded that there were any transmitters to be found: Regular scans for emissions from bugs showed nothing.

It was only when Gandy got a letter authorizing his investigation from President Ronald Reagan that he was able to take decisive action. All of the electronics at the embassy—some 10 tons of equipment—was securely shipped back to the United States. Every piece was disassembled and X-rayed.

After tens of thousands of fruitless X-rays, a technician noticed a small coil of wire inside the on/off switch of an IBM Selectric typewriter. Gandy believed that this coil was acting as a step-down transformer to supply lower-voltage power to something within the typewriter. Eventually he uncovered a series of modifications that had been concealed so expertly that they had previously defied detection.

A solid aluminum bar, part of the structural support of the typewriter, had been replaced with one that looked identical but was hollow. Inside the cavity was a circuit board and six magnetometers. The magnetometers sensed movements of tiny magnets that had been embedded in the transposers that moved the typing “golf ball” into position for striking a given letter.

Other components of the typewriters, such as springs and screws, had been repurposed to deliver power to the hidden circuits and to act as antennas. Keystroke information was stored and sent in encrypted burst transmissions that hopped across multiple frequencies.
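
The broad pattern described here (sense each keystroke, buffer it, then transmit many at once in a short burst on a new frequency) can be sketched as follows. Everything in this sketch, from the 6-bit keystroke code to the buffer size and the placeholder frequencies, is a hypothetical illustration of the general buffer-and-burst approach, not the actual Soviet implementation, and the encryption step is omitted entirely.

    # Hypothetical buffer-then-burst sketch; all parameters are invented for illustration.
    import random

    BUFFER_SIZE = 8                            # transmit once this many keystrokes accumulate
    FREQUENCIES_MHZ = [101.0, 113.5, 127.25]   # placeholder channels, not real values

    keystroke_buffer = []

    def on_keystroke(code):
        """Store a sensed 6-bit ball-position code; send a burst when the buffer fills."""
        keystroke_buffer.append(code & 0b111111)
        if len(keystroke_buffer) >= BUFFER_SIZE:
            send_burst()

    def send_burst():
        """Hop to a new frequency and dump the buffered codes in one short transmission."""
        freq = random.choice(FREQUENCIES_MHZ)
        print(f"burst of {len(keystroke_buffer)} codes on {freq} MHz: {keystroke_buffer}")
        keystroke_buffer.clear()

    for key in range(20):                      # simulate 20 keystrokes
        on_keystroke(key)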

Perhaps most interesting, the transmissions were at a low power level in a narrow frequency band that was occupied by intermodulation overtones of powerful Soviet TV stations. The TV signals would swamp the illicit transmissions and mask them from detection by embassy security scans, but the clever design of the mystery antenna and associated electronic filtering let the Soviets extract the keystroke signals.

When all had been discovered, Haseltine recounts how Gandy sat back and felt an emotion—a kinship with the Soviet engineers who had designed this ingenious system. This is the same kinship I feel whenever I come across some particularly innovative design, whether by a colleague or competitor. It is the moment when a technology transcends known limits, when the impossible becomes the doable. Gandy and his unknown Soviet opponents were working with 1970s technology. Imagine what limits will be transcended tomorrow!

This article appears in the January 2020 print issue as “The Ingenuity of Spies.”