An Engineer, His Segway, and the ADA: A Tale of the Open Road

Post Syndicated from Allison Marsh original

My dad loved to camp, and he wanted to teach his grandson how to pitch a tent, start a fire, and explore nature. Unfortunately, by the time my nephew Liam was born, my dad had already suffered two major heart attacks, undergone bypass surgery, and had a pacemaker/defibrillator implanted. Hiking over any distance was no longer in the cards for him.

Luckily, my dad’s decline in health seemed to keep pace with the latest breakthroughs in biomedical technology and new pharmaceuticals. He managed to live with congestive heart failure for more than 20 years. Although he died a few years ago, two recent news stories collided last month to remind me how technology and legislation allowed my dad to continue to explore the outdoors and share adventures with his grandson, despite his condition.

The first news of note was the announcement that the Segway is ceasing production. Coverage of this event tended to emphasize how the overhyped, self-balancing two-wheeled personal transportation device failed to live up to expectations. My father would have begged to differ. An engineer by training and an enthusiastic early adopter by temperament, he’d be the first to say that the Segway changed his life.

Dad got his Segway in 2007 when they cost about US $5,000. That is pricey for a toy, but not for a personal assistive device (power wheelchairs also cost thousands of dollars). My mother never begrudged the expense because it kept his world from closing in. He used it every day for work. Stepping aboard, he was able to zip around his 950-square-meter machine shop without getting winded. When he visited clients (mostly large-scale manufacturers), he would load the Segway into his van and then use it to make the rounds on the factory floor. He tricked out his transporter, upgrading the tires to the larger “off road” version that allowed for more maneuverability over uneven terrain. He added detachable saddlebags and hooks. He built a ramp out of scrap wood from a bowling alley to make loading and unloading into his van easier.

But his Segway really proved its worth on the weekends. My father, Liam, and my sister Amy trekked the entire 300-kilometer towpath of the Chesapeake & Ohio Canal, him on his Segway, Amy and Liam on bicycles. They did it in segments over many weekends, camping along the route. It turns out the battery life of a Segway is about as far as an 8-year-old boy can bike in a day. Dad would hook his Segway up to the car battery to charge overnight, and they’d be off again in the morning.

During the summer of 2009, he and I went on a cross-continent road trip from Richmond, Va., to Dead Horse, Alaska, because he wanted to drive the AlCan Highway and see the oil fields. We took the northern route across Canada one way and dropped down into the Rockies and across the middle of the Great Plains on the return. Along the way, he raced my dog on the Bonneville Salt Flats, explored the wind-swept lava fields of Craters of the Moon, and watched a perfect sunset at Great Sand Dunes National Park.

That brings me to the second piece of recent news: the 30th anniversary of the Americans with Disabilities Act. The ADA requires accessible design for all public sites in the United States, including those overseen by the Bureau of Land Management and the National Park Service. Thanks to the landmark legislation, the world Dad could explore opened up enormously. Although there are some limitations—he couldn’t climb down to see the pueblos at Mesa Verde, although he could admire them from the scenic overlook—wheelchair ramps and paved paths allowed him and his Segway to go almost everywhere.

With his Segway, my dad lived a fuller life despite his condition. The Segway was more maneuverable than a wheelchair, with a smaller footprint and turning radius, and it offered easier access to more places. Perhaps most importantly, it allowed him to look people directly in the eye, or even tower slightly above them. Aboard his Segway, he commanded respect, or at least curiosity, from onlookers.

Occasionally, a park ranger would raise an eyebrow at my dad’s Segway and begin to object to its use. My dad would point to the handicapped sticker he proudly displayed on the front of his “assistive device” and launch into a lecture about the ADA and accessibility. He was fortunate to have this marvel of engineering at his disposal, to help him explore, and nothing was going to stop him.

The Rich Tapestry of Fiber Optics

“Whoopee!” So wrote Donald Keck, a researcher at Corning Glass Works, in the 7 August 1970 entry of his lab notebook. The object of his exuberance was a 29-meter-long piece of highly purified, titanium-doped optical fiber, through which he had successfully passed a light signal with a measured loss of only 17 decibels per kilometer. A few years earlier, typical losses had been closer to 1,000 dB/km. Keck’s experiment was the first demonstration of low-loss optical fiber for telecommunications, and it paved the way for transmitting voice, data, and video over long distances.
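Decibel loss compounds with distance, which is why the drop from roughly 1,000 dB/km to 17 dB/km mattered so much. The figures here come from the text; the helper function is my own sketch of the standard conversion from decibels to surviving power:

```python
def surviving_power_fraction(loss_db_per_km: float, distance_km: float) -> float:
    """Fraction of optical power remaining after the given distance.

    Total attenuation in dB is loss * distance, and the surviving
    fraction of power is 10^(-dB/10).
    """
    total_loss_db = loss_db_per_km * distance_km
    return 10 ** (-total_loss_db / 10)

# Keck's 1970 fiber: 17 dB/km leaves about 2 percent of the light after 1 km.
print(f"{surviving_power_fraction(17, 1.0):.4f}")

# Pre-1970 fiber at ~1,000 dB/km: essentially nothing survives a single kilometer.
print(f"{surviving_power_fraction(1000, 1.0):.2e}")
```

Two percent sounds meager, but it is enough for a detector to work with; at 1,000 dB/km the signal is gone within meters, which is why early fiber was useful only for short medical and military runs.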

As important as this achievement was, it was not an isolated event. Physicists and engineers had been working for decades to make optical telecommunications possible, developing not just fibers but waveguides, lasers, and other components. (More on that in a bit.) And if you take the long view, as historians like me tend to do, it’s part of a fascinating tapestry that also encompasses glass, weaving, art, and fashion.

Optical fiber creates stunning effects in art and fashion

Shown above is a sculpture called Crossform Pendant Lamp, by the New York–based textile artist Suzanne Tick. Tick is known for incorporating unusual materials into her weaving: recycled dry cleaner hangers, Mylar balloons washed up on the beach, documents from her divorce. For the lamp, she used industrial fiber-optic yarn.

The piece was part of a collaboration between Tick and industrial designer Harry Allen. Allen worked on mounting ideas and illuminators, while Tick experimented with techniques to weave the relatively stiff fiber-optic yarn. (Optical fiber is flexible compared with other types of glass, but inflexible compared to, say, wool.) The designers had to determine how the lamp would hang and how it would connect to a power and light source. The result is an artwork that glows from within.

Weaving is of course an ancient technology, as is glassmaking. The ability to draw, or pull, glass into consistent fibers, on the other hand, emerged only at the end of the 19th century. As soon as it did, designers attempted to weave with glass, creating hats, neckties, shawls, and other garments.

Perhaps the most famous of these went on display at the 1893 World’s Columbian Exposition in Chicago. The exhibit mounted by the Libbey Glass Company, of Toledo, Ohio, showcased a dress made from silk and glass fibers. The effect was enchanting. According to one account, it captured the light and shimmered “as crusted snow in sunlight.” One admirer was Princess Eulalia of Spain, a royal celebrity of the day, who requested a similar dress be made for her. Libbey Glass was happy to oblige—and receive the substantial international press.

The glass fabric was too brittle for practical wear, which may explain why few such garments emerged over the years. But the idea of an illuminated dress did not fade, awaiting just the right technology. Designer Zac Posen found a stunning combination when he crafted a dress of organza, optical fiber, LEDs, and 30 tiny battery packs, for the actor Claire Danes to wear to the 2016 Met Gala. The theme of that year’s fashion extravaganza was “Manus x Machina: Fashion in an Age of Technology,” and Danes’s dress stole the show. Princess Eulalia would have approved. 

Fiber optics has many founders

Of course, most of the work on fiber optics has occurred in the mainstream of science and engineering, with physicists and engineers experimenting with different ways to manipulate light and funnel it through glass fibers. Here, though, the history gets a bit tangled. Let’s consider the man credited with coining the term “fiber optics”: Narinder Singh Kapany.

Kapany was born in India in 1926, received his Ph.D. in optics from Imperial College London, and then moved to the United States, where he spent the bulk of his career as a businessman and entrepreneur. He began working with optical fibers during his graduate studies, trying to improve the quality of image transmission. He introduced the term and the field to a broader audience in the November 1960 issue of Scientific American with an article simply titled “Fiber Optics.”

As Kapany informed his readers, a fiber-optic thread is a cylindrical glass fiber having a high index of refraction surrounded by a thin coating of glass with a low index of refraction. Near-total internal reflection takes place between the two, thus keeping a light signal from escaping its conductor. He explained how light transmitted along bundles of flexible glass fibers could transport optical images along tortuous paths with useful outcomes. Kapany predicted that it would soon become routine for physicians to examine the inside of a patient’s body using a “fiberscope”—and indeed fiber-optic endoscopy is now commonplace.
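Kapany’s two-index geometry can be made concrete with Snell’s law: light striking the core–cladding boundary at more than the critical angle is totally internally reflected and stays trapped in the core. The index values below are illustrative, not from the article:

```python
import math

def critical_angle_deg(n_core: float, n_cladding: float) -> float:
    """Critical angle (degrees from the boundary normal) for total internal
    reflection at a core/cladding interface, from Snell's law:
    sin(theta_c) = n_cladding / n_core, valid only when n_core > n_cladding.
    """
    if n_cladding >= n_core:
        raise ValueError("total internal reflection requires n_core > n_cladding")
    return math.degrees(math.asin(n_cladding / n_core))

# Illustrative values for a doped-silica core over a slightly lower-index cladding.
theta_c = critical_angle_deg(1.48, 1.46)
print(f"{theta_c:.1f} degrees")
```

For indices this close together, the critical angle is above 80 degrees, meaning only rays traveling nearly parallel to the fiber’s axis are guided; that near-grazing geometry is what lets the signal ricochet down kilometers of glass without leaking out.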

Kapany’s article unintentionally introduced an extra loop into the historical thread of fiber optics. In an anecdote that leads off the article, Kapany relates that in the 1870s, Irish physicist John Tyndall demonstrated how light could travel along a curved path. His “light pipe” was formed by a stream of water emerging from a hole in the side of a tank. When Tyndall shone a light into the tank, the light followed the stream of water as it exited the tank and arced to the floor. This same effect is seen in illuminated fountains.

Kapany’s anecdote conjures a mental image that allows readers to begin to understand the concept of guiding light, and I always love when scientists evoke history. In this case, though, the history was wrong: Tyndall wasn’t the originator of the guided-light demonstration.

While researching his highly readable 1999 book City of Light: The Story of Fiber Optics, Jeff Hecht discovered that in fact Jean-Daniel Colladon deserves the credit. In 1841, the Swiss physicist performed the water-jet experiment in Geneva and published an account the following year in Comptes Rendus, the proceedings of the French Academy of Sciences. Hecht, a frequent contributor to IEEE Spectrum, concluded that Michael Faraday, Tyndall’s mentor, probably saw another Swiss physicist, Auguste de la Rive, demonstrate a water jet based on Colladon’s apparatus, and Faraday then encouraged Tyndall to attempt something similar back in London.

I forgive Kapany for not digging around in the archives, even if his anecdote did exaggerate Tyndall’s role in fiber optics. And sure, Tyndall should have credited Colladon, but then there is a long history of scientists not getting the credit they deserve. Indeed, Kapany himself is considered one of them. In 1999, Fortune magazine listed him among the “unsung heroes” of 20th-century business. That reputation only grew after Kapany did not share the 2009 Nobel Prize in Physics, awarded to Charles Kao for achievements in the transmission of light in fibers for optical communication.

Whether or not Kapany should have shared the prize—and there are never any winners when it comes to debates over overlooked Nobelists—Kao certainly deserved what he got. In 1963 Kao joined a team at Standard Telecommunication Laboratories (STL) in England, the research center for Standard Telephones and Cables. Working with George Hockham, he spent the next three years researching how to use fiber optics for long-distance communication, both audio and video.

On 27 January 1966, Kao demonstrated a short-distance optical waveguide at a meeting of the Institution of Electrical Engineers (IEE).

According to a press release from STL, the waveguide had “the information-carrying capacity of one Gigacycle, which is equivalent to 200 television channels or over 200,000 telephone channels.” Once the technology was perfected, the press release went on, a single undersea cable would be capable of transmitting large amounts of data from the Americas to Europe.

In July, Kao and Hockham published their work in the Proceedings of the IEE. They proposed that fiber-optic communication over long distances would be viable, but only if an attenuation of less than 20 dB/km could be achieved. That’s when Corning got involved.

Corning’s contribution brought long-distance optical communication closer to reality

In 1966, the head of a new group in fiber-optic communication at the Post Office Research Station in London mentioned to a visitor from Corning the need for low-loss glass fibers to realize Kao’s vision of long-distance communication. Corning already made fiber optics for medical and military use, but those short pieces of cable had losses of approximately 1,000 dB/km—not even close to Kao and Hockham’s threshold.

That visitor from Corning, William Shaver, told his colleague Robert Maurer about the British effort, and Maurer in turn recruited Keck, Peter Schultz, and Frank Zimar to work on a better way of drawing the glass fibers. The group eventually settled on a process involving a titanium-doped core. The testing of each new iteration of fiber could take several months, but by 1970 the Corning team thought they had a workable technology. On 11 May 1970, they filed for two patents. The first was US3659915A, a fused silica optical waveguide, awarded to Maurer and Schultz, and the second was US3711262A, a method of producing optical waveguide fibers, awarded to Keck and Schultz.

Three months after the filing, Keck recorded the jubilant note in his lab notebook. Alas, it was after 5:00 pm on a Friday, and no one was around to join in his celebration. Keck verified the result with a second test on 21 August 1970. In 2012, the achievement was recognized with an IEEE Milestone as a significant event in electrotechnology.

Of course, there was still much work to be done to make long-distance optical communication commercially viable. Just like Princess Eulalia’s dress, Corning’s titanium-doped fibers weren’t strong enough for practical use. Eventually, the team discovered a better process with germanium-doped fibers, which remain the industry standard to this day. A half-century after the first successful low-loss transmission, fiber-optic cables encircle the globe, transmitting terabits of data every second.

An abridged version of this article appears in the August 2020 print issue as “Weaving Light.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

How the Digital Camera Transformed Our Concept of History

For an inventor, the main challenge might be technical, but sometimes it’s timing that determines success. Steven Sasson had the technical talent but developed his prototype for an all-digital camera a couple of decades too early.

A CCD from Fairchild was used in Kodak’s first digital camera prototype

It was 1974, and Sasson, a young electrical engineer at Eastman Kodak Co., in Rochester, N.Y., was looking for a use for Fairchild Semiconductor’s new type 201 charge-coupled device. His boss suggested that he try using the 100-by-100-pixel CCD to digitize an image. So Sasson built a digital camera to capture the photo, store it, and then play it back on another device.

Sasson’s camera was a kluge of components. He salvaged the lens and exposure mechanism from a Kodak XL55 movie camera to serve as his camera’s optical piece. The CCD would capture the image, which would then be run through a Motorola analog-to-digital converter, stored temporarily in a DRAM array of a dozen 4,096-bit chips, and then transferred to audio tape running on a portable Memodyne data cassette recorder. The camera weighed 3.6 kilograms, ran on 16 AA batteries, and was about the size of a toaster.
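The numbers in that parts list line up neatly. The 100-by-100 CCD and the dozen 4,096-bit DRAM chips come from the text; treating the result as roughly 4 bits of gray per pixel is my back-of-envelope assumption:

```python
# Figures from the article; bits-per-pixel is derived for illustration.
pixels = 100 * 100       # Fairchild type 201 CCD: 100 x 100 pixels
buffer_bits = 12 * 4096  # a dozen 4,096-bit DRAM chips

# Whole bits of gray level per pixel that the buffer could hold for one frame.
bits_per_pixel = buffer_bits // pixels

print(pixels, buffer_bits, bits_per_pixel)  # 10000 49152 4
```

In other words, the DRAM array was sized to buffer a single 0.01-megapixel frame at a few bits of gray scale before the much slower cassette recorder could write it out.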

After working on his camera on and off for a year, Sasson decided on 12 December 1975 that he was ready to take his first picture. Lab technician Joy Marshall agreed to pose. The photo took about 23 seconds to record onto the audio tape. But when Sasson played it back on the lab computer, the image was a mess—although the camera could render shades that were clearly dark or light, anything in between appeared as static. So Marshall’s hair looked okay, but her face was missing. She took one look and said, “Needs work.”

Sasson continued to improve the camera, eventually capturing impressive images of different people and objects around the lab. He and his supervisor, Garreth Lloyd, received U.S. Patent No. 4,131,919 for an electronic still camera in 1978, but the project never went beyond the prototype stage. Sasson estimated that image resolution wouldn’t be competitive with chemical photography until sometime between 1990 and 1995, and that was enough for Kodak to mothball the project.

Digital photography took nearly two decades to take off

While Kodak chose to withdraw from digital photography, other companies, including Sony and Fuji, continued to move ahead. After Sony introduced the Mavica, an analog electronic camera, in 1981, Kodak decided to restart its digital camera effort. During the ’80s and into the ’90s, companies made incremental improvements, releasing products that sold for astronomical prices and found limited audiences. [For a recap of these early efforts, see Tekla S. Perry’s IEEE Spectrum article, “Digital Photography: The Power of Pixels.”]

Then, in 1994 Apple unveiled the QuickTake 100, the first digital camera for under US $1,000. Manufactured by Kodak for Apple, it had a maximum resolution of 640 by 480 pixels and could only store up to eight images at that resolution on its memory card, but it was considered the breakthrough to the consumer market. The following year saw the introduction of Apple’s QuickTake 150, with JPEG image compression, and Casio’s QV10, the first digital camera with a built-in LCD screen. It was also the year that Sasson’s original patent expired.

Digital photography really came into its own as a cultural phenomenon when the Kyocera VisualPhone VP-210, the first cellphone with an embedded camera, debuted in Japan in 1999. Three years later, camera phones were introduced in the United States. The first mobile-phone cameras lacked the resolution and quality of stand-alone digital cameras, often taking distorted, fish-eye photographs. Users didn’t seem to care. Suddenly, their phones were no longer just for talking or texting. They were for capturing and sharing images.

The rise of cameras in phones inevitably led to a decline in stand-alone digital cameras, the sales of which peaked in 2012. Sadly, Kodak’s early advantage in digital photography did not prevent the company’s eventual bankruptcy, as Mark Harris recounts in his 2014 Spectrum article “The Lowballing of Kodak’s Patent Portfolio.” Although there is still a market for professional and single-lens reflex cameras, most people now rely on their smartphones for taking photographs—and so much more.

How a technology can change the course of history

The transformational nature of Sasson’s invention can’t be overstated. Experts estimate that people will take more than 1.4 trillion photographs in 2020. Compare that to 1995, the year Sasson’s patent expired. That spring, a group of historians gathered to study the results of a survey of Americans’ feelings about the past. A quarter century on, two of the survey questions stand out:

  • During the last 12 months, have you looked at photographs with family or friends?

  • During the last 12 months, have you taken any photographs or videos to preserve memories?

In the nationwide survey of nearly 1,500 people, 91 percent of respondents said they’d looked at photographs with family or friends and 83 percent said they’d taken a photograph—in the past year. If the survey were repeated today, those numbers would almost certainly be even higher. I know I’ve snapped dozens of pictures in the last week alone, most of them of my ridiculously cute puppy. Thanks to the ubiquity of high-quality smartphone cameras, cheap digital storage, and social media, we’re all taking and sharing photos all the time—last night’s Instagram-worthy dessert; a selfie with your bestie; the spot where you parked your car.

So are all of these captured moments, these personal memories, a part of history? That depends on how you define history.

For Roy Rosenzweig and David Thelen, two of the historians who led the 1995 survey, the very idea of history was in flux. At the time, pundits were criticizing Americans’ ignorance of past events, and professional historians were wringing their hands about the public’s historical illiteracy.

Instead of focusing on what people didn’t know, Rosenzweig and Thelen set out to quantify how people thought about the past. They published their results in the 1998 book The Presence of the Past: Popular Uses of History in American Life (Columbia University Press). This groundbreaking study was heralded by historians, those working within academic settings as well as those working in museums and other public-facing institutions, because it helped them to think about the public’s understanding of their field.

Little did Rosenzweig and Thelen know that the entire discipline of history was about to be disrupted by a whole host of technologies. The digital camera was just the beginning.

For example, a little over a third of the survey’s respondents said they had researched their family history or worked on a family tree. That kind of activity got a whole lot easier the following year, when Paul Brent Allen and Dan Taggart launched Ancestry.com, which is now one of the largest online genealogical databases, with 3 million subscribers and approximately 10 billion records. Researching your family tree no longer means poring over documents in the local library.

Similarly, when the survey was conducted, the Human Genome Project was still years away from mapping our DNA. Today, at-home DNA kits make it simple for anyone to order up their genetic profile. In the process, family secrets and unknown branches on those family trees are revealed, complicating the histories that families might tell about themselves.

Finally, the survey asked whether respondents had watched a movie or television show about history in the last year; four-fifths responded that they had. The survey was conducted shortly before the 1 January 1995 launch of the History Channel, the cable channel that opened the floodgates on history-themed TV. These days, streaming services let people binge-watch historical documentaries and dramas on demand.

Today, people aren’t just watching history. They’re recording it and sharing it in real time. Recall that Sasson’s MacGyvered digital camera included parts from a movie camera. In the early 2000s, cellphones with digital video recording emerged in Japan and South Korea and then spread to the rest of the world. As with the early still cameras, the initial quality of the video was poor, and memory limits kept the video clips short. But by the mid-2000s, digital video had become a standard feature on cellphones.

As these technologies become commonplace, digital photos and video are revealing injustice and brutality in stark and powerful ways. In turn, they are rewriting the official narrative of history. A short video clip taken by a bystander with a mobile phone can now carry more authority than a government report.

Maybe the best way to think about Rosenzweig and Thelen’s survey is that it captured a snapshot of public habits, just as those habits were about to change irrevocably.

Digital cameras also changed how historians conduct their research

For professional historians, the advent of digital photography has had other important implications. Lately, there’s been a lot of discussion about how digital cameras in general, and smartphones in particular, have changed the practice of historical research. At the 2020 annual meeting of the American Historical Association, for instance, Ian Milligan, an associate professor at the University of Waterloo, in Canada, gave a talk in which he revealed that 96 percent of historians have no formal training in digital photography and yet the vast majority use digital photographs extensively in their work. About 40 percent said they took more than 2,000 digital photographs of archival material in their latest project. W. Patrick McCray of the University of California, Santa Barbara, told a writer with The Atlantic that he’d accumulated 77 gigabytes of digitized documents and imagery for his latest book project [an aspect of which he recently wrote about for Spectrum].

So let’s recap: In the last 45 years, Sasson took his first digital picture, digital cameras were brought into the mainstream and then embedded into another pivotal technology—the cellphone and then the smartphone—and people began taking photos with abandon, for any and every reason. And in the last 25 years, historians went from thinking that looking at a photograph within the past year was a significant marker of engagement with the past to themselves compiling gigabytes of archival images in pursuit of their research.

So are those 1.4 trillion digital photographs that we’ll collectively take this year a part of history? I think it helps to consider how they fit into the overall historical narrative. A century ago, nobody, not even a science fiction writer, predicted that someone would take a photo of a parking lot to remember where they’d left their car. A century from now, who knows if people will still be doing the same thing. In that sense, even the most mundane digital photograph can serve as both a personal memory and a piece of the historical record.

An abridged version of this article appears in the July 2020 print issue as “Born Digital.”

NASA’s Original Laptop: The GRiD Compass

The year 1982 was a notable one in personal computing. The BBC Micro was introduced in the United Kingdom, as was the Sinclair ZX Spectrum. The Commodore 64 came to market in the United States. And then there was the GRiD Compass.

The GRiD Compass Was the First Laptop to Feature a Clamshell Design

The Graphical Retrieval Information Display (GRiD) Compass had a unique clamshell design, in which the monitor folded down over the keyboard. Its 21.6-centimeter plasma screen could display 25 lines of up to 128 characters in a high-contrast amber that the company claimed could be “viewed from any angle and under any lighting conditions.”

By today’s standards, the GRiD was a bulky beast of a machine. About the size of a large three-ring binder, it weighed 4.5 kilograms (10 pounds). But compared with, say, the Osborne 1 or the Compaq Portable, both of which had a heavier CRT screen and tipped the scales at 10.7 kg and 13 kg, respectively, the Compass was feather light. Some people call the Compass the first truly portable laptop computer.

The computer had 384 kilobytes of nonvolatile bubble memory, a magnetic storage system that showed promise in the 1970s and ’80s. With no rotating disks or moving parts, solid-state bubble memory worked well in settings where a laptop might, say, take a tumble. Indeed, sales representatives claimed they would drop the computer in front of prospective buyers to show off its durability.

But bubble memory also tends to run hot, so the exterior case was made of a magnesium alloy that doubled as a heat sink. The metal case added to the laptop’s reputation for ruggedness. The Compass also included a 16-bit 8086 microprocessor and up to 512 KB of RAM. Floppy drives and hard disks were available as peripherals.

With a price tag of US $8,150 (about $23,000 today), the Compass wasn’t intended for consumers but rather for business executives. Accordingly, it came preloaded with a text editor, a spreadsheet, a plotter, a terminal emulator, a database management system, and other business software. The built-in 1200-baud modem was designed to connect to a central computer at the GRiD Systems’ headquarters in Mountain View, Calif., from which additional applications could be downloaded.

The GRiD’s Sturdy Design Made It Ideal for Space

The rugged laptop soon found a home with NASA and the U.S. military, both of which valued its sturdy design and didn’t blink at the cost.

The first GRiD Compass launched into space on 28 November 1983 aboard the space shuttle Columbia. The hardware adaptations for microgravity were relatively minor: a new cord to plug into the shuttle’s power supply and a small fan to compensate for the lack of convective cooling in space.

The software modifications were more significant. Special graphical software displayed the orbiter’s position relative to Earth and the line of daylight/darkness. Astronauts used the feature to plan upcoming photo shoots of specific locations. The GRiD also featured a backup reentry program, just in case all of the IBMs at Mission Control failed.

For its maiden voyage, the laptop received the code name SPOC (short for Shuttle Portable On-Board Computer). Neither NASA nor GRiD Systems officially connected the acronym to a certain pointy-eared Vulcan on Star Trek, but the GRiD Compass became a Hollywood staple whenever a character had to show off wealth and tech savviness. The Compass featured prominently in Aliens, Wall Street, and Pulp Fiction.

The Compass/SPOC remained a regular on shuttle missions into the early 1990s. NASA’s trust in the computer was not misplaced: Reportedly, the GRiD flying aboard Challenger survived the January 1986 disaster.

The GRiDPad 1900 Was a First in Tablet Computing

John Ellenby and Glenn Edens, both from Xerox PARC, and David Paulson had founded GRiD Systems Corp. in 1979. The company went public in 1981, and the following year it launched the GRiD Compass.

Not a company to rest on its laurels, GRiD continued to be a pioneer in portable computers, especially thanks to the work of Jeff Hawkins. He joined the company in 1982, left for school in 1986, and returned as vice president of research. At GRiD, Hawkins led the development of a pen- or stylus-based computer. In 1989, this work culminated in the GRiDPad 1900, often regarded as the first commercially successful tablet computer. Hawkins went on to invent the PalmPilot and Treo, though not at GRiD.

Amid the rapidly consolidating personal computer industry, GRiD Systems was bought by Tandy Corp. in 1988 and operated as a wholly owned subsidiary. Five years later, GRiD was bought again, by Irvine, Calif.–based AST Research, which was itself acquired by Samsung in 1996.

In 2006 the Computer History Museum sponsored a roundtable discussion with key members of the original GRiD engineering team—Glenn Edens, Carol Hankins, Craig Mathias, and Dave Paulson—moderated by New York Times journalist (and former GRiD employee) John Markoff.

How Do You Preserve an Old Computer?

Although the GRiD Compass’s tenure as a computer product was relatively short, its life as a historic artifact goes on. To be added to a museum collection, an object must be pioneering, iconic, or historic. The GRiD Compass is all three, which is how the computer found its way into the permanent holdings of not one, but two separate Smithsonian museums.

One Compass was acquired by the National Air and Space Museum in 1989. No surprise there, seeing as how the Compass was the first laptop used in space aboard a NASA mission. Seven years later, curators at the Cooper Hewitt, Smithsonian Design Museum added one to their collections in recognition of the innovative clamshell design.

Credit for the GRiD Compass’s iconic look and feel goes to the British designer Bill Moggridge. His firm was initially tasked with designing the exterior case for the new computer. After taking a prototype home and trying to use it, Moggridge realized he needed to create a design that unified the user, the object, and the software. It was a key moment in the development of computer-human interactive design. In 2010, Moggridge became the fourth director of the Cooper Hewitt and its first director without a background as a museum professional.

Considering the importance Moggridge placed on interactive design, it’s fitting that preservation of the GRiD laptop was overseen by the museum’s Digital Collection Materials Project. The project, launched in 2017, aims to develop standards, practices, and strategies for preserving digital materials, including personal electronics, computers, mobile devices, media players, and born-digital products.

Keeping an electronic device in working order can be extremely challenging in an age of planned obsolescence. Cooper Hewitt brought in Ben Fino-Radin, a media archeologist and digital conservator, to help resurrect their moribund GRiD. Fino-Radin in turn reached out to Ian Finder, a passionate collector and restorer of vintage computers who has a particular expertise in restoring GRiD Compass laptops. Using bubble memory from Finder’s personal collection, curators at Cooper Hewitt were able to boot the museum’s GRiD and to document the software for their research collections.

Even as museums strive to preserve their old GRiDs, new GRiDs are being born. Back in 1993, former GRiD employees in the United Kingdom formed GRiD Defence Systems during a management buyout. The London-based company continues the GRiD tradition of building rugged military computers. The company’s GRiDCASE 1510 Rugged Laptop, a portable device with a 14.2-cm backlit LED display, looks remarkably like a smaller version of the Compass circa 1982. I guess when you have a winning combination, you stick with it.

An abridged version of this article appears in the June 2020 print issue as “The First Laptop in Orbit.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

Who Invented Radio: Guglielmo Marconi or Aleksandr Popov?

Post Syndicated from Allison Marsh original

Who invented radio? Your answer probably depends on where you’re from.

On 7 May 1945, the Bolshoi Theater in Moscow was packed with scientists and officials of the Soviet Communist Party to celebrate the first demonstration of radio 50 years prior, by Aleksandr S. Popov. It was an opportunity to honor a native son and to try to redirect the historical record away from the achievements of Guglielmo Marconi, widely recognized throughout most of the world as the inventor of radio. Going forward, 7 May was declared to be Radio Day, celebrated across the Soviet Union and still celebrated in Russia to this day.

The claim for Popov’s primacy as radio’s inventor came from his presentation of a paper, “On the Relation of Metallic Powders to Electrical Oscillations,” and his demonstration of a radio-wave detecting apparatus at St. Petersburg University on 7 May 1895.

Aleksandr Popov Developed the First Radio Capable of Distinguishing Morse Code

Popov’s device was a simple coherer—a glass tube with two electrodes spaced a few centimeters apart and metal filings between them. The device was based on the work of French physicist Edouard Branly, who described such a circuit in 1890, and of English physicist Oliver Lodge, who refined it in 1893. The tube initially had a high resistance, but when an electric impulse hit it, the filings clung together and a low-resistance path developed, allowing conductivity. The coherer had to be tapped or shaken after each use to rescatter the filings and restore the high resistance.

According to the A. S. Popov Central Museum of Communications, in St. Petersburg, Popov’s device was the world’s first radio receiver capable of distinguishing signals by duration. He used a Lodge coherer indicator and added a polarized telegraph relay, which served as a direct-current amplifier. The relay allowed Popov to connect the output of the receiver to an electric bell, recorder, or telegraph apparatus, providing electromechanical feedback. [The device at top, from the museum’s collections, has a bell.] The feedback automatically reset the coherer: When the bell rang, the coherer was simultaneously shaken.
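Popov’s electromechanical feedback can be sketched as a toy state model. This is an illustration only: the class name and resistance values are invented for the sketch, not taken from the historical apparatus.

```python
class PopovReceiver:
    """Toy model of Popov's coherer receiver with bell feedback.

    Resistance values are arbitrary illustrations, not measurements.
    """
    HIGH_OHMS = 100_000   # filings scattered: essentially non-conductive
    LOW_OHMS = 100        # filings cohered after a radio impulse

    def __init__(self):
        self.resistance = self.HIGH_OHMS
        self.bell_rings = 0

    def radio_impulse(self):
        # An impulse makes the filings cling together, closing a
        # low-resistance path; the relay then drives the bell.
        self.resistance = self.LOW_OHMS
        self._relay()

    def _relay(self):
        if self.resistance == self.LOW_OHMS:
            self.bell_rings += 1
            # Feedback: the ringing hammer also taps the tube,
            # rescattering the filings and resetting the coherer.
            self.resistance = self.HIGH_OHMS

rx = PopovReceiver()
rx.radio_impulse()
rx.radio_impulse()
print(rx.bell_rings)   # 2: each impulse rings the bell once, then auto-resets
print(rx.resistance)   # 100000: back in the high-resistance (ready) state
```

The key design point the sketch captures is that the same stroke that signals a detection also resets the detector, so no operator had to tap the tube between signals.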

On 24 March 1896, Popov gave another groundbreaking public demonstration, this time sending Morse code via wireless telegraphy. Once again at St. Petersburg University at a meeting of the Russian Physicochemical Society, Popov sent signals between two buildings 243 meters apart. A professor stood at the blackboard in the second building, recording the letters that the Morse code spelled out: Heinrich Hertz.

Coherer-based designs similar to Popov’s became the basis of first-generation radio communication equipment. They remained in use until 1907, when crystal receivers eclipsed them.

Popov and Marconi Had Very Different Views About Radio

Popov was a contemporary of Marconi’s, but the two men developed their radio apparatuses independently and without knowledge of the other’s work. Making a definitive claim of who was first is complicated by inadequate documentation of events, conflicting definitions of what constitutes a radio, and national pride.

One of the reasons why Marconi gets the credit and Popov doesn’t is that Marconi was much more savvy about intellectual property. One of the best ways to preserve your place in history is to secure patents and publish your research findings in a timely way. Popov did neither. He never pursued a patent for his lightning detector, and there is no official record of his 24 March 1896 demonstration. He eventually abandoned radio to turn his attention to the newly discovered Röntgen waves, also known as X-rays.

Marconi, on the other hand, filed for a British patent on 2 June 1896, which became the first application for a patent in radiotelegraphy. He quickly raised capital to commercialize his system, built up a vast industrial enterprise, and went on to be known—outside of Russia—as the inventor of radio.

Although Popov never sought to commercialize his radio as a means of sending messages, he did see potential in its use for recording disturbances in the atmosphere—a lightning detector. In July 1895, he installed his first lightning detector at the meteorological observatory of the Institute of Forestry in St. Petersburg. It was able to detect thunderstorms up to 50 kilometers away. He installed a second detector the following year at the All-Russia Industrial and Art Exhibition at Nizhny Novgorod, about 400 km east of Moscow.

Within several years, the clockmaking company Hoser Victor in Budapest was manufacturing lightning detectors based on Popov’s work.

A Popov Device Found Its Way to South Africa

One of those machines made it all the way to South Africa, some 13,000 km away. Today, it can be found in the museum of the South African Institute for Electrical Engineers (SAIEE) in Johannesburg.

Now, it’s not always the case that museums know what’s in their own collections. The origins of equipment that’s long been obsolete can be particularly hard to trace. With spotty record keeping and changes in personnel, institutional memory can lose track of what an object is or why it was important.

That might have been the fate of the South African Popov detector, but for the keen eye of Dirk Vermeulen, an electrical engineer and longtime member of the SAIEE Historical Interest Group. For years, Vermeulen assumed that the object was an old recording ammeter, used to measure electric current. One day, though, he decided to take a closer look. To his delight, he learned that it was probably the oldest object in the SAIEE collection and the only surviving instrument from the Johannesburg Meteorological Station.

In 1903 the colonial government had ordered the Popov detector as part of the equipment for the newly established station, located on a hill on the eastern edge of town. The station’s detector is similar to Popov’s original design, except that the trembler used to shake up the filings also deflected a recording pen. The recording chart was wrapped around an aluminum drum that revolved once per hour. With each revolution of the drum, a separate screw advanced the chart by 2 millimeters, allowing activity to be recorded over the course of days.
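The drum’s numbers imply a simple recording capacity: one revolution per hour and a 2-millimeter chart advance per revolution. A quick back-of-the-envelope calculation (the 300 mm chart length is a hypothetical figure for illustration; the article doesn’t give one):

```python
MM_PER_REV = 2      # chart advance per drum revolution (from the article)
REVS_PER_HOUR = 1   # drum speed (from the article)

def recording_hours(chart_length_mm):
    # Hours of continuous recording one chart can hold.
    return chart_length_mm / (MM_PER_REV * REVS_PER_HOUR)

# A hypothetical 300 mm chart holds nearly a week of activity.
print(recording_hours(300))       # 150.0 hours
print(recording_hours(300) / 24)  # 6.25 days
```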

Vermeulen wrote up his discovery [PDF] for the December 2000 Proceedings of the IEEE. Sadly, he passed away about a year ago, but his colleague Max Clarke arranged to get IEEE Spectrum a photo of the South African detector. Vermeulen was a tireless advocate for creating a museum to house the SAIEE’s collection of artifacts, which finally happened in 2014. It seems fitting that in an article that commemorates an early pioneer of radio, I also pay tribute to Vermeulen and the rare radio-wave detector that he helped bring to light.

An abridged version of this article appears in the May 2020 print issue as “The First Radio.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

In World War I, British Biplanes Had Wireless Phones in the Cockpit

Post Syndicated from Allison Marsh original

As soon as the first humans went up in hot air balloons in the late 1700s, military strategists saw the tantalizing possibilities of aerial reconnaissance. Imagine being able to spot enemy movements and artillery from on high—even better if you could instantly communicate those findings to colleagues on the ground. But the technology of the day didn’t offer an elegant way to do that.

By the early 20th century, all of the necessary elements were in place to finally make aerial reconnaissance a reality: the telegraph, the telephone, and the airplane. The challenge was bringing them together. Wireless enthusiasts faced reluctant government bureaucrats, who were parsimonious in funding an unproven technology.

Wireless Telegraphy Provided Vital Intel During World War I Battles

One early attempt involved wireless telegraphy—sending telegraph signals by radio. Its main drawback was size. The battery pack and transmitter weighed up to 45 kilograms and took up an entire seat on a plane, sometimes overflowing into the pilot’s area. The wire antenna trailed behind the plane and had to be reeled in before landing. There was no room for a dedicated radio operator, and so the pilot would have to do everything: observe the enemy, consult the map, and tap out coordinates in Morse code, all while flying the plane under enemy fire.

Despite the complicated setup, some pioneers managed to make it work. In 1911, First Lieutenant Benjamin D. Foulois, pilot of the U.S. Army’s sole airplane, flew along the Mexican border and reported to Signal Corps stations on the ground by Morse code. Three years later, under the auspices of the British Royal Flying Corps (RFC), Lieutenants Donald Lewis and Baron James tried out air-to-air radiotelegraphy by flying 16 kilometers apart, communicating by Morse code as they flew.

It didn’t take long for the RFC’s wireless system to see its first real action. England entered World War I on 4 August 1914. On 6 September while flying during the first Battle of the Marne in France, Lewis spotted a 50-km gap in the enemy line. He sent a wireless message reporting what he saw, and British and French troops charged the gap. It was the first time that a wireless message sent from a British plane was received and acted upon. British army commanders became instant evangelists for wireless, demanding more equipment and training for both pilots and ground support.

From then on, the RFC, which had formed in 1912 under Captain Herbert Musgrave, grew quickly. Initially, Musgrave had been tasked with investigating a laundry list of war-related activities—ballooning, kiting, photography, meteorology, bomb-dropping, musketry, and communication. He decided to focus on the last. At the start of the war, the RFC took over the Experimental Marconi Station at Brooklands Aerodrome in Surrey, southwest of London.

Brooklands had been the site of the first powered flight in England, in 1909, even though it wasn’t an ideal spot for an airfield. The runway sat in the middle of a motor racetrack, high-tension cables surrounded the field on three sides, and two 30-meter-tall brick chimneys stood to the east.

At first, reconnaissance pilots reported on the effectiveness of artillery firings by giving directional instructions. “About 50 yards short and to the right” was one message that Lewis sent at Marne. That’s a fairly long string for a pilot to tap out in Morse code. By October 1914, the British had developed maps with grid references, which meant that with just a few letters and numbers, such as “A5 B3,” you could indicate direction and distance. Even with that simplification, however, using radiotelegraphy was still cumbersome.
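The saving from grid codes is easy to quantify. The sketch below counts dits and dahs in International Morse as a rough proxy for keying effort (the two messages are from the article; the symbol count ignores inter-character spacing, so it understates the real difference):

```python
# International Morse code for letters and digits.
MORSE = {
    'A': '.-', 'B': '-...', 'C': '-.-.', 'D': '-..', 'E': '.', 'F': '..-.',
    'G': '--.', 'H': '....', 'I': '..', 'J': '.---', 'K': '-.-', 'L': '.-..',
    'M': '--', 'N': '-.', 'O': '---', 'P': '.--.', 'Q': '--.-', 'R': '.-.',
    'S': '...', 'T': '-', 'U': '..-', 'V': '...-', 'W': '.--', 'X': '-..-',
    'Y': '-.--', 'Z': '--..', '0': '-----', '1': '.----', '2': '..---',
    '3': '...--', '4': '....-', '5': '.....', '6': '-....', '7': '--...',
    '8': '---..', '9': '----.',
}

def symbol_count(message):
    # Total dits and dahs a pilot must key, ignoring spaces.
    return sum(len(MORSE[ch]) for ch in message.upper() if ch in MORSE)

verbose = "ABOUT 50 YARDS SHORT AND TO THE RIGHT"
grid = "A5 B3"
print(symbol_count(verbose))  # 82
print(symbol_count(grid))     # 16
```

Five times fewer keystrokes, tapped out one-handed while flying under fire, is the difference the gridded maps made.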

Voice Calls From the Cockpit Relied on Good Microphones

Direct voice communication via wireless telephony was a better solution—except that the open cockpit of a biplane wasn’t exactly conducive to easy conversation. Intense noise, vibration, and often-violent air disturbances drowned out voices. The muscles of the face had trouble retaining their shape under varying wind pressure. Pilots had difficulty being understood by crewmen sitting just a few centimeters away, never mind being heard through a microphone over a radio that had to distinguish voice from background noise.

In the spring of 1915, Charles Edmond Prince was sent to Brooklands to lead the development of a two-way voice system for aircraft. Prince had worked as an engineer for the Marconi Co. since 1907, and he and his team, many of whom also came from Marconi, soon got an air-to-ground system up and running.

Prince’s system was not at all like a modern cellphone, nor even like the telephones of the time. Although the pilot could talk to the ground station, the ground operator replied in Morse code. It took another year to develop ground-to-air and machine-to-machine wireless telephony.

Prince’s group experimented with a variety of microphones. Eventually they settled on an older Hunnings Cone microphone that had a thick diaphragm. Through trial and error, they learned the importance of testing the microphone outside the laboratory and under typical flight conditions. They found it almost impossible to predict how a particular microphone would work in the air based solely on its behavior on the ground. As Prince later wrote about the Hunnings Cone, “it appeared curiously dead and ineffective on the ground, but seemed to take on a new sprightliness in the air.”

The diaphragm’s material was also important. The team tested carbon, steel, ebonite, celluloid, aluminum, and mica. Mica was the ultimate winner because its natural frequency was the least affected by engine noise. (Prince published his findings after the war, in a 1920 journal of the Institution of Electrical Engineers (IEE). If you have a subscription to IEEE Xplore, you can read Prince’s paper here [PDF].)

Prince was an early proponent of vacuum tubes, and so his radio relied on tubes rather than crystals. But the tubes his team initially used were incredibly problematic and unreliable, and the team worked through several different models. After Captain H.J. Round of the Marconi Co. joined Prince’s group, he designed vacuum tubes specifically for airborne applications.

During the summer of 1915, Prince’s group successfully tested the first air-to-ground voice communication using an aircraft radio telephony transmitter. Shortly thereafter, Captain J.M. Furnival, one of Prince’s assistants, established the Wireless Training School at Brooklands. Every week 36 fighter pilots passed through to learn how to use the wireless apparatus and the art of proper articulation in the air. The school also trained officers how to maintain the equipment.

Hands-Free Calling Via the Throat Microphone

Prince’s team didn’t stop there. In 1918, they released a new aviator cap that incorporated telephone receivers over the ears and a throat microphone. The throat mic was built into the cap and wrapped around the neck so that it could pick up the vibrations directly from the pilot’s throat, thus avoiding the background noise of the wind and the engine. This was a significant advancement because it allowed the pilots to go “hands free,” as Captain B.S. Cohen wrote in his October 1919 engineering report.

By the end of the war, Prince and his engineers had achieved air-to-ground, ground-to-air, and machine-to-machine wireless speech transmission. The Royal Air Force had equipped 600 planes with continuous-wave voice radio and set up 1,000 ground stations with 18,000 wireless operators.

This seems like a clear example of how military technology drives innovation during times of war. But tracing the history of the achievement muddies the water a bit.

In the formal response to Prince’s 1920 IEE paper, Captain P.P. Eckersley called the airplane telephone as much a problem of propaganda as a technical one. By that, he meant Prince didn’t have an unlimited R&D budget, and so he had to prove that aerial telephony was going to have practical applications.

In his retelling of the development, Prince was proud of his team’s demonstration for Lord Kitchener at St. Omer in February 1916, the first practical demonstration of the device.

But Major T. Vincent Smith thought such a demonstration was ill-advised. Smith, a technical advisor to the RFC, argued that showing the wireless telephone to the higher command would only inflame their imaginations, believing such a device could solve all of their considerable communication difficulties. Smith saw it as his duty to dampen enthusiasm, lest he be asked to “do all sorts of impossible things.”

Both Round, the vacuum tube designer, and Harry M. Dowsett, Marconi’s chief of testing, added nuance to Prince’s version of events. Round noted that investigations into vacuum-tube systems for sending and receiving telephony began in 1913, well before the war was under way. Dowsett said that more credit should be given to the Marconi engineers who created the first working telephony set (as opposed to Prince’s experimental set of 1915).

In his 1920 article, Prince acknowledges that he did not include the full history and that his contribution was the novel application of existing circuitry to use in an airplane. He then gives credit to the contributions of Round and other engineers, as well as the General Electric Co., which had patented a similar aerial telephony system used by the U.S. Army Signal Corps.

History rarely has room for so much detail. And so it is Prince—and Prince alone—who gets the credit line for the aerial telephony set that is now in the collections of the Science Museum London. It’s up to us to remember that this innovative machine was the work not of one but of many.

An abridged version of this article appears in the April 2020 print issue as “Calling From the Cockpit.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

Meet the Roomba’s Ancestor: The Cybernetic Tortoise

Post Syndicated from Allison Marsh original

In the robotics family tree, Roomba’s ancestors were probably Elmer and Elsie, a pair of cybernetic tortoises invented in the 1940s by neurophysiologist W. Grey Walter. The robots could “see” by means of a rotating photocell that steered them toward a light source. If the light was too bright, they would retreat and continue their exploration in a new direction. Likewise, when they ran into obstacles, a touch sensor would compel the tortoises to reverse and change course. In this way, Elmer and Elsie slowly explored their surroundings.

Walter was an early researcher into electroencephalography (EEG), a technique for detecting the electrical activity of the brain using electrodes attached to the scalp. Among his notable clinical breakthroughs was the first diagnosis of a brain tumor by EEG. In 1939 he joined the newly established Burden Neurological Institute in Bristol, England, as head of its physiology department, and he remained at the Burden for the rest of his career.

Norbert Wiener’s cybernetics movement gave birth to a menagerie of cybernetic creatures

In the late 1940s, Walter became involved in the emerging community of scientists who were interested in cybernetics. The field’s founder, Norbert Wiener, defined cybernetics as “the scientific study of control and communication in the animal and the machine.” In the first wave of cybernetics, people were keen on building machines to model animal behavior. Claude Shannon played around with a robotic mouse named Theseus that could navigate mazes. W. Ross Ashby built the Homeostat, a machine that automatically adapted to inputs so as to remain in a stable state.

Walter’s contribution to this cybernetic menagerie was an electromechanical tortoise, which he began working on in the spring of 1948 in his spare time. His first attempts were inelegant. In 1951 W. J. “Bunny” Warren, an electrical engineer at the Burden, constructed six tortoises for Walter that were more solidly engineered. Two of these six tortoises became Elmer and Elsie, their names taken from Grey’s somewhat contrived acronym: ELectro MEchanical Robots, Light Sensitive, with Internal and External stability.

Walter considered Elmer and Elsie to be the Adam and Eve of a new species, Machina speculatrix. The scientific nomenclature reflected the robots’ exploratory or speculative behavior. The creatures each had a smooth protective shell and a protruding neck, so Walter put them in the Linnaean genus Testudo, or tortoise. Extending his naming scheme, he dubbed Shannon’s maze-crawling mouse Machina labyrinthia and Ashby’s Homeostat Machina sopora (sleeping machine).

Did W. Grey Walter’s cybernetic tortoises exhibit free will?

Each tortoise moved on three wheels with two sets of motors, one for locomotion and the other for steering. Its “brain” consisted of two vacuum tubes, which Walter said gave it the equivalent of two functioning neurons.

Despite such limited equipment, the tortoises displayed free will, he claimed. In the May 1950 issue of Scientific American [PDF], he described how the photocell atop the tortoise’s neck scanned the surroundings for a light source. The photocell was attached to the steering mechanism, and as the tortoise searched, it moved forward in a circular pattern. Walter compared this to the alpha rhythm of the electric pulses in the brain, which sweeps over the visual areas and at the same time releases impulses for the muscles to move.

In a dark room, the tortoise wandered aimlessly. When it detected a light, the tortoise moved directly toward the source. But if the light surpassed a certain brightness, it retreated. Presented with two distinct light sources, it would trace a path back and forth between the pair. “Like a moth to a flame,” Walter wrote, the tortoise oscillated between seeking and withdrawing from the lights.

The tortoise had a running light that came on when it was searching for a light source. Originally, this was just to signal to observers what command the robot was processing, but it had some unintended consequences. If Elmer happened to catch a glimpse of itself in a mirror, it would begin moving closer to the image until the reflected light became too bright, and then it would retreat. In his 1953 book The Living Brain, Walter compared the robot to “a clumsy Narcissus.”

Similarly, if Elmer and Elsie were in the same area and saw the other’s light, they would lock onto the source and approach, only to veer away when they got too close. Ever willing to describe the machines in biological terms, Walter called this a mating dance where the unfortunate lovers could never “consummate their ‘desire.’ ”

The tortoise’s shell did much more than just protect the machine’s electromechanical insides. If the robot bumped into an obstacle, a touch sensor in the shell caused it to reverse and change direction. In this manner it could explore its surroundings despite being effectively blind.
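The tortoise’s whole repertoire reduces to a few prioritized rules, which can be sketched as a control loop. This is a toy model: the thresholds, parameter names, and action labels are invented for illustration, not drawn from Walter’s circuitry.

```python
def tortoise_step(light_level, bumped, dim=0.2, glare=0.8):
    """One control cycle of a toy M. speculatrix.

    light_level: photocell reading in [0, 1]; thresholds are arbitrary.
    bumped: True if the shell's touch sensor is pressed.
    Returns the action the tortoise takes this cycle.
    """
    if bumped:
        return "reverse and change course"  # obstacle avoidance wins
    if light_level > glare:
        return "retreat"                    # too bright: back away
    if light_level > dim:
        return "approach light"             # moderate light: seek it
    return "scan in circles"                # darkness: keep exploring

print(tortoise_step(0.5, bumped=True))    # reverse and change course
print(tortoise_step(0.9, bumped=False))   # retreat
print(tortoise_step(0.5, bumped=False))   # approach light
print(tortoise_step(0.05, bumped=False))  # scan in circles
```

With rules this simple, the richer behaviors Walter observed fall out on their own: the “clumsy Narcissus” at the mirror and the oscillating “mating dance” are just approach and retreat alternating as the light level crosses the glare threshold.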

M. speculatrix was powered by a hearing-aid battery and a 6-volt battery. When its wanderings were done—that is, when its battery levels were low—it made its way to its hutch. There, it could connect its circuits, turn off its motors, and recharge.

Elmer and Elsie were a huge hit at the 1951 Festival of Britain

During the summer of 1951, Elmer and Elsie performed daily in the science exhibition at the Festival of Britain. Held at sites throughout the United Kingdom, the festival drew millions of visitors. The tortoises were a huge hit. Attendees wondered at their curious activity as they navigated their pen, moved toward and away from light sources, and avoided obstacles in their path. A third tortoise with a transparent shell was on display to showcase the inner workings and to advertise the component parts.

Even as M. speculatrix was wowing the public, Walter was investigating the next evolution of the species. Elmer and Elsie successfully demonstrated unpredictable behavior that could be compared with a basic animal reaction to stimuli, but they never learned from their experience. They had no memory and could not adapt to their environment.

Walter dubbed his next experimental tortoise M. docilis, from the Latin for teachable, and he attempted to build a robot that could mimic Pavlovian conditioned responses. Where the Russian physiologist Ivan Pavlov used dogs, food, and some sort of sound, Walter used his cybernetic tortoises, light, and a whistle. That is, he taught his M. docilis tortoises that the sound of a whistle was the same as a light source and that the tortoise would move toward the sound even if no light was present.

Walter published his findings on M. docilis in a second Scientific American article, “A Machine That Learns” [PDF]. This follow-up article had much to offer electrical engineers, including circuit diagrams and a technical discussion of some of the challenges in constructing the robots, such as amplifying the sound of the whistle to overcome the noise of the motors.

The brain of M. docilis was CORA (short for COnditioned Reflex Analog) circuitry, which detected repeated coincidental sensory inputs on separate channels, such as light and sound that happened at the same time. After CORA logged a certain number of repetitions, often between 10 and 20 instances, it linked the resulting behavior, which Walter described as a conditioned response. Just as CORA could learn a behavior, it could also forget it. If the operator teased the tortoise by withholding the light from the sound of the whistle, CORA would delink the response.
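CORA’s conditioning and extinction can be sketched as a coincidence counter. This is a digital toy abstraction of what was an analog circuit; the threshold of 15 is an arbitrary choice within the 10-to-20 range Walter reported, and the class interface is invented for the sketch.

```python
class CORA:
    """Toy conditioned-reflex analog: links sound to the light response
    after repeated coincidences, and delinks it when sound keeps
    arriving without light."""

    def __init__(self, link_after=15, unlink_after=15):
        self.link_after = link_after      # coincidences needed to condition
        self.unlink_after = unlink_after  # unreinforced trials to extinguish
        self.coincidences = 0
        self.misses = 0
        self.linked = False

    def stimulus(self, light, sound):
        if light and sound:
            self.coincidences += 1
            self.misses = 0
            if self.coincidences >= self.link_after:
                self.linked = True        # sound now acts like light
        elif sound and not light:
            if self.linked:
                self.misses += 1
                if self.misses >= self.unlink_after:
                    self.linked = False   # conditioning forgotten
        # Does the tortoise produce its light-seeking response?
        return light or (self.linked and sound)

cora = CORA()
for _ in range(15):
    cora.stimulus(light=True, sound=True)      # pair whistle with light
print(cora.stimulus(light=False, sound=True))  # True: whistle alone works
for _ in range(15):
    cora.stimulus(light=False, sound=True)     # tease: whistle, no light
print(cora.stimulus(light=False, sound=True))  # False: response extinguished
```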

At the end of his article, Walter acknowledged that future experiments with more circuits and inputs were feasible, but the increase in complexity would come at the cost of stability. Eventually, scientists would find it too difficult to model the behavior and understand the reactions to multiple stimuli.

Walter discontinued his experiments with robotic tortoises after CORA, and the research was not picked up by others. As the historian of science Andrew Pickering noted in his 2009 book, The Cybernetic Brain, “CORA remains an unexploited resource in the history of cybernetics.”

Walter’s legacy lives on in his tortoises. The late Reuben Hoggett compiled a treasure trove of archival research on Walter’s tortoises, which can be found on Hoggett’s website, Cybernetic Zoo. The three tortoises from the Festival of Britain were auctioned off, and the winner, Wes Clutterbuck, nicknamed them Slo, Mo, and Shun. Although two were later destroyed in a fire, the Clutterbuck family donated the one with a transparent shell to the Smithsonian Institution. The only other known surviving tortoise from the original six crafted by Bunny Warren is at the Science Museum in London. It is currently on exhibit in the Making the Modern World Gallery.

An abridged version of this article appears in the March 2020 print issue as “The Proto-Roomba.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

Fun—and Uranium—for the Whole Family in This 1950s Science Kit

Post Syndicated from Allison Marsh original

“Users should not take ore samples out of their jars, for they tend to flake and crumble and you would run the risk of having radioactive ore spread out in your laboratory.” Such was the warning that came with the Gilbert U-238 Atomic Energy Lab, a 1950s science kit that included four small jars of actual uranium. Budding young nuclear scientists were encouraged to use the enclosed instruments to measure the samples’ radioactivity, observe radioactive decay, and even go prospecting for radioactive ores. Yes, the Gilbert company definitely intended for kids to try this at home. And so the company’s warning was couched not in terms of health risk but rather as bad scientific practice: Removing the ore from its jar would raise the background radiation, thereby invalidating your experimental results.

The Gilbert U-238 Atomic Energy Lab put a positive spin on radioactivity

The A.C. Gilbert Co., founded in 1909 as the Mysto Manufacturing Co., was already a leader in toys designed to inspire interest in science and engineering. Founder Alfred Carlton Gilbert’s first hit was the Erector Set, which he introduced in 1913. In the early 1920s, the company sold vacuum tubes and radio receivers until Westinghouse Electric cried patent infringement. Beginning in 1922, A.C. Gilbert began selling chemistry sets.

When the Atomic Energy Lab hit the market in 1950, it was one of the most elaborate science kits available. In addition to uranium, it had beta-alpha, beta, and gamma radiation sources. It contained a cloud chamber, a spinthariscope (a simple device for watching atoms decay), an electroscope, and a Geiger counter, as well as a 60-page instruction book and a guide to mining uranium.

Also included in every kit was Learn How Dagwood Splits the Atom! Part comic book, part educational manual, it used the popular comic strip characters Blondie and Dagwood Bumstead, as well as their children, dog, and friends, to explain the basics of atomic energy. In the tale, they all shrink to the size of atoms while Mandrake the Magician, another popular comic strip hero of the day, supervises the experiment and explains how to split an atom of uranium-235.

Despite the incongruity of a magician explaining science, the booklet was prepared with expert advice. Published in 1949 by King Features Syndicate, it featured Leslie R. Groves (director of the Manhattan Project) and John R. Dunning (a physicist who verified fission of the uranium atom) as consultants.

Groves’s opening statement encourages the pursuit of truth, facts, and knowledge. He strives to allay readers’ fears about atomic energy and encourages them to see how it can be used for peacetime pursuits. The journalist Bob Considine, who covered the atomic bomb tests at Bikini, likewise dwells on the positive possibilities of nuclear energy and the availability of careers in the field.

Alas, fewer than 5,000 of the Gilbert kits were sold, and the kit remained on the market only until 1951. The lackluster sales may have been due to the eye-popping price: US $49.50, or about $500 today. Two competing sets, from the Porter Chemical Company, also contained uranium ore and were advertised as having atomic energy components, but retailed for $10 and $25.

Starting in the 1960s, toy safety became a concern

Parents today might be baffled that products containing radioactive elements were ever marketed to children. At the time, however, the radioactivity wasn’t considered a flaw. The inside cover of the Atomic Energy Lab proclaimed the product “Safe!”

But it’s also true that in the 1950s few consumer protection laws regulated the safety of toys in the United States. Instead, toy manufacturers responded to trends in popular opinion and consumer taste, which had been pro-science since World War II.

Those attitudes began to change in the 1960s. Books such as Rachel Carson’s Silent Spring (1962, Houghton Mifflin) raised concerns about how chemicals were harming the environment, and the U.S. Congress began investigating whether toy manufacturers were providing adequate safeguards for children.

Beginning with the passage of the 1960 Federal Hazardous Substances Labeling Act, all products sold in the United States that contained toxic, corrosive, or flammable ingredients had to include warning labels. Additionally, any product that could be an irritant or a sensitizer, or that could generate pressure when heated or decomposed, had to be labeled a “hazardous substance.”

More far-reaching was the 1966 Child Protection Act, which allowed the U.S. Secretary of Health, Education, and Welfare to ban the sale of toys that contained hazardous substances. Due to a limited definition of “hazardous substance,” it did not regulate electrical, mechanical, or thermal hazards. The 1969 Child Protection and Toy Safety Act closed these loopholes. And the Toxic Substances Control Act of 1976 banned some chemicals outright and strictly controlled the quantities of others.

Clearly, makers of chemistry sets and other scientific toys were being put on notice.

Did the rise of product safety laws inadvertently undermine science toys?

What was ostensibly a win for child safety was a loss for science education. Chemistry sets were radically simplified, and the substances they contained were either diluted or eliminated. In-depth instruction booklets became brief pamphlets offering only basic, innocuous experiments. The A.C. Gilbert Co., which struggled after the death of its founder in 1961, finally went bankrupt in 1967.

The U.S. Consumer Product Safety Commission, established in 1972, continues to police the toy market. Toys get recalled for high levels of arsenic, lead, or other harmful substances, for being too flammable, or for containing parts small enough to choke on.

And so in 2001, the commission reported the recall of Professor Wacko’s Exothermic Exuberance chemistry kit. As you might expect from the product’s name, there was a fire risk. The kit included glycerin and potassium permanganate, which ignite when mixed. (This chemical combo is also the basis of the popular—at least on some university campuses—burning book experiment.) A controlled fire is one thing, but in Professor Wacko’s case the bottles had interchangeable lids. If the lids, which might contain residual chemicals, were accidentally switched, the set could be put away without the user realizing that a reaction was brewing. Several house fires resulted.

Another recalled science toy was 2007’s CSI Fingerprint Examination Kit, based on the hit television show. Children, pretending to be crime-scene investigators, dusted for fingerprints. Unfortunately, the fingerprint powder contained up to 5 percent asbestos, which can cause serious lung ailments if inhaled.

In comparison, the risk from the uranium-238 in Gilbert’s U-238 Atomic Energy Lab was minimal, roughly equivalent to a day’s UV exposure from the sun. And the kit had the beneficial effect of teaching that radioactivity is a naturally occurring phenomenon. Bananas are mildly radioactive, after all, as are Brazil nuts and lima beans. To be sure, experts don’t recommend ingesting uranium or carrying it around in your pocket for extended periods of time. Perhaps it was too much to expect that every kid would abide by the kit’s clear warning. But despite sometimes being called the “most dangerous toy in the world,” Gilbert’s U-238 Atomic Energy Lab was unlikely to have ever produced a glowing child.

An abridged version of this article appears in the February 2020 print issue as “Fun With Uranium!”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

The Hidden Figures Behind Bletchley Park’s Code-Breaking Colossus


“If anyone asked us what we did, we were to say that we…did secretarial work.” That’s how Eleanor Ireland described the secrecy surrounding her years at Bletchley Park. Ireland was decidedly not a secretary, but there was good reason for the subterfuge.

Ireland was one of 273 women recruited during World War II to operate Bletchley Park’s Colossus machines, which were custom built to help decrypt German messages that had been encoded using the sophisticated Lorenz cipher machines. (Bletchley Park’s more famous code-breaking effort, pioneered by Alan Turing, involved breaking the ciphers of the simpler Enigma machines.) Because of the intense, high-stakes, highly classified work, the women were all required to sign statements under the Official Secrets Act. And so their contributions, and Colossus itself, remained state secrets for decades after the end of the war.

In 1975, the U.K. government began slowly declassifying the project, starting with the release of some photos. The historian Brian Randell, who had been lobbying the government to declassify Colossus, was given permission to interview engineers involved in the project. He was also allowed to write a paper about their work, but without discussing the code-breaking aspects. Randell presented his findings at a conference in Los Alamos, N.M., in June 1976.

In 1983, Tommy Flowers, the electrical engineer chiefly responsible for designing the machines, was permitted to write about Colossus, again without disclosing details about what Colossus was used for. In 1987, IEEE Spectrum’s Glenn Zorpette wrote one of the first journalistic accounts of the code-breaking effort [see “Breaking the Code,” September 1987]. It wasn’t until 1996, when the U.S. government declassified its own documents from Bletchley Park, that the women’s story finally started to emerge.

Beginning in 1943, a group of Wrens—members of the Women’s Royal Naval Service—who either excelled at mathematics or had received strong recommendations were told to report to Bletchley Park. This Victorian mansion and estate about 80 kilometers northwest of London was home to the Government Code and Cipher School.

There, the Wrens received training in binary math, the teleprinter alphabet, and how to read machine punch tapes. Max Newman, head of the section responsible for devising mechanized methods of code breaking, initially led these tutorials. After two weeks, the women were tested on their knowledge and placed into jobs accordingly. Eleanor Ireland landed the plum assignment of Colossus operator.

Colossus was the first digital electronic computer, predating ENIAC by two years. Tommy Flowers, who worked on switching electronics at the Post Office Research Station in Dollis Hill, designed the machine to help decipher the encrypted messages that the Nazi high command sent by radioteleprinter. The Germans called their radioteleprinter equipment Sägefisch, or sawfish, reportedly because of the radio signals’ sawtooth wave. The British accordingly nicknamed the German coded messages “fish,” and the cipher that Colossus was designed to break became “Tunny,” short for tuna fish.

The Tunny machine was known to the Germans as the Lorenz SZ40, and it operated as an attachment to a standard teleprinter. Like the Enigma, the Lorenz had a set of wheels that encrypted the message. But where the Enigma had three wheels, the Lorenz had 12. Because of the Lorenz’s significantly stronger encryption, the Germans used it for their highest-level messages, such as those sent by Hitler to his generals.

The Bletchley Park code-breakers figured out how to break the Tunny codes without ever having seen a Lorenz. Each of the 12 wheels was imprinted with a different number of two-digit numerals. The code breakers discovered that the wheels consisted of two groups of five—which they called the psi wheels and the chi wheels—plus two motor, or mu, wheels. The chi wheels moved forward in unison with each letter of a message. The psi wheels advanced irregularly based on the position of the mu wheels. Each letter of a message was the sum of the letters—that is, the sum of the numbers representing the letters—generated by the chi and psi wheels.
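The “sum of the letters” here is modulo-2 addition of the characters’ 5-bit teleprinter codes, the same operation as bitwise XOR. A minimal sketch of this additive scheme, using invented 5-bit values rather than real Lorenz chi/psi wheel patterns, shows why applying the same key streams a second time recovers the plaintext:

```python
# Sketch of Vernam-style additive encryption as used by the Lorenz machine.
# Each letter is a 5-bit Baudot code; "adding" two letters means XOR-ing
# their codes bit by bit. The key streams below are invented for
# illustration, not actual chi/psi wheel patterns.

def encrypt(codes, chi_stream, psi_stream):
    """Add (XOR) the chi and psi key characters to each character."""
    return [p ^ c ^ s for p, c, s in zip(codes, chi_stream, psi_stream)]

plaintext = [0b10001, 0b00110, 0b11010, 0b01101]  # arbitrary 5-bit codes
chi_key   = [0b01010, 0b11100, 0b00011, 0b10110]  # illustrative chi stream
psi_key   = [0b00111, 0b01001, 0b10100, 0b01010]  # illustrative psi stream

cipher = encrypt(plaintext, chi_key, psi_key)

# Because XOR is its own inverse, "encrypting" the ciphertext with the
# same key streams yields the original plaintext.
recovered = encrypt(cipher, chi_key, psi_key)
assert recovered == plaintext
```

This self-inverting property is what made recovering the wheel settings so valuable: with the correct chi and psi streams in hand, decryption is the same operation as encryption.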

The initial function of Colossus was to help determine the starting point of the wheels. Colossus read the cipher’s stream of characters and counted the frequency of each character. Cryptographers then compared the results to the frequency of letter distribution in the German language and to a sample chi-wheel combination, continually refining the chi-wheel settings until they found the optimal one.
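The statistical idea behind that refinement can be shown with a toy sketch (this is an illustration of frequency scoring, not Colossus’s actual algorithm; the alphabet and frequencies below are invented, not real German statistics):

```python
from collections import Counter

def frequency_score(text, expected_freq):
    """Score a candidate decryption: higher means its letter distribution
    is closer to the expected language statistics. A correct key setting
    produces language-like text; a wrong one produces near-random text."""
    counts = Counter(text)
    total = len(text)
    # Sum of observed_fraction * expected_fraction over observed letters;
    # random text scores low, language-like text scores high.
    return sum((counts[ch] / total) * expected_freq.get(ch, 0.0)
               for ch in counts)

# Invented expected frequencies for a tiny alphabet.
expected = {"e": 0.17, "n": 0.10, "i": 0.08, "x": 0.001}

language_like = "einenienne"   # stands in for a good chi-wheel guess
random_like   = "xqzkvwyjpx"   # stands in for a bad one
assert frequency_score(language_like, expected) > frequency_score(random_like, expected)
```

Colossus performed the counting at electronic speed; the cryptographers then compared scores across candidate wheel settings and kept refining the best one.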

Eventually, there were 10 Colossi operating around the clock at Bletchley Park. These room-size machines, filled with banks of vacuum tubes, switches, and whirring tape, were impressive to behold. The official government account of the project, the 1945 General Report on Tunny, used words such as “fantastic,” “uncanny,” and “wizardry” to describe Colossus, creating a mental image of a mythic machine.

But the actual task of operating Colossus was tedious, time-consuming, and stressful. Before the machine could even start crunching data, the punched paper tapes that stored the information had to be joined into a loop. The Wrens experimented with various glues and applications until they arrived at the ones that worked best given the speed, heat, and tension of the tape as it ran through the machine. Dorothy Du Boisson described the process as the art of using just the right amount of glue, French chalk, and a warm clamp to make a proper joint.

The operator then had to feed the tape through a small gate in front of the machine’s photoelectric reader, adjusting the tape’s tautness using a series of pulleys. Getting the right tension was tricky. Too tight and the tape might break; too loose and it would slip in the machine. Either meant losing valuable time. Colossus read the tape at thousands of characters per second, and each tape run took approximately an hour.

The cryptographers at Bletchley Park decided which patterns to run, and the Wrens entered the desired programming into Colossus using switches and telephone-exchange plugs and cords. They called this pegging a wheel pattern. Ireland recalled getting an electric shock every time she put in a plug.

During the first three months of the Colossus program, many of the Wrens suffered from exhaustion and malnutrition, and their living conditions were far from enviable. The women bunked four to a room in the cold and dreary servant quarters of nearby Woburn Abbey. Catherine Caughey reported that the abbey’s plumbing couldn’t keep up.

The rooms that housed the Colossi were, by contrast, constantly overheated. The vacuum tubes on the machines gave off the equivalent of a hundred electric heaters. Whenever a Wren got too hot or sleepy, she would step outside the bunkers to splash water on her face. Male colleagues suggested that the women go topless. They declined.

The Wrens worked in 8-hour shifts around the clock. They rotated through a week of day shifts, a week of evenings, and a week of nights with one weekend off every month. Women on different shifts were often assigned to the same dorm room, the comings and goings disrupting their sleep.

Those in charge of the Wrens at Woburn Abbey didn’t know what the women were doing, so they still required everyone to participate in squad-drill training every day. Only after women started fainting during the exercises were a few improvements made. Women in the same shift began rooming together. Those working the night shift were served a light breakfast before starting, rather than reheated leftovers from supper.

During their time at Bletchley Park, the computer operators knew very little about their successful contribution to the war effort.

Having signed the Official Secrets Act, none of the 273 women who operated the Colossi could speak of their work after the war. Most of the machines were destroyed, and Tommy Flowers was ordered to burn the designs and operating instructions. As a result, for decades the history of computing was missing an important first.

Beginning in the early 1990s, Tony Sale, an engineer and curator at the Science Museum, London, began to re-create a Colossus, with the help of some volunteers. They were motivated by intellectual curiosity as well as a bit of national pride. For years, U.S. computer scientists had touted ENIAC as the first electronic computer. Sale wanted to have a working Colossus up and running before the 50th anniversary of ENIAC’s dedication in 1996.

On 6 June 1996, the Duke of Kent switched on a basic working Colossus at Bletchley Park. Sale’s machine is still on view in the Colossus Gallery at the National Museum of Computing on the Bletchley estate, which is open every day to the public.

When the British government finally released the 500-page General Report on Tunny in 2000, the story of Colossus could be told in full. Jack Copeland captures both the technical detail and the personal stories in his 2006 book Colossus: The Secrets of Bletchley Park’s Codebreaking Computers (Oxford University Press), which he wrote in collaboration with Flowers and a number of Bletchley Park code breakers and computer historians.

And what of the women computer operators? Their stories have been slower to be integrated into the historical narrative, but historians such as Janet Abbate and Mar Hicks are leading the way. Beginning in 2001, Abbate led an oral history project interviewing 52 women pioneers in computing, including Eleanor Ireland. These interviews became the basis for Abbate’s 2012 book Recoding Gender: Women’s Changing Participation in Computing (MIT Press).

In 2017, Hicks published Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing (MIT Press). In it she traces women’s work in the burgeoning computer field before World War II through the profession’s gender flip in the 1960s. The book documents the industry’s systematic gender discrimination, which is still felt today.

As for the computer operators themselves, Ireland took advantage of the lifting of the classification to write an essay about Colossus and the fellowship of the Wrens: “When we meet, as we do in recent years every September, we all agree that those were our finest hours.”

An abridged version of this article appears in the January 2020 print issue as “The Hidden Figures of Colossus.”



Hasbro’s Classic Game Operation Was Sparked by a Grad Student’s Electric Idea


Cavity Sam, the cartoon patient in the board game Operation, suffers from an array of anatomically questionable ailments: writer’s cramp (represented by a tiny plastic pencil), water on the knee (a bucket of water), butterflies in the stomach (you get the idea). Each player takes a turn as Sam’s doctor, using a pair of tweezers to try to remove the plastic piece for each ailment. Dexterity is key. If the tweezers touch the side of the opening, it closes a circuit, causing the red bulb that is Sam’s nose to light up and a buzzer to sound. Your turn is then over. The game’s main flaw, at least from the patient’s perspective, is that it’s more fun to lose your turn than to play perfectly.

John Spinello created the initial concept for what became Operation in the early 1960s, when he was an industrial design student at the University of Illinois. Spinello’s game, called Death Valley, didn’t feature a patient, but rather a character lost in the desert. His canteen drained by a bullet hole, he wanders through ridiculous hazards in search of water. Players moved around the board, inserting their game piece—a metal probe—into holes of various sizes. The probe had to go in cleanly without touching the sides; otherwise it would complete a circuit and sound a buzzer. Spinello’s professor gave him an A.

Spinello sold the idea to Marvin Glass and Associates, a Chicago-based toy design company, for US $500, his name on the U.S. patent (3,333,846), and the promise of a job, which never materialized.

Mel Taft, a game designer at Milton Bradley, saw a prototype of Death Valley and thought it had potential. His team tinkered with the idea but decided it would be more interesting if the players had to remove an object rather than insert a probe. They created a surgery-themed game, and Operation was born.

The game debuted in 1965, and the English-language version has remained virtually the same for decades. (A 2013 attempt to make Cavity Sam thinner and younger and the game pieces larger and easier to extract did not go over well with fans.) Variations of Operation have featured cartoon characters other than Cavity Sam, such as Homer Simpson, from whose body the player extracted donuts and pretzels, and Star Wars’ Chewbacca, who’s been infested with Porgs and other “hair hazards.” International editions are available in French, German, Italian, and Portuguese/Spanish. The global franchise for all this electrified silliness has generated an estimated $40 million in sales over the years.

Such complete-the-circuit games actually date back to the mid-1700s. Benjamin Franklin designed a game called Treason, which involved removing a gilt crown from a picture of King George II without touching the frame. And in a popular carnival game, contestants had to guide an electrified wire loop along a twisted rod without touching it.

Spinello was not the first to patent such a game. In 1957, Walter Goldfinger and Seymour Beckler received U.S. patent 2,808,263 for a portable electric game that simulated a golf course and its hazards. They in turn cited John Braund’s 1950 patent for a simulated baseball game and John Archer Smith and Victor Merrow’s 1946 patent for a game involving steering vehicles around a board.

A few years after Spinello filed for his U.S. patent, but two months before it was granted, an inventor named Narayan Patel filed for a remarkably similar game that also called for inserting a metal probe between electrified plates. Patel outlined four themed games based on this setup, one of which was aimed at adult partiers. He called his amusement and dexterity test “How Much Do You Drink?” with ranges from “Never heard of alcohol” to “Brother, you are dead.”

But if your main goal was to get inebriated while testing your fine motor skills, you didn’t really need a dedicated game. Some players simply adapted their own drinking rules to Operation. Needless to say, if you didn’t start the game with the skilled hands of a surgeon, a few alcoholic beverages were unlikely to help.

An abridged version of this article appears in the December 2019 print issue as “A Game With Buzz.”



The First Transatlantic Telegraph Cable Was a Bold, Beautiful Failure


On 16 August 1858, Queen Victoria and U.S. president James Buchanan exchanged telegraphic pleasantries, inaugurating the first transatlantic cable connecting British North America to Ireland. It wasn’t exactly instant messaging: The queen’s 98-word greeting of goodwill took almost 16 hours to send through the 3,200-kilometer cable. Still, compared to packet steamships, which could take 10 days to cross the Atlantic, the cable promised a tremendous improvement in speed for urgent communications.

This milestone in telegraphy had been a long time coming. Samuel Morse first suggested linking the two continents in 1840, and various attempts were made over the ensuing years. Progress on the project took off in the mid-1850s, when U.S. entrepreneur Cyrus W. Field began investing heavily in telegraphy.

Field had made his fortune in the paper industry by the age of 34. The first telegraph project he invested in was a link from St. John’s, Newfoundland, to New York City, as envisioned by Canadian engineer Frederic Newton Gisborne. The venture never secured enough funding, but Field’s enthusiasm for telegraphy was undiminished. Over the next decade, he invested his own money and rallied other inventors and investors to form several telegraph companies.

The most audacious of these was the Atlantic Telegraph Company (ATC). Field and the English engineers John Watkins Brett and Charles Tilston Bright, both specialists in submarine telegraphy, formed the company in 1856, with the goal of laying a transatlantic cable. The British and U.S. governments both agreed to subsidize the project.

Terrestrial telegraphy was by then well established, and several shorter submarine cables had been deployed in Europe and the United States. Still, the transatlantic cable’s great length posed some unique challenges, especially because transmission theory and cable design were still very much under debate.

Morse and British physicist Michael Faraday believed that the conducting wire of a submarine cable should be as narrow as possible, to limit retardation of the signal. And the wider the wire, the more electricity would be needed to charge it. Edward Orange Wildman Whitehouse, an electrician for the Atlantic Telegraph Company, subscribed to this view.

The other school of thought was represented by William Thomson (later Lord Kelvin). He argued that the amount of retardation was proportional to the square of the cable’s length. Thomson suggested using a large-diameter core made with the purest copper available in order to reduce the resistance. Bright, the project’s chief engineer, shared Thomson’s view. This design was significantly heavier and more costly than the one proposed by the Morse-Faraday school, and the ATC did not adopt it.
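Thomson’s reasoning, later known as the law of squares, follows from treating the cable as a distributed resistance and capacitance. A sketch of the argument, using generic proportionality relations rather than figures from the project:

```latex
% R: total conductor resistance; C: total capacitance to the sea (ground);
% l: cable length; A: cross-sectional area of the copper core.
\begin{aligned}
R &\propto \frac{l}{A} && \text{(resistance grows with length, shrinks with core area)}\\
C &\propto l && \text{(capacitance grows with length)}\\
t_{\text{retardation}} &\propto RC \propto \frac{l^{2}}{A}
\end{aligned}
```

Hence a cable twice as long suffers roughly four times the retardation, and once the route fixes the length, a fatter core of purer copper (larger area, lower resistivity) is the remaining lever, which is exactly what Thomson and Bright proposed.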

The Gutta Percha Co. manufactured the cable’s core and insulation. The core consisted of seven strands of copper wire twisted together to make a wire 0.083 inch in diameter. The finished core weighed 107 pounds per nautical mile, which was significantly lighter than the 392 pounds per nautical mile that Thomson and Bright had proposed. The copper core was wrapped in three layers of gutta-percha, a latex that comes from trees of the same name. The insulated core was then covered in tarred hemp and wrapped with iron wire. The finished cable was about five-eighths of an inch in diameter.

At the time, no ship could carry all of the submarine cable needed, so the cargo was split between two naval ships, the HMS Agamemnon and the USS Niagara, both of which were refitted to carry the load. It took three weeks to load the cable. Many spectators gathered to watch, while local officials and visiting dignitaries treated the naval officers to countless dinners and celebrations, much of which was recorded and amplified by the press.

Of course, two ships meant that the cables would have to be spliced together at some point. Once again, there was disagreement about how to proceed.

Bright argued for splicing the cable in midocean and then having each ship head in opposite directions, paying out its cable as it went. Whitehouse and the other electricians preferred to begin laying the cable in Ireland and splicing in the second half once the first half had been laid. This plan would allow continuous contact with the shore and ongoing testing of the cable’s signal. Bright’s plan had the advantage of halving the time to lay the cable, thus lessening the chance of encountering foul weather.

The directors initially chose Whitehouse’s plan. Niagara and Agamemnon met at Queenstown, Ireland, to test the cable with a temporary splice. After a successful transmission, the ships headed to Valentia Bay to begin their mission, escorted by the USS Susquehanna and the HMS Leopard. Also joining the fleet were the support ships HMS Advice, HMS Willing Mind, and HMS Cyclops.

On 5 August 1857, the expedition got under way. The first portion of cable to be laid was known as the shore cable: heavily reinforced line to guard against the strains of waves, currents, rocks, and anchors. But less than 5 miles out, the shore cable got caught in the machinery and broke. The fleet returned to port.

Willing Mind ran a thick rope to retrieve the broken shore cable, and crew members spliced it back to the shore cable on the Niagara. The fleet set out again. When they reached the end of the shore cable, the crews spliced it to the ocean cable and slowly lowered it to the ocean floor.

For the next few days, the cable laying proceeded. There was nearly continuous communication between Whitehouse on shore and Field, Morse, and Thomson on board, although Morse was incapacitated by seasickness much of the time.

The paying-out machinery was fiddly to operate. The cable was occasionally thrown off the wheel, and tar from the cable built up in the grooves and had to be cleaned off. To keep the cable paying out at a controlled rate required constant adjustment of the machinery’s brakes. The brakeman had to balance the ship’s speed and the ocean current. In perfect weather and flat seas, this was easy to judge. But weather can be fickle, and humans are fallible.

Around 3:45 a.m. on 11 August, the stern of Niagara headed into the trough of a wave. As the ship rose, the pressure on the cable increased. The brakes should have been released, but they weren’t. The cable broke and plunged to an irretrievable depth.

Field immediately headed to England aboard the Leopard to meet with the ATC’s board of directors. Niagara and Agamemnon remained at the site for a few days to practice splicing the cable from the two ships. Cyclops, which had done the initial survey of the route the previous year, conducted soundings of the site. When they returned to shore, the crews learned that the project had been halted for the year.

Over the winter months, William Everett was named chief engineer and set about redesigning the paying-out machinery with more attention to the braking mechanism and safety features. The crew practiced their maneuvers. Thomson thought more about transmission speed and developed his mirror galvanometer, an instrument for detecting the faint currents in a long cable.

The ships set out again the following summer. This time they would follow Bright’s plan. Agamemnon and Niagara would meet at 52°2’ N, 33°18’ W, the halfway point of the proposed line. In the middle of the Atlantic Ocean, they would splice together the cable and drop it to the ocean floor. Agamemnon would head east to Ireland, while Niagara headed west to Newfoundland.

Although the weather was fine when the ships set out, it soon turned. For six days, the two ships, laden with 1,500 tons of cable, pitched alarmingly from side to side. Although no one was lost, 45 men were injured, and Agamemnon ended up 200 miles off course.

Finally, on 25 June 1858 Agamemnon and Niagara met. The crews spliced together the cable, and the ships set off. At first, the two ships were able to communicate via the cable, but around 3:30 a.m. on 27 June, both logbooks recorded a failure. Because things looked fine on each ship, the crews assumed the problem was on the other end, and the ships returned to the rendezvous spot. The crews didn’t want to waste time investigating, so they abandoned the 100 km of cable that had already been laid, spliced together a new connection, and the ships set off once more.

By 29 June, Agamemnon had paid out almost all of the cable stored on deck, which meant the crew would have to switch to the main coil in the middle of the night. Although they had practiced the maneuver over the winter, luck was not on their side. Around midnight, the cable snapped and was lost. As it turned out, the six-day storm had damaged the cable as it lay on the deck. The two ships were hundreds of kilometers apart by this point, and they headed back to Queenstown to await further direction.

Field was not deterred, but it took some doing to convince the rest of the ATC’s board of directors to make another attempt. He could be a persuasive guy.

The ships set out for a third time on 17 July 1858. This time the cable laying progressed without incident, having been blessed, finally, by the weather gods. On 29 July, as Field recorded in his journal, the two ships spliced the two ends of the cable together in the middle of the Atlantic Ocean, dropped it in the water at 1,500 fathoms (2,745 meters), and then each ship headed to its destination port. Niagara arrived on 4 August, Agamemnon the following day. The 3,200-km cable now connected Heart’s Content, in Newfoundland, to Telegraph Field on Valentia Island, in Ireland.

By 10 August dispatchers were sending test messages, and on 16 August, with the queen’s and Buchanan’s exchanges, the line was officially open.

All of the project’s many starts and stops had been followed closely by the press and an eager public. Archaeologist Cassie Newland has called the heroic effort “the Victorian equivalent of the Apollo mission.”

The triumphant opening, after years of speculation and so many failures, was lauded as the communications achievement of the century. New York City celebrated with a parade and fireworks, which accidentally set the dome of City Hall on fire. Trinity Church in lower Manhattan held a special service to commemorate the cable, with the mayor of New York City and other officials in attendance and the Right Reverend George Washington Doane, Bishop of New Jersey, giving the address. On the other side of the Atlantic, shares in the ATC more than doubled, and Charles Bright was knighted for his work overseeing the project.

Of course, companies wanted to cash in on the celebration and immediately crafted all sorts of memorabilia and souvenirs. Niagara had arrived in New York with hundreds of kilometers of excess cable. The jeweler Tiffany & Co. bought it all.

Tiffany craftsmen cut the cable into 10-centimeter pieces, banding the ends of each piece with brass ferrules and attaching a descriptive plate [see photo at top]. The souvenirs retailed for 50 cents each (about US $15 today); wholesalers could buy lots of 100 for $25. Each piece came with a facsimile of a note, signed by Cyrus W. Field, certifying that he had sold the balance of the cable from Niagara to Tiffany. Although Tiffany claimed to have a monopoly on the cable, numerous competitors sprang up with their own cable souvenirs, including watch fobs, earrings, pendants, charms, letter openers, candlesticks, walking-stick toppers, and tabletop displays.

Tiffany reportedly sold thousands of its cable souvenirs, but it was a short-lived venture. Transmission on the transatlantic cable, never very strong, degraded quickly. Within a few weeks, the line failed entirely.

Blame for the failure quickly landed on Whitehouse, chief engineer for the eastern terminus of the cable. He believed that the farther the signal had to travel, the stronger the necessary voltage, and so he at times used up to 2,000 volts to try to boost the signal. Meanwhile, Thomson, the chief engineer for the cable’s western terminus, was using his mirror galvanometer to detect and amplify the faint signal coming through the cable.

In blaming Whitehouse for the failure, people were quick to point out his lack of traditional qualifications. This is a bit unfair—his path was similar to that of many gentleman scientists of the time. Trained as a medical doctor, he had a successful surgical practice before turning his attention to electrical experiments. Whitehouse patented several improvements to telegraphic apparatus and was elected a member of the British Association for the Advancement of Science. But the commission investigating the cable’s failure laid fault on Whitehouse’s use of high voltages.

In 1985 historian and engineer Donard de Cogan published an article that somewhat vindicated Whitehouse. De Cogan’s analysis of a length of cable that had been retrieved from the original deployment noted its poor manufacture, including the fact that the copper core was not centered within the insulator and at places was perilously close to the metal sheathing. Additionally, there was significant deterioration of the gutta percha insulator. De Cogan speculated that the impurities—which even Thomson objected to—along with improper storage over the 1857–58 winter resulted in a cable whose failure was inevitable. De Cogan also concluded, though, that having Whitehouse as a scapegoat may have actually helped advance transoceanic telegraphy. Had the cable failed without an identifiable cause, investors would have been more hesitant.

Regardless, the failure left Tiffany with thousands of unsellable souvenirs. Some ended up in museum collections, but many were put into storage and forgotten, leading to the next twist in this tale.

In 1974 a company called Lanello Reserves advertised the sale of 2,000 Tiffany cable souvenirs. The asking price was $100—about $500 in today’s dollars. Lanello Reserves also donated 100 pieces to the Smithsonian Institution, which the museum resold in its shops. Today, original transatlantic cable souvenirs pop up regularly in online auction sites. You too can own a piece of history.

While the souvenirs may have had a longer life than the transmission cable itself, that doesn’t diminish the accomplishment of Bright, Thomson, Whitehouse, and their teams. Even though the cable never worked well, a total of 732 messages were sent before it failed. These included the reporting of a collision between the Cunard Line ships Europa and Arabia, as well as an order from the British government to hold two regiments in Canada. The regiments were en route to India, but when the British government learned that the Indian Rebellion had been suppressed, it sent new orders via the cable. By not sending the troops, the treasury saved an estimated £50,000 to £60,000, recouping about one-seventh of its investment in the cable with a single military order.

Public sentiment toward the cable quickly cooled, however. By the end of 1858, rumors abounded that this was all an elaborate hoax or a fraudulent stock scheme aimed at fleecing unsuspecting investors. Similar to today’s conspiracy theorists who refuse to believe that the Apollo moon landing was real, the cable doubters were not convinced by souvenirs, messages from heads of state, or effusive press coverage.

Field was eager to try again. Most of the ATC’s original financial backers weren’t interested in investing in another transatlantic cable, but the British government still saw the potential and continued to provide funding. The company finally succeeded in building a permanent transatlantic cable in 1866.

Field may have lost many of his friends and investors, but he never lost his optimism. He was able to believe in a future of instant communication at a time when most people didn’t have—indeed, couldn’t even conceive of—indoor plumbing and electric lights.

An abridged version of this article appears in the November 2019 print issue as “Tiffany’s Transatlantic Telegraphy Doodad.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

How the Trautonium Found Favor With Nazis and Gave Hitchcock’s The Birds its Distinctive Screech

Post Syndicated from Allison Marsh original

I knew next to nothing about electronic music when I began researching this month’s column. My only association with the genre was the synthesizer sounds of ’80s pop and the (for me) headache-inducing beats of EDM. I never stopped to think about the roots of electronic music, and so I was surprised to learn that it can be traced back more than a century. It also has more than a passing association with the Nazis. Frode Weium, senior curator at the Norwegian Museum of Science and Technology, is the person who nominated the Volkstrautonium for this month’s Past Forward artifact and sent me down this fascinating musical rabbit hole.

The Volkstrautonium arose during the wave of electronic music that began shortly after World War I. Inventors in Europe, the Soviet Union, and the United States were creating a cacophony (or, if you like, a symphony) of new electronic musical instruments. It’s hard to say exactly why electronic music took off as it did, but the results were diverse and abundant. Some of the new creations took the name of their inventor, such as the theremin (for Leon Theremin) and the Ondes Martenot (for Maurice Martenot). Others were portmanteaus that merged musical and electronic terms: the Terpsitone, the Rhythmicon, the Cathodic Harmonium, the Radiophonic Organ, the Magnetophone, the Spherophone, the Elektrochord.

The music theorist Jörg Mager welcomed these new sounds. Often considered the founder of the electro-music movement, Mager in 1924 published his essay “Eine neue Epoche der Musik durch Radio” (“A New Epoch in Music Through Radio”), in which he argued that the radio was not simply a tool to disseminate sound, but also a tool to manipulate sound waves and create a new form of music.

In this burgeoning world of sonic experimentation, Germany was the epicenter. And an electrical engineer and trained musician named Friedrich Trautwein wanted a piece of the action. Trautwein had trained at the Heidelberg Conservatory and then studied engineering and acoustics at Karlsruhe Technical University, receiving his doctorate in 1921. He filed his first patent for electric tone generation a year later.

Trautwein’s first attempt at a new musical instrument was the Trautonium, an electrified cross between a violin (a violin with only one string, that is) and a piano (minus the keyboard). The playing interface, or manual, of the Trautonium consisted of a single wire stretched over a metal plate. When the musician pressed the wire against the plate, it closed the circuit and produced a tone. Moving a finger along the wire from left to right changed the resistance and therefore the pitch, while knobs on the console further adjusted the pitch. A set of movable keys let the musician set fixed pitches. It sounded like a violin. Sort of.

The original Trautonium generated its sound using tubes filled with neon gas, which functioned as a relaxation oscillator. The neon tubes were later replaced with a type of high-energy gas tube called a thyratron, which helped stabilize the pitch.
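The pitch-control principle can be sketched in a few lines. In a neon-tube relaxation oscillator, a capacitor charges through a resistor until the tube strikes and discharges it, and the cycle repeats; the oscillation frequency therefore depends on the resistance, which is effectively what the player's finger position on the wire set. The component values below are illustrative assumptions, not Trautwein's actual circuit:

```python
import math

def relaxation_freq(r_ohms, c_farads, v_supply, v_strike, v_extinguish):
    """Approximate frequency (Hz) of a neon-tube relaxation oscillator.

    The capacitor charges through r_ohms toward v_supply; the neon tube
    fires at v_strike, dumps the charge down to v_extinguish, and the
    cycle repeats. The charging time sets the period.
    """
    period = r_ohms * c_farads * math.log(
        (v_supply - v_extinguish) / (v_supply - v_strike))
    return 1.0 / period

# Sliding a finger along the Trautonium's wire varied the resistance in
# the circuit: less resistance means faster charging and a higher pitch.
for r in (1_000_000, 500_000, 250_000):  # ohms (illustrative values)
    print(f"R = {r:9,d} ohm -> {relaxation_freq(r, 10e-9, 200, 80, 60):7.1f} Hz")
```

In this simple model, halving the resistance doubles the frequency, and because the wire is continuous, the pitch glides smoothly under the finger rather than stepping between fixed notes as on a piano keyboard.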

Trautwein continued to experiment with ways of manipulating the electronic tone to produce the most pleasing timbre, according to Thomas Patteson, a professor of music history at the Curtis Institute of Music, in Philadelphia, who has done significant research on the history of the Trautonium. (For an extended description of Trautwein’s formant theory, for instance, see Chapter 5 of Patteson’s book Instruments for New Music: Sound, Technology, and Modernism, University of California Press, 2015.)

In 1930 Trautwein became a lecturer in acoustics at the Berlin Academy of Music, where he met Paul Hindemith, who was teaching music composition there. Almost immediately, the two began to collaborate on improving the Trautonium. Hindemith, already an established composer, wrote music specifically for the instrument and encouraged others to play it. One of Hindemith’s students, Oskar Sala, became a virtuoso on the instrument.

In a 1930 recording, Sala plays an early Trautonium.

Sala, like Trautwein, had dual interests in music and science; he went on to study physics at what is now Humboldt University of Berlin.

The Trautonium debuted to the public on 20 June 1930 as part of a summer concert series devoted to new music. For the performance, Hindemith composed Des kleinen Elektromusikers Lieblinge (The Little Electro-musician’s Favorites), a set of seven short pieces for a trio of Trautonia. Hindemith, Sala, and the pianist Rudolf Schmidt performed the pieces and demonstrated the potential range of the new instrument. In conjunction with the performance, Trautwein published a short book, Elektrische Musik, that served as a technical guide to the Trautonium.

The following year, Hindemith conducted his Concertino for Trautonium and String Orchestra at the second Radio Music Convention, in Munich, with Sala as the soloist. And at the 1932 Radio Exhibition, in Berlin, the Trautonium was part of the “electric orchestra,” which also featured an electric cello, electric violin, electric piano, and theremin.

Trautwein and Sala believed that the Trautonium had commercial appeal beyond the concert hall. Beginning in 1931, they partnered with the electronics firm Telefunken to create a mass-marketable instrument. The result was the Telefunken-Trautonium, which later became known as the Volkstrautonium.

Because the Volkstrautonium was intended for use in the home, it underwent a few design changes from Trautwein’s original machine. The manual and circuitry were consolidated into a single box with a cover to minimize dust. Additional knobs and switches helped the player control the sound. The instrument could be plugged into a radio receiver for amplification.

Despite all these enhancements, the Volkstrautonium did not make a splash when it debuted at the 1933 Radio Exhibition. Of the 200 or so that were produced, only a few were ever sold.

The instrument may have been a victim of particularly poor timing. Priced at 400 reichsmarks, or about two and a half months’ salary for the average German worker, it would have been a significant investment. Meanwhile, amidst a global economic depression, unemployment in Germany hovered around 30 percent. The Volkstrautonium was simply unaffordable for most people.

Telefunken’s lackluster marketing of the instrument, which included almost no advertising, didn’t help matters. The company officially stopped making them in 1937, and all unsold units were given to Trautwein.

According to Frode Weium, the Volkstrautonium pictured at top was a gift from AEG Berlin, which partially owned Telefunken, to Alf Scott-Hansen Jr., a Norwegian electrical engineer, amateur jazz musician, and film director. It’s unclear whether Scott-Hansen used this Volkstrautonium. The Norwegian Museum of Science and Technology acquired it in 1995.

Though the Volkstrautonium was not a commercial success, that didn’t stop the Trautonium from finding a niche audience among radio enthusiasts. Despite the high price of the Volkstrautonium, it had a fairly simple design. You could build a pared-down version with easily available parts. In March 1933, Radio-Craft magazine published detailed instructions on how to build a Trautonium [PDF], slightly altered for U.S. customers with parts available in the United States.

According to the Radio-Craft article, the Trautonium was not just easy to build but also easy to play: “One may learn to play it in a short time, even though one is not a musician.” Perhaps that was true, but playing well was probably another matter.

Finding music to play on the Trautonium would also have been tricky. In order to popularize any new instrument, you need new music to be written for it. Otherwise, the instrument only mimics other instruments—there’s no signature sound or essential need that allows the new instrument to take root. The theremin, for example, didn’t pass into obscurity like many of the other instruments of the early electro-music age because its uniquely eerie sound became popular in scores for science fiction and horror movies. [The theremin also inspires occasional reboots, a few of which are described in “The Return of the Theremin” and “How to Build a Theremin You Can Play With Your Whole Body,” in IEEE Spectrum.]

The Trautonium, for its part, produced a surprising range of sound effects for Alfred Hitchcock’s 1963 horror-thriller The Birds, a movie famous for its lack of a traditional score. Instead, Oskar Sala created the screeches and cries of the birds, as well as the slamming of doors and windows, using a variation of Trautwein’s instrument that he designed himself.

Sala scored hundreds of films with his Trautonium, refining the instrument throughout his life. He never trained any students to play it, however, nor did other composers besides Hindemith produce melodic smash hits with the instrument. It thus fell into obscurity.

More recently, a few artists have rediscovered the range and potential of the Trautonium. The Danish musician Agnes Obel played a replica Trautonium on her 2016 album Citizen of Glass. The German musician Peter Pichler wrote a music theater piece, Wiedersehen in Trautonien (Reunion in Trautonia), which is an account of an imagined meeting of Trautwein, Hindemith, and Sala. He also took his Trautonium on tour in Australia last April.

I said there were Nazis, and here they are: Like all German academics, Trautwein had to navigate the difficult political climate of the 1930s and ’40s. Like many, he joined the Nazi Party, was rewarded with a promotion to professor, and rode out the war years mostly unscathed.

In 1935 Trautwein and Sala presented a Trautonium to Joseph Goebbels, Hitler’s minister of propaganda. Goebbels was, unsurprisingly, mostly interested in the propaganda value of music. Luckily for Trautwein, electro-music fit into the Reich’s desire to reconcile technology and culture. Trautwein volunteered the Trautonium to test the speaker system for the 1936 Olympic Games in Berlin, and it was played three times in official radio programs accompanying the games.

Hindemith, who was married to a Jewish woman and who often collaborated with leftists, didn’t fare as well under the Nazis. Goebbels pressured him to take an extended leave of absence from the Berlin Academy, and he found it increasingly difficult to perform and conduct. His work was banned in 1936. Hindemith and his wife immigrated to Switzerland two years later, settled in the United States in 1940, and returned to Europe in 1953.

When I first listened to the 1930 recording of Oskar Sala playing Trautwein’s simple electronic instrument, I was struck by how the sound seems both strange and familiar. Now that I know the history of the Trautonium and its champions, I think the word that best describes it is haunting.

An abridged version of this article appears in the October 2019 print issue as “An Instrument for The Birds.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society. She dedicates this month’s column to her uncle Ralph Morrison, who died in August. “Ralph was an IEEE Life Member and also an avid violinist,” Marsh says. “I am pretty sure he would have hated the music the Trautonium produced, but I know he would have loved discussing the electrical and acoustical challenges of the instrument.”

A Brief History of the Lie Detector

Post Syndicated from Allison Marsh original

It’s surprisingly hard to create a real-life Lasso of Truth

When Wonder Woman deftly ensnares someone in her golden lariat, she can compel that person to speak the absolute truth. It’s a handy tool for battling evil supervillains. Had the Lasso of Truth been an actual piece of technology, police detectives no doubt would be lining up to borrow it.

Indeed, for much of the past century, psychologists, crime experts, and others have searched in vain for an infallible lie detector. Some thought they’d discovered it in the polygraph machine. A medical device for recording a patient’s vital signs—pulse, blood pressure, temperature, breathing rate—the polygraph was designed to help diagnose cardiac anomalies and to monitor patients during surgery.

The polygraph was a concatenation of several instruments. One of the first was a 1906 device, invented by British cardiologist James Mackenzie, that measured the arterial and venous pulse and plotted them as continuous lines on paper. The Grass Instrument Co., of Massachusetts, maker of the 1960 polygraph machine pictured above, also sold equipment for EEG, epilepsy monitoring, and sleep studies.

The leap from medical device to interrogation tool is a curious one, as historian Ken Alder describes in his 2007 book The Lie Detectors: The History of an American Obsession (Free Press). Well before the polygraph’s invention, scientists had tried to link vital signs with emotions. As early as 1858, French physiologist Étienne-Jules Marey recorded bodily changes as responses to uncomfortable stressors, including nausea and sharp noises. In the 1890s, Italian criminologist Cesare Lombroso used a specialized glove to measure a criminal suspect’s blood pressure during interrogation. Lombroso believed that criminals constituted a distinct, lower race, and his glove was one way he tried to verify that belief.

In the years leading up to World War I, Harvard psychologist Hugo Münsterberg used a variety of instruments, including the polygraph, to record and analyze subjective feelings. Münsterberg argued for the machine’s application to criminal law, seeing both scientific impartiality and conclusiveness.

As an undergraduate, William Moulton Marston worked in Münsterberg’s lab and was captivated by his vision. After receiving his B.A. in 1915, Marston decided to continue at Harvard, pursuing both a law degree and a Ph.D. in psychology, which he saw as complementary fields. He invented a systolic blood pressure cuff and with his wife, Elizabeth Holloway Marston, used the device to investigate the links between vital signs and emotions. In tests on fellow students, he reported a 96 percent success rate in detecting liars.

World War I proved to be a fine time to research the arts of deception. Robert Mearns Yerkes, who also earned a Ph.D. in psychology from Harvard and went on to develop intelligence tests for the U.S. Army, agreed to sponsor more rigorous tests of Marston’s research under the aegis of the National Research Council. In one test on 20 detainees in the Boston Municipal court, Marston claimed a 100 percent success rate in lie detection. But his high success rate made his supervisors suspicious. And his critics argued that interpreting polygraph results was more art than science. Many people, for instance, experience higher heart rate and blood pressure when they feel nervous or stressed, which may in turn affect their reaction to a lie detector test. Maybe they’re lying, but maybe they just don’t like being interrogated.

Marston (like Yerkes) was a racist. He claimed he could not be fully confident in the results on African Americans because he thought their minds were more primitive than those of whites. The war ended before Marston could convince other psychologists of the validity of the polygraph.

Across the country in Berkeley, Calif., the chief of police was in the process of turning his department into a science- and data-driven crime-fighting powerhouse. Chief August Vollmer centralized his department’s command and communications and had his officers communicate by radio. He created a records system with extensive cross-references for fingerprints and crime types. He compiled crime statistics and assessed the efficacy of policing techniques. He started an in-house training program for officers, with university faculty teaching evidentiary law, forensics, and crime-scene photography. In 1916 Vollmer hired the department’s first chemist, and in 1919 he began recruiting college graduates to become officers. He vetted all applicants with a battery of intelligence tests and psychiatric exams.

Against this backdrop, John Augustus Larson, a rookie cop who happened to have a Ph.D. in physiology, read Marston’s 1921 article “Physiological Possibilities of the Deception Test” [PDF]. Larson decided he could improve Marston’s technique and began testing subjects using his own contraption, the “cardio-pneumo-psychogram.” Vollmer gave Larson free rein to test his device in hundreds of cases.

Larson established a protocol of yes/no questions, delivered by the interrogator in a monotone, to create a baseline sample. All suspects in a case were also asked the same set of questions about the case; no interrogation lasted more than a few minutes. Larson secured consent before administering his tests, although he believed only guilty parties would refuse to participate. In all, he tested 861 subjects in 313 cases, corroborating 80 percent of his findings. Chief Vollmer was convinced and helped promote the polygraph through newspaper stories.

And yet, despite the Berkeley Police Department’s enthusiastic support and a growing popular fascination with the lie detector, U.S. courts were less than receptive to polygraph results as evidence.

In 1922, for instance, Marston applied to be an expert witness in the case of Frye v. United States. The defendant, James Alphonso Frye, had been arrested for robbery and then confessed to the murder of Dr. R.W. Brown. Marston believed his lie detector could verify that Frye’s confession was false, but he never got the chance.

Chief Justice Walter McCoy didn’t allow Marston to take the stand, claiming that lie detection was not “a matter of common knowledge.” The decision was upheld by the court of appeals with a slightly different justification: that the science was not widely accepted by the relevant scientific community. This became known as the Frye Standard or the general acceptance test, and it set the precedent for the court’s acceptance of any new scientific test as evidence.

Marston was no doubt disappointed, and the idea of an infallible lie detector seems to have stuck with him. Later in life, he helped create Wonder Woman. The superhero’s Lasso of Truth proved far more effective at apprehending criminals and revealing their misdeeds than Marston’s polygraph ever was.

To this day, polygraph results are not admissible in most courts. Decades after the Frye case, the U.S. Supreme Court, in United States v. Scheffer, ruled that criminal defendants could not admit polygraph evidence in their defense, noting that “the scientific community remains extremely polarized about the reliability of polygraph techniques.”

But that hasn’t stopped the use of polygraphs for criminal investigation, at least in the United States. The U.S. military, the federal government, and other agencies have also made ample use of the polygraph in determining a person’s suitability for employment and security clearances.

Meanwhile, the technology of lie detection has evolved from monitoring basic vital signs to tracking brain waves. In the 1980s, J. Peter Rosenfeld, a psychologist at Northwestern University, developed one of the first methods for doing so. It took advantage of a type of brain activity, known as P300, that is emitted about 300 milliseconds after the person recognizes a distinct image. The idea behind Rosenfeld’s P300 test was that a suspect accused, say, of theft would have a distinct P300 response when shown an image of the stolen object, while an innocent party would not. One of the main drawbacks was finding an image associated with the crime that only the suspect would have seen.

In 2002 Daniel Langleben, a professor of psychiatry at the University of Pennsylvania, began using functional magnetic resonance imaging, or fMRI, to do real-time imaging of the brain while a subject was telling the truth or lying. Langleben found that the brain was generally more active when lying and suggested that truth telling was the default modality for most humans, which I would say is a point in favor of humanity. Langleben has reported being able to correctly classify individual lies or truths 78 percent of the time. (In 2010, IEEE Spectrum contributing editor Mark Harris wrote about his own close encounter with an fMRI lie detector. It’s a good read.)

More recently, the power of artificial intelligence has been brought to bear on lie detection. Researchers at the University of Arizona developed the Automated Virtual Agent for Truth Assessments in Real-Time, or AVATAR, for interrogating an individual via a video interface. The system uses AI to assess changes in the person’s eyes, voice, gestures, and posture that raise flags about possible deception. According to Fast Company and CNBC, the U.S. Department of Homeland Security has been testing AVATAR at border crossings to identify people for additional screening, with a reported success rate of 60 to 75 percent. The accuracy of human judges, by comparison, is at best 54 to 60 percent, according to AVATAR’s developers.

While the results for AVATAR and fMRI may seem promising, they also show the machines are not infallible. Both techniques compare individual results against group data sets. As with any machine-learning algorithm, the data set must be diverse and representative of the entire population. If the data is of poor quality or incomplete, if the algorithm is biased, or if the sensors measuring the subject’s physiological response don’t work properly, the result is simply a more high-tech version of Marston’s scientific racism.

Both fMRI and AVATAR pose new challenges to the already contested history of lie detection technology. Over the years, psychologists, detectives, and governments have continued to argue for their validity. There is, for example, a professional organization called the American Polygraph Association. Meanwhile, lawyers, civil libertarians, and other psychologists have decried their use. Proponents seem to have an unwavering faith in data and instrumentation over human intuition. Detractors see many alternative explanations for positive results and cite a preponderance of evidence that polygraph tests are no more reliable than guesswork.

Along the way, sensational crime reporting and Hollywood dramatizations have led the public to believe that lie detectors are a proven technology and also, contradictorily, that master criminals can fake the results.

I think Ken Alder comes closest to the truth when he notes that at its core, the lie detector is really only successful when suspects believe it works.

An abridged version of this article appears in the August 2019 print issue as “A Real-Life Lasso of Truth.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

How NASA Recruited Snoopy and Drafted Barbie

Post Syndicated from Allison Marsh original

The space agency has long relied on kid-friendly mascots to make the case for space


In the comic-strip universe of Peanuts, Snoopy beat Neil Armstrong to the moon. It was in March 1969—four months before Armstrong would take his famous small step—that the intrepid astrobeagle and his flying doghouse touched down on the lunar surface. “I beat the Russians…I beat everybody,” Snoopy marveled. “I even beat that stupid cat who lives next door!”

The comic-strip dog had begun a formal partnership with NASA the previous year, when Charles Schulz, the creator of Peanuts, and its distributor, United Feature Syndicate, agreed to the use of Snoopy as a semi-official NASA mascot.

Snoopy was already a renowned World War I flying ace—again, within the Peanuts universe. Clad in a leather flying helmet, goggles, and signature red scarf, he sat atop his doghouse, reenacting epic battles with his nemesis, the Red Baron. Just as NASA had turned to real-life fighter pilots for its first cohort of astronauts, the space agency also recruited Snoopy.

Two months after the comic-strip Snoopy’s lunar landing, a second, real-world Snoopy buzzed the surface of the moon, as part of Apollo 10. This mission was essentially a dress rehearsal for Apollo 11. The crew was tasked with skimming, or “snooping,” the surface of the moon, so they nicknamed the lunar module “Snoopy.” It logically followed that Apollo 10’s command module was “Charlie Brown.”

On 21 May, as the astronauts settled in for their first night in lunar orbit, Snoopy’s pilot, Eugene Cernan, asked ground control to “watch Snoopy well tonight, and make him sleep good, and we’ll take him out for a walk and let him stretch his legs in the morning.” The next day, Cernan and Tom Stafford descended in Snoopy, stopping some 14,000 meters above the surface.

Since then, Snoopy and NASA have been locked in a mutually beneficial orbit. Schulz, a space enthusiast, ran comic strips about space exploration, and the moon shot in particular, which helped excite popular support for the program. Commercial tie-ins extended well beyond the commemorative plush toy shown at top. Over the years, Snoopy figurines, music boxes, banks, watches, pencil cases, bags, posters, towels, and pins have all promoted a fun and upbeat attitude toward life beyond Earth’s atmosphere.

There’s also a serious side to Snoopy. In the wake of the tragic Apollo 1 fire, which claimed the lives of three astronauts, NASA wanted to promote greater flight safety and awareness. Al Chop, director of public affairs for the Manned Spacecraft Center (now the Lyndon B. Johnson Space Center), suggested using Snoopy as a symbol for safety, and Schulz agreed. 

NASA created the Silver Snoopy Award to honor ground crew who have contributed to flight safety and mission success. The recipient’s prize? A silver Snoopy lapel pin, designed by Schulz and presented by an astronaut, in appreciation for the person’s efforts to preserve astronauts’ lives.

Snoopy was by no means the only popularizer of the U.S. space program. Over the years, there have been GI Joe astronauts, LEGO astronauts, and Hello Kitty astronauts. Not all of these came with the NASA stamp of approval, but even unofficially they served as tiny ambassadors for space.

Of all the astronautical dolls, I’m most intrigued by Astronaut Barbie, of which there have been numerous incarnations over the years. The first was Miss Astronaut Barbie, who debuted in 1965—13 years before women were accepted into NASA’s astronaut classes and 18 years before Sally Ride flew in space.

Miss Astronaut Barbie might have been ahead of her time, but she was also a reflection of that era’s pioneering women. Cosmonaut Valentina Tereshkova became the first woman to go to space on 16 June 1963, when she completed a solo mission aboard Vostok 6. Meanwhile, American women were training for space as early as 1960, through the privately funded Women in Space program. The Mercury 13 endured the same battery of tests that NASA used to screen the all-male astronaut corps and were celebrated in the press, but none of them ever went to space.

In 2009, Mattel reissued Miss Astronaut of 1965 as part of the celebration of Barbie’s 50th anniversary. “Yes, she was a rocket scientist,” the packaging declares, “taking us to new fashion heights, while firmly placing her stilettos on the moon.” For the record, Miss Astronaut Barbie wore zippered boots, not high heels.

Other Barbies chose careers in space exploration, always with a flair for fashion. A 1985 Astronaut Barbie modeled a hot pink jumpsuit, with matching miniskirt for attending press conferences. Space Camp Barbie, produced through a partnership between Mattel and the U.S. Space & Rocket Center in Huntsville, Ala., wore a blue flight suit, although a later version sported white and pink. An Apollo 11 commemorative Barbie rocked a red- and silver-trimmed jumpsuit and silver boots and came with a Barbie flag, backpack, and three glow-in-the-dark moon rocks. (Scientific accuracy has never been Mattel’s strong suit, at least where Barbie is concerned.) And in 2013, Mattel collaborated with NASA to create Mars Explorer Barbie, to mark the first anniversary of the rover Curiosity’s landing on Mars.

More recently, Mattel has extended the Barbie brand to promote real-life role models for girls. In 2018, as part of its Inspiring Women series, the toymaker debuted the Katherine Johnson doll, which pays homage to the African-American mathematician who calculated the trajectory for NASA’s first crewed spaceflight. Needless to say, this Barbie is also clad in pink, with era-appropriate cat-eye glasses, a double strand of pearls, and a NASA employee ID tag.

Commemorative dolls and stuffed animals may be playthings designed to tug at our consumerist heartstrings. But let’s suspend the cynicism for a minute and imagine what goes on in the mind of a young girl or boy who plays with a doll and dreams of the future. Maybe we’re seeing a recruit for the next generation of astronauts, scientists, and engineers.

An abridged version of this article appears in the July 2019 print issue as “The Beagle Has Landed.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

This British Family Changed the Course of Engineering


Charles Parsons invented the modern steam turbine, but his wife and daughter built something just as lasting

The British engineer Charles Parsons knew how to make a splash. In honor of Queen Victoria’s Diamond Jubilee, the British Royal Navy held a parade of vessels on 26 June 1897 for the Lords of the Admiralty, foreign ambassadors, and other dignitaries. Parsons wasn’t invited, but he decided to join the parade anyway. Three years earlier, he’d introduced a powerful turbine generator—considered the first modern steam turbine—and he then built the SY Turbinia to demonstrate the engine’s power.

Arriving at the naval parade, Parsons raised a red pennant and then broke through the navy’s perimeter of patrol boats. With a top speed of almost 34 knots (60 kilometers per hour), Turbinia was faster than any other vessel and could not be caught. Parsons had made his point. The Royal Navy placed an order for its first turbine-powered ship the following year.

Onboard the Turbinia that day was Parsons’s 12-year-old daughter, Rachel, whose wide-ranging interests in science and engineering Parsons and his wife encouraged. From a young age, Rachel Parsons and her brother, Algernon, tinkered in their father’s home workshop, just as Charles had done when he was growing up. Indeed, the Parsons family tree shows generation after generation of engineering inquisitiveness from both the men and the women, each of whom made their mark on the field.

Charles grew up at Birr Castle, in County Offaly, Ireland. His father, William, who became the 3rd Earl of Rosse in 1841, was a mathematician with an interest in astronomy. Scientists and inventors, including Charles Babbage, traveled to Birr Castle to see the Leviathan of Parsonstown, a 1.8-meter (72-inch) reflecting telescope that William built during the 1840s. His wife, Mary, a skilled blacksmith, forged the iron work for the telescope’s tube.

William dabbled in photography, unsuccessfully attempting to photograph the stars. Mary was the real photography talent. Her detailed photos of the famous telescope won the Photographic Society of Ireland’s first Silver Medal.

Charles and his siblings enjoyed a traditional education by private tutors. They also had the benefit of a hands-on education, experimenting with the earl’s many steam-powered machines, including a steam carriage. They worked on the Leviathan’s adjustment apparatus and in their mother’s darkroom.

After studying mathematics at Trinity College, Dublin, and St. John’s College, Cambridge, Charles apprenticed at the Elswick Works, a large manufacturing complex operated by the engineering firm W.G. Armstrong in Newcastle upon Tyne, England. It was unusual for someone of his social class to apprentice, and he paid £500 for the opportunity (about US $60,000 today), in the hopes of later gaining a management position.

During his time at the works, Charles refined some engine designs that he’d sketched out while at Cambridge. The reciprocating, or piston, steam engine had by then been around for more than 100 years, itself an improvement on Thomas Newcomen’s earlier but inefficient atmospheric steam engine. In Newcomen’s engine, water sprayed into the cylinder condensed the steam, creating a vacuum that pulled the piston through its stroke—but the spray also chilled the cylinder and wasted heat. Beginning in the 1760s, James Watt and Matthew Boulton eliminated that loss by adding a separate condenser, and a later improvement, the double-acting engine, let the piston both push and pull. Still, piston steam engines were loud, dirty, and prone to exploding, and Charles saw room for improvement.

His initial design was for a four-cylinder epicycloidal engine, in which the cylinders as well as the crankshaft rotated. One advantage of this unusual configuration was that it could work at high speed with limited vibration. Charles designed it to directly drive a dynamo so as to avoid any connecting belts or pulleys. He applied for a British patent in 1877 at the age of 23.

Charles offered the design to his employer, who declined, but Kitson and Co., a locomotive manufacturer in Leeds, was interested. Charles’s brother Richard Clere Parsons was a partner at Kitson and persuaded Charles to join the company, which eventually produced 40 of the engines. Charles spent two years there, mostly working on rocket-powered torpedoes that proved unsuccessful.

More successful was his courting of Katharine Bethell, the daughter of a prominent Yorkshire family. Charles was said to have impressed Katharine with his skill at needlework, and they married in 1883.

In 1884, Charles became a junior partner and the head of the electrical section at Clarke, Chapman and Co., a manufacturer of marine equipment in Newcastle upon Tyne. He developed a new turbine engine, which he used to drive an electric generator, also of his own design. [His first prototype, now part of the collection of the Science Museum, London, is shown above.] The turbine generator was 1.73 meters long, 0.4 meters wide, and 0.8 meters high, and it weighed a metric ton.

Charles Parsons’s engine is often considered the first modern turbine. Instead of using steam to move pistons, it used steam to turn propeller-like blades, converting the steam’s thermal energy into rotational energy. Parsons’s original design was inefficient, running at 18,000 rpm while producing just 7.5 kilowatts—about the power of a small household backup generator today. He made rapid incremental improvements, such as changing the shape of the blades, and his turbines eventually reached outputs of 50,000 kW, enough to power up to 50,000 homes today.
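Those household figures can be sanity-checked with back-of-the-envelope arithmetic. The roughly 1-kilowatt average household draw below is an assumed modern figure, not one from the historical record:

```python
# Rough sanity check of the power figures above. The ~1 kW average
# household draw is an assumption, not a historical figure.
prototype_kw = 7.5        # output of Parsons's 1884 prototype turbo-generator
later_output_kw = 50_000  # output cited for later turbines
avg_home_kw = 1.0         # assumed average modern household draw

homes_powered = later_output_kw / avg_home_kw
improvement = later_output_kw / prototype_kw

print(int(homes_powered))  # -> 50000
print(round(improvement))  # -> 6667
```

By this estimate the later turbines represented a more than six-thousandfold increase in output over the prototype.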

In 1889 Charles established C.A. Parsons and Co., in Heaton, a suburb of Newcastle, with the goal of manufacturing his turbo-generator. The only hitch was that Clarke, Chapman still held the patent rights. While the patent issues got sorted out, Charles founded the Newcastle and District Electric Lighting Co., which became the first electric company to rely entirely on steam turbines. It wouldn’t be the last.

During his lifetime, he saw turbine-generated electricity become affordable and readily available to a large population. Even today, most electricity generation relies on steam turbines.

Once Charles had secured the patent rights to his invention, he set about improving the steam turbo-generator, making it more efficient and more compact. He established the Marine Steam Turbine Co., which built the Turbinia in 1894. Charles spent several years refining the mechanics before the ship made its sensational public appearance at the Diamond Jubilee. In 1905, just eight years after the Turbinia’s public debut, the British admiralty decided all future Royal Navy vessels should be turbine powered. The private commercial shipping industry followed suit.

Charles Parsons never stopped designing or innovating, trying his hand at many other ventures. Not all were winners. For instance, he spent 25 years attempting to craft artificial diamonds before finally admitting defeat. More lucrative was the manufacture of optical glass for telescopes and searchlights. In the end, he earned over 300 patents, received a knighthood, and was awarded the Order of Merit.

But Charles was not the only engineer in his very talented household.

When I first started thinking about this month’s column, I wanted to mark the centenary of the founding of the Women’s Engineering Society (WES), one of the oldest organizations dedicated to the advancement of women in engineering. I searched for a suitable museum object that honored female engineers. That proved more difficult than I anticipated. Although the WES maintains extensive archives at the Institution of Engineering and Technology, including a complete digitized run of its journal, The Woman Engineer, it doesn’t have much in the way of three-dimensional artifacts. There was, for example, a fancy rose bowl that was commissioned for the society’s 50th anniversary. But it seemed not quite right to represent women engineers with a purely decorative object.

I then turned my attention to the founders of WES, who included Charles Parsons’s wife, Katharine, and daughter, Rachel. Although Charles was a prolific inventor, neither Katharine nor Rachel invented anything, so there was no obvious museum object linked to them. But inventions aren’t the only way to be a pioneering engineer.

After what must have been a wonderful childhood of open-ended inquiry and scientific exploration, Rachel followed in her father’s footsteps to Cambridge. She was one of the first women to study mechanical sciences there. At the time, though, the university barred women from receiving a degree.

When World War I broke out and Rachel’s brother enlisted, she took over his position as a director on the board of the Heaton Works. She also joined the training division of the Ministry of Munitions and was responsible for instructing thousands of women in mechanical tasks.

As described in Henrietta Heald’s upcoming book Magnificent Women and their Revolutionary Machines (to be published in February 2020 by the crowdfunding publisher Unbound), the war brought about significant demographic changes in the British workforce. More than 2 million women went to work outside the home, as factories ramped up to increase war supplies of all sorts. Of these, more than 800,000 entered the engineering trades.

This upsurge in female employment coincided with a shift in national sentiment toward women’s suffrage. Women had been fighting for the right to vote for decades, and they finally achieved a partial success in 1918, when women over the age of 30 who met certain property and education requirements were allowed to vote. It took another decade before women had the same voting rights as men.

But these political and workplace victories for women were built on shaky ground. The passage of the Sex Disqualification (Removal) Act of 1919 made it illegal to discriminate against women in the workplace. But the Restoration of Pre-War Practices Act, passed the same year, required that women give up their jobs to returning servicemen, unless they happened to work for firms that had employed women in the same role before the war.

These contradictory laws both stemmed from negotiations between Prime Minister David Lloyd George and British trade unions. The unions had vigorously objected to employing women during the war, but the government needed the women to work. And so it came up with the Treasury Agreement of 1915, which stipulated that skilled work could be subdivided and automated, allowing women and unskilled men to take on the resulting tasks. Under those terms, the unions acquiesced to the “dilution” of the skilled male workforce.

And so, although the end of the war brought openings for women in some professions, tens of thousands of women in engineering suddenly found themselves out of work.

The Parsons women fought back, using their social standing to advocate on behalf of female engineers. On 23 June 1919, Katharine and Rachel Parsons, along with several other prominent women, founded the Women’s Engineering Society to resist the relinquishing of wartime jobs to men and to promote engineering as a rewarding profession for both sexes.

Two weeks later, Katharine gave a rousing speech, “Women’s Work in Engineering and Shipbuilding during the War” [PDF] at a meeting of the North East Coast Institution of Engineers and Shipbuilders. “Women are able to work on almost every known operation in engineering, from the most highly skilled precision work, measured to [the] micrometer, down to the rougher sort of laboring jobs,” she proclaimed. “To enumerate all the varieties of work intervening between these two extremes would be to make a catalogue of every process in engineering.” Importantly, Katharine mentioned not just the diluted skills of factory workers but also the intellectual and design work of female engineers.

Just as impassioned, Rachel wrote an article for the National Review several months later that positioned the WES as a voice for women engineers:

Women must organize; this is the only royal road to victory in the industrial world. Women have won their political independence; now is the time for them to achieve their economic freedom too. It is useless to wait patiently for the closed doors of the skilled trade unions to swing open. It is better far to form a strong alliance, which, armed as it will be with the parliamentary vote, may be as powerful an influence in safeguarding the interests of women-engineers as the men’s unions have been in improving the lot of their members.

The following year, Rachel was one of the founding members of an all-female engineering firm, Atalanta, of which her mother was a shareholder. The firm specialized in small machinery work, similar to the work Rachel had been overseeing at her father’s firm. Although the business voluntarily shuttered after eight years, the name lived on as a manufacturer of small hand tools and household fixtures.

The WES has had a much longer history. In its first year, it began publishing The Woman Engineer, which still comes out quarterly. In 1923 the WES began holding an annual conference, which has been canceled only twice, both times due to war. Over its 100 years, the organization has worked to secure employment rights for women from the shop floor to management, guarantee access to formal education, and even encourage the use of new consumer technologies, such as electrical appliances in the home.

Early members of the WES came from many different branches of engineering. Dorothée Pullinger ran a factory in Scotland that produced the Galloway, an automobile that was entirely designed and built by women for women. Amy Johnson was a world-renowned pilot who also earned a ground engineer’s license. Jeanie Dicks, the first female member of the Electrical Contractors Association, won the contract for the electrification of Winchester Cathedral.

Today the WES continues its mission of supporting women in pursuit of engineering, scientific, and technical careers. Its website gives thanks and credit to early male allies, including Charles Parsons, who supported female engineers. Charles may have earned his place in history due to his numerous inventions, but if you come across his turbine at the Science Museum, remember that his wife and daughter earned their places, too.

An abridged version of this article appears in the June 2019 print issue as “As the Turbine Turns.”



In 1983, This Bell Labs Computer Was the First Machine to Become a Chess Master


Belle used a brute-force approach to best other computers and humans

Chess is a complicated game. It’s a game of strategy between two opponents, but with no hidden information and all of the potential moves known by both players at the outset. With each turn, players communicate their intent and try to anticipate the possible countermoves. The ability to envision several moves in advance is a recipe for victory, and one that mathematicians and logicians have long found intriguing.

Despite some early mechanical chess-playing machines—and at least one chess-playing hoax—mechanized chess play remained hypothetical until the advent of digital computing. While working on his Ph.D. in the early 1940s, the German computer pioneer Konrad Zuse used computer chess as an example for the high-level programming language he was developing, called Plankalkül. Due to World War II, however, his work wasn’t published until 1972. With Zuse’s work unknown to engineers in Britain and the United States, Norbert Wiener, Alan Turing, and notably Claude Shannon (with his 1950 paper “Programming a Computer for Playing Chess” [PDF]) paved the way for thinking about computer chess.

Beginning in the early 1970s, Bell Telephone Laboratories researchers Ken Thompson and Joe Condon developed Belle, a chess-playing computer. Thompson is cocreator of the Unix operating system, and he’s also a great lover of chess. He grew up in the era of Bobby Fischer, and as a youth he played in chess tournaments. He joined Bell Labs in 1966, after earning a master’s in electrical engineering and computer science from the University of California, Berkeley.

Joe Condon was a physicist by training who worked in the Metallurgy Division at Bell Labs. His research contributed to the understanding of the electronic band structure of metals, and his interests evolved with the rise of digital computing. Thompson got to know Condon when he and his Unix collaborator, Dennis Ritchie, began collaborating on a game called Space Travel, using a PDP-7 minicomputer that was under Condon’s purview. Thompson and Condon went on to collaborate on numerous projects, including promoting the use of C as the language for AT&T’s switching system.

Belle began as software—Thompson had written a sample chess program in an early Unix manual. But after Condon joined the team, the project morphed into a hybrid hardware-software chess-playing machine, with Thompson handling the programming and Condon designing the hardware.

Belle consisted of three main parts: a move generator, a board evaluator, and a transposition table. The move generator identified the highest-value piece under attack and the lowest-value piece attacking, and it sorted potential moves based on that information. The board evaluator noted the king’s position and its relative safety during different stages of the game. The transposition table contained a memory cache of potential moves, and it made the evaluation more efficient.
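The move generator’s ordering rule, preferring captures of the highest-value victim by the lowest-value attacker, can be sketched in a few lines. This is an illustrative reconstruction, not Belle’s actual code; the piece values and function name are my own:

```python
# Illustrative sketch of the move-ordering idea described above: examine
# captures of high-value victims by low-value attackers first.
PIECE_VALUE = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def order_captures(captures):
    """captures: list of (attacker, victim) piece letters.
    Sort so that high-value victims taken by low-value attackers come first."""
    return sorted(captures, key=lambda m: (-PIECE_VALUE[m[1]], PIECE_VALUE[m[0]]))

# Pawn-takes-queen is examined before knight-takes-rook or queen-takes-pawn.
print(order_captures([("Q", "P"), ("P", "Q"), ("N", "R")]))
# -> [('P', 'Q'), ('N', 'R'), ('Q', 'P')]
```

Trying the most promising captures first lets a search engine find strong lines sooner and prune the rest of the tree more aggressively.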

Belle employed a brute-force approach. It looked at all of the possible moves a player could make with the current configuration of the board, and then considered all of the moves that the opponent could make. In chess, a turn taken by one player is called a ply. Initially, Belle could compute moves four plies deep. When Belle debuted at the Association for Computing Machinery’s North American Computer Chess Championship in 1978, where it claimed its first title, it had a search depth of eight plies. Belle went on to win the championship four more times. In 1983, it also became the first computer to earn the title of chess “master.”
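Fixed-depth, brute-force game-tree search of this kind is easy to sketch. The toy example below is my own illustration rather than Belle’s code, and it uses the much simpler game of Nim (last player to take an object wins) so that it stays self-contained, with a transposition table caching positions already searched:

```python
# A minimal sketch of brute-force, fixed-depth game-tree search with a
# transposition table, illustrated on Nim instead of chess.

def negamax(heaps, depth, table):
    """Return the best achievable score for the side to move:
    +1 = forced win, -1 = forced loss, 0 = unresolved at this depth.
    `heaps` is a tuple of object counts; `depth` is measured in plies."""
    key = (heaps, depth)
    if key in table:          # transposition table: reuse cached results
        return table[key]
    moves = [(i, k) for i, h in enumerate(heaps) for k in range(1, h + 1)]
    if not moves:             # opponent took the last object, so we lost
        return -1
    if depth == 0:            # search horizon reached; score unknown
        return 0
    best = -1
    for i, k in moves:        # brute force: examine every legal move
        child = tuple(h - k if j == i else h for j, h in enumerate(heaps))
        best = max(best, -negamax(child, depth - 1, table))
        if best == 1:
            break             # found a winning move; no need to search further
    table[key] = best
    return best

# From heaps (1, 2), the side to move wins by taking one object
# from the second heap, leaving the losing position (1, 1).
print(negamax((1, 2), 8, {}))  # -> 1
```

Belle applied the same exhaustive scheme to chess, where the branching factor is vastly larger, which is why its custom hardware and search depth in plies mattered so much.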

Computer chess programmers were often treated with hostility when they pitted their systems against human competitors, some of whom were suspicious of potential cheating, while others were simply apprehensive. When Thompson wanted to test out Belle at his local chess club, he took pains to build up personal relationships. He offered his opponents a printout of the computer’s analysis of the match. If Belle won in mixed human/computer tournaments, he refused the prize money, offering it to the next person in line. Belle went on to play weekly at the Westfield Chess Club, in Westfield, N.J., for almost 10 years.

In contrast to human-centered chess competitions, where silence reigns so as not to disturb a player’s concentration, computer chess tournaments could be noisy affairs, with people discussing and debating different algorithms and game strategies. In a 2005 oral history, Thompson remembers them fondly. After a tournament, he would be invigorated and head back to the lab, ready to tackle a new problem.

For a computer, Belle led a colorful life, at one point becoming the object of a corporate practical joke. One day in 1978, Bell Labs computer scientist Mike Lesk, another member of the Unix team, stole some letterhead from AT&T chairman John D. deButts and wrote a fake memo, calling for the suspension of the “T. Belle Computer” project.

At the heart of the fake memo was a philosophical question: Is a game between a person and a computer a form of communication or of data processing? The memo claimed that it was the latter and that Belle therefore violated the 1956 antitrust decision barring the company from engaging in the computer business. In fact, though, AT&T’s top executives never pressured Belle’s creators to stop playing or inventing games at work, likely because the diversions led to economically productive research. The hoax became more broadly known after Dennis Ritchie featured it in a 2001 article, for a special issue of the International Computer Games Association Journal that was dedicated to Thompson’s contributions to computer chess.

In his oral history, Thompson describes how Belle also became the object of international intrigue. In the early 1980s, Soviet electrical engineer, computer scientist, and chess grandmaster Mikhail Botvinnik invited Thompson to bring Belle to Moscow for a series of demonstrations. He departed from New York’s John F. Kennedy International Airport, only to discover that Belle was not on the same plane.

Thompson learned of the machine’s fate after he’d been in Moscow for several days. A Bell Labs security guard who was moonlighting at JFK airport happened to see a Bell Labs box labeled “computer” that was roped off in the customs area. The guard alerted his friends at Bell Labs, and word eventually reached Condon, who lost no time in calling Thompson.

Condon warned Thompson to throw out the spare parts for Belle that he’d brought with him. “You’re probably going to be arrested when you get back,” he said. Why? Thompson asked. “For smuggling computers into Russia,” Condon replied.

In his oral history, Thompson speculates that Belle had fallen victim to the Reagan administration’s rhetoric concerning the “hemorrhage of technology” to the Soviet Union. Overzealous U.S. Customs agents had spotted Thompson’s box and confiscated it, but never alerted him or Bell Labs. His Moscow hosts seemed to agree that Reagan was to blame. When Thompson met with them to explain that Belle had been detained, the head of the Soviet chess club pointed out that the Ayatollah Khomeini had outlawed chess in Iran because it was against God. “Do you suppose Reagan did this to outlaw chess in the United States?” he asked Thompson.

On his way home, Thompson took Condon’s advice and dumped the spare parts in Germany. Back in the United States, he wasn’t arrested, for smuggling or anything else. But when he attempted to retrieve Belle at JFK, he was told that he was in violation of the Export Act—Belle’s old, outdated Hewlett-Packard monitor was on a list of banned items. Bell Labs paid a fine, and Belle was eventually returned.

After Belle had dominated the computer chess world for several years, its star began to fade, as more powerful computers with craftier algorithms came along. Chief among them was IBM’s Deep Blue, which captured international attention in 1996 when it won a game against world champion Garry Kasparov. Kasparov went on to win the match, but the ground was laid for a rematch. The following year, after extensive upgrades, Deep Blue defeated Kasparov, becoming the first computer to beat a human world champion in a tournament under regulation time controls.

Photographer Peter Adams brought Belle to my attention, and his story shows the value of being friendly to archivists. Adams had photographed Thompson and many of his Bell Labs colleagues for his portrait series “Faces of Open Source.” During Adams’s research for the series, Bell Labs corporate archivist Ed Eckert granted him permission to photograph some of the artifacts associated with the Unix research lab. Adams put Belle on his wish list, but he assumed that it was now in some museum collection. To his astonishment, he learned that the machine was still at Nokia Bell Labs in Murray Hill, N.J. As Adams wrote to me in an email, “It still had all the wear on it from the epic chess games it had played… :).”

An abridged version of this article appears in the May 2019 print issue as “Cold War Chess.”

