
The Rich Tapestry of Fiber Optics

Post Syndicated from Allison Marsh original https://spectrum.ieee.org/tech-history/cyberspace/the-rich-tapestry-of-fiber-optics

“Whoopee!” So wrote Donald Keck, a researcher at Corning Glass Works, in the 7 August 1970 entry of his lab notebook. The object of his exuberance was a 29-meter-long piece of highly purified, titanium-doped optical fiber, through which he had successfully passed a light signal with a measured loss of only 17 decibels per kilometer. A few years earlier, typical losses had been closer to 1,000 dB/km. Keck’s experiment was the first demonstration of low-loss optical fiber for telecommunications, and it paved the way for transmitting voice, data, and video over long distances.
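To put those decibel figures in perspective, a quick back-of-the-envelope calculation (a sketch for illustration, not from the original article) shows why 17 dB/km was a breakthrough:

```python
def surviving_fraction(loss_db_per_km: float, km: float) -> float:
    """Fraction of optical power remaining after `km` kilometers
    at a given attenuation in dB/km."""
    return 10 ** (-loss_db_per_km * km / 10)

# Corning's 1970 fiber: 17 dB/km over the 29-meter test piece
print(surviving_fraction(17, 0.029))   # ~0.89, i.e. ~89% of the light survives 29 m
# Over a full kilometer, 17 dB of loss still leaves about 2% of the light
print(surviving_fraction(17, 1.0))     # ~0.02
# Pre-1970 fiber at 1,000 dB/km is effectively opaque after a kilometer
print(surviving_fraction(1000, 1.0))   # 1e-100
```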

As important as this achievement was, it was not an isolated event. Physicists and engineers had been working for decades to make optical telecommunications possible, developing not just fibers but waveguides, lasers, and other components. (More on that in a bit.) And if you take the long view, as historians like me tend to do, it’s part of a fascinating tapestry that also encompasses glass, weaving, art, and fashion.

Optical fiber creates stunning effects in art and fashion

Shown above is a sculpture called Crossform Pendant Lamp, by the New York–based textile artist Suzanne Tick. Tick is known for incorporating unusual materials into her weaving: recycled dry cleaner hangers, Mylar balloons washed up on the beach, documents from her divorce. For the lamp, she used industrial fiber-optic yarn.

The piece was part of a collaboration between Tick and industrial designer Harry Allen. Allen worked on mounting ideas and illuminators, while Tick experimented with techniques to weave the relatively stiff fiber-optic yarn. (Optical fiber is flexible compared with other types of glass, but inflexible compared to, say, wool.) The designers had to determine how the lamp would hang and how it would connect to a power and light source. The result is an artwork that glows from within.

Weaving is of course an ancient technology, as is glassmaking. The ability to draw, or pull, glass into consistent fibers, on the other hand, emerged only at the end of the 19th century. As soon as it did, designers attempted to weave with glass, creating hats, neckties, shawls, and other garments.

Perhaps the most famous of these went on display at the 1893 World’s Columbian Exposition in Chicago. The exhibit mounted by the Libbey Glass Company, of Toledo, Ohio, showcased a dress made from silk and glass fibers. The effect was enchanting. According to one account, it captured the light and shimmered “as crusted snow in sunlight.” One admirer was Princess Eulalia of Spain, a royal celebrity of the day, who requested a similar dress be made for her. Libbey Glass was happy to oblige—and receive the substantial international press.

The glass fabric was too brittle for practical wear, which may explain why few such garments emerged over the years. But the idea of an illuminated dress did not fade, awaiting just the right technology. Designer Zac Posen found a stunning combination when he crafted a dress of organza, optical fiber, LEDs, and 30 tiny battery packs, for the actor Claire Danes to wear to the 2016 Met Gala. The theme of that year’s fashion extravaganza was “Manus x Machina: Fashion in an Age of Technology,” and Danes’s dress stole the show. Princess Eulalia would have approved. 

Fiber optics has many founders

Of course, most of the work on fiber optics has occurred in the mainstream of science and engineering, with physicists and engineers experimenting with different ways to manipulate light and funnel it through glass fibers. Here, though, the history gets a bit tangled. Let’s consider the man credited with coining the term “fiber optics”: Narinder Singh Kapany.

Kapany was born in India in 1926, received his Ph.D. in optics from Imperial College London, and then moved to the United States, where he spent the bulk of his career as a businessman and entrepreneur. He began working with optical fibers during his graduate studies, trying to improve the quality of image transmission. He introduced the term and the field to a broader audience in the November 1960 issue of Scientific American with an article simply titled “Fiber Optics.”

As Kapany informed his readers, a fiber-optic thread is a cylindrical glass fiber having a high index of refraction surrounded by a thin coating of glass with a low index of refraction. Near-total internal reflection takes place between the two, thus keeping a light signal from escaping its conductor. He explained how light transmitted along bundles of flexible glass fibers could transport optical images along tortuous paths with useful outcomes. Kapany predicted that it would soon become routine for physicians to examine the inside of a patient’s body using a “fiberscope”—and indeed fiber-optic endoscopy is now commonplace.
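Kapany’s description can be made concrete with Snell’s law: light in the high-index core is totally internally reflected whenever it strikes the core–cladding boundary at more than the critical angle, arcsin(n_cladding/n_core). A short sketch, using illustrative indices (the values 1.48 and 1.46 are assumptions typical of modern fiber, not figures from Kapany’s article):

```python
import math

def critical_angle_deg(n_core: float, n_clad: float) -> float:
    """Critical angle (degrees from the normal) beyond which light in
    the core is totally internally reflected at the cladding boundary."""
    return math.degrees(math.asin(n_clad / n_core))

# Illustrative indices: core slightly denser than cladding
theta_c = critical_angle_deg(1.48, 1.46)
print(f"critical angle = {theta_c:.1f} degrees")
# Rays hitting the boundary at a shallower grazing angle than this
# (about 80.6 degrees from the normal here) stay trapped in the core.
```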

Kapany’s article unintentionally introduced an extra loop into the historical thread of fiber optics. In an anecdote that leads off the article, Kapany relates that in the 1870s, Irish physicist John Tyndall demonstrated how light could travel along a curved path. His “light pipe” was formed by a stream of water emerging from a hole in the side of a tank. When Tyndall shone a light into the tank, the light followed the stream of water as it exited the tank and arced to the floor. This same effect is seen in illuminated fountains.

Kapany’s anecdote conjures a mental image that allows readers to begin to understand the concept of guiding light, and I always love when scientists evoke history. In this case, though, the history was wrong: Tyndall wasn’t the originator of the guided-light demonstration.

While researching his highly readable 1999 book City of Light: The Story of Fiber Optics, Jeff Hecht discovered that in fact Jean-Daniel Colladon deserves the credit. In 1841, the Swiss physicist performed the water-jet experiment in Geneva and published an account the following year in Comptes Rendus, the proceedings of the French Academy of Sciences. Hecht, a frequent contributor to IEEE Spectrum, concluded that Michael Faraday, Tyndall’s mentor, probably saw another Swiss physicist, Auguste de la Rive, demonstrate a water jet based on Colladon’s apparatus, and Faraday then encouraged Tyndall to attempt something similar back in London.

I forgive Kapany for not digging around in the archives, even if his anecdote did exaggerate Tyndall’s role in fiber optics. And sure, Tyndall should have credited Colladon, but then there is a long history of scientists not getting the credit they deserve. Indeed, Kapany himself is considered one of them. In 1999, Fortune magazine named him among the “unsung heroes” of 20th-century business. That reputation only grew in 2009, when the Nobel Prize in Physics went to Charles Kao, and not also to Kapany, for achievements in the transmission of light in fibers for optical communication.

Whether or not Kapany should have shared the prize—and there are never any winners when it comes to debates over overlooked Nobelists—Kao certainly deserved what he got. In 1963 Kao joined a team at Standard Telecommunication Laboratories (STL) in England, the research center for Standard Telephones and Cables. Working with George Hockham, he spent the next three years researching how to use fiber optics for long-distance communication, both audio and video.

On 27 January 1966, Kao demonstrated a short-distance optical waveguide at a meeting of the Institution of Electrical Engineers (IEE).

According to a press release from STL, the waveguide had “the information-carrying capacity of one Gigacycle, which is equivalent to 200 television channels or over 200,000 telephone channels.” Once the technology was perfected, the press release went on, a single undersea cable would be capable of transmitting large amounts of data from the Americas to Europe.
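The press release’s arithmetic is easy to check: dividing one gigacycle (1 GHz) of capacity by the bandwidth of an analog channel gives roughly the quoted figures. (The 5-MHz and 4-kHz channel widths below are standard analog values assumed here for illustration, not taken from the STL release.)

```python
capacity_hz = 1_000_000_000        # "one Gigacycle"
tv_channel_hz = 5_000_000          # analog TV channel, ~5 MHz of video bandwidth
phone_channel_hz = 4_000           # analog voice channel, ~4 kHz

print(capacity_hz // tv_channel_hz)     # 200 television channels
print(capacity_hz // phone_channel_hz)  # 250000 -> "over 200,000" telephone channels
```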

In July, Kao and Hockham published their work [PDF] in the Proceedings of the IEE. They proposed that fiber-optic communication over long distances would be viable, but only if an attenuation of less than 20 dB/km could be achieved. That’s when Corning got involved.

Corning’s contribution brought long-distance optical communication closer to reality

In 1966, the head of a new group in fiber-optic communication at the Post Office Research Station in London mentioned to a visitor from Corning the need for low-loss glass fibers to realize Kao’s vision of long-distance communication. Corning already made fiber optics for medical and military use, but those short pieces of cable had losses of approximately 1,000 dB/km—not even close to Kao and Hockham’s threshold.

That visitor from Corning, William Shaver, told his colleague Robert Maurer about the British effort, and Maurer in turn recruited Keck, Peter Schultz, and Frank Zimar to work on a better way of drawing the glass fibers. The group eventually settled on a process involving a titanium-doped core. The testing of each new iteration of fiber could take several months, but by 1970 the Corning team thought they had a workable technology. On 11 May 1970, they filed for two patents. The first was US3659915A, a fused silica optical waveguide, awarded to Maurer and Schultz, and the second was US3711262A, a method of producing optical waveguide fibers, awarded to Keck and Schultz.

Three months after the filing, Keck recorded the jubilant note in his lab notebook. Alas, it was after 5:00 pm on a Friday, and no one was around to join in his celebration. Keck verified the result with a second test on 21 August 1970. In 2012, the achievement was recognized with an IEEE Milestone as a significant event in electrotechnology.

Of course, there was still much work to be done to make long-distance optical communication commercially viable. Just like Princess Eulalia’s dress, Corning’s titanium-doped fibers weren’t strong enough for practical use. Eventually, the team discovered a better process with germanium-doped fibers, which remain the industry standard to this day. A half-century after the first successful low-loss transmission, fiber-optic cables encircle the globe, transmitting terabits of data every second.

An abridged version of this article appears in the August 2020 print issue as “Weaving Light.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

Cranes Lift More Than Their Weight in the World of Shipping and Construction

Post Syndicated from Vaclav Smil original https://spectrum.ieee.org/tech-history/heroic-failures/cranes-lift-more-than-their-weight-in-the-world-of-shipping-and-construction

A crane can lift a burden, move it sideways, and lower it. These operations, which the Greeks employed when building marble temples 25 centuries ago, are now performed every second somewhere around the world as tall ship-to-shore gantry cranes empty and reload container vessels. The two fundamental differences between ancient cranes and the modern versions involve the materials that make up those cranes and the energies that power them.

The cranes of early Greek antiquity were just wooden jibs with a simple pulley; their lifting power came from winding the rope with a winch. By the third century B.C.E., compound pulleys had come into use; these Roman trispastos provided a nearly threefold mechanical advantage—nearly, because there were losses to friction. Those losses became prohibitive with the addition of more than the five ropes of the pentaspastos.
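The friction losses mentioned here grow quickly with the number of ropes. A common textbook approximation treats each sheave as having a fixed efficiency, so a compound pulley’s real advantage is the ideal one multiplied by that efficiency raised to the number of sheaves. A rough sketch (the 95 percent per-sheave efficiency is an assumed illustrative value, not a historical measurement):

```python
def effective_advantage(n_ropes: int, sheave_efficiency: float = 0.95) -> float:
    """Ideal mechanical advantage (n_ropes) discounted by per-sheave
    friction, using the simple efficiency**n approximation."""
    return n_ropes * sheave_efficiency ** n_ropes

# Roman trispastos (3 ropes) vs. pentaspastos (5 ropes)
print(effective_advantage(3))  # ~2.57 -> "nearly threefold"
print(effective_advantage(5))  # ~3.87 -> diminishing returns from friction
```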

The most powerful cranes in the Roman and later the medieval periods were powered by men treading inside wheels or by animals turning windlasses in tight circles. Their lifting capacities were generally between 1 and 5 metric tons. Major advances came only during the 19th century.

William Fairbairn’s largest harbor crane, which he developed in the 1850s, was powered by four men turning winches. Its performance was further expanded by fixed and movable pulleys, which gave it more than a 600-fold mechanical advantage, enabling it to lift weights of up to 60 metric tons and move them over a circle with a 32-meter diameter. And by the early 1860s, William Armstrong’s company was producing more than 100 hydraulic cranes (which transmit force through liquid under pressure) annually for English docks. Steam-powered cranes of the latter half of the 19th century were able to lift more than 100 metric tons in steel mills; some of these cranes hung above the factory floor and moved on roof-mounted rails. During the 1890s, steam engines were supplanted by electric motors.

The next fundamental advance came after World War II, with Hans Liebherr’s invention of a tower crane that could swing its loads horizontally and could be quickly assembled at a construction site. Its tower top is the fulcrum of a lever whose lifting (jib) arm is balanced by a counterweight, and the crane’s capacity is enhanced by pulleys. Tower cranes were first deployed to reconstruct bombed-out German cities; they spread rapidly and are now seen on construction projects around the world. Typical lifting capacities are between 12 and 20 metric tons, with the record held by the K10000, made by the Danish firm Krøll Cranes. It can lift up to 120 metric tons at the maximum radius of 100 meters.

Cranes are also mainstays of global commerce: Without gantry quay cranes capable of hoisting from 40 to 80 metric tons, it would take weeks to unload the 20,000 standard-size steel containers that sit in one of today’s enormous container ships. But put cranes together in a tightly coordinated operation involving straddle carriers and trucks and the job now takes less than 72 hours. And 20,568 containers were unloaded from the Madrid Maersk in Antwerp in June 2017 in the record time of 59 hours.

Without these giant cranes, every piece of clothing, every pair of shoes, every TV, every mobile phone imported from Asia to North America or Europe would take longer to arrive and cost more to buy. As for the lifting capacity records, Liebherr now makes a truly gargantuan mobile crane that sits on an 18-wheeler truck and can support 1,200 metric tons. And, not surprisingly given China’s dominance of the industry, the Taisun, the most powerful shipbuilding gantry crane, can hoist 20,000 metric tons. That’s about half again as heavy as the Brooklyn Bridge.

This article appears in the August 2020 print issue as “Cranes (The Machines, Not Birds).”

Today’s Internet Still Relies on an ARPANET-Era Protocol: The Request for Comments

Post Syndicated from Steve Crocker original https://spectrum.ieee.org/tech-history/cyberspace/todays-internet-still-relies-on-an-arpanetera-protocol-the-request-for-comments

Each March, July, and November, we are reminded that the Internet is not quite the mature, stable technology that it seems to be. We rely on the Internet as an essential tool for our economic, social, educational, and political lives. But when the Internet Engineering Task Force meets every four months at an open conference that bounces from continent to continent, more than 1,000 people from around the world gather with change on their minds. Their vision of the global network that all humanity shares is dynamic, evolving, and continuously improving. Their efforts combine with the contributions of myriad others to ensure that the Internet always works but is never done, never complete.

The rapid yet orderly evolution of the Internet is all the more remarkable considering the highly unusual way it happens: without a company, a government, or a board of directors in charge. Nothing about digital communications technology suggests that it should be self-organizing or, for that matter, fundamentally reliable. We enjoy an Internet that is both of those at once because multiple generations of network developers have embraced a principle and a process that have been quite rare in the history of technology. The principle is that the protocols that govern how Internet-connected devices communicate should be open, expandable, and robust. And the process that invents and refines those protocols demands collaboration and a large degree of consensus among all who care to participate.

As someone who was part of the small team that very deliberately adopted a collaborative, consensus-based process to develop protocols for the ARPANET—predecessor to the Internet—I have been pleasantly surprised by how those ideas have persisted and succeeded, even as the physical network has evolved from 50-kilobit-per-second telephone lines in the mid-1960s to the fiber-optic, 5G, and satellite links we enjoy today. Though our team certainly never envisioned unforgeable “privacy passes” or unique identifiers for Internet-connected drones—two proposed protocols discussed at the task force meeting this past March—we did circulate our ideas for the ARPANET as technical memos among a far-flung group of computer scientists, collecting feedback and settling on solutions in much the same way as today, albeit at a much smaller scale.

We called each of those early memos a “Request for Comments” or RFC. Whatever networked device you use today, it almost certainly follows rules laid down in ARPANET RFCs written decades ago, probably including protocols for sending plain ASCII text (RFC 20, issued in 1969), audio or video data streams (the User Datagram Protocol, RFC 768, 1980), and Post Office Protocol, or POP, email (RFC 918, 1984).

Of course, technology moves on. None of the computer or communication hardware used to build the ARPANET remains a crucial part of the Internet today. But there is one technological system that has remained in constant use since 1969: the humble RFC, which we invented to manage change itself in those early days.

The ARPANET was far simpler than the Internet because it was a single network, not a network of networks. But in 1966, when the Pentagon’s Advanced Research Projects Agency (ARPA) began planning to link together completely different kinds of computers at a dozen or more research universities from California to Massachusetts, the project seemed quite ambitious.

It took two years to create the basic design, which was for an initial subnet that would exchange packets of data over dedicated telephone lines connecting computers at just four sites: the Santa Barbara and Los Angeles campuses of the University of California; Stanford Research Institute (SRI) in Menlo Park, Calif.; and the University of Utah in Salt Lake City. At each site, a router—we called them IMPs, for interface message processors—would chop outgoing blocks of bits into smaller packets. The IMPs would also reassemble incoming packets from distant computers into blocks that the local “host” computer could process.
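The chop-and-reassemble job the IMPs performed can be sketched in a few lines. This is a toy illustration of the idea only; the real IMP packet formats, sizes, and headers are not reproduced here.

```python
def to_packets(message: bytes, max_size: int) -> list[bytes]:
    """Chop an outgoing block into packets no larger than max_size."""
    return [message[i:i + max_size] for i in range(0, len(message), max_size)]

def reassemble(packets: list[bytes]) -> bytes:
    """Rebuild the original block from packets received in order."""
    return b"".join(packets)

msg = b"LOGIN REQUEST FROM UCLA TO SRI"   # hypothetical message, 30 bytes
packets = to_packets(msg, max_size=8)
assert reassemble(packets) == msg
print(len(packets), "packets")  # 4 packets
```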

In the tumultuous summer of 1968, I was a graduate student spending a few months in the computer science department at UCLA, where a close friend of mine from high school, Vint Cerf, was studying. Like many others in the field, I was much more interested in artificial intelligence and computer graphics than in networking. Indeed, some principal investigators outside of the first four sites initially viewed the ARPANET project as an intrusion rather than an opportunity. When ARPA invited each of the four pilot sites to send two people to a kickoff meeting in Santa Barbara at the end of August, Vint and I drove up from UCLA and discovered that all of the other attendees were also graduate students or staff members. No professors had come.

Almost none of us had met, let alone worked with, anyone from the other sites before. But all of us had worked on time-sharing systems, which doled out chunks of processing time on centralized mainframe computers to a series of remotely connected users, and so we all had a sense that interesting things could be done by connecting distant computers and getting their applications to interact with one another. In fact, we expected that general-purpose interconnection of computers would be so useful that it would eventually spread to include essentially every computer. But we certainly did not anticipate how that meeting would launch a collaborative process that would grow this little network into a critical piece of global infrastructure. And we had no inkling how dramatically our collaboration over the next few years would change our lives.

After getting to know each other in Santa Barbara, we organized follow-up meetings at each of the other sites so that we would all have a common view of what this eclectic network would look like. The SDS Sigma 7 computer at UCLA would be connecting to a DEC PDP-10 in Utah, an IBM System/360 in Santa Barbara, and an SDS 940 at SRI.

We would be a distributed team, writing software that would have to work on a diverse collection of machines and operating systems—some of which didn’t even use the same number of bits to represent characters. Co-opting the name of the ARPA-appointed committee of professors that had assigned us to this project, we called ourselves the Network Working Group.

We had only a few months during the autumn of 1968 and the winter of 1969 to complete our theoretical work on the general architecture of the protocols, while we waited for the IMPs to be built in Cambridge, Mass., by the R&D company Bolt, Beranek and Newman (BBN).

Our group was given no concrete requirements for what the network should do. No project manager asked us for regular status reports or set firm milestones. Other than a general assumption that users at each site should be able to remotely log on and transfer files to and from hosts at the other sites, it was up to us to create useful services.

Through our regular meetings, a broader vision emerged, shaped by three ideas. First, we saw the potential for lots of interesting network services. We imagined that different application programs could exchange messages across the network, for example, and even control one another remotely by executing each other’s subroutines. We wanted to explore that potential.

Second, we felt that the network services should be expandable. Time-sharing systems had demonstrated how you could offer a new service merely by writing a program and letting others use it. We felt that the network should have a similar capacity.

Finally, we recognized that the network would be most useful if it were agnostic about the hardware of its hosts. Whatever software we wrote ought to support any machine seamlessly, regardless of its word length, character set, instruction set, or architecture.

We couldn’t translate these ideas into software immediately because BBN had yet to release its specification for the interface to the IMP. But we wanted to get our thoughts down on paper. When the Network Working Group gathered in Utah in March 1969, we dealt out writing assignments to one another. Until we got the network running and created an email protocol, we would have to share our memos through the U.S. Mail. To make the process as easy and efficient as possible, I kept a numbered list of documents in circulation, and authors mailed copies of memos they wrote to everyone else.

Tentatively, but with building excitement, our group of grad students felt our way through the dark together. We didn’t even appoint a leader. Surely at some point the “real experts”—probably from some big-name institution in the Northeast or Washington, D.C.—would take charge. We didn’t want to step on anyone’s toes. So we certainly weren’t going to call our technical memos “standards” or “orders.” Even “proposals” seemed too strong—we just saw them as ideas that we were communicating without prior approval or coordination. Finally, we settled on a term suggested by Bill Duvall, a young programmer at SRI, that emphasized that these documents were part of an ongoing and often preliminary discussion: Request for Comments.

The first batch of RFCs arrived in April 1969. What was arguably one of our best initial ideas was not spelled out in these RFCs but only implicit in them: the agreement to structure protocols in layers, so that one protocol could build on another if desired and so that programmers could write software that tapped into whatever level of the protocol stack worked best for their needs.

We started with the bottom layer, the foundation. I wrote RFC 1, and Duvall wrote RFC 2. Together, these first two memos described basic streaming connections between hosts. We kept this layer simple—easy to define and easy to implement. Interactive terminal connections (like Telnet), file transfer mechanisms (like FTP), and other applications yet to be defined (like email) could then be built on top of it.

That was the plan, anyway. It turned out to be more challenging than expected. We wrestled with, among other things, how to establish connections, how to assign addresses that allowed for multiple connections, how to handle flow control, what to use as the common unit of transmission, and how to enable users to interrupt the remote system. Only after multiple iterations and many, many months of back-and-forth did we finally reach consensus on the details.

Some of the RFCs in that first batch were more administrative, laying out the minimalist conventions we wanted these memos to take, presenting software testing schedules, and tracking the growing mailing list.

Others laid out grand visions that nevertheless failed to gain traction. To my mind, RFC 5 was the most ambitious and interesting of the lot. In it, Jeff Rulifson, then at SRI, introduced a very powerful idea: downloading a small application at the beginning of an interactive session that could mediate the session and speed things up by handling “small” actions locally.

As one very simple example, the downloaded program could let you edit or auto-complete a command on the console before sending it to the remote host. The application would be written in a machine-agnostic language called Decode-Encode Language (DEL). For this to work, every host would have to be able to run a DEL interpreter. But we felt that the language could be kept simple enough for this to be feasible and that it might significantly improve responsiveness for users.

Aside from small bursts of experimentation with DEL, however, the idea didn’t catch on until many years later, when Microsoft released ActiveX and Sun Microsystems produced Java. Today, the technique is at the heart of every online app.

The handful of RFCs we circulated in early 1969 captured our ideas for network protocols, but our work really began in earnest that September and October, when the first IMPs arrived at UCLA and then SRI. Two were enough to start experimenting. Duvall at SRI and Charley Kline at UCLA (who worked in Leonard Kleinrock’s group) dashed off some software to allow a user on the UCLA machine to log on to the machine at SRI. On the evening of 29 October 1969, Charley tried unsuccessfully to do so. After a quick fix to a small glitch in the SRI software, a successful connection was made that evening. The software was adequate for connecting UCLA to SRI, but it wasn’t general enough for all of the machines that would eventually be connected to the ARPANET. More work was needed.

By February 1970, we had a basic host-to-host communication protocol working well enough to present it at that spring’s Joint Computer Conference in Atlantic City. Within a few more months, the protocol was solid enough that we could shift our attention up the stack to two application-layer protocols, Telnet and FTP.

Rather than writing monolithic programs to run on each computer, as some of our bosses had originally envisioned, we stuck to our principle that protocols should build on one another so that the system would remain open and extensible. Designing Telnet and FTP to communicate through the host-to-host protocol guaranteed that they could be updated independently of the base system.

By October 1971, we were ready to put the ARPANET through its paces. Gathering at MIT for a complete shakedown test—we called it “the bake-off”—we checked that each host could log on to every other host. It was a proud moment, as well as a milestone that the Network Working Group had set for itself.

And yet we knew there was still so much to do. The network had grown to connect 23 hosts at 15 sites. A year later, at a big communications conference in Washington, D.C., the ARPANET was demonstrated publicly in a hotel ballroom. Visitors were able to sit down at any of several terminals and log on to computers all over the United States.

Year after year, our group continued to produce RFCs with observations, suggested changes, and possible extensions to the ARPANET and its protocols. Email was among those early additions. It started as a specialized case of file transfer but was later reworked into a separate protocol (Simple Mail Transfer Protocol, or SMTP, RFC 788, issued in 1981). Somewhat to the bemusement of both us and our bosses, email became the dominant use of the ARPANET, the first “killer app.”

Email also affected our own work, of course, as it allowed our group to circulate RFCs faster and to a much wider group of collaborators. A virtuous cycle had begun: Each new feature enabled programmers to create other new features more easily.

Protocol development flourished. The TCP and IP protocols replaced and greatly enhanced the host-to-host protocol and laid the foundation for the Internet. The RFC process led to the adoption of the Domain Name System (DNS, RFC 1035, issued in 1987), the Simple Network Management Protocol (SNMP, RFC 1157, 1990), and the Hypertext Transfer Protocol (HTTP, RFC 1945, 1996).

In time, the development process evolved along with the technology and the growing importance of the Internet in international communication and commerce. In 1979, Vint Cerf, by then a program manager at DARPA, created the Internet Configuration Control Board, which eventually spawned the Internet Engineering Task Force. That task force continues the work that was originally done by the Network Working Group. Its members still discuss problems facing the network, modifications that might be necessary to existing protocols, and new protocols that may be of value. And they still publish protocol specifications as documents with the label “Request for Comments.”

And the core idea of continual improvement by consensus among a coalition of the willing still lives strong in Internet culture. Ideas for new protocols and changes to protocols are now circulated via email lists devoted to specific protocol topics, known as working groups. There are now about a hundred of these groups. When they meet at the thrice-yearly conferences, the organizers still don’t take votes: They ask participants to hum if they agree with an idea, then take the sense of the room. Formal decisions follow a subsequent exchange over email.

Drafts of protocol specifications are circulated as “Internet-Drafts,” which are intended for discussion leading to an RFC. A recently begun discussion on network software to enable quantum Internet communication, for example, is recorded in an RFC-like Internet-Draft.

And in an ironic twist, the specification for this or any other new protocol will appear in a Request for Comments only after it has been approved for formal adoption and published. At that point, comments are no longer actually requested.

This article appears in the August 2020 print issue as “The Consensus Protocol.”

About the Author

Steve Crocker is the chief architect of the ARPANET’s Request for Comments (RFC) process and one of the founding members of the Network Working Group, the forerunner of the Internet Engineering Task Force. For many years, he was a board member of ICANN, serving as its vice chairman and then its chairman until 2017.

Solar Storms May Have Hindered SOS During Historic “Red Tent” Expedition

Post Syndicated from Nola Taylor Redd original https://spectrum.ieee.org/tech-talk/tech-history/dawn-of-electronics/solar-storms-sos-red-tent-expedition

In May 1928, a team of explorers returning from the North Pole via airship crashed on the frigid ice. Their attempts to use their portable radio transmitter to call for help failed; although they could hear broadcasts from Rome detailing attempts to rescue them, their calls could not reach a relatively nearby ship. Now, new research suggests that the communication problems may have been caused by radio “dead zones,” made worse by high solar activity that triggered massive solar storms.

High-frequency radios bounce signals off the ionosphere, the upper layers of Earth's atmosphere most affected by solar radiation. Around the poles, the ionosphere is especially challenging for radio waves to travel through, a fact not yet realized in the 1920s, when scientists were just beginning to understand how radio waves move through the charged air.

“The peculiar morphology of the high latitude ionosphere, with the large variation of the electron density…may cause frequent problems to radio communication,” says Bruno Zolesi, a researcher at the Istituto Nazionale di Geofisica e Vulcanologia in Rome. Zolesi, who studies how the Earth’s atmosphere reacts to particles emitted by the sun, is lead author of a new study probing the tragedy of the airship Italia. He and his colleagues modeled the radio environment of the North Pole at the time of the expedition. They found that space weather, which is the way that charged particles from the sun can affect the environment around planets, likely plagued the expedition, delaying the rescue of the explorers by more than a week and perhaps costing the life of at least one of the team members.

“The space weather conditions are particularly intense in polar regions,” Zolesi says.

The ‘Red Tent’

On 24 May 1928, after just over 20 hours of flight, the dirigible Italia, captained by Italian designer Umberto Nobile, circled the North Pole. Nobile had flown on a 1926 Norwegian expedition aboard an airship he had designed; that ship was the first vehicle to reach the North Pole. Two years later, he returned to stake a claim for his native country.

After a brief ceremony at the pole, with winds too strong to attempt a landing on the ice, the vehicle turned south to make the 360-kilometer return trip to the crew’s base on the Svalbard archipelago. But an unknown problem caused the airship to plunge to the Earth, slamming into the ice and shattering the cabin. The crash killed one of the explorers. The balloon, freed from the weight of the carriage, took to the air, carrying six more crew members away, never to be seen again. The nine survivors sheltered beneath a red tent that gave its name to the historical disaster.

Among the supplies left on the ice was the simple high-frequency radio transmitter intended to allow communications between the airship and explorers on the ground. The low-powered radio ran on batteries and transmitted on wavelengths of 30 to 50 meters.

As the shipwrecked crew settled into their uncomfortable new quarters, radio operator Giuseppe Biagi began sending SOS messages. At the 55th minute of each odd hour, the prearranged time for the Italia to contact the Italian Navy's ship Citta di Milano, anchored in King's Bay, he pleaded for help, then listened in vain for a response.

Amazingly, while the tiny antenna could not contact the ship, it could pick up radio broadcasts from Rome—with signals originating more than ten times farther away than the point where the navy ship was docked. The explorers listened as news of their disappearance and updates on the rescue operations were broadcast.

It took nine days for someone to finally hear their calls for help. On 3 June 1928, a Russian amateur radio enthusiast, Nicolaj Schmidt, picked up the SOS with his homemade radio receiver in a small village approximately 1900 km from the Red Tent. After nearly 50 days on the ice, the explorers were ultimately rescued, though 15 of the rescuers died in the attempt.

Blocked frequencies

Over the past 90 years, the crew of the Italia have been the subject of several books and articles, as well as a 1969 Soviet-Italian movie starring actor Sean Connery. The continued cultural interest in the event intrigued Zolesi and his colleagues; they hoped to combine their scientific knowledge with cultural history to explain some of the radio communication problems encountered by the survivors.

The first half of the twentieth century was marked not only by the exploration of previously untapped regions of Earth but also by the investigation of Earth's ionosphere. Systematic measurements of radio and telegraph transmissions provided the first realistic picture of the ionosphere and a generalized understanding of how radio waves move through charged regions. But in 1928, scientists were only beginning to understand the ionosphere and the space weather that affected how radio waves traveled.

Radio waves sent into the ionosphere travel upward at an angle and are refracted back toward the surface some distance away; the distance between the transmitter and the point where the signal returns to the ground is known as the skip distance. Between the edge of the transmitter's limited ground-wave coverage and that return point lies a dead zone, where the signal cannot be heard. Dead zones, as we now know, vary with the strength of the broadcast signal and the conditions of the ionosphere.

Zolesi and his colleagues relied on a standard international model of the ionosphere to build a monthly average picture of conditions over the North Pole. The explorers' own lack of knowledge about Earth's atmosphere nearly proved fatal: The 8.9 MHz frequency they relied on would have fallen in the radio dead zone for locations north of the 66° N line of latitude. Both the Red Tent and the Citta di Milano sat at nearly 80° N, while Arkhangelsk, the closest city to Schmidt, sits at 64.5° N.
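As a rough sanity check, the 8.9-MHz frequency cited in the study corresponds to a wavelength of about 34 meters, consistent with the 30-to-50-meter figure given for the transmitter. A quick calculation (illustrative only; the figures come from this article, not from the study's model):

```python
# Wavelength of the Italia's distress frequency (illustrative check).
C = 299_792_458          # speed of light, in meters per second

freq_hz = 8.9e6          # the 8.9 MHz frequency the explorers relied on
wavelength_m = C / freq_hz
print(f"Wavelength at 8.9 MHz: {wavelength_m:.1f} m")  # about 33.7 m
```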

The researchers also studied sunspot drawings made at the Mount Wilson Observatory in California between 25 and 31 May 1928. The drawings revealed a significant increase in the number of sunspot groups. Sunspots have been directly linked to increased solar radiation and electron density in the atmosphere, making them an important marker of the ionosphere's behavior. The researchers also examined records from two magnetic observatories in England and Scotland to understand how Earth's geomagnetic field could have played a role. They found that the planet underwent periods of magnetic fluctuations in mid- to late May 1928, peaking on 28 May.

“The space weather conditions were affected by a significant geomagnetic storm during the early days after the shipwreck,” Zolesi says. “These conditions might have severely affected the radio communications of the survivors during the tragedy.”

The researchers concluded that it may have been impossible for the stranded team to reach the Citta di Milano with their low-powered radio, particularly given their unstable antenna and the noisy radio environment courtesy of the local coal mining industry and news agencies. The addition of skip distance issues and space weather conditions not fully understood at the time made communication an even greater challenge.

As the researchers explained in their paper, which was published in the journal Space Weather, the increased activity of the sun would have released more charged particles, which interact with the layers of the upper atmosphere, especially around the northern and southern poles, where the planet's magnetic field funnels them. That activity would have made it more difficult for radio waves to pass through the atmosphere.

The lessons learned from the tragic Italia expedition could be particularly relevant as humans move to off-planet exploration. While space weather didn't play a direct role in the airship's crash, it played a crucial role in sabotaging the crew's calls for help and delaying the rescue. Similar problems could plague expeditions to other bodies in the solar system if the effects of space weather aren't considered.

“For the moon and Mars, the problems could be completely different because there are different ways radio communication may be used,” Zolesi says. “But many other problems may occur due to the large number of space weather events [affecting those].”

How the Digital Camera Transformed Our Concept of History

Post Syndicated from Allison Marsh original https://spectrum.ieee.org/tech-history/silicon-revolution/how-the-digital-camera-transformed-our-concept-of-history

For an inventor, the main challenge might be technical, but sometimes it’s timing that determines success. Steven Sasson had the technical talent but developed his prototype for an all-digital camera a couple of decades too early.

A CCD from Fairchild was used in Kodak’s first digital camera prototype

It was 1974, and Sasson, a young electrical engineer at Eastman Kodak Co., in Rochester, N.Y., was looking for a use for Fairchild Semiconductor’s new type 201 charge-coupled device. His boss suggested that he try using the 100-by-100-pixel CCD to digitize an image. So Sasson built a digital camera to capture the photo, store it, and then play it back on another device.

Sasson’s camera was a kluge of components. He salvaged the lens and exposure mechanism from a Kodak XL55 movie camera to serve as his camera’s optical piece. The CCD would capture the image, which would then be run through a Motorola analog-to-digital converter, stored temporarily in a DRAM array of a dozen 4,096-bit chips, and then transferred to audio tape running on a portable Memodyne data cassette recorder. The camera weighed 3.6 kilograms, ran on 16 AA batteries, and was about the size of a toaster.
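The components listed above suggest why the design worked, with one caveat: the bit depth of the digitized image, which the article does not give. A minimal back-of-the-envelope sketch, assuming 4 bits per pixel:

```python
# Back-of-the-envelope buffer check for Sasson's prototype.
# The 4-bit depth is an assumption for illustration; the article
# specifies only the 100-by-100 CCD and the twelve 4,096-bit DRAM chips.
pixels = 100 * 100                     # type 201 CCD resolution
bits_per_pixel = 4                     # assumed grayscale depth
image_bits = pixels * bits_per_pixel   # 40,000 bits per frame
buffer_bits = 12 * 4096                # 49,152 bits of DRAM

print(image_bits, buffer_bits)
assert image_bits <= buffer_bits       # one frame fits in the DRAM buffer
```

Under that assumption, a single frame fits in the DRAM array with room to spare, which is why the slow audio-tape stage could run after capture rather than during it.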

After working on his camera on and off for a year, Sasson decided on 12 December 1975 that he was ready to take his first picture. Lab technician Joy Marshall agreed to pose. The photo took about 23 seconds to record onto the audio tape. But when Sasson played it back on the lab computer, the image was a mess—although the camera could render shades that were clearly dark or light, anything in between appeared as static. So Marshall’s hair looked okay, but her face was missing. She took one look and said, “Needs work.”

Sasson continued to improve the camera, eventually capturing impressive images of different people and objects around the lab. He and his supervisor, Garreth Lloyd, received U.S. Patent No. 4,131,919 for an electronic still camera in 1978, but the project never went beyond the prototype stage. Sasson estimated that image resolution wouldn’t be competitive with chemical photography until sometime between 1990 and 1995, and that was enough for Kodak to mothball the project.

Digital photography took nearly two decades to take off

While Kodak chose to withdraw from digital photography, other companies, including Sony and Fuji, continued to move ahead. After Sony introduced the Mavica, an analog electronic camera, in 1981, Kodak decided to restart its digital camera effort. During the ’80s and into the ’90s, companies made incremental improvements, releasing products that sold for astronomical prices and found limited audiences. [For a recap of these early efforts, see Tekla S. Perry’s IEEE Spectrum article, “Digital Photography: The Power of Pixels.”]

Then, in 1994 Apple unveiled the QuickTake 100, the first digital camera for under US $1,000. Manufactured by Kodak for Apple, it had a maximum resolution of 640 by 480 pixels and could only store up to eight images at that resolution on its memory card, but it was considered the breakthrough to the consumer market. The following year saw the introduction of Apple’s QuickTake 150, with JPEG image compression, and Casio’s QV10, the first digital camera with a built-in LCD screen. It was also the year that Sasson’s original patent expired.

Digital photography really came into its own as a cultural phenomenon when the Kyocera VisualPhone VP-210, the first cellphone with an embedded camera, debuted in Japan in 1999. Three years later, camera phones were introduced in the United States. The first mobile-phone cameras lacked the resolution and quality of stand-alone digital cameras, often taking distorted, fish-eye photographs. Users didn’t seem to care. Suddenly, their phones were no longer just for talking or texting. They were for capturing and sharing images.

The rise of cameras in phones inevitably led to a decline in stand-alone digital cameras, the sales of which peaked in 2012. Sadly, Kodak’s early advantage in digital photography did not prevent the company’s eventual bankruptcy, as Mark Harris recounts in his 2014 Spectrum article “The Lowballing of Kodak’s Patent Portfolio.” Although there is still a market for professional and single-lens reflex cameras, most people now rely on their smartphones for taking photographs—and so much more.

How a technology can change the course of history

The transformational nature of Sasson’s invention can’t be overstated. Experts estimate that people will take more than 1.4 trillion photographs in 2020. Compare that to 1995, the year Sasson’s patent expired. That spring, a group of historians gathered to study the results of a survey of Americans’ feelings about the past. A quarter century on, two of the survey questions stand out:

  • During the last 12 months, have you looked at photographs with family or friends?

  • During the last 12 months, have you taken any photographs or videos to preserve memories?

In the nationwide survey of nearly 1,500 people, 91 percent of respondents said they’d looked at photographs with family or friends and 83 percent said they’d taken a photograph—in the past year. If the survey were repeated today, those numbers would almost certainly be even higher. I know I’ve snapped dozens of pictures in the last week alone, most of them of my ridiculously cute puppy. Thanks to the ubiquity of high-quality smartphone cameras, cheap digital storage, and social media, we’re all taking and sharing photos all the time—last night’s Instagram-worthy dessert; a selfie with your bestie; the spot where you parked your car.

So are all of these captured moments, these personal memories, a part of history? That depends on how you define history.

For Roy Rosenzweig and David Thelen, two of the historians who led the 1995 survey, the very idea of history was in flux. At the time, pundits were criticizing Americans’ ignorance of past events, and professional historians were wringing their hands about the public’s historical illiteracy.

Instead of focusing on what people didn’t know, Rosenzweig and Thelen set out to quantify how people thought about the past. They published their results in the 1998 book The Presence of the Past: Popular Uses of History in American Life (Columbia University Press). This groundbreaking study was heralded by historians, those working within academic settings as well as those working in museums and other public-facing institutions, because it helped them to think about the public’s understanding of their field.

Little did Rosenzweig and Thelen know that the entire discipline of history was about to be disrupted by a whole host of technologies. The digital camera was just the beginning.

For example, a little over a third of the survey’s respondents said they had researched their family history or worked on a family tree. That kind of activity got a whole lot easier the following year, when Paul Brent Allen and Dan Taggart launched Ancestry.com, which is now one of the largest online genealogical databases, with 3 million subscribers and approximately 10 billion records. Researching your family tree no longer means poring over documents in the local library.

Similarly, when the survey was conducted, the Human Genome Project was still years away from mapping our DNA. Today, at-home DNA kits make it simple for anyone to order up their genetic profile. In the process, family secrets and unknown branches on those family trees are revealed, complicating the histories that families might tell about themselves.

Finally, the survey asked whether respondents had watched a movie or television show about history in the last year; four-fifths responded that they had. The survey was conducted shortly before the 1 January 1995 launch of the History Channel, the cable channel that opened the floodgates on history-themed TV. These days, streaming services let people binge-watch historical documentaries and dramas on demand.

Today, people aren’t just watching history. They’re recording it and sharing it in real time. Recall that Sasson’s MacGyvered digital camera included parts from a movie camera. In the early 2000s, cellphones with digital video recording emerged in Japan and South Korea and then spread to the rest of the world. As with the early still cameras, the initial quality of the video was poor, and memory limits kept the video clips short. But by the mid-2000s, digital video had become a standard feature on cellphones.

As these technologies become commonplace, digital photos and video are revealing injustice and brutality in stark and powerful ways. In turn, they are rewriting the official narrative of history. A short video clip taken by a bystander with a mobile phone can now carry more authority than a government report.

Maybe the best way to think about Rosenzweig and Thelen’s survey is that it captured a snapshot of public habits, just as those habits were about to change irrevocably.

Digital cameras also changed how historians conduct their research

For professional historians, the advent of digital photography has had other important implications. Lately, there’s been a lot of discussion about how digital cameras in general, and smartphones in particular, have changed the practice of historical research. At the 2020 annual meeting of the American Historical Association, for instance, Ian Milligan, an associate professor at the University of Waterloo, in Canada, gave a talk in which he revealed that 96 percent of historians have no formal training in digital photography and yet the vast majority use digital photographs extensively in their work. About 40 percent said they took more than 2,000 digital photographs of archival material in their latest project. W. Patrick McCray of the University of California, Santa Barbara, told a writer with The Atlantic that he’d accumulated 77 gigabytes of digitized documents and imagery for his latest book project [an aspect of which he recently wrote about for Spectrum].

So let’s recap: In the last 45 years, Sasson took his first digital picture, digital cameras were brought into the mainstream and then embedded into another pivotal technology—the cellphone and then the smartphone—and people began taking photos with abandon, for any and every reason. And in the last 25 years, historians went from thinking that looking at a photograph within the past year was a significant marker of engagement with the past to themselves compiling gigabytes of archival images in pursuit of their research.

So are those 1.4 trillion digital photographs that we’ll collectively take this year a part of history? I think it helps to consider how they fit into the overall historical narrative. A century ago, nobody, not even a science fiction writer, predicted that someone would take a photo of a parking lot to remember where they’d left their car. A century from now, who knows if people will still be doing the same thing. In that sense, even the most mundane digital photograph can serve as both a personal memory and a piece of the historical record.

An abridged version of this article appears in the July 2020 print issue as “Born Digital.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

NASA’s Original Laptop: The GRiD Compass

Post Syndicated from Allison Marsh original https://spectrum.ieee.org/tech-history/silicon-revolution/nasas-original-laptop-the-grid-compass

The year 1982 was a notable one in personal computing. The BBC Micro was introduced in the United Kingdom, as was the Sinclair ZX Spectrum. The Commodore 64 came to market in the United States. And then there was the GRiD Compass.

The GRiD Compass Was the First Laptop to Feature a Clamshell Design

The Graphical Retrieval Information Display (GRiD) Compass had a unique clamshell design, in which the monitor folded down over the keyboard. Its 21.6-centimeter plasma screen could display 25 lines of up to 128 characters in a high-contrast amber that the company claimed could be “viewed from any angle and under any lighting conditions.”

By today’s standards, the GRiD was a bulky beast of a machine. About the size of a large three-ring binder, it weighed 4.5 kilograms (10 pounds). But compared with, say, the Osborne 1 or the Compaq Portable, both of which had a heavier CRT screen and tipped the scales at 10.7 kg and 13 kg, respectively, the Compass was feather light. Some people call the Compass the first truly portable laptop computer.

The computer had 384 kilobytes of nonvolatile bubble memory, a magnetic storage system that showed promise in the 1970s and ’80s. With no rotating disks or moving parts, solid-state bubble memory worked well in settings where a laptop might, say, take a tumble. Indeed, sales representatives claimed they would drop the computer in front of prospective buyers to show off its durability.

But bubble memory also tends to run hot, so the exterior case was made of a magnesium alloy that doubled as a heat sink. The metal case added to the laptop’s reputation for ruggedness. The Compass also included a 16-bit Intel 8086 microprocessor and up to 512 KB of RAM. Floppy drives and hard disks were available as peripherals.

With a price tag of US $8,150 (about $23,000 today), the Compass wasn’t intended for consumers but rather for business executives. Accordingly, it came preloaded with a text editor, a spreadsheet, a plotter, a terminal emulator, a database management system, and other business software. The built-in 1200-baud modem was designed to connect to a central computer at the GRiD Systems’ headquarters in Mountain View, Calif., from which additional applications could be downloaded.

The GRiD’s Sturdy Design Made It Ideal for Space

The rugged laptop soon found a home with NASA and the U.S. military, both of which valued its sturdy design and didn’t blink at the cost.

The first GRiD Compass launched into space on 28 November 1983 aboard the space shuttle Columbia. The hardware adaptations for microgravity were relatively minor: a new cord to plug into the shuttle’s power supply and a small fan to compensate for the lack of convective cooling in space.

The software modifications were more significant. Special graphical software displayed the orbiter’s position relative to Earth and the line of daylight/darkness. Astronauts used the feature to plan upcoming photo shoots of specific locations. The GRiD also featured a backup reentry program, just in case all of the IBMs at Mission Control failed.

For its maiden voyage, the laptop received the code name SPOC (short for Shuttle Portable On-Board Computer). Neither NASA nor GRiD Systems officially connected the acronym to a certain pointy-eared Vulcan on Star Trek, but the GRiD Compass became a Hollywood staple whenever a character had to show off wealth and tech savviness. The Compass featured prominently in Aliens, Wall Street, and Pulp Fiction.

The Compass/SPOC remained a regular on shuttle missions into the early 1990s. NASA’s trust in the computer was not misplaced: Reportedly, the GRiD flying aboard Challenger survived the January 1986 disaster.

The GRiDPad 1900 Was a First in Tablet Computing

John Ellenby and Glenn Edens, both from Xerox PARC, and David Paulson founded GRiD Systems Corp. in 1979. The company went public in 1981 and launched the GRiD Compass the following year.

Not a company to rest on its laurels, GRiD continued to be a pioneer in portable computers, especially thanks to the work of Jeff Hawkins. He joined the company in 1982, left for school in 1986, and returned as vice president of research. At GRiD, Hawkins led the development of a pen- or stylus-based computer. In 1989, this work culminated in the GRiDPad 1900, often regarded as the first commercially successful tablet computer. Hawkins went on to invent the PalmPilot and Treo, though not at GRiD.

Amid the rapidly consolidating personal computer industry, GRiD Systems was bought by Tandy Corp. in 1988 as a wholly owned subsidiary. Five years later, GRiD was bought again, by Irvine, Calif.–based AST Research, which was itself acquired by Samsung in 1996.

In 2006 the Computer History Museum sponsored a roundtable discussion by key members of the original GRiD engineering team: Glenn Edens, Carol Hankins, Craig Mathias, and Dave Paulson, moderated by New York Times journalist (and former GRiD employee) John Markoff.

How Do You Preserve an Old Computer?

Although the GRiD Compass’s tenure as a computer product was relatively short, its life as a historic artifact goes on. To be added to a museum collection, an object must be pioneering, iconic, or historic. The GRiD Compass is all three, which is how the computer found its way into the permanent holdings of not one, but two separate Smithsonian museums.

One Compass was acquired by the National Air and Space Museum in 1989. No surprise there, seeing as how the Compass was the first laptop used in space aboard a NASA mission. Seven years later, curators at the Cooper Hewitt, Smithsonian Design Museum added one to their collections in recognition of the innovative clamshell design.

Credit for the GRiD Compass’s iconic look and feel goes to the British designer Bill Moggridge. His firm was initially tasked with designing the exterior case for the new computer. After taking a prototype home and trying to use it, Moggridge realized he needed to create a design that unified the user, the object, and the software. It was a key moment in the development of computer-human interactive design. In 2010, Moggridge became the fourth director of the Cooper Hewitt and its first director without a background as a museum professional.

Considering the importance Moggridge placed on interactive design, it’s fitting that preservation of the GRiD laptop was overseen by the museum’s Digital Collection Materials Project. The project, launched in 2017, aims to develop standards, practices, and strategies for preserving digital materials, including personal electronics, computers, mobile devices, media players, and born-digital products.

Keeping an electronic device in working order can be extremely challenging in an age of planned obsolescence. Cooper Hewitt brought in Ben Fino-Radin, a media archeologist and digital conservator, to help resurrect their moribund GRiD. Fino-Radin in turn reached out to Ian Finder, a passionate collector and restorer of vintage computers who has a particular expertise in restoring GRiD Compass laptops. Using bubble memory from Finder’s personal collection, curators at Cooper Hewitt were able to boot the museum’s GRiD and to document the software for their research collections.

Even as museums strive to preserve their old GRiDs, new GRiDs are being born. Back in 1993, former GRiD employees in the United Kingdom formed GRiD Defence Systems during a management buyout. The London-based company continues the GRiD tradition of building rugged military computers. The company’s GRiDCASE 1510 Rugged Laptop, a portable device with a 14.2-cm backlit LED display, looks remarkably like a smaller version of the Compass circa 1982. I guess when you have a winning combination, you stick with it.

An abridged version of this article appears in the June 2020 print issue as “The First Laptop in Orbit.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

How Many People Did it Take to Build the Great Pyramid?

Post Syndicated from Vaclav Smil original https://spectrum.ieee.org/tech-history/heroic-failures/how-many-people-did-it-take-to-build-the-great-pyramid

Given that some 4,600 years have elapsed since the completion of the Great Pyramid of Giza, the structure stands remarkably intact. It is a polyhedron with a regular polygon base, its volume is about 2.6 million cubic meters, and its original height was 146.6 meters, including the lost pyramidion, or capstone. We may never know exactly how the pyramid was built, but even so, we can say with some confidence how many people were required to build it.

We must start with the time constraint of roughly 20 years, the length of the reign of Khufu, the pharaoh who commissioned the construction (he died around 2530 B.C.E.). Herodotus, writing more than 21 centuries after the pyramid’s completion, was told that labor gangs totaling 100,000 men worked in three-month spells a year to finish the structure in 20 years. In 1974, Kurt Mendelssohn, a German-born British physicist, put the labor force at 70,000 seasonal workers and up to 10,000 permanent masons.

These are large overestimates; we can do better by appealing to simple physics. The potential energy of the pyramid—the energy needed to lift the mass above ground level—is simply the product of acceleration due to gravity, mass, and the center of mass, which in a pyramid is one-quarter of its height. The mass cannot be pinpointed because it depends on the specific densities of the Tura limestone and mortar that were used to build the structure; I am assuming a mean of 2.6 metric tons per cubic meter, hence a total mass of about 6.75 million metric tons. That means the pyramid’s potential energy is about 2.4 trillion joules.

To maintain his basal metabolic rate, a 70-kilogram (154-pound) man requires some 7.5 megajoules a day; steady exertion will raise that figure by at least 30 percent. About 20 percent of that increase will be converted into useful work, which amounts to about 450 kilojoules a day (different assumptions are possible, but they would make no fundamental difference). Dividing the potential energy of the pyramid by 450 kJ implies that it took 5.3 million man-days to raise the pyramid. If a work year consists of 300 days, that would mean almost 18,000 man-years, which, spread over 20 years, implies a workforce of about 900 men.

A similar number of workers might be needed to place the stones in the rising structure and then to smooth the cladding blocks (many interior blocks were just rough-cut). And in order to cut 2.6 million cubic meters of stone in 20 years, the project would have required about 1,500 quarrymen working 300 days a year and producing 0.25 cubic meter of stone per capita. The grand total of the construction labor would then be some 3,300 workers. Even if we were to double that number to account for designers, organizers, and overseers and for labor needed for transport, tool repair, the building and maintenance of on-site housing, and cooking and laundry work, the total would be still less than 7,000 workers.
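The arithmetic above can be reproduced in a few lines; the density, daily useful-work figure, and 300-day work year are the assumptions stated in the text:

```python
# Reproducing the article's back-of-the-envelope labor estimate.
g = 9.81                # gravitational acceleration, m/s^2
volume_m3 = 2.6e6       # pyramid volume, cubic meters
density = 2600          # assumed mean density, kg/m^3
height_m = 146.6        # original height; center of mass at height/4

mass_kg = volume_m3 * density                     # ~6.75 million metric tons
potential_energy_j = mass_kg * g * height_m / 4   # ~2.4 trillion joules

work_per_day_j = 450e3                            # useful work per laborer per day
man_days = potential_energy_j / work_per_day_j    # roughly 5.3-5.4 million
man_years = man_days / 300                        # 300-day work year
lifters = man_years / 20                          # spread over 20 years

print(f"{potential_energy_j:.2e} J -> {man_days:.2e} man-days -> ~{lifters:.0f} workers")
```

Running the numbers yields about 900 workers for the lifting alone, matching the article's estimate; the quarrying and finishing crews are additional, as described above.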

During the time of the pyramid’s construction, the total population of the late Old Kingdom was 1.5 million to 1.6 million people, and hence such a labor force would not have been an extraordinary imposition on the country’s economy. The challenge was to organize the labor, plan an uninterrupted supply of building stones, and provide housing, clothing, and food for labor gangs on the Giza site.

In the 1990s, archaeologists uncovered a cemetery for workers and the foundations of a settlement used to house the builders of the two later pyramids at the site, indicating that no more than 20,000 people lived there. That an additional two pyramids were built in rapid succession at the Giza site (for Khafre, Khufu’s son, starting in 2520 B.C.E., and for Menkaure, starting in 2490 B.C.E.) shows how quickly early Egyptians mastered the building of pyramids: The erection of those massive structures became just another series of construction projects for the Old Kingdom’s designers, managers, and workers. If you build things, it becomes easier to build things—a useful lesson for those who worry about the sorry state of our infrastructure.

This article appears in the June 2020 print issue as “Building the Great Pyramid.”

Who Invented Radio: Guglielmo Marconi or Aleksandr Popov?

Post Syndicated from Allison Marsh original https://spectrum.ieee.org/tech-history/dawn-of-electronics/who-invented-radio-guglielmo-marconi-or-aleksandr-popov

Who invented radio? Your answer probably depends on where you’re from.

On 7 May 1945, the Bolshoi Theater in Moscow was packed with scientists and officials of the Soviet Communist Party to celebrate the first demonstration of radio 50 years prior, by Aleksandr S. Popov. It was an opportunity to honor a native son and to try to redirect the historical record away from the achievements of Guglielmo Marconi, widely recognized throughout most of the world as the inventor of radio. Going forward, 7 May was declared to be Radio Day, celebrated across the Soviet Union and still celebrated in Russia to this day.

The claim for Popov’s primacy as radio’s inventor came from his presentation of a paper, “On the Relation of Metallic Powders to Electrical Oscillations,” and his demonstration of a radio-wave detecting apparatus at St. Petersburg University on 7 May 1895.

Aleksandr Popov Developed the First Radio Capable of Distinguishing Morse Code

Popov’s device was a simple coherer—a glass tube with two electrodes spaced a few centimeters apart with metal filings between them. The device was based on the work of French physicist Edouard Branly, who described such a circuit in 1890, and of English physicist Oliver Lodge, who refined it in 1893. The filings initially presented a high resistance, but when hit with an electric impulse they clumped together, forming a low-resistance path that allowed current to flow. The filings stayed cohered until the tube was tapped or shaken to rescatter them, restoring the high resistance in readiness for the next signal.

According to the A. S. Popov Central Museum of Communications, in St. Petersburg, Popov’s device was the world’s first radio receiver capable of distinguishing signals by duration. He used a Lodge coherer indicator and added a polarized telegraph relay, which served as a direct-current amplifier. The relay allowed Popov to connect the output of the receiver to an electric bell, recorder, or telegraph apparatus, providing electromechanical feedback. [The device at top, from the museum’s collections, has a bell.] The feedback automatically reset the coherer: When the bell rang, the coherer was simultaneously shaken.
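The elegance of Popov's arrangement is the closed loop: the same hammer stroke that rings the bell also taps the coherer back to its ready state. A toy state machine (purely illustrative, not an electrical model) captures the cycle:

```python
# Illustrative sketch of Popov's coherer-plus-relay feedback loop.
# Not a circuit model: it only tracks the cohered/scattered state of the filings.

class CohererReceiver:
    def __init__(self):
        self.cohered = False          # filings scattered: high resistance, idle

    def radio_impulse(self):
        """An arriving RF burst makes the filings cohere (low resistance)."""
        self.cohered = True
        return self.ring_and_reset()

    def ring_and_reset(self):
        """Low resistance closes the relay; the bell's hammer both rings the
        bell and taps the tube, rescattering the filings automatically."""
        if self.cohered:
            self.cohered = False      # reset: ready for the next signal
            return "ding"
        return None

rx = CohererReceiver()
print(rx.radio_impulse())   # "ding" -- and the coherer is already reset
```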

On 24 March 1896, Popov gave another groundbreaking public demonstration, this time sending Morse code via wireless telegraphy. Once again at St. Petersburg University at a meeting of the Russian Physicochemical Society, Popov sent signals between two buildings 243 meters apart. A professor stood at the blackboard in the second building, recording the letters that the Morse code spelled out: Heinrich Hertz.

Coherer-based designs similar to Popov’s became the basis of first-generation radio communication equipment. They remained in use until 1907, when crystal receivers eclipsed them.

Popov and Marconi Had Very Different Views About Radio

Popov was a contemporary of Marconi’s, but the two men developed their radio apparatuses independently and without knowledge of the other’s work. Making a definitive claim of who was first is complicated by inadequate documentation of events, conflicting definitions of what constitutes a radio, and national pride.

One of the reasons why Marconi gets the credit and Popov doesn’t is that Marconi was much more savvy about intellectual property. One of the best ways to preserve your place in history is to secure patents and publish your research findings in a timely way. Popov did neither. He never pursued a patent for his lightning detector, and there is no official record of his 24 March 1896 demonstration. He eventually abandoned radio to turn his attention to the newly discovered Röntgen waves, also known as X-rays.

Marconi, on the other hand, filed for a British patent on 2 June 1896, which became the first application for a patent in radiotelegraphy. He quickly raised capital to commercialize his system, built up a vast industrial enterprise, and went on to be known—outside of Russia—as the inventor of radio.

Although Popov never sought to commercialize his radio as a means of sending messages, he did see potential in its use for recording disturbances in the atmosphere—a lightning detector. In July 1895, he installed his first lightning detector at the meteorological observatory of the Institute of Forestry in St. Petersburg. It was able to detect thunderstorms up to 50 kilometers away. He installed a second detector the following year at the All-Russia Industrial and Art Exhibition at Nizhny Novgorod, about 400 km east of Moscow.

Within several years, the clockmaking company Hoser Victor in Budapest was manufacturing lightning detectors based on Popov’s work.

A Popov Device Found Its Way to South Africa

One of those machines made it all the way to South Africa, some 13,000 km away. Today, it can be found in the museum of the South African Institute for Electrical Engineers (SAIEE) in Johannesburg.

Now, it’s not always the case that museums know what’s in their own collections. The origins of equipment that’s long been obsolete can be particularly hard to trace. With spotty record keeping and changes in personnel, institutional memory can lose track of what an object is or why it was important.

That might have been the fate of the South African Popov detector, but for the keen eye of Dirk Vermeulen, an electrical engineer and longtime member of the SAIEE Historical Interest Group. For years, Vermeulen assumed that the object was an old recording ammeter, used to measure electric current. One day, though, he decided to take a closer look. To his delight, he learned that it was probably the oldest object in the SAIEE collection and the only surviving instrument from the Johannesburg Meteorological Station.

In 1903 the colonial government had ordered the Popov detector as part of the equipment for the newly established station, located on a hill on the eastern edge of town. The station’s detector is similar to Popov’s original design, except that the trembler used to shake up the filings also deflected a recording pen. The recording chart was wrapped around an aluminum drum that revolved once per hour. With each revolution of the drum, a separate screw advanced the chart by 2 millimeters, allowing activity to be recorded over the course of days.

Vermeulen wrote up his discovery [PDF] for the December 2000 Proceedings of the IEEE. Sadly, he passed away about a year ago, but his colleague Max Clarke arranged to get IEEE Spectrum a photo of the South African detector. Vermeulen was a tireless advocate for creating a museum to house the SAIEE’s collection of artifacts, which finally happened in 2014. It seems fitting that in an article that commemorates an early pioneer of radio, I also pay tribute to Vermeulen and the rare radio-wave detector that he helped bring to light.

An abridged version of this article appears in the May 2020 print issue as “The First Radio.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

Al Alcorn, Creator of Pong, Explains How Early Home Computers Owe Their Color Graphics to This One Cheap, Sleazy Trick

Post Syndicated from Stephen Cass original https://spectrum.ieee.org/tech-talk/tech-history/silicon-revolution/al-alcorn-creator-of-pong-explains-how-early-home-computers-owe-their-color-to-this-one-cheap-sleazy-trick

In March, we published a Hands On article about Matt Sarnoff’s modern homebrew computer that uses a very old hack: NTSC artifact color. This hack allows digital systems without specialized graphics hardware to produce color images by exploiting quirks in how TVs decode analog video signals.

NTSC artifact color was used most notably by the Apple II in 1977, where Steve “Woz” Wozniak’s use of the hack brought it to wide attention; it was later used in the IBM PC and TRS-80 Color Computers. But it was unclear where the idea had originally come from, so we were thrilled to see that video game and electrical engineering legend Allan Alcorn left a comment on the article with an answer: the first color computer graphics that many people ever saw owe their origin to a cheap test tool used in a Californian TV repair shop in the 1960s. IEEE Spectrum talked to Alcorn to find out more:

Stephen Cass: Analog NTSC televisions generate color by looking at the phase of a signal relative to a reference frequency. So how did you come across this color test tool, and how did it work?

Al Alcorn: When I was 13, 14, my neighbor across the street had a television repair shop. I would go down there and at the same time, I had my father sign me up for an RCA correspondence course on radio and television repair. So, by the time I got to Berkeley, I was a journeyman TV repairman and actually paid my way through college with television repair. In one repair shop, there was a real cheap, sleazy color bar generator [for testing televisions]. And instead of doing color properly by synthesizing the phases and stuff like that, it simply used a crystal that was 3.58 megahertz [the carrier frequency for the color signal] minus 15.750 kilohertz, which was the horizontal scan frequency. So it slipped one phase, 360 degrees, every scan line. You put that signal on the screen and you’ve got a color bar from left to right. It really was the cheapest, sleaziest way of doing it!
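Alcorn's arithmetic can be checked directly. Offsetting the subcarrier crystal by exactly the line rate means each scan line contains one fewer crystal cycle than subcarrier cycles, so the phase (and hence the displayed hue) slips through a full 360 degrees across every line. A quick sketch, using the rounded nominal values Alcorn quotes:

```python
# Why the "cheap, sleazy" crystal works: an offset of exactly the horizontal
# line rate slips the phase one full subcarrier cycle per scan line, sweeping
# the hue left to right across the screen.

f_color = 3.58e6        # NTSC color subcarrier, Hz (nominal, as quoted)
f_line = 15_750.0       # horizontal scan frequency, Hz (as quoted)
f_crystal = f_color - f_line

line_period = 1.0 / f_line                       # seconds per scan line
cycles_slipped = (f_color - f_crystal) * line_period
print(f"phase slip per line: {cycles_slipped:.6f} cycle "
      f"= {cycles_slipped * 360:.1f} degrees")   # exactly 1 cycle = 360 degrees
```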

SC: How did that idea of not doing NTSC “by the book” enter into your own designs?

AA: So, I learned the cheap, sleazy way doing repair work. But then I got a job at Ampex [a leader in audio/visual technology at the time]. Initially, I wanted to be an analog engineer, digital was not as interesting. At Ampex, it was the first time I saw video being done by digital circuits; they had gotten fast enough, and that opened my eyes up. [Then I went to Atari]. Nolan [Bushnell, co-founder of Atari] decided we wanted to be in the home consumer electronics space. We had done this [monochrome] arcade game [1972’s Pong] which got us going, but he always wanted to be in the consumer space. I worked with another engineer and we reduced the entire logic of the Pong game down to a single N-channel silicon chip. Anyway, part of the way into the design, Nolan said, “Oh, by the way, it has to be colored.” But I knew he was going to pull this stunt, so I’d already chosen the crystal [that drove the chip] to be 3.58 MHz, minus 15.750 kilohertz.

SC: Why did you suspect he was going to do that?

AA: Because there never was a plan. We had no outline or business plan, [it was just Nolan]. I’m sure you’ve heard that the whole idea behind the original arcade Pong was that it was a test for me just to practice, building the simplest possible game. But Nolan lied to me and said it was going to be a home product. Well, at the end it was kind of sad, a failure, because I had like 70 ICs in it, and that was [too expensive] for a home game. But [then Nolan decided] it would work for an arcade game! And near the end of making [arcade] Pong, Nolan said, “Well, where’s the sound?” I said “What do you mean, sound?” I didn’t want to add in any more parts. He said “I want the roar of the crowd of thousands applauding.” And [Ted] Dabney, the other owner said “I want boos and hisses.” I said to them “Okay, I’ll be right back.” I just went in with a little probe, looking for around the vertical sync circuit for frequencies that [happened to be in the audible range]. I found a place and used a 555 timer [to briefly connect the circuit to a loudspeaker to make blip sounds when triggered]. I said “There you go Nolan, if you don’t like it, you do it.” And he said “Okay.” Subsequently, I’ve seen articles about how brilliant the sound was!  The whole idea is to get the maximum functionality for the minimum circuitry. It worked. We had $500 in the bank. We had nothing and so we put it out there. Time is of the essence.

SC: So, in the home version of Pong, the graphics would simply change color from one side of the screen to the other?

AA: Right, the whole goal for doing this was just to put on the box: “Color!” Funny story—home Pong becomes a hit. This is like in 1974, 75. It’s a big hit. And they’re creating advertisements for television. Trying to record the Pong signal onto videotape. I get a call from some studio somewhere, saying, “We can’t get it to play on the videotape recorder, why?”  I say, “Well, it’s not really video! There’s no interlace… Treat it as though it’s PAL, just run up through a standard converter.”

SC: How does Wozniak get wind of this?

AA: In those days, in Silicon Valley, we didn’t keep secrets. I hired Steve Jobs on a fluke, and he’s not an engineer. His buddy Woz was working at HP, but we were a far more fun place to hang out. We had a production floor with about 30 to 50 arcade video games being shipped, and they were on the floor being burnt in. Jobs didn’t get along with the other guys very well, so he’d work at night. Woz would come in and play while Jobs did his work, or got Woz to do it for him. And I enjoyed Woz. I mean, this guy is a genius, I mean, a savant. It’s just like, “Oh my God.”

When the Apple II was being done, I helped them. I mean, I actually loaned them my oscilloscope, I had a 465 Tektronix scope, which I still have, and they designed the Apple II with it. I designed Pong with it. I did some work, I think, on the cassette storage. And then I remember showing Woz the trick for the hi-res color, explaining, sitting him down and saying, “Okay, this is how NTSC is supposed to work.” And then I said, “Okay. Now the reality is that if you do everything at this clock [frequency] and you do this with a pulse of square waves…” And basically explained the trick. And he ran with it.  That was the tradition. I mean, it was cool. I was kind of showing off!

SC: When people today are encouraged to tinker and experiment with electronics, it’s typically using things like the Arduino, which are heavily focused on digital circuits. Do you think analog engineering has been neglected?

AA: Well, it certainly is. There was a time, I think it was in the ’90s, where it got so absurd that there just weren’t any good analog engineers out there. And you really need analog engineers on certain things. A good analog engineer at that time was highly paid. Made a lot of money because they’re just rare. So, yeah. But most kids want to be—well, just want to get rich. And the path is through programming something on an iPhone. And that’s it. You get rich and go. But there is a lot of value in analog engineering. 

In World War I, British Biplanes Had Wireless Phones in the Cockpit

Post Syndicated from Allison Marsh original https://spectrum.ieee.org/tech-history/dawn-of-electronics/in-world-war-i-british-biplanes-had-wireless-phones-in-cockpit

As soon as the first humans went up in hot air balloons in the late 1700s, military strategists saw the tantalizing possibilities of aerial reconnaissance. Imagine being able to spot enemy movements and artillery from on high—even better if you could instantly communicate those findings to colleagues on the ground. But the technology of the day didn’t offer an elegant way to do that.

By the early 20th century, all of the necessary elements were in place to finally make aerial reconnaissance a reality: the telegraph, the telephone, and the airplane. The challenge was bringing them together. Wireless enthusiasts faced reluctant government bureaucrats, who were parsimonious in funding an unproven technology.

Wireless Telegraphy Provided Vital Intel During World War I Battles

One early attempt involved wireless telegraphy—sending telegraph signals by radio. Its main drawback was size. The battery pack and transmitter weighed up to 45 kilograms and took up an entire seat on a plane, sometimes overflowing into the pilot’s area. The wire antenna trailed behind the plane and had to be reeled in before landing. There was no room for a dedicated radio operator, and so the pilot would have to do everything: observe the enemy, consult the map, and tap out coordinates in Morse code, all while flying the plane under enemy fire.

Despite the complicated setup, some pioneers managed to make it work. In 1911, First Lieutenant Benjamin D. Foulois, pilot of the U.S. Army’s sole airplane, flew along the Mexican border and reported to Signal Corps stations on the ground by Morse code. Three years later, under the auspices of the British Royal Flying Corps (RFC), Lieutenants Donald Lewis and Baron James tried out air-to-air radiotelegraphy by flying 16 kilometers apart, communicating by Morse code as they flew.

It didn’t take long for the RFC’s wireless system to see its first real action. Britain entered World War I on 4 August 1914. On 6 September, while flying during the first Battle of the Marne in France, Lewis spotted a 50-km gap in the enemy line. He sent a wireless message reporting what he saw, and British and French troops charged the gap. It was the first time that a wireless message sent from a British plane was received and acted upon. British army commanders became instant evangelists for wireless, demanding more equipment and training for both pilots and ground support.

From then on, the RFC’s wireless work, which Captain Herbert Musgrave had led since 1912, grew quickly. Initially, Musgrave had been tasked with investigating a laundry list of war-related activities—ballooning, kiting, photography, meteorology, bomb-dropping, musketry, and communication. He decided to focus on the last. At the start of the war, the RFC took over the Experimental Marconi Station at Brooklands Aerodrome in Surrey, southwest of London.

Brooklands had been the site of the first powered flight in England, in 1909, even though it wasn’t an ideal spot for an airfield. The runway sat in the middle of a motor racetrack, high-tension cables surrounded the field on three sides, and two 30-meter-tall brick chimneys stood to the east.

At first, reconnaissance pilots reported on the effectiveness of artillery firings by giving directional instructions. “About 50 yards short and to the right” was one message that Lewis sent at Marne. That’s a fairly long string for a pilot to tap out in Morse code. By October 1914, the British had developed maps with grid references, which meant that with just a few letters and numbers, such as “A5 B3,” you could indicate direction and distance. Even with that simplification, however, using radiotelegraphy was still cumbersome.
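The savings from the grid system can be quantified by counting the dots and dashes a pilot had to key for each message. The tally below counts symbols only; the inter-symbol and inter-word spacing of real Morse transmission would widen the gap further.

```python
# Comparing the keying effort of Lewis's free-text report at the Marne
# with a grid reference of the kind adopted in October 1914.

MORSE = {
    'A': '.-',   'B': '-...', 'C': '-.-.', 'D': '-..',  'E': '.',
    'F': '..-.', 'G': '--.',  'H': '....', 'I': '..',   'J': '.---',
    'K': '-.-',  'L': '.-..', 'M': '--',   'N': '-.',   'O': '---',
    'P': '.--.', 'Q': '--.-', 'R': '.-.',  'S': '...',  'T': '-',
    'U': '..-',  'V': '...-', 'W': '.--',  'X': '-..-', 'Y': '-.--',
    'Z': '--..', '0': '-----', '1': '.----', '2': '..---', '3': '...--',
    '4': '....-', '5': '.....', '6': '-....', '7': '--...', '8': '---..',
    '9': '----.',
}

def symbol_count(message):
    """Total dots and dashes needed to send a message (spaces ignored)."""
    return sum(len(MORSE[ch]) for ch in message.upper() if ch in MORSE)

long_msg = "About 50 yards short and to the right"
short_msg = "A5 B3"
print(symbol_count(long_msg), symbol_count(short_msg))   # 82 vs 16
```

The grid reference cuts the keying effort roughly fivefold, no small thing for a pilot flying under fire.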

Voice Calls From the Cockpit Relied on Good Microphones

Direct voice communication via wireless telephony was a better solution—except that the open cockpit of a biplane wasn’t exactly conducive to easy conversation. Intense noise, vibration, and often-violent air disturbances drowned out voices. The muscles of the face had trouble retaining their shape under varying wind pressure. Pilots had difficulty being understood by crewmen sitting just a few centimeters away, never mind being heard through a microphone over a radio that had to distinguish voice from background noise.

In the spring of 1915, Charles Edmond Prince was sent to Brooklands to lead the development of a two-way voice system for aircraft. Prince had worked as an engineer for the Marconi Co. since 1907, and he and his team, many of whom also came from Marconi, soon got an air-to-ground system up and running.

Prince’s system was not at all like a modern cellphone, nor even like the telephones of the time. Although the pilot could talk to the ground station, the ground operator replied in Morse code. It took another year to develop ground-to-air and machine-to-machine wireless telephony.

Prince’s group experimented with a variety of microphones. Eventually they settled on an older Hunnings Cone microphone that had a thick diaphragm. Through trial and error, they learned the importance of testing the microphone outside the laboratory and under typical flight conditions. They found it almost impossible to predict how a particular microphone would work in the air based solely on its behavior on the ground. As Prince later wrote about the Hunnings Cone, “it appeared curiously dead and ineffective on the ground, but seemed to take on a new sprightliness in the air.”

The diaphragm’s material was also important. The team tested carbon, steel, ebonite, celluloid, aluminum, and mica. Mica was the ultimate winner because its natural frequency was the least affected by engine noise. (Prince published his findings after the war, in a 1920 journal of the Institution of Electrical Engineers (IEE). If you have a subscription to IEEE Xplore, you can read Prince’s paper here [PDF].)

Prince was an early proponent of vacuum tubes, and so his radio relied on tubes rather than crystals. But the tubes his team initially used were incredibly problematic and unreliable, and the team worked through several different models. After Captain H.J. Round of the Marconi Co. joined Prince’s group, he designed vacuum tubes specifically for airborne applications.

During the summer of 1915, Prince’s group successfully tested the first air-to-ground voice communication using an aircraft radio telephony transmitter. Shortly thereafter, Captain J.M. Furnival, one of Prince’s assistants, established the Wireless Training School at Brooklands. Every week 36 fighter pilots passed through to learn how to use the wireless apparatus and the art of proper articulation in the air. The school also trained officers how to maintain the equipment.

Hands-Free Calling Via the Throat Microphone

Prince’s team didn’t stop there. In 1918, they released a new aviator cap that incorporated telephone receivers over the ears and a throat microphone. The throat mic was built into the cap and wrapped around the neck so that it could pick up the vibrations directly from the pilot’s throat, thus avoiding the background noise of the wind and the engine. This was a significant advancement because it allowed the pilots to go “hands free,” as Captain B.S. Cohen wrote in his October 1919 engineering report.

By the end of the war, Prince and his engineers had achieved air-to-ground, ground-to-air, and machine-to-machine wireless speech transmission. The Royal Air Force had equipped 600 planes with continuous-wave voice radio and set up 1,000 ground stations with 18,000 wireless operators.

This seems like a clear example of how military technology drives innovation during times of war. But tracing the history of the achievement muddies the water a bit.

In the formal response to Prince’s 1920 IEE paper, Captain P.P. Eckersley called the airplane telephone as much a problem of propaganda as it was a technical challenge. By that, he meant Prince didn’t have an unlimited R&D budget, and so he had to prove that aerial telephony was going to have practical applications.

In his retelling of the development, Prince was proud of his team’s demonstration for Lord Kitchener at St. Omer in February 1916, the first practical demonstration of the device.

But Major T. Vincent Smith thought such a demonstration was ill-advised. Smith, a technical advisor to the RFC, argued that showing the wireless telephone to the higher command would only inflame their imaginations, believing such a device could solve all of their considerable communication difficulties. Smith saw it as his duty to dampen enthusiasm, lest he be asked to “do all sorts of impossible things.”

Both Round, the vacuum tube designer, and Harry M. Dowsett, Marconi’s chief of testing, added nuance to Prince’s version of events. Round noted that investigations into vacuum-tube systems for sending and receiving telephony began in 1913, well before the war was under way. Dowsett said that more credit should be given to the Marconi engineers who created the first working telephony set (as opposed to Prince’s experimental set of 1915).

In his 1920 article, Prince acknowledges that he did not include the full history and that his contribution was the novel application of existing circuitry to use in an airplane. He then gives credit to the contributions of Round and other engineers, as well as the General Electric Co., which had patented a similar aerial telephony system used by the U.S. Army Signal Corps.

History rarely has room for so much detail. And so it is Prince—and Prince alone—who gets the credit line for the aerial telephony set that is now in the collections of the Science Museum London. It’s up to us to remember that this innovative machine was the work not of one but of many.

An abridged version of this article appears in the April 2020 print issue as “Calling From the Cockpit.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

Meet the Roomba’s Ancestor: The Cybernetic Tortoise

Post Syndicated from Allison Marsh original https://spectrum.ieee.org/tech-history/space-age/meet-roombas-ancestor-cybernetic-tortoise

In the robotics family tree, Roomba’s ancestors were probably Elmer and Elsie, a pair of cybernetic tortoises invented in the 1940s by neurophysiologist W. Grey Walter. The robots could “see” by means of a rotating photocell that steered them toward a light source. If the light was too bright, they would retreat and continue their exploration in a new direction. Likewise, when they ran into obstacles, a touch sensor would compel the tortoises to reverse and change course. In this way, Elmer and Elsie slowly explored their surroundings.

Walter was an early researcher into electroencephalography (EEG), a technique for detecting the electrical activity of the brain using electrodes attached to the scalp. Among his notable clinical breakthroughs was the first diagnosis of a brain tumor by EEG. In 1939 he joined the newly established Burden Neurological Institute in Bristol, England, as head of its physiology department, and he remained at the Burden for the rest of his career.

Norbert Wiener’s cybernetics movement gave birth to a menagerie of cybernetic creatures

In the late 1940s, Walter became involved in the emerging community of scientists who were interested in cybernetics. The field’s founder, Norbert Wiener, defined cybernetics as “the scientific study of control and communication in the animal and the machine.” In the first wave of cybernetics, people were keen on building machines to model animal behavior. Claude Shannon played around with a robotic mouse named Theseus that could navigate mazes. W. Ross Ashby built the Homeostat, a machine that automatically adapted to inputs so as to remain in a stable state.

Walter’s contribution to this cybernetic menagerie was an electromechanical tortoise, which he began working on in the spring of 1948 in his spare time. His first attempts were inelegant. In 1951 W. J. “Bunny” Warren, an electrical engineer at the Burden, constructed six tortoises for Walter that were more solidly engineered. Two of these six tortoises became Elmer and Elsie, their names taken from Walter’s somewhat contrived acronym: ELectro MEchanical Robots, Light Sensitive, with Internal and External stability.

Walter considered Elmer and Elsie to be the Adam and Eve of a new species, Machina speculatrix. The scientific nomenclature reflected the robots’ exploratory or speculative behavior. The creatures each had a smooth protective shell and a protruding neck, so Walter put them in the Linnaean genus Testudo, or tortoise. Extending his naming scheme, he dubbed Shannon’s maze-crawling mouse Machina labyrinthia and Ashby’s Homeostat Machina sopora (sleeping machine).

Did W. Grey Walter’s cybernetic tortoises exhibit free will?

Each tortoise moved on three wheels with two sets of motors, one for locomotion and the other for steering. Its “brain” consisted of two vacuum tubes, which Walter said gave it the equivalent of two functioning neurons.

Despite such limited equipment, the tortoises displayed free will, he claimed. In the May 1950 issue of Scientific American [PDF], he described how the photocell atop the tortoise’s neck scanned the surroundings for a light source. The photocell was attached to the steering mechanism, and as the tortoise searched, it moved forward in a circular pattern. Walter compared this to the alpha rhythm of the electric pulses in the brain, which sweeps over the visual areas and at the same time releases impulses for the muscles to move.

In a dark room, the tortoise wandered aimlessly. When it detected a light, the tortoise moved directly toward the source. But if the light surpassed a certain brightness, it retreated. Presented with two distinct light sources, it would trace a path back and forth between the pair. “Like a moth to a flame,” Walter wrote, the tortoise oscillated between seeking and withdrawing from the lights.

The tortoise had a running light that came on when it was searching for a light source. Originally, this was just to signal to observers what command the robot was processing, but it had some unintended consequences. If Elmer happened to catch a glimpse of itself in a mirror, it would begin moving closer to the image until the reflected light became too bright, and then it would retreat. In his 1953 book The Living Brain, Walter compared the robot to “a clumsy Narcissus.”

Similarly, if Elmer and Elsie were in the same area and saw the other’s light, they would lock onto the source and approach, only to veer away when they got too close. Ever willing to describe the machines in biological terms, Walter called this a mating dance where the unfortunate lovers could never “consummate their ‘desire.’ ”

The tortoise’s shell did much more than just protect the machine’s electromechanical insides. If the robot bumped into an obstacle, a touch sensor in the shell caused it to reverse and change direction. In this manner it could explore its surroundings despite being effectively blind.

M. speculatrix was powered by a hearing-aid battery and a 6-volt battery. When its wanderings were done—that is, when its battery levels were low—it made its way to its hutch. There, it could connect its circuits, turn off its motors, and recharge.
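The behaviors described above amount to a simple priority scheme: recharge when depleted, back away from bumps, retreat from glare, advance on moderate light, and otherwise scan. The sketch below is a modern paraphrase of those rules, not Walter's two-tube circuitry, and the numeric thresholds are invented for illustration.

```python
# A behavioral sketch of M. speculatrix's sense-act cycle, as described above.
# light_level and battery are normalized 0.0-1.0; thresholds are illustrative.

def tortoise_step(light_level, bumped, battery):
    """Return the action for one sense-act cycle, highest priority first."""
    if battery < 0.2:
        return "head for hutch and recharge"     # low charge overrides all else
    if bumped:
        return "reverse and change direction"    # shell touch sensor fired
    if light_level > 0.8:
        return "retreat"                         # too bright: withdraw
    if light_level > 0.0:
        return "advance toward light"            # moderate light attracts
    return "scan in circles"                     # darkness: photocell sweeps

# A moderate light attracts; pressing closer makes it too bright, so the
# tortoise veers off -- the oscillation Walter likened to a moth and a flame.
print(tortoise_step(0.5, False, 1.0))   # advance toward light
print(tortoise_step(0.9, False, 1.0))   # retreat
```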

Elmer and Elsie were a huge hit at the 1951 Festival of Britain

During the summer of 1951, Elmer and Elsie performed daily in the science exhibition at the Festival of Britain. Held at sites throughout the United Kingdom, the festival drew millions of visitors. The tortoises were a huge hit. Attendees wondered at their curious activity as they navigated their pen, moved toward and away from light sources, and avoided obstacles in their path. A third tortoise with a transparent shell was on display to showcase the inner workings and to advertise the component parts.

Even as M. speculatrix was wowing the public, Walter was investigating the next evolution of the species. Elmer and Elsie successfully demonstrated unpredictable behavior that could be compared with a basic animal reaction to stimuli, but they never learned from their experience. They had no memory and could not adapt to their environment.

Walter dubbed his next experimental tortoise M. docilis, from the Latin for teachable, and he attempted to build a robot that could mimic Pavlovian conditioned responses. Where the Russian psychologist used dogs, food, and a sound cue, Walter used his cybernetic tortoises, light, and a whistle. He conditioned his M. docilis tortoises to treat the sound of the whistle as equivalent to a light source, so that a tortoise would move toward the sound even when no light was present.

Walter published his findings on M. docilis in a second Scientific American article, “A Machine That Learns” [PDF]. This follow-up article had much to offer electrical engineers, including circuit diagrams and a technical discussion of some of the challenges in constructing the robots, such as amplifying the sound of the whistle to overcome the noise of the motors.

The brain of M. docilis was CORA (short for COnditioned Reflex Analog) circuitry, which detected repeated coincident sensory inputs on separate channels, such as a light and a sound occurring at the same time. After CORA logged a certain number of repetitions, often between 10 and 20 instances, it linked the two stimuli, producing what Walter described as a conditioned response. Just as CORA could learn a behavior, it could also forget it. If the operator teased the tortoise by repeatedly sounding the whistle without the light, CORA would delink the response.
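CORA's learn-and-forget behavior amounts to a small state machine. The sketch below is a software analogue under stated assumptions: the real CORA was analog circuitry, and the trial counts used here (12 pairings to condition, 6 unreinforced trials to extinguish) are illustrative values chosen from the ranges Walter reported.

```python
# A software analogue of CORA's conditioning logic (assumption: the real
# CORA was an analog circuit; the trial counts here are illustrative).

class Cora:
    def __init__(self, link_after=12, unlink_after=6):
        self.link_after = link_after      # coincidences needed to condition
        self.unlink_after = unlink_after  # unreinforced trials before forgetting
        self.coincidences = 0
        self.misses = 0
        self.conditioned = False

    def trial(self, light, sound):
        """Run one trial; return True if the tortoise responds by approaching."""
        if light and sound:               # paired stimuli reinforce the link
            self.coincidences += 1
            self.misses = 0
            if self.coincidences >= self.link_after:
                self.conditioned = True
        elif sound and self.conditioned:  # whistle alone, after conditioning
            self.misses += 1
            if self.misses >= self.unlink_after:
                self.conditioned = False  # extinction: the link decays
                self.coincidences = 0
                return False
            return True                   # responds to sound as if to light
        return light                      # unconditioned response to light itself

cora = Cora()
for _ in range(12):                       # pair whistle with light 12 times
    cora.trial(light=True, sound=True)
responses = [cora.trial(light=False, sound=True) for _ in range(6)]
```

After training, the whistle alone triggers a response, and repeated unreinforced whistles eventually delink it, mirroring the teasing experiment described above.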

At the end of his article, Walter acknowledged that future experiments with more circuits and inputs were feasible, but the increase in complexity would come at the cost of stability. Eventually, scientists would find it too difficult to model the behavior and understand the reactions to multiple stimuli.

Walter discontinued his experiments with robotic tortoises after CORA, and the research was not picked up by others. As the historian of science Andrew Pickering noted in his 2009 book, The Cybernetic Brain, “CORA remains an unexploited resource in the history of cybernetics.”

Walter’s legacy lives on in his tortoises. The late Reuben Hoggett compiled a treasure trove of archival research on them, available on his website, Cybernetic Zoo. The three tortoises from the Festival of Britain were auctioned off, and the winner, Wes Clutterbuck, nicknamed them Slo, Mo, and Shun. Although two were later destroyed in a fire, the Clutterbuck family donated the one with a transparent shell to the Smithsonian Institution. The only other known surviving tortoise from the original six crafted by Bunny Warren is at the Science Museum in London. It is currently on exhibit in the Making the Modern World Gallery.

An abridged version of this article appears in the March 2020 print issue as “The Proto-Roomba.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

When Artists, Engineers, and PepsiCo Collaborated, Then Clashed at the 1970 World’s Fair

Post Syndicated from W. Patrick McCray original https://spectrum.ieee.org/tech-history/silicon-revolution/when-artists-engineers-and-pepsico-collaborated-then-clashed-at-the-1970-worlds-fair

On 18 March 1970, a former Japanese princess stood at the center of a cavernous domed structure on the outskirts of Osaka. With a small crowd of dignitaries, artists, engineers, and business executives looking on, she gracefully cut a ribbon that tethered a large red balloon to a ceremonial Shinto altar. Rumbles of thunder rolled out from speakers hidden in the ceiling. As the balloon slowly floated upward, it appeared to meet itself in midair, reflecting off the massive spherical mirror that covered the walls and ceiling.

With that, one of the world’s most extravagant and expensive multimedia installations officially opened, and the attendees turned to congratulate one another on this collaborative melding of art, science, and technology. Underwritten by PepsiCo, the installation was the beverage company’s signal contribution to Expo ’70, the first international exposition to be held in an Asian country.

A year and a half in the making, the Pepsi Pavilion drew eager crowds and elicited effusive reviews. And no wonder: The pavilion was the creation of Experiments in Art and Technology—E.A.T.—an influential collective of artists, engineers, technicians, and scientists based in New York City. Led by Johan Wilhelm “Billy” Klüver, an electrical engineer at Bell Telephone Laboratories, E.A.T. at its peak had more than a thousand members and enjoyed generous support from corporate donors and philanthropic foundations. Starting in the mid-1960s and continuing into the ’70s, the group mounted performances and installations that blended electronics, lasers, telecommunications, and computers with artistic interpretations of current events, the natural world, and the human condition.

E.A.T. members saw their activities transcending the making of art. Artist–engineer collaborations were understood as creative experiments that would benefit not just the art world but also industry and academia. For engineers, subject to vociferous attacks about their complicity in the arms race, the Vietnam War, environmental destruction, and other global ills, the art-and-technology movement presented an opportunity to humanize their work.

Accordingly, Klüver and the scores of E.A.T. members in the United States and Japan who designed and built the pavilion considered it an “experiment in the scientific sense,” as the 1972 book Pavilion: Experiments in Art and Technology stated. Klüver pitched the installation as a “piece of hardware” that engineers and artists would program with “software” (that is, live performances) to create an immersive visual, audio, and tactile experience. As with other E.A.T. projects, the goal was not about the product but the process.

Pepsi executives, unsurprisingly, viewed their pavilion on somewhat different terms. These were the years of the Pepsi Generation, the company’s mildly countercultural branding. For them, the pavilion would be at once an advertisement, a striking visual statement, and a chance to burnish the company’s global reputation. To that end, Pepsi directed close to US $2 million (over $13 million today) to E.A.T. to create the biggest, most elaborate, and most expensive art project of its time.

Perhaps it was inevitable, but over the 18 months it took E.A.T. to execute the project, Pepsi executives grew increasingly concerned about the group’s vision. Just a month after the opening, the partnership collapsed amidst a flurry of recriminating letters and legal threats. And yet, despite this inglorious end, the participants considered the pavilion a triumph.

The pavilion was born during a backyard conversation in the fall of 1968 between David Thomas, vice president in charge of Pepsi’s marketing, and his neighbor, Robert Breer, a sculptor and filmmaker who belonged to the E.A.T. collective. Pepsi had planned to contract with Disney to build its Expo ’70 exhibition, as it had done for the 1964 World’s Fair in New York City. Some Pepsi executives were, however, concerned that the conservative entertainment company wouldn’t produce something hip enough for the burgeoning youth market, and they had memories of the 1964 project, when Disney ran well over its already considerable budget. Breer put Thomas in touch with Klüver, productive dialogue ensued, and the company hired E.A.T. in December 1968.

Klüver was a master at straddling the two worlds of art and science. Born in Monaco in 1927 and raised in Stockholm, he developed a deep appreciation for cinema as a teen, an interest he maintained while studying with future Nobel physicist Hannes Alfvén. After earning a Ph.D. in electrical engineering at the University of California, Berkeley, in 1957, he accepted a coveted research position at Bell Labs in Murray Hill, N.J.

While keeping up a busy research program, Klüver made time to explore performances and gallery openings in downtown Manhattan and to seek out artists. He soon began collaborating with artists such as Yvonne Rainer, Andy Warhol, Jasper Johns, and Robert Rauschenberg, contributing his technical expertise and helping to organize exhibitions and shows. His collaboration with Jean Tinguely on a self-destructing sculpture, called Homage to New York, appeared on the April 1969 cover of IEEE Spectrum. Klüver emerged as the era’s most visible and vocal spokesperson for the merger of art and technology in the United States. Life magazine called him the “Edison-Tesla-Steinmetz-Marconi-Leonardo da Vinci of the American avant-garde.”

Klüver’s supervisor, John R. Pierce, was tolerant and even encouraging of his activities. Pierce had his own creative bent, writing science fiction in his spare time and collaborating with fellow Bell engineer Max Mathews to create computer-generated music. Meanwhile, Bell Labs, buoyed by the economic prosperity of the 1960s, supported a small coterie of artists-in-residence, including Nam June Paik, Lillian Schwartz, and Stan VanDerBeek.

In time, Klüver devised more ambitious projects. For his 1966 orchestration of 9 Evenings: Theatre and Engineering, nearly three dozen engineering colleagues worked with artists to build wireless radio transmitters, carts that floated on cushions of air, an infrared television system, and other electronics. Held at New York City’s 69th Regiment Armory—which in 1913 had hosted a pathbreaking exhibition of modern art—9 Evenings expressed a new creative culture in which artists and engineers collaborated.

In the midst of organizing 9 Evenings, Klüver, along with artists Rauschenberg and Robert Whitman and Bell Labs engineer Fred Waldhauer, founded Experiments in Art and Technology. By the end of 1967, more than a thousand artists and technical experts had joined. And a year later, E.A.T. had scored the commission to create the Pepsi Pavilion.

From the start, E.A.T. envisioned the pavilion as a multimedia environment that would offer a flexible, personalized experience for each visitor and that would express irreverent, uncommercial, and antiauthoritarian values.

But reaching consensus on how to realize that vision took months of debate and argument. Breer wanted to include his slow-moving cybernetic “floats”—large, rounded, self-driving sculptures powered by car batteries. Whitman was becoming intrigued with lasers and visual perception, and felt there should be a place for that. Forrest “Frosty” Myers argued for an outdoor light installation using searchlights, his focus at the time. Experimental composer David Tudor imagined a sophisticated sound system that would transform the Pepsi Pavilion into both recording studio and instrument.

“We’re all painters,” Klüver recalled Rauschenberg saying, “so let’s do something nonpainterly.” Rauschenberg’s attempt to break the stalemate prompted a further flood of suggestions. How about creating areas where the temperature changed? Or pods that functioned as anechoic chambers—small spaces of total silence? Maybe the floor could have rear-screen projections that gave visitors the impression of walking over flames, clouds, or swimming fish. Perhaps wind tunnels and waterfalls could surround the entrances.

Eventually, Klüver herded his fellow E.A.T. members into agreeing to an eclectic set of tech-driven pieces. The pavilion building itself was a white, elongated geodesic dome, which E.A.T. detested and did its best to obscure. And so a visitor approaching the finished pavilion encountered not the building but a veil of artificial fog that completely enshrouded the structure. At night, the fog was dramatically lit and framed by high-intensity xenon lights designed by Myers.

On the outdoor terrace, Breer’s white floats rolled about autonomously like large bubbles, emitting soft sounds—speech, music, the sound of sawing wood—and gently reversing themselves when they bumped into something. Steps led downward into a darkened tunnel, where visitors were greeted by a Japanese hostess wearing a futuristic red dress and bell-shaped hat and handed a clear plastic wireless handset. Stepping farther into the tunnel, they would be showered with red, green, yellow, and blue light patterns from a krypton laser system, courtesy of Whitman.

Ascending into the main pavilion, the visitors’ attention would be drawn immediately upward, where their reflections off the huge spherical mirror made it appear that they were floating in space. The dome also created auditory illusions, as echoes and reverberations toyed with people’s sense of acoustic reality. The floors of the circular room sloped gently upward to the center, where a glass insert in the floor allowed visitors to peer down into the entrance tunnel with its laser lights. Other parts of the floor were covered in different materials and textures—stone, wood, carpet. As the visitor moved around, the handset delivered a changing array of sounds. While a viewer stood on the patch of plastic grass, for example, loop antennas embedded in the floor might trigger the sound of birds or a lawn mower.

The experience was deeply personal: You could wander about at your own pace, in any direction, and compose your own trippy sensory experience.

To pull off such a feat of techno-art required an extraordinary amount of engineering. The mirror dome alone took months to design and build. E.A.T. viewed the mirror as, in Frosty Myers’s words, the “key to the whole Pavilion,” and it dictated much of what was planned for the interior. The research and testing for the mirror largely fell to members of E.A.T.’s Los Angeles chapter, led by Elsa Garmire. The physicist had done her graduate work at MIT with laser pioneer Charles Townes and then accepted a postdoc in electrical engineering at Caltech. But Garmire found the environment for women at Caltech unsatisfying, and she began to consider the melding of art and engineering as an alternate career path.

After experimenting with different ideas, Garmire and her colleagues designed a mirror modeled after the Mylar balloon satellites launched by NASA. A vacuum would hold the mirror’s Mylar lining in place, while a rigid outer shell held in the vacuum. E.A.T. unveiled a full-scale prototype of the mirror in September 1969 in a hangar at a Marine Corps airbase. It was built by G.T. Schjeldahl Co., the Minnesota-based company responsible for NASA’s Echo and PAGEOS [PDF] balloon satellites. Gene Youngblood, a columnist for an underground newspaper, found himself mesmerized when he ventured inside the “giant womb-mirror” for the first time. “I’ve never seen anything so spectacular, so transcendentally surrealistic.… The effect is mind-shattering,” he wrote. What you saw depended on the ambient lighting and where you were standing, and so the dome fulfilled E.A.T.’s goal of providing each visitor with a unique, interactive experience. Such effects didn’t come cheap: By the time Expo ’70 started, the cost of the pavilion’s silver lining came to almost $250,000.

An even more visually striking feature of the pavilion was its exterior fog. Ethereal in appearance, it required considerable real-world engineering to execute. This effort was led by Japanese artist Fujiko Nakaya, who had met Klüver in 1966 in New York City, where she was then working. Born in 1933 on the northern island of Hokkaido, she was the daughter of Ukichiro Nakaya, a Japanese physicist famous for his studies of snow crystals. When E.A.T. got the Pepsi commission, Klüver asked Fujiko to explore options for enshrouding the pavilion in clouds.

Nakaya’s aim was to produce a “dense, bubbling fog,” as she wrote in 1972, for a person “to walk in, to feel and smell, and disappear in.” She set up meteorological instruments at the pavilion site to collect baseline temperature, wind, and humidity data. She also discussed several ways of generating fog with scientists in Japan. One idea they considered was dry ice. Solid chunks of carbon dioxide mixed with water or steam could indeed make a thick mist. But the expo’s health officials ruled out the plan, claiming the massive release of CO2 would attract mosquitoes.

Eventually, Nakaya decided that her fog would be generated out of pure water. For help, she turned to Thomas R. Mee, a physicist in the Pasadena area whom Elsa Garmire knew. Mee had just started his own company to make instruments for weather monitoring. He had never heard of Klüver or E.A.T., but he knew of Nakaya’s father’s pioneering research on snow.

Mee and Nakaya figured out how to create fog by spraying the water under high pressure through copper lines fitted with very narrow nozzles. The lines hugged the edges of the geodesic structure, and the 2,500 or so nozzles atomized some 41,600 liters of water an hour. The pure white fog spilled over the structure’s angled and faceted roof and drifted gently over the fairground. Breer compared it to the clouds found in Edo-period Japanese landscape paintings.
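The scale of the system is easier to grasp by unpacking the quoted figures. The nozzle count and total flow come from the article; the conversions below are plain arithmetic.

```python
# Unpacking the fog system's flow figures (numbers from the article;
# the conversions are plain arithmetic).
nozzles = 2500
liters_per_hour = 41_600

per_nozzle_per_hour = liters_per_hour / nozzles   # ≈ 16.6 L/hour per nozzle
per_second_total = liters_per_hour / 3600         # ≈ 11.6 L of water each second
```

Each nozzle, in other words, atomized only a bucket or two of water per hour, but collectively the array pushed out more than 11 liters every second.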

While the fog and mirrored dome were the pavilion’s most obvious features, hidden away in a control room sat an elaborate computerized sound system.

Designed by Tudor, the system could accept signal inputs from 32 sources, which could be modified, amplified, and toggled among 37 speakers. The sources could be set to one of three modes: “line sound,” in which the sound switched rapidly from speaker to speaker in a particular pattern; “point sound,” in which the sound emanated from one speaker; and “immersion” or “environmental” mode, where the sound seemed to come from all directions. “The listener would have the impression that the sound was somehow embodied in a vehicle that was flying about him at varying speeds,” Tudor explained.
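The three modes amount to three routing policies over the speaker array. The sketch below is an assumed schematic of that dispatch, not a reconstruction of Tudor's actual switching hardware; the speaker count matches the article, and everything else is invented for illustration.

```python
# Schematic dispatch for the three sound-routing modes described above
# (an assumed illustration, not Tudor's actual switching system).

def active_speakers(mode, step, n_speakers=37, point=0):
    """Return the set of speaker indices carrying a source at this time step."""
    if mode == "line":        # sound hops rapidly from speaker to speaker
        return {step % n_speakers}
    if mode == "point":       # sound sits at a single fixed speaker
        return {point}
    if mode == "immersion":   # sound surrounds the listener from all sides
        return set(range(n_speakers))
    raise ValueError(f"unknown mode: {mode}")
```

Stepping the "line" mode quickly through successive indices is what produced Tudor's impression of a sound "flying about" the listener, while "immersion" lit up the whole array at once.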

The audio system also served as an experimental lab. Much as researchers might book time on a particle accelerator or a telescope, E.A.T. invited “resident programmers” to apply to spend several weeks in Osaka exploring the pavilion’s potential as an artistic instrument. The programmers would have access to a library of several hundred “natural environmental sounds” as well as longer recordings that Tudor and his colleagues had prepared. These included bird calls, whale songs, heartbeats, traffic noises, foghorns, tugboats, and ocean liners. Applicants were encouraged to create “experiences that tend toward the real rather than the philosophical.” Perhaps in deference to its patron’s conservatism, E.A.T. specified it was “not interested in political or social comment.”

In sharp contrast to E.A.T.’s sensibilities, Pepsi executives didn’t view the pavilion as an experiment or even a work of art but rather as a product they had paid for. Eventually, they decided that they were not well pleased by what E.A.T. had delivered. On 20 April 1970, little more than a month after the pavilion opened to the public, Pepsi informed Klüver that E.A.T.’s services were no longer needed. E.A.T. staff who had remained in Osaka to operate the pavilion smuggled the audio tapes out, leaving Pepsi to play a repetitive and banal soundtrack inside its avant-garde building for the remaining months of the expo.

Despite E.A.T.’s abrupt ouster, many critics responded favorably to the pavilion. A Newsweek critic called it “an electronic cathedral in the shape of a geodesic dome,” neither “fine art nor engineering but a true synthesis.” Another critic christened the pavilion a “total work of art”—a Gesamtkunstwerk—in which the aesthetic and technological, human and organic, and mechanical and electric were united.

In hindsight, the Pepsi Pavilion was really the apogee for the art-and-technology movement that burst forth in the mid-1960s. This first wave did not last. Some critics contended that in creating corporate-sponsored large-scale collaborations like the pavilion, artists compromised themselves aesthetically and ethically—“freeload[ing] at the trough of that techno-fascism that had inspired them,” as one incensed observer wrote. By the mid-1970s, such expensive and elaborate projects had become as discredited and out of fashion as moon landings.

Nonetheless, for many E.A.T. members, the Pepsi Pavilion left a lasting mark. Elsa Garmire’s artistic experimentation with lasers led to her cofounding a company, Laser Images, which built equipment for laser light shows. Riffing on the popularity of planetarium shows, the company named its product the “laserium,” which soon became a pop-culture fixture.

Meanwhile, Garmire shifted her professional energies back to science. After leaving Caltech for the University of Southern California, she went on to have an exceptionally successful career in laser physics. She served as engineering dean at Dartmouth College and president of the Optical Society of America. Years later, Garmire said that working with artists influenced her interactions with students, especially when it came to cultivating a sense of play.

After Expo ’70 ended, Mee filed for a U.S. patent to cover an “Environmental Control Method and Apparatus” derived from his pavilion work. As his company, Mee Industries, grew, he continued his collaborations with Nakaya. Even after Mee’s death in 1998, his company contributed hardware to installations Nakaya designed for the Guggenheim Museum in Bilbao, Spain. More recently, her Fog Bridge [PDF] was integrated into the Exploratorium building in San Francisco.

Billy Klüver insisted that the success of his organization would ultimately be judged by the degree to which it became redundant. By that measure, E.A.T. was indeed a success, even if events didn’t unfold quite the way he imagined. At universities in the United States and Europe, dozens of programs now explore the intersections of art, technology, engineering, and design. It’s common these days to find tech-infused art in museum collections and adorning public spaces. Events like Burning Man and its many imitators continue to explore the experimental edges of art and technology—and to emphasize the process over the product.

And that may be the legacy of the pavilion and of E.A.T.: They revealed that engineers and artists could forge a common creative culture. Far from being worlds apart, their communities share values of entrepreneurship, adaptability, and above all, the collective desire to make something beautiful.

This article appears in the March 2020 print issue as “Big in Japan.”

Fun—and Uranium—for the Whole Family in This 1950s Science Kit

Post Syndicated from Allison Marsh original https://spectrum.ieee.org/tech-history/space-age/fun-and-uranium-for-the-whole-family-in-this-1950s-science-kit

“Users should not take ore samples out of their jars, for they tend to flake and crumble and you would run the risk of having radioactive ore spread out in your laboratory.” Such was the warning that came with the Gilbert U-238 Atomic Energy Lab, a 1950s science kit that included four small jars of actual uranium. Budding young nuclear scientists were encouraged to use the enclosed instruments to measure the samples’ radioactivity, observe radioactive decay, and even go prospecting for radioactive ores. Yes, the Gilbert company definitely intended for kids to try this at home. And so the company’s warning was couched not in terms of health risk but rather as bad scientific practice: Removing the ore from its jar would raise the background radiation, thereby invalidating your experimental results.

The Gilbert U-238 Atomic Energy Lab put a positive spin on radioactivity

The A.C. Gilbert Co., founded in 1909 as the Mysto Manufacturing Co., was already a leader in toys designed to inspire interest in science and engineering. Founder Alfred Carlton Gilbert’s first hit was the Erector Set, which he introduced in 1913. In the early 1920s, the company sold vacuum tubes and radio receivers until Westinghouse Electric cried patent infringement. Beginning in 1922, A.C. Gilbert began selling chemistry sets.

When the Atomic Energy Lab hit the market in 1950, it was one of the most elaborate science kits available. In addition to uranium, it had beta-alpha, beta, and gamma radiation sources. It contained a cloud chamber, a spinthariscope (a simple device for watching atoms decay), an electroscope, and a Geiger counter, as well as a 60-page instruction book and a guide to mining uranium.

Also included in every kit was Learn How Dagwood Splits the Atom! Part comic book, part educational manual, it used the popular comic strip characters Blondie and Dagwood Bumstead, as well as their children, dog, and friends, to explain the basics of atomic energy. In the tale, they all shrink to the size of atoms while Mandrake the Magician, another popular comic strip hero of the day, supervises the experiment and explains how to split an atom of uranium-235.

Despite the incongruity of a magician explaining science, the booklet was prepared with expert advice. Published in 1949 by King Features Syndicate, it featured Leslie R. Groves (director of the Manhattan Project) and John R. Dunning (a physicist who verified fission of the uranium atom) as consultants.

Groves’s opening statement encourages the pursuit of truth, facts, and knowledge. He strives to allay readers’ fears about atomic energy and encourages them to see how it can be used for peacetime pursuits. The journalist Bob Considine, who covered the atomic bomb tests at Bikini, likewise dwells on the positive possibilities of nuclear energy and the availability of careers in the field.

Alas, fewer than 5,000 of the Gilbert kits were sold, and the kit remained on the market only until 1951. The lackluster sales may have been due to the eye-popping price: US $49.50, or about $500 today. Two competing sets, from the Porter Chemical Company, also contained uranium ore and were advertised as having atomic energy components, but they retailed for $10 and $25.

Starting in the 1960s, toy safety became a concern

Parents today might be baffled that products containing radioactive elements were ever marketed to children. At the time, however, the radioactivity wasn’t considered a flaw. The inside cover of the Atomic Energy Lab proclaimed the product “Safe!”

But it’s also true that in the 1950s few consumer protection laws regulated the safety of toys in the United States. Instead, toy manufacturers responded to trends in popular opinion and consumer taste, which had been pro-science since World War II.

Those attitudes began to change in the 1960s. Books such as Rachel Carson’s Silent Spring (1962, Houghton Mifflin) raised concerns about how chemicals were harming the environment, and the U.S. Congress began investigating whether toy manufacturers were providing adequate safeguards for children.

Beginning with the passage of the 1960 Federal Hazardous Substances Labeling Act [PDF], all products sold in the United States that contained toxic, corrosive, or flammable ingredients had to include warning labels. Additionally, any product that could be an irritant or a sensitizer, or that could generate pressure when heated or decomposed, had to be labeled a “hazardous substance.”

More far-reaching was the 1966 Child Protection Act, which allowed the U.S. Secretary of Health, Education, and Welfare to ban the sale of toys that contained hazardous substances. Because of its limited definition of “hazardous substance,” however, the act did not regulate electrical, mechanical, or thermal hazards. The 1969 Child Protection and Toy Safety Act closed these loopholes. And the Toxic Substances Control Act of 1976 banned some chemicals outright and strictly controlled the quantities of others.

Clearly, makers of chemistry sets and other scientific toys were being put on notice.

Did the rise of product safety laws inadvertently undermine science toys?

What was an ostensible win for child safety was a loss for science education. Chemistry sets were radically simplified, and the substances they contained were either diluted or eliminated. In-depth instruction booklets became brief pamphlets offering only basic, innocuous experiments. The A.C. Gilbert Co., which struggled after the death of its founder in 1961, finally went bankrupt in 1967.

The U.S. Consumer Product Safety Commission, established in 1972, continues to police the toy market. Toys get recalled for high levels of arsenic, lead, or other harmful substances, for being too flammable, or for containing parts small enough to choke on.

And so in 2001, the commission reported the recall of Professor Wacko’s Exothermic Exuberance chemistry kit. As you might expect from the product’s name, there was a fire risk. The kit included glycerin and potassium permanganate, which ignite when mixed. (This chemical combo is also the basis of the popular—at least on some university campuses—burning book experiment.) A controlled fire is one thing, but in Professor Wacko’s case the bottles had interchangeable lids. If the lids, which might contain residual chemicals, were accidentally switched, the set could be put away without the user realizing that a reaction was brewing. Several house fires resulted.

Another recalled science toy was 2007’s CSI Fingerprint Examination Kit, based on the hit television show. Children, pretending to be crime-scene investigators, dusted for fingerprints. Unfortunately, the fingerprint powder contained up to 5 percent asbestos, which can cause serious lung ailments if inhaled.

In comparison, the risk from the uranium-238 in Gilbert’s U-238 Atomic Energy Lab was minimal, roughly equivalent to a day’s UV exposure from the sun. And the kit had the beneficial effect of teaching that radioactivity is a naturally occurring phenomenon. Bananas are mildly radioactive, after all, as are Brazil nuts and lima beans. To be sure, experts don’t recommend ingesting uranium or carrying it around in your pocket for extended periods of time. Perhaps it was too much to expect that every kid would abide by the kit’s clear warning. But despite sometimes being called the “most dangerous toy in the world,” Gilbert’s U-238 Atomic Energy Lab was unlikely to have ever produced a glowing child.

An abridged version of this article appears in the February 2020 print issue as “Fun With Uranium!”


The Hidden Figures Behind Bletchley Park’s Code-Breaking Colossus

Post Syndicated from Allison Marsh original https://spectrum.ieee.org/tech-history/dawn-of-electronics/the-hidden-figures-behind-bletchley-parks-codebreaking-colossus

“If anyone asked us what we did, we were to say that we…did secretarial work.” That’s how Eleanor Ireland described the secrecy surrounding her years at Bletchley Park. Ireland was decidedly not a secretary, but there was good reason for the subterfuge.

Ireland was one of 273 women recruited during World War II to operate Bletchley Park’s Colossus machines, which were custom built to help decrypt German messages that had been encoded using the sophisticated Lorenz cipher machines. (Bletchley Park’s more famous code-breaking effort, pioneered by Alan Turing, involved breaking the ciphers of the simpler Enigma machines.) Because of the intense, high-stakes, highly classified work, the women were all required to sign statements under the Official Secrets Act. And so their contributions, and Colossus itself, remained state secrets for decades after the end of the war.

In 1975, the U.K. government began slowly declassifying the project, starting with the release of some photos. The historian Brian Randell, who had been lobbying the government to declassify Colossus, was given permission to interview engineers involved in the project. He was also allowed to write a paper about their work, but without discussing the code-breaking aspects. Randell presented his findings at a conference in Los Alamos, N.M., in June 1976.

In 1983, Tommy Flowers, the electrical engineer chiefly responsible for designing the machines, was permitted to write about Colossus, again without disclosing details about what Colossus was used for. In 1987, IEEE Spectrum’s Glenn Zorpette wrote one of the first journalistic accounts of the code-breaking effort [see “Breaking the Code,” September 1987]. It wasn’t until 1996, when the U.S. government declassified its own documents from Bletchley Park, that the women’s story finally started to emerge.

Beginning in 1943, a group of Wrens—members of the Women’s Royal Naval Service—who either excelled at mathematics or had received strong recommendations were told to report to Bletchley Park. This Victorian mansion and estate about 80 kilometers northwest of London was home to the Government Code and Cipher School.

There, the Wrens received training in binary math, the teleprinter alphabet, and how to read machine punch tapes. Max Newman, head of the section responsible for devising mechanized methods of code breaking, initially led these tutorials. After two weeks, the women were tested on their knowledge and placed into jobs accordingly. Eleanor Ireland landed the plum assignment of Colossus operator.

Colossus was the first programmable electronic digital computer, predating ENIAC by two years. Tommy Flowers, who worked on switching electronics at the Post Office Research Station in Dollis Hill, designed the machine to help decipher the encrypted messages that the Nazi high command sent by radioteleprinter. The Germans called their radioteleprinter equipment Sägefisch, or sawfish, reportedly because of the radio signals’ sawtooth wave. The British accordingly nicknamed the German coded messages “fish,” and the cipher that Colossus was designed to break became “Tunny,” short for tuna fish.

The Tunny machine was known to the Germans as the Lorenz SZ40, and it operated as an attachment to a standard teleprinter. Like the Enigma, the Lorenz had a set of wheels that encrypted the message. But where the Enigma had three wheels, the Lorenz had 12. Because of the Lorenz’s significantly stronger encryption, the Germans used it for their highest-level messages, such as those sent by Hitler to his generals.

The Bletchley Park code-breakers figured out how to break the Tunny codes without ever having seen a Lorenz. Each of the 12 wheels was imprinted with a different number of two-digit numerals. The code breakers discovered that the wheels consisted of two groups of five—which they called the psi wheels and the chi wheels—plus two motor, or mu, wheels. The chi wheels moved forward in unison with each letter of a message. The psi wheels advanced irregularly based on the position of the mu wheels. Each letter of a message was the sum of the letters—that is, the sum of the numbers representing the letters—generated by the chi and psi wheels.
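The two-layer wheel scheme described above amounts to an additive (modulo-2, i.e., XOR) keystream applied to 5-bit teleprinter characters. The sketch below is a minimal illustration of that structure only; the wheel patterns, wheel lengths, and the single motor stream are arbitrary stand-ins, not real Lorenz settings (the actual machine had 12 wheels with specific cam counts).

```python
# Simplified Tunny-style keystream: five chi wheels step every character,
# five psi wheels step only when the motor (mu) stream says so.
# All patterns below are illustrative, NOT real Lorenz wheel settings.

def tunny_keystream(chi, psi, motor, n):
    """Yield n 5-bit key characters from simplified chi/psi/motor wheels."""
    chi_pos, psi_pos = [0] * 5, [0] * 5
    for step in range(n):
        key = 0
        for i in range(5):
            bit = chi[i][chi_pos[i]] ^ psi[i][psi_pos[i]]
            key |= bit << i
        yield key
        for i in range(5):                      # chi wheels always advance
            chi_pos[i] = (chi_pos[i] + 1) % len(chi[i])
        if motor[step % len(motor)]:            # psi wheels advance irregularly
            for i in range(5):
                psi_pos[i] = (psi_pos[i] + 1) % len(psi[i])

chi = [[1, 0, 1], [0, 1, 1], [1, 1, 0], [0, 0, 1], [1, 0, 0]]
psi = [[0, 1], [1, 0], [1, 1], [0, 1], [1, 0]]
motor = [1, 0, 1, 1]
key = list(tunny_keystream(chi, psi, motor, 8))

# Enciphering is plaintext XOR key; because XOR is self-inverse,
# applying the same key again recovers the plaintext.
plaintext = [3, 9, 31, 0, 17, 8, 25, 5]        # illustrative 5-bit codes
ciphertext = [p ^ k for p, k in zip(plaintext, key)]
assert [c ^ k for c, k in zip(ciphertext, key)] == plaintext
```

The self-inverse property is what made recovering the wheel settings so valuable: with the correct key stream, decryption is the same cheap operation as encryption.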

The initial function of Colossus was to help determine the starting point of the wheels. Colossus read the cipher’s stream of characters and counted the frequency of each character. Cryptographers then compared the results to the frequency of letter distribution in the German language and to a sample chi-wheel combination, continually refining the chi-wheel settings until they found the optimal one.

Eventually, there were 10 Colossi operating around the clock at Bletchley Park. These room-size machines, filled with banks of vacuum tubes, switches, and whirring tape, were impressive to behold. The official government account of the project, the 1945 General Report on Tunny, used words such as “fantastic,” “uncanny,” and “wizardry” to describe Colossus, creating a mental image of a mythic machine.

But the actual task of operating Colossus was tedious, time-consuming, and stressful. Before the machine could even start crunching data, the punched paper tapes that stored the information had to be joined into a loop. The Wrens experimented with various glues and applications until they arrived at the ones that worked best given the speed, heat, and tension of the tape as it ran through the machine. Dorothy Du Boisson described the process as the art of using just the right amount of glue, French chalk, and a warm clamp to make a proper joint.

The operator then had to feed the tape through a small gate in front of the machine’s photoelectric reader, adjusting the tape’s tautness using a series of pulleys. Getting the right tension was tricky. Too tight and the tape might break; too loose and it would slip in the machine. Either meant losing valuable time. Colossus read the tape at thousands of characters per second, and each tape run took approximately an hour.

The cryptographers at Bletchley Park decided which patterns to run, and the Wrens entered the desired programming into Colossus using switches and telephone-exchange plugs and cords. They called this pegging a wheel pattern. Ireland recalled getting an electric shock every time she put in a plug.

During the first three months of the Colossus program, many of the Wrens suffered from exhaustion and malnutrition, and their living conditions were far from enviable. The women bunked four to a room in the cold and dreary servant quarters of nearby Woburn Abbey. Catherine Caughey reported that the abbey’s plumbing couldn’t keep up.

The rooms that housed the Colossi were, by contrast, constantly overheated. The vacuum tubes on the machines gave off the equivalent of a hundred electric heaters. Whenever a Wren got too hot or sleepy, she would step outside the bunkers to splash water on her face. Male colleagues suggested that the women go topless. They declined.

The Wrens worked in 8-hour shifts around the clock. They rotated through a week of day shifts, a week of evenings, and a week of nights with one weekend off every month. Women on different shifts were often assigned to the same dorm room, the comings and goings disrupting their sleep.

Those in charge of the Wrens at Woburn Abbey didn’t know what the women were doing, so they still required everyone to participate in squad-drill training every day. Only after women started fainting during the exercises were a few improvements made. Women in the same shift began rooming together. Those working the night shift were served a light breakfast before starting, rather than reheated leftovers from supper.

During their time at Bletchley Park, the computer operators knew very little about their successful contribution to the war effort.

Having signed the Official Secrets Act, none of the 273 women who operated the Colossi could speak of their work after the war. Most of the machines were destroyed, and Tommy Flowers was ordered to burn the designs and operating instructions. As a result, for decades the history of computing was missing an important first.

Beginning in the early 1990s, Tony Sale, an engineer and curator at the Science Museum, London, began to re-create a Colossus, with the help of some volunteers. They were motivated by intellectual curiosity as well as a bit of national pride. For years, U.S. computer scientists had touted ENIAC as the first electronic computer. Sale wanted to have a working Colossus up and running before the 50th anniversary of ENIAC’s dedication in 1996.

On 6 June 1996, the Duke of Kent switched on a basic working Colossus at Bletchley Park. Sale’s machine is still on view in the Colossus Gallery at the National Museum of Computing on the Bletchley estate, which is open every day to the public.

When the British government finally released the 500-page General Report on Tunny in 2000, the story of Colossus could be told in full. Jack Copeland captures both the technical detail and the personal stories in his 2006 book Colossus: The Secrets of Bletchley Park’s Codebreaking Computers (Oxford University Press), which he wrote in collaboration with Flowers and a number of Bletchley Park code breakers and computer historians.

And what of the women computer operators? Their stories have been slower to be integrated into the historical narrative, but historians such as Janet Abbate and Mar Hicks are leading the way. Beginning in 2001, Abbate led an oral history project interviewing 52 women pioneers in computing, including Eleanor Ireland. These interviews became the basis for Abbate’s 2012 book Recoding Gender: Women’s Changing Participation in Computing (MIT Press).

In 2017, Hicks published Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing (MIT Press). In it she traces women’s work in the burgeoning computer field before World War II through the profession’s gender flip in the 1960s. The book documents the industry’s systematic gender discrimination, which is still felt today.

As for the computer operators themselves, Ireland took advantage of the lifting of the classification to write an essay about Colossus and the fellowship of the Wrens: “When we meet, as we do in recent years every September, we all agree that those were our finest hours.”

An abridged version of this article appears in the January 2020 print issue as “The Hidden Figures of Colossus.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

The Crazy Story of How Soviet Russia Bugged an American Embassy’s Typewriters

Post Syndicated from Robert W. Lucky original https://spectrum.ieee.org/tech-history/silicon-revolution/the-crazy-story-of-how-soviet-russia-bugged-an-american-embassys-typewriters

Every engineer has stories of bugs that they discovered through clever detective work. But such exploits are seldom of interest to other engineers, let alone the general public. Nonetheless, a recent book authored by Eric Haseltine, titled The Spy in Moscow Station (Macmillan, 2019), is a true story of bug hunting that should be of interest to all. It recounts a lengthy struggle by Charles Gandy, an electrical engineer at the United States’ National Security Agency, to uncover an elaborate and ingenious scheme by Soviet engineers to intercept communications in the American embassy in Moscow. (I should say that, by coincidence, both Haseltine and Gandy are friends of mine.)

This was during the Cold War in the late 1970s. American spies were being arrested, and how they were being identified was a matter of great concern to U.S. intelligence. The first break came with the accidental discovery of a false chimney cavity at the Moscow embassy. Inside the chimney was an unusual Yagi-style antenna that could be raised and lowered with pulleys. The antenna had three active elements, each tuned to a different wavelength. What was the purpose of this antenna, and what transmitters was it listening to?

Gandy pursued these questions for years, not only baffled by the technology, but buffeted by interagency disputes and hampered by the Soviet KGB. At one point he was issued a “cease and desist” letter by the CIA, which, along with the State Department, had authority over security at the embassy. These agencies were not persuaded that there were any transmitters to be found: Regular scans for emissions from bugs showed nothing.

It was only when Gandy got a letter authorizing his investigation from President Ronald Reagan that he was able to take decisive action. All of the electronics at the embassy—some 10 tons of equipment—was securely shipped back to the United States. Every piece was disassembled and X-rayed.

After tens of thousands of fruitless X-rays, a technician noticed a small coil of wire inside the on/off switch of an IBM Selectric typewriter. Gandy believed that this coil was acting as a step-down transformer to supply lower-voltage power to something within the typewriter. Eventually he uncovered a series of modifications that had been concealed so expertly that they had previously defied detection.

A solid aluminum bar, part of the structural support of the typewriter, had been replaced with one that looked identical but was hollow. Inside the cavity were a circuit board and six magnetometers. The magnetometers sensed the movements of tiny magnets that had been embedded in the interposers that moved the typing “golf ball” into position for striking a given letter.

Other components of the typewriters, such as springs and screws, had been repurposed to deliver power to the hidden circuits and to act as antennas. Keystroke information was stored and sent in encrypted burst transmissions that hopped across multiple frequencies.

Perhaps most interesting, the transmissions were at a low power level in a narrow frequency band that was occupied by intermodulation overtones of powerful Soviet TV stations. The TV signals would swamp the illicit transmissions and mask them from detection by embassy security scans, but the clever design of the mystery antenna and associated electronic filtering let the Soviets extract the keystroke signals.

When all had been discovered, Haseltine recounts how Gandy sat back and felt an emotion—a kinship with the Soviet engineers who had designed this ingenious system. This is the same kinship I feel whenever I come across some particularly innovative design, whether by a colleague or competitor. It is the moment when a technology transcends known limits, when the impossible becomes the doable. Gandy and his unknown Soviet opponents were working with 1970s technology. Imagine what limits will be transcended tomorrow!

This article appears in the January 2020 print issue as “The Ingenuity of Spies.”

Hasbro’s Classic Game Operation Was Sparked by a Grad Student’s Electric Idea

Post Syndicated from Allison Marsh original https://spectrum.ieee.org/tech-history/space-age/hasbros-classic-game-operation-was-sparked-by-a-grad-students-electric-idea

Cavity Sam, the cartoon patient in the board game Operation, suffers from an array of anatomically questionable ailments: writer’s cramp (represented by a tiny plastic pencil), water on the knee (a bucket of water), butterflies in the stomach (you get the idea). Each player takes a turn as Sam’s doctor, using a pair of tweezers to try to remove the plastic piece for each ailment. Dexterity is key. If the tweezers touch the side of the opening, it closes a circuit, causing the red bulb that is Sam’s nose to light up and a buzzer to sound. Your turn is then over. The game’s main flaw, at least from the patient’s perspective, is that it’s more fun to lose your turn than to play perfectly.

John Spinello created the initial concept for what became Operation in the early 1960s, when he was an industrial design student at the University of Illinois. Spinello’s game, called Death Valley, didn’t feature a patient, but rather a character lost in the desert. His canteen drained by a bullet hole, he wanders through ridiculous hazards in search of water. Players moved around the board, inserting their game piece—a metal probe—into holes of various sizes. The probe had to go in cleanly without touching the sides; otherwise it would complete a circuit and sound a buzzer. Spinello’s professor gave him an A.

Spinello sold the idea to Marvin Glass and Associates, a Chicago-based toy design company, for US $500, his name on the U.S. patent (3,333,846), and the promise of a job, which never materialized.

Mel Taft, a game designer at Milton Bradley, saw a prototype of Death Valley and thought it had potential. His team tinkered with the idea but decided it would be more interesting if the players had to remove an object rather than insert a probe. They created a surgery-themed game, and Operation was born.

The game debuted in 1965, and the English-language version has remained virtually the same for decades. (A 2013 attempt to make Cavity Sam thinner and younger and the game pieces larger and easier to extract did not go over well with fans.) Variations of Operation have featured cartoon characters other than Cavity Sam, such as Homer Simpson, from whose body the player extracted donuts and pretzels, and Star Wars’ Chewbacca, who’s been infested with Porgs and other “hair hazards.” International editions are available in French, German, Italian, and Portuguese/Spanish. The global franchise for all this electrified silliness has generated an estimated $40 million in sales over the years.

Such complete-the-circuit games actually date back to the mid-1700s. Benjamin Franklin designed a game called Treason, which involved removing a gilt crown from a picture of King George II without touching the frame. And in a popular carnival game, contestants had to guide an electrified wire loop along a twisted rod without touching it.

Spinello was not the first to patent such a game. In 1957, Walter Goldfinger and Seymour Beckler received U.S. patent 2,808,263 for a portable electric game that simulated a golf course and its hazards. They in turn cited John Braund’s 1950 patent for a simulated baseball game and John Archer Smith and Victor Merrow’s 1946 patent for a game involving steering vehicles around a board.

A few years after Spinello filed for his U.S. patent, but two months before it was granted, an inventor named Narayan Patel filed for a remarkably similar game that also called for inserting a metal probe between electrified plates. Patel outlined four themed games based on this setup, one of which was aimed at adult partiers. He called his amusement and dexterity test “How Much Do You Drink?” with ranges from “Never heard of alcohol” to “Brother, you are dead.”

But if your main goal was to get inebriated while testing your fine motor skills, you didn’t really need a dedicated game. Some players simply adapted their own drinking rules to Operation. Needless to say, if you didn’t start the game with the skilled hands of a surgeon, a few alcoholic beverages were unlikely to help.

An abridged version of this article appears in the December 2019 print issue as “A Game With Buzz.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

The First Transatlantic Telegraph Cable Was a Bold, Beautiful Failure

Post Syndicated from Allison Marsh original https://spectrum.ieee.org/tech-history/heroic-failures/the-first-transatlantic-telegraph-cable-was-a-bold-beautiful-failure

On 16 August 1858, Queen Victoria and U.S. president James Buchanan exchanged telegraphic pleasantries, inaugurating the first transatlantic cable connecting British North America to Ireland. It wasn’t exactly instant messaging: The queen’s 98-word greeting of goodwill took almost 16 hours to send through the 3,200-kilometer cable. Still, compared to packet steamships, which could take 10 days to cross the Atlantic, the cable promised a tremendous improvement in speed for urgent communications.

This milestone in telegraphy had been a long time coming. Samuel Morse first suggested linking the two continents in 1840, and various attempts were made over the ensuing years. Progress on the project took off in the mid-1850s, when U.S. entrepreneur Cyrus W. Field began investing heavily in telegraphy.

Field had made his fortune in the paper industry by the age of 34. The first telegraph project he invested in was a link from St. John’s, Newfoundland, to New York City, as envisioned by Canadian engineer Frederic Newton Gisborne. The venture never secured enough funding, but Field’s enthusiasm for telegraphy was undiminished. Over the next decade, he invested his own money and rallied other inventors and investors to form several telegraph companies.

The most audacious of these was the Atlantic Telegraph Company (ATC). Field and the English engineers John Watkins Brett and Charles Tilston Bright, both specialists in submarine telegraphy, formed the company in 1856, with the goal of laying a transatlantic cable. The British and U.S. governments both agreed to subsidize the project.

Terrestrial telegraphy was by then well established, and several shorter submarine cables had been deployed in Europe and the United States. Still, the transatlantic cable’s great length posed some unique challenges, especially because transmission theory and cable design were still very much under debate.

Morse and British physicist Michael Faraday believed that the conducting wire of a submarine cable should be as narrow as possible, to limit retardation of the signal. And the wider the wire, the more electricity would be needed to charge it. Edward Orange Wildman Whitehouse, an electrician for the Atlantic Telegraph Company, subscribed to this view.

The other school of thought was represented by William Thomson (later Lord Kelvin). He argued that the amount of retardation was proportional to the square of the cable’s length. Thomson suggested using a large-diameter core made with the purest copper available in order to reduce the resistance. Bright, the project’s chief engineer, shared Thomson’s view. This design was significantly heavier and more costly than the one proposed by the Morse-Faraday school, and the ATC did not adopt it.
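The trade-off Thomson was weighing follows from the usual RC model of a long cable: with resistance and capacitance per unit length, the characteristic delay scales as R·C·L², so doubling the length quadruples the retardation, while a fatter, purer copper core cuts resistance and therefore delay. The per-kilometer values below are purely illustrative, not the 1858 cable's measured parameters.

```python
# Thomson's "law of squares" in miniature: for a cable with resistance r
# and capacitance c per kilometer, the signal retardation grows as
# tau ~ r * c * L**2. The numbers below are illustrative only.

def retardation(r_per_km: float, c_per_km: float, length_km: float) -> float:
    """Characteristic RC delay (seconds) for a cable of given length."""
    return r_per_km * c_per_km * length_km ** 2

# A narrow core (higher resistance) vs. Thomson's heavier, purer core:
thin = retardation(r_per_km=6.0, c_per_km=0.2e-6, length_km=3200)
thick = retardation(r_per_km=1.5, c_per_km=0.2e-6, length_km=3200)

# Same length, one-quarter the resistance -> one-quarter the delay;
# doubling the length would quadruple either figure.
assert abs(thin - 4 * thick) < 1e-9
```

On this model the Morse-Faraday preference for a narrow wire made the retardation worse, not better, which is why Thomson and Bright pushed for the heavier core.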

The Gutta Percha Co. manufactured the cable’s core and insulation. The core consisted of seven strands of copper wire twisted together to make a wire 0.083 inch in diameter. The finished core weighed 107 pounds per nautical mile, which was significantly lighter than the 392 pounds per nautical mile that Thomson and Bright had proposed. The copper core was wrapped in three layers of gutta-percha, a latex that comes from trees of the same name. The insulated core was then covered in tarred hemp and wrapped with iron wire. The finished cable was about five-eighths of an inch in diameter.

At the time, no ship could carry all of the submarine cable needed, so the cargo was split between two naval ships, the HMS Agamemnon and the USS Niagara, both of which were refitted to carry the load. It took three weeks to load the cable. Many spectators gathered to watch, while local officials and visiting dignitaries treated the naval officers to countless dinners and celebrations, much of which was recorded and amplified by the press.

Of course, two ships meant that the cables would have to be spliced together at some point. Once again, there was disagreement about how to proceed.

Bright argued for splicing the cable in midocean and then having each ship head in opposite directions, paying out its cable as it went. Whitehouse and the other electricians preferred to begin laying the cable in Ireland and splicing in the second half once the first half had been laid. This plan would allow continuous contact with the shore and ongoing testing of the cable’s signal. Bright’s plan had the advantage of halving the time to lay the cable, thus lessening the chance of encountering foul weather.

The directors initially chose Whitehouse’s plan. Niagara and Agamemnon met at Queenstown, Ireland, to test the cable with a temporary splice. After a successful transmission, the ships headed to Valentia Bay to begin their mission, escorted by the USS Susquehanna and the HMS Leopard. Also joining the fleet were the support ships HMS Advice, HMS Willing Mind, and HMS Cyclops.

On 5 August 1857, the expedition got under way. The first portion of cable to be laid was known as the shore cable: heavily reinforced line to guard against the strains of waves, currents, rocks, and anchors. But less than 5 miles out, the shore cable got caught in the machinery and broke. The fleet returned to port.

Willing Mind ran a thick rope to retrieve the broken shore cable, and crew members spliced it back to the shore cable on the Niagara. The fleet set out again. When they reached the end of the shore cable, the crews spliced it to the ocean cable and slowly lowered it to the ocean floor.

For the next few days, the cable laying proceeded. There was nearly continuous communication between Whitehouse on shore and Field, Morse, and Thomson on board, although Morse was incapacitated by seasickness much of the time.

The paying-out machinery was fiddly to operate. The cable was occasionally thrown off the wheel, and tar from the cable built up in the grooves and had to be cleaned off. To keep the cable paying out at a controlled rate required constant adjustment of the machinery’s brakes. The brakeman had to balance the ship’s speed and the ocean current. In perfect weather and flat seas, this was easy to judge. But weather can be fickle, and humans are fallible.

Around 3:45 a.m. on 11 August, the stern of Niagara headed into the trough of a wave. As the ship rose, the pressure on the cable increased. The brakes should have been released, but they weren’t. The cable broke and plunged to an irretrievable depth.

Field immediately headed to England aboard the Leopard to meet with the ATC’s board of directors. Niagara and Agamemnon remained at the site for a few days to practice splicing the cable from the two ships. Cyclops, which had done the initial survey of the route the previous year, conducted soundings of the site. When they returned to shore, the crews learned that the project had been halted for the year.

Over the winter months, William Everett was named chief engineer and set about redesigning the paying-out machinery with more attention to the braking mechanism and safety features. The crew practiced their maneuvers. Thomson thought more about transmission speed and developed his mirror galvanometer, an extremely sensitive instrument for detecting the faint currents arriving at the far end of a long cable.

The ships set out again the following summer. This time they would follow Bright’s plan. Agamemnon and Niagara would meet at 52°2’ N, 33°18’ W, the halfway point of the proposed line. In the middle of the Atlantic Ocean, they would splice together the cable and drop it to the ocean floor. Agamemnon would head east to Ireland, while Niagara headed west to Newfoundland.

Although the weather was fine when the ships set out, it soon turned. For six days, the two ships, laden with 1,500 tons of cable, pitched alarmingly from side to side. Although no one was lost, 45 men were injured, and Agamemnon ended up 200 miles off course.

Finally, on 25 June 1858 Agamemnon and Niagara met. The crews spliced together the cable, and the ships set off. At first, the two ships were able to communicate via the cable, but around 3:30 a.m. on 27 June, both logbooks recorded a failure. Because things looked fine on each ship, the crews assumed the problem was on the other end, and the ships returned to the rendezvous spot. Rather than waste time investigating, the crews abandoned the 100 km of cable that had already been laid, spliced together a new length, and set off once more.

By 29 June, Agamemnon had paid out almost all of the cable stored on deck, which meant the crew would have to switch to the main coil in the middle of the night. Although they had practiced the maneuver over the winter, luck was not on their side. Around midnight, the cable snapped and was lost. As it turned out, the six-day storm had damaged the cable as it lay on the deck. The two ships were hundreds of kilometers apart by this point, and they headed back to Queenstown to await further direction.

Field was not deterred, but it took some doing to convince the rest of the ATC’s board of directors to make another attempt. He could be a persuasive guy.

The ships set out for a third time on 17 July 1858. This time the cable laying progressed without incident, having been blessed, finally, by the weather gods. On 29 July, as Field recorded in his journal, the two ships spliced the two ends of the cable together in the middle of the Atlantic Ocean, dropped it in the water at 1,500 fathoms (2,745 meters), and then each ship headed to its destination port. Niagara arrived on 4 August, Agamemnon the following day. The 3,200-km cable now connected Heart’s Content, in Newfoundland, to Telegraph Field on Valentia Island, in Ireland.

By 10 August dispatchers were sending test messages, and on 16 August, with the queen’s and Buchanan’s exchanges, the line was officially open.

All of the project’s many starts and stops had been followed closely by the press and an eager public. Archaeologist Cassie Newland has called the heroic effort “the Victorian equivalent of the Apollo mission.”

The triumphant opening, after years of speculation and so many failures, was lauded as the communications achievement of the century. New York City celebrated with a parade and fireworks, which accidentally set the dome of City Hall on fire. Trinity Church in lower Manhattan held a special service to commemorate the cable, with the mayor of New York City and other officials in attendance and the Right Reverend George Washington Doane, Bishop of New Jersey, giving the address. On the other side of the Atlantic, shares in the ATC more than doubled, and Charles Bright was knighted for his work overseeing the project.

Of course, companies wanted to cash in on the celebration and immediately crafted all sorts of memorabilia and souvenirs. Niagara had arrived in New York with hundreds of kilometers of excess cable. The jeweler Tiffany & Co. bought it all.

Tiffany craftsmen cut the cable into 10-centimeter pieces, banding the ends of each piece with brass ferrules and attaching a descriptive plate [see photo at top]. The souvenirs retailed for 50 cents each (about US $15 today); wholesalers could buy lots of 100 for $25. Each piece came with a facsimile of a note, signed by Cyrus W. Field, certifying that he had sold the balance of the cable from Niagara to Tiffany. Although Tiffany claimed to have a monopoly on the cable, numerous competitors sprang up with their own cable souvenirs, including watch fobs, earrings, pendants, charms, letter openers, candlesticks, walking-stick toppers, and tabletop displays.

Tiffany reportedly sold thousands of its cable souvenirs, but it was a short-lived venture. Transmission on the transatlantic cable, never very strong, degraded quickly. Within a few weeks, the line failed entirely.

Blame for the failure quickly landed on Whitehouse, chief engineer for the eastern terminus of the cable. He believed that the farther the signal had to travel, the stronger the necessary voltage, and so he at times used up to 2,000 volts to try to boost the signal. Meanwhile, Thomson, the chief engineer for the cable’s western terminus, was using his mirror galvanometer to detect and amplify the faint signal coming through the cable.

In blaming Whitehouse for the failure, people were quick to point out his lack of traditional qualifications. This is a bit unfair—his path was similar to that of many gentleman scientists of the time. Trained as a medical doctor, he had a successful surgical practice before turning his attention to electrical experiments. Whitehouse patented several improvements to telegraphic apparatus and was elected a member of the British Association for the Advancement of Science. But the commission investigating the cable’s failure laid fault on Whitehouse’s use of high voltages.

In 1985 historian and engineer Donard de Cogan published an article that somewhat vindicated Whitehouse. De Cogan’s analysis of a length of cable retrieved from the original deployment noted its poor manufacture, including the fact that the copper core was not centered within the insulation and at places sat perilously close to the metal sheathing. There was also significant deterioration of the gutta-percha insulation. De Cogan speculated that the impurities—which even Thomson had objected to—along with improper storage over the 1857–58 winter resulted in a cable whose failure was inevitable. He also concluded, though, that having Whitehouse as a scapegoat may actually have helped advance transoceanic telegraphy: Had the cable failed without an identifiable cause, investors would have been far more hesitant.

Regardless, the failure left Tiffany with thousands of unsellable souvenirs. Some ended up in museum collections, but many were put into storage and forgotten, leading to the next twist in this tale.

In 1974 a company called Lanello Reserves advertised the sale of 2,000 Tiffany cable souvenirs. The asking price was $100—about $500 in today’s dollars. Lanello Reserves also donated 100 pieces to the Smithsonian Institution, which the museum resold in its shops. Today, original transatlantic cable souvenirs pop up regularly in online auction sites. You too can own a piece of history.

While the souvenirs may have had a longer life than the transmission cable itself, that doesn't diminish the accomplishment of Bright, Thomson, Whitehouse, and their teams. Even though the cable never worked well, a total of 732 messages were sent before it failed. These included the reporting of a collision between the Cunard Line ships Europa and Arabia, as well as an order from the British government to hold two regiments in Canada. The regiments were en route to India, but when the British government learned that the Indian Rebellion had been suppressed, it sent new orders via the cable. By not sending the troops, the treasury saved an estimated £50,000 to £60,000, recouping about one-seventh of the investment in the cable with a single military order.

Public sentiment toward the cable quickly cooled, however. By the end of 1858, rumors abounded that this was all an elaborate hoax or a fraudulent stock scheme aimed at fleecing unsuspecting investors. Similar to today’s conspiracy theorists who refuse to believe that the Apollo moon landing was real, the cable doubters were not convinced by souvenirs, messages from heads of state, or effusive press coverage.

Field was eager to try again. Most of the ATC’s original financial backers weren’t interested in investing in another transatlantic cable, but the British government still saw the potential and continued to provide funding. The company finally succeeded in building a permanent transatlantic cable in 1866.

Field may have lost many of his friends and investors, but he never lost his optimism. He was able to believe in a future of instant communication at a time when most people didn’t have—indeed, couldn’t even conceive of—indoor plumbing and electric lights.

An abridged version of this article appears in the November 2019 print issue as “Tiffany’s Transatlantic Telegraphy Doodad.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

How the Trautonium Found Favor With Nazis and Gave Hitchcock’s The Birds its Distinctive Screech

Post Syndicated from Allison Marsh original https://spectrum.ieee.org/tech-history/dawn-of-electronics/how-the-trautonium-found-favor-with-nazis-and-gave-hitchcocks-emthe-birdsem-its-distinctive-screech

I knew next to nothing about electronic music when I began researching this month’s column. My only association with the genre was the synthesizer sounds of ’80s pop and the (for me) headache-inducing beats of EDM. I never stopped to think about the roots of electronic music, and so I was surprised to learn that it can be traced back more than a century. It also has more than a passing association with the Nazis. Frode Weium, senior curator at the Norwegian Museum of Science and Technology, is the person who nominated the Volkstrautonium for this month’s Past Forward artifact and sent me down this fascinating musical rabbit hole.

The Volkstrautonium arose during the wave of electronic music that began shortly after World War I. Inventors in Europe, the Soviet Union, and the United States were creating a cacophony (or, if you like, a symphony) of new electronic musical instruments. It's hard to say exactly why electronic music took off as it did, but the results were diverse and abundant. Some of the new creations took the name of their inventor, such as the theremin (for Léon Theremin) and the Ondes Martenot (for Maurice Martenot). Others were portmanteaus that merged musical and electronic terms: the Terpsitone, the Rhythmicon, the Cathodic Harmonium, the Radiophonic Organ, the Magnetophone, the Spherophone, the Elektrochord.

The music theorist Jörg Mager welcomed these new sounds. Often considered the founder of the electro-music movement, Mager in 1924 published his essay “Eine neue Epoche der Musik durch Radio” (“A New Epoch in Music Through Radio”), in which he argued that the radio was not simply a tool to disseminate sound, but also a tool to manipulate sound waves and create a new form of music.

In this burgeoning world of sonic experimentation, Germany was the epicenter. And an electrical engineer and trained musician named Friedrich Trautwein wanted a piece of the action. Trautwein had trained at the Heidelberg Conservatory and then studied engineering and acoustics at Karlsruhe Technical University, receiving his doctorate in 1921. He filed his first patent for electric tone generation a year later.

Trautwein's first attempt at a new musical instrument was the Trautonium, an electrified cross between a violin (a violin with only one string, that is) and a piano (minus the keyboard). The playing interface, or manual, of the Trautonium consisted of a single wire stretched over a metal plate. When the musician pressed the wire against the plate, it closed the circuit and produced a tone. Sliding the finger along the wire from left to right changed the resistance in the circuit and therefore the pitch, while knobs on the console further adjusted the pitch. A set of movable keys let the musician set fixed pitches. It sounded like a violin. Sort of.

The original Trautonium generated its sound using tubes filled with neon gas, which functioned as a relaxation oscillator. The neon tubes were later replaced with a type of high-energy gas tube called a thyratron, which helped stabilize the pitch.
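The neon tubes described above worked as a classic RC relaxation oscillator: a capacitor charges through a resistance until the lamp's firing voltage is reached, the lamp conducts and discharges the capacitor, and the cycle repeats. A minimal sketch of that behavior follows; all component values and voltages here are hypothetical illustrations, not figures from Trautwein's design.

```python
import math

def relaxation_period(R, C, V_supply, V_fire, V_ext):
    """Charging period (seconds) of a neon-lamp RC relaxation oscillator.

    The capacitor charges through R toward V_supply; the lamp fires at
    V_fire and extinguishes at V_ext, rapidly discharging the capacitor.
    """
    return R * C * math.log((V_supply - V_ext) / (V_supply - V_fire))

# Hypothetical component values -- the article gives no circuit data.
C = 10e-9          # 10 nF capacitor
V_supply = 180.0   # supply voltage, volts
V_fire = 80.0      # neon striking voltage
V_ext = 60.0       # neon extinction voltage

# More resistance in the circuit lengthens the period, lowering the pitch.
for R in (1.0e6, 2.0e6):   # ohms
    f = 1.0 / relaxation_period(R, C, V_supply, V_fire, V_ext)
    print(f"R = {R:.0e} ohm -> about {f:.0f} Hz")
```

In this simple model, pressing the manual's wire at different points would leave different amounts of resistance in the circuit, shifting the oscillation frequency and hence the pitch.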

Trautwein continued to experiment with ways of manipulating the electronic tone to produce the most pleasing timbre, according to Thomas Patteson, a professor of music history at the Curtis Institute of Music, in Philadelphia, who has done significant research on the history of the Trautonium. (For an extended description of Trautwein’s formant theory, for instance, see Chapter 5 of Patteson’s book Instruments for New Music: Sound, Technology, and Modernism, University of California Press, 2015.)

In 1930 Trautwein became a lecturer in acoustics at the Berlin Academy of Music, where he met Paul Hindemith, who was teaching music composition there. Almost immediately, the two began to collaborate on improving the Trautonium. Hindemith, already an established composer, wrote music specifically for the instrument and encouraged others to play it. One of Hindemith’s students, Oskar Sala, became a virtuoso on the instrument.

In this 1930 recording, Sala plays an early Trautonium:

Sala, like Trautwein, had dual interests in music and science; he went on to study physics at what is now Humboldt University of Berlin.

The Trautonium debuted to the public on 20 June 1930 as part of a summer concert series devoted to new music. For the performance, Hindemith composed Des kleinen Elektromusikers Lieblinge (The Little Electro-musician's Favorites), a set of seven short pieces for a trio of Trautonia. Hindemith, Sala, and the pianist Rudolf Schmidt performed the pieces and demonstrated the potential range of the new instrument. In conjunction with the performance, Trautwein published a short book, Elektrische Musik, that served as a technical guide to the Trautonium.

The following year, Hindemith conducted his Concertino for Trautonium and String Orchestra at the second Radio Music Convention, in Munich, with Sala as the soloist. And at the 1932 Radio Exhibition, in Berlin, the Trautonium was part of the “electric orchestra,” which also featured an electric cello, electric violin, electric piano, and theremin.

Trautwein and Sala believed that the Trautonium had commercial appeal beyond the concert hall. Beginning in 1931, they partnered with the electronics firm Telefunken to create a mass-marketable instrument. The result was the Telefunken-Trautonium, which later became known as the Volkstrautonium.

Because the Volkstrautonium was intended for use in the home, it underwent a few design changes from Trautwein’s original machine. The manual and circuitry were consolidated into a single box with a cover to minimize dust. Additional knobs and switches helped the player control the sound. The instrument could be plugged into a radio receiver for amplification.

Despite all these enhancements, the Volkstrautonium did not make a splash when it debuted at the 1933 Radio Exhibition. Of the 200 or so that were produced, only a few were ever sold.

The instrument may have been a victim of particularly poor timing. Priced at 400 reichsmarks, or about two and a half months’ salary for the average German worker, it would have been a significant investment. Meanwhile, amidst a global economic depression, unemployment in Germany hovered around 30 percent. The Volkstrautonium was simply unaffordable for most people.

Telefunken’s lackluster marketing of the instrument, which included almost no advertising, didn’t help matters. The company officially stopped making them in 1937, and all unsold units were given to Trautwein.

According to Frode Weium, the Volkstrautonium pictured at top was a gift from AEG Berlin, which partially owned Telefunken, to Alf Scott-Hansen Jr., a Norwegian electrical engineer, amateur jazz musician, and film director. It’s unclear whether Scott-Hansen used this Volkstrautonium. The Norwegian Museum of Science and Technology acquired it in 1995.

Though the Volkstrautonium was not a commercial success, that didn’t stop the Trautonium from finding a niche audience among radio enthusiasts. Despite the high price of the Volkstrautonium, it had a fairly simple design. You could build a pared-down version with easily available parts. In March 1933, Radio-Craft magazine published detailed instructions on how to build a Trautonium [PDF], slightly altered for U.S. customers with parts available in the United States.

According to the Radio-Craft article, the Trautonium was not just easy to build but also easy to play: “One may learn to play it in a short time, even though one is not a musician.” Perhaps that was true, but playing well was probably another matter.

Finding music to play on the Trautonium would also have been tricky. In order to popularize any new instrument, you need new music to be written for it. Otherwise, the instrument only mimics other instruments—there’s no signature sound or essential need that allows the new instrument to take root. The theremin, for example, didn’t pass into obscurity like many of the other instruments of the early electro-music age because its uniquely eerie sound became popular in scores for science fiction and horror movies. [The theremin also inspires occasional reboots, a few of which are described in “The Return of the Theremin” and “How to Build a Theremin You Can Play With Your Whole Body,” in IEEE Spectrum.]

The Trautonium, for its part, produced a surprising range of sound effects for Alfred Hitchcock’s 1963 horror-thriller The Birds, a movie famous for its lack of a traditional score. Instead, Oskar Sala created the screeches and cries of the birds, as well as the slamming of doors and windows, using a variation of Trautwein’s instrument that he designed:

Sala scored hundreds of films with his Trautonium, refining the instrument throughout his life. He never trained any students to play it, however, nor did other composers besides Hindemith produce melodic smash hits with the instrument. It thus fell into obscurity.

More recently, a few artists have rediscovered the range and potential of the Trautonium. The Danish musician Agnes Obel played a replica Trautonium on her 2016 album Citizen of Glass. The German musician Peter Pichler wrote a music theater piece, Wiedersehen in Trautonien (Reunion in Trautonia), which is an account of an imagined meeting of Trautwein, Hindemith, and Sala. He also took his Trautonium on tour in Australia last April.

I said there were Nazis, and here they are: Like all German academics, Trautwein had to navigate the difficult political climate of the 1930s and ’40s. Like many, he joined the Nazi Party, was rewarded with a promotion to professor, and rode out the war years mostly unscathed.

In 1935 Trautwein and Sala presented a Trautonium to Joseph Goebbels, Hitler’s minister of propaganda. Goebbels was, unsurprisingly, mostly interested in the propaganda value of music. Luckily for Trautwein, electro-music fit into the Reich’s desire to reconcile technology and culture. Trautwein volunteered the Trautonium to test the speaker system for the 1936 Olympic Games in Berlin, and it was played three times in official radio programs accompanying the games.

Hindemith, who was married to a Jewish woman and who often collaborated with leftists, didn't fare as well under the Nazis. Goebbels pressured him to take an extended leave of absence from the Berlin Academy, and he found it increasingly difficult to perform and conduct. His work was banned in 1936. Hindemith and his wife emigrated to Switzerland two years later, settled in the United States in 1940, and returned to Europe in 1953.

When I first listened to the 1930 recording of Oskar Sala playing Trautwein’s simple electronic instrument, I was struck by how the sound seems both strange and familiar. Now that I know the history of the Trautonium and its champions, I think the word that best describes it is haunting.

An abridged version of this article appears in the October 2019 print issue as “An Instrument for The Birds.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society. She dedicates this month’s column to her uncle Ralph Morrison, who died in August. “Ralph was an IEEE Life Member and also an avid violinist,” Marsh says. “I am pretty sure he would have hated the music the Trautonium produced, but I know he would have loved discussing the electrical and acoustical challenges of the instrument.”