The Rich Tapestry of Fiber Optics

Post Syndicated from Allison Marsh original https://spectrum.ieee.org/tech-history/cyberspace/the-rich-tapestry-of-fiber-optics

“Whoopee!” So wrote Donald Keck, a researcher at Corning Glass Works, in the 7 August 1970 entry of his lab notebook. The object of his exuberance was a 29-meter-long piece of highly purified, titanium-doped optical fiber, through which he had successfully passed a light signal with a measured loss of only 17 decibels per kilometer. A few years earlier, typical losses had been closer to 1,000 dB/km. Keck’s experiment was the first demonstration of low-loss optical fiber for telecommunications, and it paved the way for transmitting voice, data, and video over long distances.
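
To put those decibel figures in perspective, attenuation in dB converts to a surviving power fraction as follows (a back-of-envelope calculation, not taken from Keck’s notebook):

$$\frac{P_{\text{out}}}{P_{\text{in}}} = 10^{-\alpha L/10},$$

where $\alpha$ is the attenuation in dB/km and $L$ is the length in kilometers. At 17 dB/km, about $10^{-1.7} \approx 2$ percent of the launched light emerges from a 1-kilometer fiber, enough to detect; at 1,000 dB/km, the surviving fraction over the same kilometer is $10^{-100}$, which is why earlier fibers were useful only over distances of meters.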

As important as this achievement was, it was not an isolated event. Physicists and engineers had been working for decades to make optical telecommunications possible, developing not just fibers but waveguides, lasers, and other components. (More on that in a bit.) And if you take the long view, as historians like me tend to do, it’s part of a fascinating tapestry that also encompasses glass, weaving, art, and fashion.

Optical fiber creates stunning effects in art and fashion

Shown above is a sculpture called Crossform Pendant Lamp, by the New York–based textile artist Suzanne Tick. Tick is known for incorporating unusual materials into her weaving: recycled dry cleaner hangers, Mylar balloons washed up on the beach, documents from her divorce. For the lamp, she used industrial fiber-optic yarn.

The piece was part of a collaboration between Tick and industrial designer Harry Allen. Allen worked on mounting ideas and illuminators, while Tick experimented with techniques to weave the relatively stiff fiber-optic yarn. (Optical fiber is flexible compared with other types of glass, but inflexible compared to, say, wool.) The designers had to determine how the lamp would hang and how it would connect to a power and light source. The result is an artwork that glows from within.

Weaving is of course an ancient technology, as is glassmaking. The ability to draw, or pull, glass into consistent fibers, on the other hand, emerged only at the end of the 19th century. As soon as it did, designers attempted to weave with glass, creating hats, neckties, shawls, and other garments.

Perhaps the most famous of these went on display at the 1893 World’s Columbian Exposition in Chicago. The exhibit mounted by the Libbey Glass Company, of Toledo, Ohio, showcased a dress made from silk and glass fibers. The effect was enchanting. According to one account, it captured the light and shimmered “as crusted snow in sunlight.” One admirer was Princess Eulalia of Spain, a royal celebrity of the day, who requested a similar dress be made for her. Libbey Glass was happy to oblige—and receive the substantial international press.

The glass fabric was too brittle for practical wear, which may explain why few such garments emerged over the years. But the idea of an illuminated dress did not fade, awaiting just the right technology. Designer Zac Posen found a stunning combination when he crafted a dress of organza, optical fiber, LEDs, and 30 tiny battery packs, for the actor Claire Danes to wear to the 2016 Met Gala. The theme of that year’s fashion extravaganza was “Manus x Machina: Fashion in an Age of Technology,” and Danes’s dress stole the show. Princess Eulalia would have approved. 

Fiber optics has many founders

Of course, most of the work on fiber optics has occurred in the mainstream of science and engineering, with physicists and engineers experimenting with different ways to manipulate light and funnel it through glass fibers. Here, though, the history gets a bit tangled. Let’s consider the man credited with coining the term “fiber optics”: Narinder Singh Kapany.

Kapany was born in India in 1926, received his Ph.D. in optics from Imperial College London, and then moved to the United States, where he spent the bulk of his career as a businessman and entrepreneur. He began working with optical fibers during his graduate studies, trying to improve the quality of image transmission. He introduced the term and the field to a broader audience in the November 1960 issue of Scientific American with an article simply titled “Fiber Optics.”

As Kapany informed his readers, a fiber-optic thread is a cylindrical glass fiber having a high index of refraction surrounded by a thin coating of glass with a low index of refraction. Near-total internal reflection takes place between the two, thus keeping a light signal from escaping its conductor. He explained how light transmitted along bundles of flexible glass fibers could transport optical images along tortuous paths with useful outcomes. Kapany predicted that it would soon become routine for physicians to examine the inside of a patient’s body using a “fiberscope”—and indeed fiber-optic endoscopy is now commonplace.
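
The guiding condition Kapany described follows from Snell’s law. As a rough sketch, with illustrative refractive indices rather than values from his article, total internal reflection occurs at the core-cladding boundary whenever light strikes it at an angle to the normal greater than the critical angle

$$\theta_c = \arcsin\!\left(\frac{n_{\text{cladding}}}{n_{\text{core}}}\right).$$

For a core of index 1.48 inside cladding of index 1.46, $\theta_c \approx 80.6^\circ$, so any ray traveling within roughly 9 degrees of the fiber’s axis is reflected back into the core at every bounce and follows the fiber around its bends.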

Kapany’s article unintentionally introduced an extra loop into the historical thread of fiber optics. In an anecdote that leads off the article, Kapany relates that in the 1870s, Irish physicist John Tyndall demonstrated how light could travel along a curved path. His “light pipe” was formed by a stream of water emerging from a hole in the side of a tank. When Tyndall shone a light into the tank, the light followed the stream of water as it exited the tank and arced to the floor. This same effect is seen in illuminated fountains.

Kapany’s anecdote conjures a mental image that allows readers to begin to understand the concept of guiding light, and I always love when scientists evoke history. In this case, though, the history was wrong: Tyndall wasn’t the originator of the guided-light demonstration.

While researching his highly readable 1999 book City of Light: The Story of Fiber Optics, Jeff Hecht discovered that in fact Jean-Daniel Colladon deserves the credit. In 1841, the Swiss physicist performed the water-jet experiment in Geneva and published an account the following year in Comptes Rendus, the proceedings of the French Academy of Sciences. Hecht, a frequent contributor to IEEE Spectrum, concluded that Michael Faraday, Tyndall’s mentor, probably saw another Swiss physicist, Auguste de la Rive, demonstrate a water jet based on Colladon’s apparatus, and Faraday then encouraged Tyndall to attempt something similar back in London.

I forgive Kapany for not digging around in the archives, even if his anecdote did exaggerate Tyndall’s role in fiber optics. And sure, Tyndall should have credited Colladon, but then there is a long history of scientists not getting the credit they deserve. Indeed, Kapany himself is considered one of them. In 1999, Fortune magazine listed him among the “unsung heroes” of 20th-century business. That reputation was only reinforced in 2009, when Charles Kao received the Nobel Prize in Physics for achievements in the transmission of light in fibers for optical communication, and Kapany did not share in it.

Whether or not Kapany should have shared the prize—and there are never any winners when it comes to debates over overlooked Nobelists—Kao certainly deserved what he got. In 1963 Kao joined a team at Standard Telecommunication Laboratories (STL) in England, the research center for Standard Telephones and Cables. Working with George Hockham, he spent the next three years researching how to use fiber optics for long-distance communication, both audio and video.

On 27 January 1966, Kao demonstrated a short-distance optical waveguide at a meeting of the Institution of Electrical Engineers (IEE). According to a press release from STL, the waveguide had “the information-carrying capacity of one Gigacycle, which is equivalent to 200 television channels or over 200,000 telephone channels.” Once the technology was perfected, the press release went on, a single undersea cable would be capable of transmitting large amounts of data from the Americas to Europe.
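
Those equivalences are roughly consistent with the analog channel widths of the era (a back-of-envelope check, assuming about 4 kHz per telephone channel and about 5 MHz per television channel, figures not drawn from the press release):

$$\frac{1\ \text{GHz}}{5\ \text{MHz}} = 200 \ \text{television channels}, \qquad \frac{1\ \text{GHz}}{4\ \text{kHz}} = 250{,}000 \ \text{telephone channels}.$$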

In July, Kao and Hockham published their work in the Proceedings of the IEE. They proposed that fiber-optic communication over long distances would be viable, but only if an attenuation of less than 20 dB/km could be achieved. That’s when Corning got involved.

Corning’s contribution brought long-distance optical communication closer to reality

In 1966, the head of a new group in fiber-optic communication at the Post Office Research Station in London mentioned to a visitor from Corning the need for low-loss glass fibers to realize Kao’s vision of long-distance communication. Corning already made fiber optics for medical and military use, but those short pieces of cable had losses of approximately 1,000 dB/km—not even close to Kao and Hockham’s threshold.

That visitor from Corning, William Shaver, told his colleague Robert Maurer about the British effort, and Maurer in turn recruited Keck, Peter Schultz, and Frank Zimar to work on a better way of drawing the glass fibers. The group eventually settled on a process involving a titanium-doped core. The testing of each new iteration of fiber could take several months, but by 1970 the Corning team thought they had a workable technology. On 11 May 1970, they filed for two patents. The first was US3659915A, for a fused silica optical waveguide, awarded to Maurer and Schultz; the second was US3711262A, for a method of producing optical waveguide fibers, awarded to Keck and Schultz.

Three months after the filing, Keck recorded the jubilant note in his lab notebook. Alas, it was after 5:00 pm on a Friday, and no one was around to join in his celebration. Keck verified the result with a second test on 21 August 1970. In 2012, the achievement was recognized with an IEEE Milestone as a significant event in electrotechnology.

Of course, there was still much work to be done to make long-distance optical communication commercially viable. Just like Princess Eulalia’s dress, Corning’s titanium-doped fibers weren’t strong enough for practical use. Eventually, the team discovered a better process with germanium-doped fibers, which remain the industry standard to this day. A half-century after the first successful low-loss transmission, fiber-optic cables encircle the globe, transmitting terabits of data every second.

An abridged version of this article appears in the August 2020 print issue as “Weaving Light.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

Today’s Internet Still Relies on an ARPANET-Era Protocol: The Request for Comments

Post Syndicated from Steve Crocker original https://spectrum.ieee.org/tech-history/cyberspace/todays-internet-still-relies-on-an-arpanetera-protocol-the-request-for-comments

Each March, July, and November, we are reminded that the Internet is not quite the mature, stable technology that it seems to be. We rely on the Internet as an essential tool for our economic, social, educational, and political lives. But when the Internet Engineering Task Force meets every four months at an open conference that bounces from continent to continent, more than 1,000 people from around the world gather with change on their minds. Their vision of the global network that all humanity shares is dynamic, evolving, and continuously improving. Their efforts combine with the contributions of myriad others to ensure that the Internet always works but is never done, never complete.

The rapid yet orderly evolution of the Internet is all the more remarkable considering the highly unusual way it happens: without a company, a government, or a board of directors in charge. Nothing about digital communications technology suggests that it should be self-organizing or, for that matter, fundamentally reliable. We enjoy an Internet that is both of those at once because multiple generations of network developers have embraced a principle and a process that have been quite rare in the history of technology. The principle is that the protocols that govern how Internet-connected devices communicate should be open, expandable, and robust. And the process that invents and refines those protocols demands collaboration and a large degree of consensus among all who care to participate.

As someone who was part of the small team that very deliberately adopted a collaborative, consensus-based process to develop protocols for the ARPANET—predecessor to the Internet—I have been pleasantly surprised by how those ideas have persisted and succeeded, even as the physical network has evolved from 50-kilobit-per-second telephone lines in the mid-1960s to the fiber-optic, 5G, and satellite links we enjoy today. Though our team certainly never envisioned unforgeable “privacy passes” or unique identifiers for Internet-connected drones—two proposed protocols discussed at the task force meeting this past March—we did circulate our ideas for the ARPANET as technical memos among a far-flung group of computer scientists, collecting feedback and settling on solutions in much the same way as today, albeit at a much smaller scale.

We called each of those early memos a “Request for Comments” or RFC. Whatever networked device you use today, it almost certainly follows rules laid down in ARPANET RFCs written decades ago, probably including protocols for sending plain ASCII text (RFC 20, issued in 1969), audio or video data streams (RFC 768, 1980), and Post Office Protocol, or POP, email (RFC 918, 1984).

Of course, technology moves on. None of the computer or communication hardware used to build the ARPANET is a crucial part of the Internet today. But there is one technological system that has remained in constant use since 1969: the humble RFC, which we invented to manage change itself in those early days.

The ARPANET was far simpler than the Internet because it was a single network, not a network of networks. But in 1966, when the Pentagon’s Advanced Research Projects Agency (ARPA) started planning the idea of linking together completely different kinds of computers at a dozen or more research universities from California to Massachusetts, the project seemed quite ambitious.

It took two years to create the basic design, which was for an initial subnet that would exchange packets of data over dedicated telephone lines connecting computers at just four sites: the Santa Barbara and Los Angeles campuses of the University of California; Stanford Research Institute (SRI) in Menlo Park, Calif.; and the University of Utah in Salt Lake City. At each site, a router—we called them IMPs, for interface message processors—would chop outgoing blocks of bits into smaller packets. The IMPs would also reassemble incoming packets from distant computers into blocks that the local “host” computer could process.

In the tumultuous summer of 1968, I was a graduate student spending a few months in the computer science department at UCLA, where a close friend of mine from high school, Vint Cerf, was studying. Like many others in the field, I was much more interested in artificial intelligence and computer graphics than in networking. Indeed, some principal investigators outside of the first four sites initially viewed the ARPANET project as an intrusion rather than an opportunity. When ARPA invited each of the four pilot sites to send two people to a kickoff meeting in Santa Barbara at the end of August, Vint and I drove up from UCLA and discovered that all of the other attendees were also graduate students or staff members. No professors had come.

Almost none of us had met, let alone worked with, anyone from the other sites before. But all of us had worked on time-sharing systems, which doled out chunks of processing time on centralized mainframe computers to a series of remotely connected users, and so we all had a sense that interesting things could be done by connecting distant computers and getting their applications to interact with one another. In fact, we expected that general-purpose interconnection of computers would be so useful that it would eventually spread to include essentially every computer. But we certainly did not anticipate how that meeting would launch a collaborative process that would grow this little network into a critical piece of global infrastructure. And we had no inkling how dramatically our collaboration over the next few years would change our lives.

After getting to know each other in Santa Barbara, we organized follow-up meetings at each of the other sites so that we would all have a common view of what this eclectic network would look like. The SDS Sigma 7 computer at UCLA would be connecting to a DEC PDP-10 in Utah, an IBM System/360 in Santa Barbara, and an SDS 940 at SRI.

We would be a distributed team, writing software that would have to work on a diverse collection of machines and operating systems—some of which didn’t even use the same number of bits to represent characters. Co-opting the name of the ARPA-appointed committee of professors that had assigned us to this project, we called ourselves the Network Working Group.

We had only a few months during the autumn of 1968 and the winter of 1969 to complete our theoretical work on the general architecture of the protocols, while we waited for the IMPs to be built in Cambridge, Mass., by the R&D company Bolt, Beranek and Newman (BBN).

Our group was given no concrete requirements for what the network should do. No project manager asked us for regular status reports or set firm milestones. Other than a general assumption that users at each site should be able to remotely log on and transfer files to and from hosts at the other sites, it was up to us to create useful services.

Through our regular meetings, a broader vision emerged, shaped by three ideas. First, we saw the potential for lots of interesting network services. We imagined that different application programs could exchange messages across the network, for example, and even control one another remotely by executing each other’s subroutines. We wanted to explore that potential.

Second, we felt that the network services should be expandable. Time-sharing systems had demonstrated how you could offer a new service merely by writing a program and letting others use it. We felt that the network should have a similar capacity.

Finally, we recognized that the network would be most useful if it were agnostic about the hardware of its hosts. Whatever software we wrote ought to support any machine seamlessly, regardless of its word length, character set, instruction set, or architecture.

We couldn’t translate these ideas into software immediately because BBN had yet to release its specification for the interface to the IMP. But we wanted to get our thoughts down on paper. When the Network Working Group gathered in Utah in March 1969, we dealt out writing assignments to one another. Until we got the network running and created an email protocol, we would have to share our memos through the U.S. Mail. To make the process as easy and efficient as possible, I kept a numbered list of documents in circulation, and authors mailed copies of memos they wrote to everyone else.

Tentatively, but with building excitement, our group of grad students felt our way through the dark together. We didn’t even appoint a leader. Surely at some point the “real experts”—probably from some big-name institution in the Northeast or Washington, D.C.—would take charge. We didn’t want to step on anyone’s toes. So we certainly weren’t going to call our technical memos “standards” or “orders.” Even “proposals” seemed too strong—we just saw them as ideas that we were communicating without prior approval or coordination. Finally, we settled on a term suggested by Bill Duvall, a young programmer at SRI, that emphasized that these documents were part of an ongoing and often preliminary discussion: Request for Comments.

The first batch of RFCs arrived in April 1969. What was arguably one of our best initial ideas was not spelled out in these RFCs but only implicit in them: the agreement to structure protocols in layers, so that one protocol could build on another if desired and so that programmers could write software that tapped into whatever level of the protocol stack worked best for their needs.

We started with the bottom layer, the foundation. I wrote RFC 1, and Duvall wrote RFC 2. Together, these first two memos described basic streaming connections between hosts. We kept this layer simple—easy to define and easy to implement. Interactive terminal connections (like Telnet), file transfer mechanisms (like FTP), and other applications yet to be defined (like email) could then be built on top of it.
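
The payoff of that layering is easiest to see in code. What follows is a minimal modern sketch in Python, not the 1969 host-to-host protocol itself: the application-layer routine is written only against a generic byte-stream interface, so it neither knows nor cares which transport supplies the stream. The host, port, and one-line greeting are placeholders invented for illustration.

    # A minimal sketch of protocol layering (modern analogue, not the
    # 1969 host-to-host protocol): the application routine below talks
    # only to a byte-stream interface, never to the transport itself.
    import socket

    def fetch_banner(stream):
        """Application layer: a trivial line-oriented exchange over any
        object that offers sendall() and recv()."""
        stream.sendall(b"HELLO\r\n")       # application-level request
        return stream.recv(4096).decode()  # application-level reply

    def tcp_stream(host, port):
        """Lower layer: supply the byte stream; here, a plain TCP connection."""
        return socket.create_connection((host, port), timeout=5)

    if __name__ == "__main__":
        # Placeholder address; assumes some line-oriented service listens there.
        # Replacing tcp_stream() with, say, a TLS-wrapped socket would leave
        # fetch_banner() untouched -- that is the point of layering.
        with tcp_stream("127.0.0.1", 7000) as s:
            print(fetch_banner(s))

Telnet, FTP, and later email stood in the same relation to the host-to-host layer: each could be revised without disturbing the layer beneath.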

That was the plan, anyway. It turned out to be more challenging than expected. We wrestled with, among other things, how to establish connections, how to assign addresses that allowed for multiple connections, how to handle flow control, what to use as the common unit of transmission, and how to enable users to interrupt the remote system. Only after multiple iterations and many, many months of back-and-forth did we finally reach consensus on the details.

Some of the RFCs in that first batch were more administrative, laying out the minimalist conventions we wanted these memos to take, presenting software testing schedules, and tracking the growing mailing list.

Others laid out grand visions that nevertheless failed to gain traction. To my mind, RFC 5 was the most ambitious and interesting of the lot. In it, Jeff Rulifson, then at SRI, introduced a very powerful idea: downloading a small application at the beginning of an interactive session that could mediate the session and speed things up by handling “small” actions locally.

As one very simple example, the downloaded program could let you edit or auto-complete a command on the console before sending it to the remote host. The application would be written in a machine-agnostic language called Decode-Encode Language (DEL). For this to work, every host would have to be able to run a DEL interpreter. But we felt that the language could be kept simple enough for this to be feasible and that it might significantly improve responsiveness for users.
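
To make the mechanism concrete, here is a small hypothetical sketch in Python of the kind of mediation Rulifson proposed; the command names and session loop are invented for illustration, and DEL itself was nothing like Python.

    # Hypothetical sketch of the DEL idea: a small program, imagined as
    # having been downloaded at the start of a session, handles "small"
    # actions such as auto-completion locally and ships only finished
    # commands across the network. All names here are illustrative.
    KNOWN_COMMANDS = ["append", "copy", "delete", "list", "logout", "send"]

    def autocomplete(prefix):
        # Resolved locally -- no round trip to the remote host.
        matches = [c for c in KNOWN_COMMANDS if c.startswith(prefix)]
        return matches[0] if len(matches) == 1 else prefix

    def session(send_to_remote):
        # Only completed commands cross the network.
        while True:
            line = input("> ").strip()
            if line == "quit":
                break
            if line:
                send_to_remote(autocomplete(line))

    if __name__ == "__main__":
        # Stand-in for the network send; prints instead of transmitting.
        session(lambda cmd: print(f"[sent to remote host] {cmd}"))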

Aside from small bursts of experimentation with DEL, however, the idea didn’t catch on until many years later, when Microsoft released ActiveX and Sun Microsystems produced Java. Today, the technique is at the heart of every online app.

The handful of RFCs we circulated in early 1969 captured our ideas for network protocols, but our work really began in earnest that September and October, when the first IMPs arrived at UCLA and then SRI. Two were enough to start experimenting. Duvall at SRI and Charley Kline at UCLA (who worked in Leonard Kleinrock’s group) dashed off some software to allow a user on the UCLA machine to log on to the machine at SRI. On the evening of 29 October 1969, Charley tried unsuccessfully to do so. After a quick fix to a small glitch in the SRI software, a successful connection was made that evening. The software was adequate for connecting UCLA to SRI, but it wasn’t general enough for all of the machines that would eventually be connected to the ARPANET. More work was needed.

By February 1970, we had a basic host-to-host communication protocol working well enough to present it at that spring’s Joint Computer Conference in Atlantic City. Within a few more months, the protocol was solid enough that we could shift our attention up the stack to two application-layer protocols, Telnet and FTP.

Rather than writing monolithic programs to run on each computer, as some of our bosses had originally envisioned, we stuck to our principle that protocols should build on one another so that the system would remain open and extensible. Designing Telnet and FTP to communicate through the host-to-host protocol guaranteed that they could be updated independently of the base system.

By October 1971, we were ready to put the ARPANET through its paces. Gathering at MIT for a complete shakedown test—we called it “the bake-off”—we checked that each host could log on to every other host. It was a proud moment, as well as a milestone that the Network Working Group had set for itself.

And yet we knew there was still so much to do. The network had grown to connect 23 hosts at 15 sites. A year later, at a big communications conference in Washington, D.C., the ARPANET was demonstrated publicly in a hotel ballroom. Visitors were able to sit down at any of several terminals and log on to computers all over the United States.

Year after year, our group continued to produce RFCs with observations, suggested changes, and possible extensions to the ARPANET and its protocols. Email was among those early additions. It started as a specialized case of file transfer but was later reworked into a separate protocol (Simple Mail Transfer Protocol, or SMTP, RFC 788, issued in 1981). Somewhat to the bemusement of both us and our bosses, email became the dominant use of the ARPANET, the first “killer app.”

Email also affected our own work, of course, as it allowed our group to circulate RFCs faster and to a much wider group of collaborators. A virtuous cycle had begun: Each new feature enabled programmers to create other new features more easily.

Protocol development flourished. The TCP and IP protocols replaced and greatly enhanced the host-to-host protocol and laid the foundation for the Internet. The RFC process led to the adoption of the Domain Name System (DNS, RFC 1035, issued in 1987), the Simple Network Management Protocol (SNMP, RFC 1157, 1990), and the Hypertext Transfer Protocol (HTTP, RFC 1945, 1996).

In time, the development process evolved along with the technology and the growing importance of the Internet in international communication and commerce. In 1979, Vint Cerf, by then a program manager at DARPA, created the Internet Configuration Control Board, which eventually spawned the Internet Engineering Task Force. That task force continues the work that was originally done by the Network Working Group. Its members still discuss problems facing the network, modifications that might be necessary to existing protocols, and new protocols that may be of value. And they still publish protocol specifications as documents with the label “Request for Comments.”

And the core idea of continual improvement by consensus among a coalition of the willing still lives strong in Internet culture. Ideas for new protocols and changes to protocols are now circulated via email lists devoted to specific protocol topics, known as working groups. There are now about a hundred of these groups. When they meet at the triannual conferences, the organizers still don’t take votes: They ask participants to hum if they agree with an idea, then take the sense of the room. Formal decisions follow a subsequent exchange over email.

Drafts of protocol specifications are circulated as “Internet-Drafts,” which are intended for discussion leading to an RFC. A recently started discussion on new network software to enable quantum Internet communication, for example, is recorded in an RFC-like Internet-Draft.

And in an ironic twist, the specification for this or any other new protocol will appear in a Request for Comments only after it has been approved for formal adoption and published. At that point, comments are no longer actually requested.

This article appears in the August 2020 print issue as “The Consensus Protocol.”

About the Author

Steve Crocker is the chief architect of the ARPANET’s Request for Comments (RFC) process and one of the founding members of the Network Working Group, the forerunner of the Internet Engineering Task Force. For many years, he was a board member of ICANN, serving as its vice chairman and then its chairman until 2017.