
The History of the URL

Post Syndicated from Zack Bloom original https://blog.cloudflare.com/the-history-of-the-url/


On the 11th of January 1982 twenty-two computer scientists met to discuss an issue with ‘computer mail’ (now known as email). Attendees included the guy who would create Sun Microsystems, the guy who made Zork, the NTP guy, and the guy who convinced the government to pay for Unix. The problem was simple: there were 455 hosts on the ARPANET and the situation was getting out of control.


This issue was occurring now because the ARPANET was on the verge of switching from its original NCP protocol to the TCP/IP protocol which powers what we now call the Internet. With that switch there would suddenly be a multitude of interconnected networks (an ‘Inter… net’), requiring a more ‘hierarchical’ domain system where the ARPANET could resolve its own domains while the other networks resolved theirs.

Other networks at the time had great names like “COMSAT”, “CHAOSNET”, “UCLNET” and “INTELPOSTNET” and were maintained by groups of universities and companies all around the US who wanted to be able to communicate, and could afford to lease 56k lines from the phone company and buy the requisite PDP-11s to handle routing.


In the original ARPANET design, a central Network Information Center (NIC) was responsible for maintaining a file listing every host on the network. The file was known as the HOSTS.TXT file, similar to the /etc/hosts file on a Linux or OS X system today. Every network change required every host to fetch the updated file from the NIC via FTP (a protocol invented in 1971), a significant load on the NIC’s infrastructure.

Having a single file list every host on the Internet would, of course, not scale indefinitely. The priority was email, however, as it was the predominant addressing challenge of the day. Their ultimate conclusion was to create a hierarchical system in which you could query an external system for just the domain or set of domains you needed. In their words: “The conclusion in this area was that the current ‘user@host’ mailbox identifier should be extended to ‘user@host.domain’ where ‘domain’ could be a hierarchy of domains.” And the domain was born.


It’s important to dispel any illusion that these decisions were made with prescience for the future the domain name would have. In fact, their chosen solution was primarily selected because it was the “one causing least difficulty for existing systems.” For example, one proposal was for email addresses to be of the form <user>.<host>@<domain>. If email usernames of the day hadn’t already contained ‘.’ characters, you might be emailing me at ‘zack.cloudflare@com’ today.


UUCP and the Bang Path

It has been said that the principal function of an operating system is to define a number of different names for the same object, so that it can busy itself keeping track of the relationship between all of the different names. Network protocols seem to have somewhat the same characteristic.

— David D. Clark, 1982

Another failed proposal involved separating domain components with the exclamation mark (!). For example, to connect to the ISIA host on ARPANET, you would connect to !ARPA!ISIA. You could then query for hosts using wildcards, so !ARPA!* would return to you every ARPANET host.

This method of addressing wasn’t a crazy divergence from the standard; it was an attempt to maintain it. The system of exclamation-separated hosts dates to a data transfer tool called UUCP, created in 1976. If you’re reading this on an OS X or Linux computer, uucp is likely still installed and available at the terminal.

ARPANET was introduced in 1969, and quickly became a powerful communication tool… among the handful of universities and government institutions which had access to it. The Internet as we know it wouldn’t become publicly available outside of research institutions until 1991, twenty-two years later. But that didn’t mean computer users weren’t communicating.

The History of the URL

In the era before the Internet, the general method of communication between computers was a direct point-to-point dial-up connection. For example, if you wanted to send me a file, you would have your modem call my modem, and we would transfer the file. To craft this into a network of sorts, UUCP was born.

In this system, each computer has a file which lists the hosts it’s aware of, their phone numbers, and a username and password on each host. You then craft a ‘path’ from your current machine to your destination, through hosts which each know how to connect to the next:



This address would form not just a method of sending me files or connecting with my computer directly, but also would be my email address. In this era before ‘mail servers’, if my computer was off you weren’t sending me an email.
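The hop-by-hop routing a bang path describes can be sketched in a few lines of Python; the host and user names here are made up for illustration:

```python
# Sketch of UUCP-style "bang path" routing: no host knows the whole
# network, so a route is spelled out hop by hop. Host names hypothetical.

def next_hop(bang_path: str) -> tuple[str, str]:
    """Split a bang path into the next host to dial and the remaining path."""
    first, _, rest = bang_path.partition("!")
    return first, rest

# To reach user 'zack' on host 'gamma', we route through 'alpha' and
# 'beta', each of which dials the next machine in the list.
path = "alpha!beta!gamma!zack"

hops = []
remaining = path
while "!" in remaining:
    hop, remaining = next_hop(remaining)
    hops.append(hop)

print(hops)       # the machines dialed, in order
print(remaining)  # the user on the final host
```

Each intermediate machine simply strips its own name off the front and dials the next, which is why senders had to know the full route in advance.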

While use of ARPANET was restricted to top-tier universities, UUCP created a bootleg Internet for the rest of us. It formed the basis for both Usenet and the BBS system.


Ultimately, the DNS system we still use today would be proposed in 1983. If you run a DNS query today, for example using the dig tool, you’ll likely see a response which looks like this:

google.com.   299 IN  A   216.58.192.206

This is informing us that google.com is reachable at 216.58.192.206. As you might know, the A is informing us that this is an ‘address’ record, mapping a domain to an IPv4 address. The 299 is the ‘time to live’, letting us know how many more seconds this value will be valid before it should be queried again. But what does the IN mean?

IN stands for ‘Internet’. Like so much of this, the field dates back to an era when there were several competing computer networks which needed to interoperate. Other potential values were CH for CHAOSNET and HS for Hesiod, the name service of MIT’s Project Athena. CHAOSNET is long dead, but a much-evolved version of Athena is still used by students at MIT to this day. You can find the list of DNS classes on the IANA website, but it’s no surprise only one potential value is in common use today.


It is extremely unlikely that any other TLDs will be created.

— John Postel, 1994

Once it was decided that domain names should be arranged hierarchically, it became necessary to decide what sits at the root of that hierarchy. That root is traditionally signified with a single ‘.’. In fact, ending all of your domain names with a ‘.’ is semantically correct, and will absolutely work in your web browser: google.com.

The first TLD was .arpa. It allowed users to address their old traditional ARPANET hostnames during the transition. For example, if my machine was previously registered as hfnet, my new address would be hfnet.arpa. That was only temporary. During the transition, server administrators had a very important choice to make: which of the five TLDs would they adopt? “.com”, “.gov”, “.org”, “.edu” or “.mil”.

When we say DNS is hierarchical, what we mean is there is a set of root DNS servers which are responsible for, for example, directing queries for .com to the .com nameservers, which will in turn answer how to get to google.com. The root DNS zone of the Internet is composed of thirteen DNS server clusters. There are only 13 server clusters because that’s all we can fit in a single UDP packet. Historically, DNS has operated through UDP packets, meaning the response to a request can never be more than 512 bytes.
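The 512-byte constraint is easier to appreciate when you see how compact the wire format is. Here is a sketch (not a full resolver) of building a DNS query packet by hand:

```python
import struct

# A minimal sketch of the DNS wire format: a 12-byte header, then the
# question: the domain as length-prefixed labels, followed by a 16-bit
# record type (1 = A) and class (1 = IN).

def encode_qname(domain: str) -> bytes:
    """Encode a domain as DNS labels: length byte, label bytes, repeated."""
    out = b""
    for label in domain.split("."):
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"  # the zero-length root label terminates the name

def build_query(domain: str, query_id: int = 0x1234) -> bytes:
    # header: id, flags (0x0100 = recursion desired), 1 question,
    # 0 answers, 0 authority records, 0 additional records
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    question = encode_qname(domain) + struct.pack(">HH", 1, 1)  # type A, class IN
    return header + question

packet = build_query("google.com")
print(encode_qname("google.com"))  # b'\x06google\x03com\x00'
print(len(packet))                 # 28 bytes, well under the 512-byte limit
```

With a query this small, the constraint is really on the response, which must squeeze the names and addresses of every root server into the same budget.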

;       This file holds the information on root name servers needed to
;       initialize cache of Internet domain name servers
;       (e.g. reference this file in the "cache  .  <file>"
;       configuration file of BIND domain name servers).
;       This file is made available by InterNIC
;       under anonymous FTP as
;           file                /domain/named.cache
;           on server           FTP.INTERNIC.NET
;       -OR-                    RS.INTERNIC.NET
;       last update:    March 23, 2016
;       related version of root zone:   2016032301
; formerly NS.INTERNIC.NET
.                        3600000      NS    A.ROOT-SERVERS.NET.
A.ROOT-SERVERS.NET.      3600000      A     198.41.0.4
A.ROOT-SERVERS.NET.      3600000      AAAA  2001:503:ba3e::2:30
.                        3600000      NS    B.ROOT-SERVERS.NET.
B.ROOT-SERVERS.NET.      3600000      A     192.228.79.201
B.ROOT-SERVERS.NET.      3600000      AAAA  2001:500:84::b
.                        3600000      NS    C.ROOT-SERVERS.NET.
C.ROOT-SERVERS.NET.      3600000      A     192.33.4.12
C.ROOT-SERVERS.NET.      3600000      AAAA  2001:500:2::c
.                        3600000      NS    D.ROOT-SERVERS.NET.
D.ROOT-SERVERS.NET.      3600000      A     199.7.91.13
D.ROOT-SERVERS.NET.      3600000      AAAA  2001:500:2d::d
.                        3600000      NS    E.ROOT-SERVERS.NET.
E.ROOT-SERVERS.NET.      3600000      A     192.203.230.10
.                        3600000      NS    F.ROOT-SERVERS.NET.
F.ROOT-SERVERS.NET.      3600000      A     192.5.5.241
F.ROOT-SERVERS.NET.      3600000      AAAA  2001:500:2f::f
.                        3600000      NS    G.ROOT-SERVERS.NET.
G.ROOT-SERVERS.NET.      3600000      A     192.112.36.4
.                        3600000      NS    H.ROOT-SERVERS.NET.
H.ROOT-SERVERS.NET.      3600000      A     198.97.190.53
H.ROOT-SERVERS.NET.      3600000      AAAA  2001:500:1::53
.                        3600000      NS    I.ROOT-SERVERS.NET.
I.ROOT-SERVERS.NET.      3600000      A     192.36.148.17
I.ROOT-SERVERS.NET.      3600000      AAAA  2001:7fe::53
.                        3600000      NS    J.ROOT-SERVERS.NET.
J.ROOT-SERVERS.NET.      3600000      A     192.58.128.30
J.ROOT-SERVERS.NET.      3600000      AAAA  2001:503:c27::2:30
.                        3600000      NS    K.ROOT-SERVERS.NET.
K.ROOT-SERVERS.NET.      3600000      A     193.0.14.129
K.ROOT-SERVERS.NET.      3600000      AAAA  2001:7fd::1
.                        3600000      NS    L.ROOT-SERVERS.NET.
L.ROOT-SERVERS.NET.      3600000      A     199.7.83.42
L.ROOT-SERVERS.NET.      3600000      AAAA  2001:500:9f::42
.                        3600000      NS    M.ROOT-SERVERS.NET.
M.ROOT-SERVERS.NET.      3600000      A     202.12.27.33
M.ROOT-SERVERS.NET.      3600000      AAAA  2001:dc3::35
; End of file

Root DNS servers operate in safes, inside locked cages. A clock sits on the safe to ensure the camera feed hasn’t been looped. Particularly given how slow DNSSEC implementation has been, an attack on one of those servers could allow an attacker to redirect all of the Internet traffic for a portion of Internet users. This, of course, makes for the most fantastic heist movie to have never been made.

Unsurprisingly, the nameservers for TLDs don’t actually change all that often. 98% of the requests root DNS servers receive are in error, most often because of broken and toy clients which don’t properly cache their results. This became such a problem that several root DNS operators had to spin up special servers just to return ‘go away’ to all the people asking for reverse DNS lookups on their local IP addresses.

The TLD nameservers are administered by different companies and governments all around the world (Verisign manages .com). When you purchase a .com domain, about $0.18 goes to ICANN and $7.85 goes to Verisign.


It is rare in this world that the silly name we developers think up for a new project makes it into the final, public product. We might name the company database Delaware (because that’s where all the companies are registered), but you can be sure that by the time it hits production it will be CompanyMetadataDatastore. But rarely, when all the stars align and the boss is on vacation, one slips through the cracks.

Punycode is the system we use to encode Unicode into domain names. The problem it solves is simple: how do you write 比薩.com when the entire Internet naming system was built around the ASCII alphabet, whose most foreign character is the tilde?

It’s not a simple matter of switching domains to use Unicode. The original documents which govern domains specify that they are to be encoded in ASCII. Every piece of Internet hardware from the last forty years, including the Cisco and Juniper routers used to deliver this page to you, makes that assumption.

The web itself was never ASCII-only. It was actually originally conceived to speak ISO 8859-1, which includes all of the ASCII characters but adds an additional set of special characters like ¼ and letters with special marks like ä. It does not, however, contain any non-Latin characters.

This restriction on HTML was ultimately removed in 2007 and that same year Unicode became the most popular character set on the web. But domains were still confined to ASCII.


As you might guess, Punycode was not the first proposal to solve this problem. You most likely have heard of UTF-8, which is a popular way of encoding Unicode into bytes (the 8 is for the eight bits in a byte). In the year 2000 several members of the Internet Engineering Task Force came up with UTF-5. The idea was to encode Unicode into five bit chunks. You could then map each five bits into a character allowed (A-V & 0-9) in domain names. So if I had a website for Japanese language learning, my site 日本語.com would become the cryptic M5E5M72COA9E.com.

This encoding method has several disadvantages. For one, A-V and 0-9 are used in the output encoding, meaning if you wanted to actually include one of those characters in your domain, it had to be encoded like everything else. This made for some very long domains, which is a serious problem when each segment of a domain is restricted to 63 characters. A domain in the Myanmar language would be restricted to no more than 15 characters. The proposal does make the very interesting suggestion of using UTF-5 to allow Unicode to be transmitted by Morse code and telegram, though.
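For the curious, the scheme is easy to reproduce. The sketch below follows the nibble layout that yields the example above: each code point is written as hex nibbles, with the first nibble of each character shifted into the letters G–V so a decoder can tell where one character ends and the next begins (a layout inferred from the example, not quoted from the proposal itself):

```python
# Sketch of UTF-5: code points are written as hex nibbles; the first
# nibble of each character is offset into G-V (value + 16) to mark
# character boundaries, and the rest stay as hex digits 0-9A-F.

def utf5_encode(text: str) -> str:
    out = []
    for ch in text:
        nibbles = format(ord(ch), "X")                   # hex, no leading zeros
        out.append(chr(ord("G") + int(nibbles[0], 16)))  # first nibble: G-V
        out.append(nibbles[1:])                          # remaining: 0-9A-F
    return "".join(out)

print(utf5_encode("日本語"))  # M5E5M72COA9E
```

Note the combined output alphabet is exactly the A-V and 0-9 the proposal allows, which is why those characters collide with plain ASCII in a domain.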

There was also the question of how to let clients know that a domain was encoded, so they could display it using the appropriate Unicode characters rather than showing M5E5M72COA9E.com in my address bar. There were several suggestions, one of which was to use an unused bit in the DNS response. It was the “last unused bit in the header”, however, and the DNS folks were “very hesitant to give it up”.

Another suggestion was to start every domain using this encoding method with ra--. At the time (mid-April 2000), there were no domains which happened to start with those particular characters. If I know anything about the Internet, someone registered an ra-- domain out of spite immediately after the proposal was published.

The ultimate conclusion, reached in 2003, was to adopt a format called Punycode which included a form of delta compression which could dramatically shorten encoded domain names. Delta compression is a particularly good idea because the odds are all of the characters in your domain are in the same general area within Unicode. For example, two characters in Farsi are going to be much closer together than a Farsi character and a Hindi one. To give an example of how this works, take a nonsense phrase composed of the characters with the Unicode code points 1610, 1584 and 1597:


In an uncompressed format, that would be stored as the three code points [1610, 1584, 1597]. To compress this we first sort them numerically (keeping track of where the original characters were): [1584, 1597, 1610]. Then we can store the lowest value (1584), the delta between that value and the next character (13), and again for the following character (13), which is significantly less to transmit and store.
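The sort-and-delta step can be shown directly with the same three code points:

```python
# The delta-compression step described above, using the same code points.
code_points = [1610, 1584, 1597]

ordered = sorted(code_points)   # [1584, 1597, 1610]
base = ordered[0]               # store the lowest value once
deltas = [b - a for a, b in zip(ordered, ordered[1:])]

print(base, deltas)  # 1584 [13, 13]
```

The small deltas are the whole point: characters from the same script cluster together in Unicode, so the differences are cheap to encode even when the code points themselves are large.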

Punycode then (very) efficiently encodes those integers into characters allowed in domain names, and inserts an xn-- at the beginning to let consumers know this is an encoded domain. You’ll notice that all the Unicode characters end up together at the end of the domain. They don’t just encode their value; they also encode where they should be inserted into the ASCII portion of the domain. To provide an example, the website 熱狗sales.com becomes xn--sales-r65lm0e.com. Anytime you type a Unicode-based domain name into your browser’s address bar, it is encoded in this way.
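Python happens to ship both a raw punycode codec and an idna codec (which adds the xn-- prefix and handles whole labels), so the example above can be reproduced directly:

```python
# Reproducing the example from the text with Python's built-in codecs.
label = "熱狗sales"

# Raw Punycode: the ASCII part, a delimiter, then the encoded part.
print(label.encode("punycode"))  # b'sales-r65lm0e'

# The idna codec wraps Punycode and adds the xn-- prefix.
print(label.encode("idna"))      # b'xn--sales-r65lm0e'

# Decoding reverses the transformation:
print(b"xn--sales-r65lm0e".decode("idna"))  # 熱狗sales
```

This round trip is exactly what the browser performs on the address you type before issuing a DNS query.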

This transformation could be transparent, but that introduces a major security problem. All sorts of Unicode characters print identically to existing ASCII characters. For example, you likely can’t see the difference between Cyrillic small letter a (“а”) and Latin small letter a (“a”). If I register Cyrillic аmazon.com (xn--mazon-3ve.com), and manage to trick you into visiting it, it’s gonna be hard to know you’re on the wrong site. For that reason, when you visit 🍕💩.ws, your browser somewhat lamely shows you xn--vi8hiv.ws in the address bar.


The first portion of the URL is the protocol which should be used to access it. The most common protocol is http, which is the simple document transfer protocol Tim Berners-Lee invented specifically to power the web. It was not the only option. Some people believed we should just use Gopher. Rather than being general-purpose, Gopher is specifically designed to send structured data similar to how a file tree is structured.

For example, if you request the /Cars endpoint, it might return:

1Chevy Camaro             /Archives/cars/cc     gopher.cars.com     70
iThe Camaro is a classic  fake                  (NULL)              0
iAmerican Muscle car      fake                  (NULL)              0
1Ferrari 451              /Factbook/ferrari/451  gopher.ferrari.net 70

which identifies two cars, along with some metadata about them and where you can connect to for more information. The understanding was your client would parse this information into a usable form which linked the entries with the destination pages.


The first popular protocol was FTP, which was created in 1971, as a way of listing and downloading files on remote computers. Gopher was a logical extension of this, in that it provided a similar listing, but included facilities for also reading the metadata about entries. This meant it could be used for more liberal purposes like a news feed or a simple database. It did not have, however, the freedom and simplicity which characterizes HTTP and HTML.

HTTP is a very simple protocol, particularly when compared to alternatives like FTP or even the HTTP/3 protocol which is rising in popularity today. First off, HTTP is entirely text-based, rather than being composed of bespoke binary incantations (which would have made it significantly more efficient). Tim Berners-Lee correctly intuited that using a text-based format would make it easier for generations of programmers to develop and debug HTTP-based applications.

HTTP also makes almost no assumptions about what you’re transmitting. Despite the fact that it was invented explicitly to accompany the HTML language, it allows you to specify that your content is of any type (using the MIME Content-Type, which was a new invention at the time). The protocol itself is rather simple:

A request:

GET /index.html HTTP/1.1
Host: www.example.com

Might respond:

HTTP/1.1 200 OK
Date: Mon, 23 May 2005 22:38:34 GMT
Content-Type: text/html; charset=UTF-8
Content-Encoding: UTF-8
Content-Length: 138
Last-Modified: Wed, 08 Jan 2003 23:11:55 GMT
Server: Apache/ (Unix) (Red-Hat/Linux)
ETag: "3f80f-1b6-3e1cb03b"
Accept-Ranges: bytes
Connection: close

<html>
  <head>
    <title>An Example Page</title>
  </head>
  <body>
    Hello World, this is a very simple HTML document.
  </body>
</html>

To put this in context, you can think of the networking system the Internet uses as starting with IP, the Internet Protocol. IP is responsible for getting a small packet of data (around 1500 bytes) from one computer to another. On top of that we have TCP, which is responsible for taking larger blocks of data like entire documents and files and sending them via many IP packets reliably. On top of that, we then implement a protocol like HTTP or FTP, which specifies what format should be used to make the data we send via TCP (or UDP, etc.) understandable and meaningful.

In other words, TCP/IP sends a whole bunch of bytes to another computer; the protocol defines what those bytes should be and what they mean.

You can make your own protocol if you like, assembling the bytes in your TCP messages however you like. The only requirement is that whoever you are talking to speaks the same language. For this reason, it’s common to standardize these protocols.
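As a sketch of that idea, here is a toy exchange over a local TCP socket: the server speaks just enough HTTP to answer a single request. The request path and response body are made up for illustration:

```python
import socket
import threading

# TCP just moves bytes; the "protocol" is whatever format both ends
# agree on. Here a toy server speaks a minimal subset of HTTP.

def serve_once(server_sock):
    """Accept one connection, answer one HTTP-ish request, and exit."""
    conn, _ = server_sock.accept()
    request = conn.recv(1024)  # raw bytes off the wire; HTTP gives them meaning
    if request.startswith(b"GET "):
        body = b"Hello World"
        conn.sendall(
            b"HTTP/1.1 200 OK\r\n"
            + b"Content-Length: " + str(len(body)).encode() + b"\r\n"
            + b"\r\n"
            + body
        )
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,)).start()

client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"GET /index.html HTTP/1.1\r\nHost: localhost\r\n\r\n")
response = client.recv(1024)
client.close()
server.close()

print(response.split(b"\r\n")[0])  # b'HTTP/1.1 200 OK'
```

Swap the text format for any byte layout you like and both ends will still happily talk, which is exactly why standardizing on one protocol mattered.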

There are, of course, many less important protocols to play with. For example, there is a Quote of the Day protocol (port 17) and a Character Generator protocol (port 19). They may seem silly today, but they also showcase just how important a general-purpose document transmission format like HTTP was.


The relative ages of Gopher and HTTP are evident in their default port numbers: Gopher is 70, HTTP is 80. The HTTP port was assigned (likely by Jon Postel at the IANA) at the request of Tim Berners-Lee sometime between 1990 and 1992.

This concept of registering ‘port numbers’ predates even the Internet. In the original NCP protocol which powered the ARPANET, remote addresses were identified by 40 bits. The first 32 identified the remote host, similar to how an IP address works today. The last eight were known as the AEN (it stood for “Another Eight-bit Number”), and were used by the remote machine the way we use a port number: to separate messages destined for different processes. In other words, the address specifies which machine the message should go to, and the AEN (or port number) tells that remote machine which application should get the message.

Users were quickly asked to register these ‘socket numbers’ to limit potential collisions. When port numbers were expanded to 16 bits by TCP/IP, that registration process continued.

While protocols have a default port, it makes sense to allow ports to also be specified manually to allow for local development and the hosting of multiple services on the same machine. That same logic was the basis for prefixing websites with www.. At the time, it was unlikely anyone was getting access to the root of their domain, just for hosting an ‘experimental’ website. But if you give users the hostname of your specific machine (dx3.cern.ch), you’re in trouble when you need to replace that machine. By using a common subdomain (www.cern.ch) you can change what it points to as needed.

The Bit In-between

As you probably know, the URL syntax places a double slash (//) between the protocol and the rest of the URL:


That double slash was inherited from the Apollo computer system which was one of the first networked workstations. The Apollo team had a similar problem to Tim Berners-Lee: they needed a way to separate a path from the machine that path is on. Their solution was to create a special path format:


And TBL copied that scheme. Incidentally, he now regrets that decision, wishing the domain (in this case example.com) was the first portion of the path:


URLs were never intended to be what they’ve become: an arcane way for a user to identify a site on the Web. Unfortunately, we’ve never been able to standardize URNs, which would give us a more useful naming system. Arguing that the current URL system is sufficient is like praising the DOS command line, and stating that most people should simply learn to use command line syntax. The reason we have windowing systems is to make computers easier to use, and more widely used. The same thinking should lead us to a superior way of locating specific sites on the Web.

— Dale Dougherty 1996

There are several different ways to understand the ‘Internet’. One is as a system of computers connected using a computer network. That version of the Internet came into being in 1969 with the creation of the ARPANET. Mail, files and chat all moved over that network before the creation of HTTP, HTML, or the ‘web browser’.

In 1992 Tim Berners-Lee created three things, giving birth to what we consider the Internet: the HTTP protocol, HTML, and the URL. His goal was to bring ‘Hypertext’ to life. Hypertext at its simplest is the ability to create documents which link to one another. At the time it was viewed more as a science-fiction panacea, to be complemented by Hypermedia and any other word you could add ‘Hyper’ in front of.

The key requirement of Hypertext was the ability to link from one document to another. In TBL’s time though, these documents were hosted in a multitude of formats and accessed through protocols like Gopher and FTP. He needed a consistent way to refer to a file which encoded its protocol, its host on the Internet, and where it existed on that host.

At the original World-Wide Web presentation in March of 1992, TBL described it as a ‘Universal Document Identifier’ (UDI). Many different formats were considered for this identifier:

protocol: aftp host: xxx.yyy.edu path: /pub/doc/README
PR=aftp; H=xx.yy.edu; PA=/pub/doc/README;

This document also explains why spaces must be encoded in URLs (%20):

The use of white space characters has been avoided in UDIs: spaces are not legal characters. This was done because of the frequent introduction of extraneous white space when lines are wrapped by systems such as mail, or sheer necessity of narrow column width, and because of the inter-conversion of various forms of white space which occurs during character code conversion and the transfer of text between applications.

What’s most important to understand is that the URL was fundamentally just an abbreviated way of referring to the combination of scheme, domain, port, credentials and path which previously had to be understood contextually for each different communication system.

It was first officially defined in an RFC published in 1994.

scheme:[//[user[:password]@]host[:port]][/]path[?query][#fragment]
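This generic syntax maps directly onto the standard library’s URL parser; the URL below is a made-up example exercising every component:

```python
from urllib.parse import urlsplit

# A hypothetical URL containing every component of the generic syntax.
url = "https://user:pass@example.com:8080/path/page?q=1#section"
parts = urlsplit(url)

print(parts.scheme)    # https
print(parts.username)  # user
print(parts.password)  # pass
print(parts.hostname)  # example.com
print(parts.port)      # 8080
print(parts.path)      # /path/page
print(parts.query)     # q=1
print(parts.fragment)  # section
```

Each field corresponds to one bracketed piece of the syntax above, which is why parsers can split a URL without knowing anything about the scheme in use.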

This system made it possible to refer to different systems from within Hypertext, but now that virtually all content is hosted over HTTP, it may not be as necessary anymore. As early as 1996, browsers were already inserting the http:// and www. for users automatically (rendering any advertisement which still contains them truly ridiculous).


I do not think the question is whether people can learn the meaning of the URL, I just find it morally abhorrent to force grandma or grandpa to understand what, in the end, are UNIX file system conventions.

— Israel del Rio 1996

The slash-separated path component of a URL should be familiar to any user of any computer built in the last fifty years. The hierarchical filesystem itself was introduced by the MULTICS system. Its creator, in turn, attributes it to a two-hour conversation he had with Albert Einstein in 1952.

MULTICS used the greater-than symbol (>) to separate file path components. For example:


That was perfectly logical, but unfortunately the Unix folks decided to use > to represent redirection, leaving path separation to the forward slash (/).

Snapchat the Supreme Court

Wrong. We are I now see clearly *disagreeing*. You and I.

As a person I reserve the right to use different criteria for different purposes. I want to be able to give names to generic works, AND to particular translations AND to particular versions. I want a richer world than you propose. I don’t want to be constrained by your two-level system of “documents” and “variants”.

— Tim Berners-Lee 1993

One half of the URLs referenced by US Supreme Court opinions point to pages which no longer exist. If you were reading an academic paper in 2011, written in 2001, you have better than even odds that any given URL won’t be valid.

There was a fervent belief in 1993 that the URL would die, in favor of the ‘URN’. The Uniform Resource Name is a permanent reference to a given piece of content which, unlike a URL, will never change or break. Tim Berners-Lee first described the “urgent need” for them as early as 1991.

The simplest way to craft a URN might be to simply use a cryptographic hash of the contents of the page, for example: urn:791f0de3cfffc6ec7a0aacda2b147839. This method doesn’t meet the needs of the web community though, as there’s no way to figure out who to ask to turn that hash into a piece of real content. It also doesn’t account for the format changes which often happen to files (compressed vs. uncompressed, for example) which nevertheless represent the same content.
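The hash-based idea is easy to sketch. The content and the urn: prefix format below are illustrative, not a registered URN namespace:

```python
import hashlib
import zlib

# Deriving a permanent name from the content itself, per the idea above.
content = b"Hello World, this is a very simple HTML document."
urn = "urn:" + hashlib.md5(content).hexdigest()
print(urn)

# The weakness noted above: a different representation of the "same"
# content (here, a compressed copy) yields a completely different name.
compressed_urn = "urn:" + hashlib.md5(zlib.compress(content)).hexdigest()
print(compressed_urn)
```

The name is stable for identical bytes, but any re-encoding of those bytes breaks the link between name and content, which is exactly the objection raised at the time.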


In 1996 Keith Shafer and several others proposed a solution to the problem of broken URLs. The link to this solution is now broken. Roy Fielding posted an implementation suggestion in July of 1995. That link is now broken.

I was able to find these pages through Google, which has functionally made page titles the URN of today. The URN format was ultimately finalized in 1997, and has essentially never been used since. The implementation is itself interesting. Each URN is composed of two components: an authority who can resolve a given type of URN, and the specific ID of this document in whichever format the authority understands. For example, urn:isbn:0131103628 will identify a book, forming a permanent link which can (hopefully) be turned into a set of URLs by your local ISBN resolver.

Given the power of search engines, it’s possible the best URN format today would be a simple way for files to point to their former URLs. We could allow the search engines to index this information, and link us as appropriate:

<!-- On http://zack.is/history -->
<link rel="past-url" href="http://zackbloom.com/history.html">
<link rel="past-url" href="http://zack.is/history.html">

Query Params

The “application/x-www-form-urlencoded” format is in many ways an aberrant monstrosity, the result of many years of implementation accidents and compromises leading to a set of requirements necessary for interoperability, but in no way representing good design practices.

— WhatWG URL Spec

If you’ve used the web for any period of time, you are familiar with query parameters. They follow the path portion of the URL, and encode options like ?name=zack&state=mi. It may seem odd to you that queries use the ampersand character (&) which is the same character used in HTML to encode special characters. In fact, if you’ve used HTML for any period of time, you likely have had to encode ampersands in URLs, turning http://host/?x=1&y=2 into http://host/?x=1&amp;y=2 or http://host?x=1&#38;y=2 (that particular confusion has always existed).
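The round trip between a dict of parameters and the ?name=zack&state=mi format, and the HTML-escaping collision, can both be seen with the standard library:

```python
import html
from urllib.parse import parse_qs, urlencode

# Encoding parameters into the familiar query-string format.
params = {"name": "zack", "state": "mi"}
query = urlencode(params)
print(query)            # name=zack&state=mi

# parse_qs reverses it; each key maps to a list, since keys may repeat.
print(parse_qs(query))  # {'name': ['zack'], 'state': ['mi']}

# Inside HTML, the & itself must be escaped, producing the &amp; confusion.
print(html.escape("http://host/?x=1&y=2"))  # http://host/?x=1&amp;y=2
```

The escaped form is what belongs in an HTML attribute; the unescaped form is what actually travels in the request, which is the source of that long-standing confusion.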

You may have also noticed that cookies follow a similar, but different format: x=1;y=2 which doesn’t actually conflict with HTML character encoding at all. This idea was not lost on the W3C, who encouraged implementers to support ; as well as & in query parameters as early as 1995.

Originally, this section of the URL was strictly used for searching ‘indexes’. The Web was originally created (and its funding was predicated on it being) a method of collaboration for high-energy physicists. This is not to say Tim Berners-Lee didn’t know he was really creating a general-purpose communication tool. He didn’t add support for tables for years, which is probably something physicists would have needed.

In any case, these ‘physicists’ needed a way of encoding and linking to information, and a way of searching that information. To provide that, Tim Berners-Lee created the <ISINDEX> tag. If <ISINDEX> appeared on a page, it would inform the browser that this is a page which can be searched. The browser should show a search field, and allow the user to send a query to the server.

That query was formatted as keywords separated by plus characters (+):


In fantastic Internet fashion, this tag was quickly abused to do all manner of things including providing an input to calculate square roots. It was quickly proposed that perhaps this was too specific, and we really needed a general purpose <input> tag.

That particular proposal actually uses plus signs to separate the components of what otherwise looks like a modern GET query:


This was far from universally acclaimed. Some believed we needed a way of saying that the content on the other side of links should be searchable:

<a HREF="wais://quake.think.com/INFO" INDEX=1>search</a>

Tim Berners-Lee thought we should have a way of defining strongly-typed queries:

<ISINDEX TYPE="iana:/www/classes/query/personalinfo">

I can be somewhat confident in saying, in retrospect, I am glad the more generic solution won out.

The real work on <INPUT> began in January of 1993, based on an older SGML type. It was (perhaps unfortunately) decided that <SELECT> inputs needed a separate, richer structure:

<select name=FIELDNAME type=CHOICETYPE [value=VALUE] [help=HELPUDI]> 
    <choice>item 1
    <choice>item 2
    <choice>item 3

If you’re curious, reusing <li>, rather than introducing the <option> element, was absolutely considered. There were, of course, alternative form proposals. One included some variable substitution evocative of what Angular might do today:

<QUESTION TYPE=float DEFAULT=default VAR=lval>Prompt</QUESTION>
<CHOICE DEFAULT=default VAR=lval>
    <ALTERNATIVE VAL=value1>Prompt1 ...
    <ALTERNATIVE VAL=valuen>Promptn

In this example the inputs are checked against the type specified in type, and the VAR values are available on the page for use in string substitution in URLs, à la:


Additional proposals actually used @, rather than =, to separate query components:

[email protected][email protected](value&value)

It was Marc Andreessen who suggested our current method based on what he had already implemented in Mosaic:


Just two months later Mosaic would add support for method=POST forms, and ‘modern’ HTML forms were born.

Of course, it was also Marc Andreessen’s company Netscape who would create the cookie format (using a different separator). Their proposal was itself painfully shortsighted, led to the attempt to introduce a Set-Cookie2 header, and introduced fundamental structural issues we still deal with at Cloudflare to this day.


The portion of the URL following the ‘#’ is known as the fragment. Fragments have been a part of URLs since their initial specification, used to link to a specific location within the page being loaded. For example, if I have an anchor on my site:

<a name="bio"></a>

I can link to it:


This concept was gradually extended to any element (rather than just anchors), and moved to the id attribute rather than name:

<h1 id="bio">Bio</h1>

Tim Berners-Lee decided to use this character based on its connection to addresses in the United States (despite the fact that he’s British by birth). In his words:

In a snail mail address in the US at least, it is common
to use the number sign for an apartment number or suite
number within a building. So 12 Acacia Av #12 means “The
building at 12 Acacia Av, and then within that the unit
known numbered 12”. It seemed to be a natural character
for the task. Now, http://www.example.com/foo#bar means
“Within resource http://www.example.com/foo, the
particular view of it known as bar”.

It turns out that the original Hypertext system, created by Douglas Engelbart, also used the ‘#’ character for the same purpose. This may be coincidental, or it could be a case of accidental “idea borrowing”.

Fragments are explicitly not included in HTTP requests, meaning they only live inside the browser. This concept proved very valuable when it came time to implement client-side navigation (before pushState was introduced). Fragments were also very valuable when it came time to think about how we can store state in URLs without actually sending it to the server. What could that mean? Let’s explore:

Molehills and Mountains

There is a whole standard, as yukky as SGML, on Electronic data Intercahnge [sic], meaning forms and form submission. I know no more except it looks like fortran backwards with no spaces.

— Tim Berners-Lee 1993

There is a popular perception that the internet standards bodies didn’t do much from the finalization of HTTP 1.1 and HTML 4.01 in 2002 to when HTML 5 really got on track. This period is also known (only by me) as the Dark Age of XHTML. The truth is though, the standardization folks were fantastically busy. They were just doing things which ultimately didn’t prove all that valuable.

One such effort was the Semantic Web. The dream was to create a Resource Description Framework (editorial note: run away from any team which seeks to create a framework), which would allow metadata about content to be universally expressed. For example, rather than creating a nice web page about my Corvette Stingray, I could make an RDF document describing its size, color, and the number of speeding tickets I had gotten while driving it.

This is, of course, in no way a bad idea. But the format was XML based, and there was a big chicken-and-egg problem between having the entire world documented, and having the browsers do anything useful with that documentation.

It did however provide a powerful environment for philosophical argument. One of the best such arguments lasted at least ten years, and was known by the masterful codename ‘httpRange-14’.

httpRange-14 sought to answer the fundamental question of what a URL is. Does a URL always refer to a document, or can it refer to anything? Can I have a URL which points to my car?

They didn’t attempt to answer that question in any satisfying manner. Instead they focused on how and when we can use 303 redirects to point users from links which aren’t documents to ones which are, and when we can use URL fragments (the bit after the ‘#’) to point users to linked data.

To the pragmatic mind of today, this might seem like a silly question. To many of us, you can use a URL for whatever you manage to use it for, and people will use your thing or they won’t. But the Semantic Web cares for nothing more than semantics, so it was on.

This particular topic was discussed on July 1st 2002, July 15th 2002, July 22nd 2002, July 29th 2002, September 16th 2002, and at least 20 other occasions through 2005. It was resolved by the great ‘httpRange-14 resolution’ of 2005, then reopened by complaints in 2007 and 2011 and a call for new solutions in 2012. The question was heavily discussed by the pedantic web group, which is very aptly named. The one thing which didn’t happen is all that much semantic data getting put on the web behind any sort of URL.


As you may know, you can include a username and password in URLs:

http://zack:shhhhhh@zack.is

The browser then encodes this authentication data into Base64, and sends it as a header:

Authorization: Basic emFjazpzaGhoaGho

The only reason for the Base64 encoding is to allow characters which might not be valid in a header; it provides no obscurity whatsoever to the username and password values.
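The round trip is easy to demonstrate. A quick sketch in Python (the credentials match the Base64 value in the header above):

```python
import base64

credentials = "zack:shhhhhh"  # username:password
token = base64.b64encode(credentials.encode()).decode()
print(token)  # emFjazpzaGhoaGho

# Anyone on the wire can reverse it just as easily:
print(base64.b64decode(token).decode())  # zack:shhhhhh
```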

Particularly over the pre-SSL Internet, this was very problematic. Anyone who could snoop on your connection could easily see your password. Many alternatives were proposed, including Kerberos, a security protocol widely used both then and now.

As with so many of these examples though, the simple basic auth proposal was easiest for browser manufacturers (Mosaic) to implement. This made it the first, and ultimately the only, solution until developers were given the tools to build their own authentication systems.

The Web Application

In the world of web applications, it can be a little odd to think of the basis for the web being the hyperlink. It is a method of linking one document to another, which was gradually augmented with styling, code execution, sessions, authentication, and ultimately became the social shared computing experience so many 70s researchers were trying (and failing) to create. Ultimately, the conclusion is just as true for any project or startup today as it was then: all that matters is adoption. If you can get people to use it, however slipshod it might be, they will help you craft it into what they need. The corollary is, of course, that if no one is using it, it doesn’t matter how technically sound it might be. There are countless tools into which millions of hours of work were poured which precisely no one uses today.

This was adapted from a post which originally appeared on the Eager blog. In 2016 Eager became Cloudflare Apps.


50 Years of The Internet. Work in Progress to a Better Internet

Post Syndicated from Martin J Levy original https://blog.cloudflare.com/50-years-of-the-internet-work-in-progress-to-a-better-internet/

50 Years of The Internet. Work in Progress to a Better Internet

It was fifty years ago when the very first network packet took flight from the Los Angeles campus at UCLA to the Stanford Research Institute (SRI) building in Menlo Park. Those two California sites kicked off the world of packet networking, of the Arpanet, and of the modern Internet as we use and know it today. Yet by the time the third packet had been transmitted that evening, the receiving computer at SRI had crashed. The “L” and “O” from the word “LOGIN” had been transmitted successfully in their packets; but that “G”, wrapped in its own packet, caused the death of that nascent packet network setup. Even today, software crashes; that’s a solid fact. But this particular crash was historic.

50 Years of The Internet. Work in Progress to a Better Internet
Courtesy of MIT Advanced Network Architecture Group 

So much has happened since that day (October 29th, to be exact) in 1969 that it’s an understatement to say “so much has happened”! It’s unlikely that any one blog article could capture the full history of packets from then to now. Here at Cloudflare we say we are helping build a “better Internet”, so it makes perfect sense for us to honor the history of the Arpanet and its successor, the Internet, by focusing on some of the other folks who have helped build a better Internet.

Leonard Kleinrock, Steve Crocker, and crew – those first packets

Nothing takes away from what happened that October day. The move from a circuit-based networking mindset to a packet-based network is momentous. The phrase net-heads vs bell-heads was born that day – and it’s still alive today! The basics of why the Internet became a permissionless innovation were instantly created the moment that first packet traversed that network fifty years ago.

50 Years of The Internet. Work in Progress to a Better Internet
Courtesy of UCLA

Professor Leonard (Len) Kleinrock continued to work on the very-basics of packet networking. The network used on that day expanded from two nodes to four nodes (in 1969, one IMP was delivered each month from BBN to various university sites) and created a network that spanned the USA from coast to coast and then beyond.

50 Years of The Internet. Work in Progress to a Better Internet
ARPANET logical map 1973 via Wikipedia 

In the 1973 map there’s a series of boxes marked TIP. These are a version of the IMP that was used to connect computer terminals along with computers (hosts) to the ARPANET. Every IMP and TIP was managed by Bolt, Beranek and Newman (BBN), based in Cambridge Mass. This is vastly different from today’s Internet where every network is operated autonomously.

By 1977 the ARPANET had grown further with links from the United States mainland to Hawaii plus links to Norway and the United Kingdom.

50 Years of The Internet. Work in Progress to a Better Internet
ARPANET logical map 1977 via Wikipedia

Focusing back to that day in 1969, Steve Crocker (who was a graduate student at UCLA at that time) headed up the development of the NCP software. The Network Control Program (later remembered as Network Control Protocol) provided the host to host transmission control software stack. Early versions of telnet and FTP ran atop NCP.

During this journey Len Kleinrock, Steve Crocker, and the other early packet pioneers have always been solid members of the Internet community and continue to contribute daily to a better Internet.

Steve Crocker and Bill Duvall have written a guest blog about that day fifty years ago. Please read it after you’ve finished reading this blog.

BTW: Today, on this 50th anniversary, UCLA is celebrating history via this symposium (see also https://samueli.ucla.edu/internet50/).

Their collective accomplishments are extensive and still relevant today.

Vint Cerf and Bob Kahn – the creation of TCP/IP

In 1973 Vint Cerf was asked to work on a protocol to replace the original NCP protocol. The new protocol is now known as TCP/IP. Of course, everyone had to move from NCP to TCP and that was outlined in RFC801. At the time (1982 and 1983) there were around 200 to 250 hosts on the ARPANET, yet that transition was still a major undertaking.

Finally, on January 1st, 1983, fourteen years after that first packet flowed, the NCP protocol was retired and TCP/IP was enabled. The ARPANET got what would become the Internet’s first large scale addressing scheme (IPv4). This was better in so many ways; but in reality, this transition was just one more stepping stone towards our modern and better Internet.

Jon Postel – The RFCs, The numbers, The legacy

Some people write code, some people write documents, some people organize documents, some people organize numbers. Jon Postel did all of these things. Jon was the first person to be in charge of allocating numbers (you know – IP addresses) back in the early 80’s. In a way it was a thankless job that no one else wanted to do. Jon was also the keeper of the early documents (Requests For Comment, or RFCs) that define how the packet network should operate. Everything was available so that anyone could write code and join the network. Everyone was also able to write a fresh document (or update an existing one) so that the ecosystem of the Arpanet could grow. Some of those documents are still in existence and referenced today. RFC791 defines the IP protocol and is dated 1981 – it’s still an active document in use today! Those early days and Jon’s massive contributions have been well documented and acknowledged. A better Internet is impossible without these conceptual building blocks.

Jon passed away in 1998; however, his legacy and his thoughts are still in active use today. Of the TCP world, he once said: “Be conservative in what you send, be liberal in what you accept”. This is called the robustness principle, and it’s still key to writing good network protocol software.

Bill Joy & crew – Berkeley BSD Unix 4.2 and its TCP/IP software

What’s the use of a protocol if you don’t have software to speak it? In the early 80’s there were many efforts to build both affordable and fast hardware, along with the software to drive it. At the University of California, Berkeley (UCB), a group of software developers was tasked in 1980 by the Defense Advanced Research Projects Agency (DARPA) with implementing the brand-new TCP/IP protocol stack on the VAX under Unix. They not only solved that task; they went a long way beyond it.

The folks at UCB (Bill Joy, Marshall Kirk McKusick, Keith Bostic, Michael Karels, and others) created an operating system called 4.2BSD (Berkeley Software Distribution) that came with TCP/IP ingrained in its core. It was based on AT&T’s Unix v6 and Unix/32V; however, it had significantly deviated in many ways. The networking code, or sockets, as its interface is called, became the underlying building block of each and every piece of networking software in the modern world of the Internet. We at Cloudflare have written numerous times about networking kernel code, and it all boils down to the code that was written back at UCB. Bill Joy went on to be a founder of Sun Microsystems (which commercialized 4.2BSD and much more). Others from UCB went on to help build other companies that are still relevant to the Internet today.
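To show how directly that interface survives, here is a minimal loopback sketch in Python, whose socket module is a thin wrapper over the same C calls (socket, bind, listen, connect, accept, send, recv) that 4.2BSD introduced:

```python
import socket

# A listening socket and a client, both on the loopback interface.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
server.listen(1)
host, port = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))
conn, _ = server.accept()

client.sendall(b"hello")
data = conn.recv(1024)
print(data)  # b'hello'

for s in (client, conn, server):
    s.close()
```

Swap Python for C and the sequence of calls is unchanged, nearly forty years on.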

Fun fact: Berkeley’s Unix (or FreeBSD, OpenBSD, and NetBSD, as its variants are known) is now the basis of the software on every iPhone, iPad, and Mac laptop in existence. Androids and Chromebooks come from a different lineage, but still hold those BSD methodologies as the fundamental basis of all their networking software.

Al Gore – The Information Superhighway – or retold as “funding the Internet”

Do you believe that Al Gore invented the Internet? It actually doesn’t matter which side of this statement you want to argue; the simple fact is that the US Government funded the National Science Foundation (NSF) with the task of building an “information superhighway”. Al Gore himself said: “how do we create a nationwide network of information superhighways? Obviously, the private sector is going to do it, but the Federal government can catalyze and accelerate the process.” He said that on September 19, 1994, and this blog post’s author knows that fact because I was there in the room when he said it!

The United States Federal Government helped fund the growth of the Arpanet into the early Internet. Without the government’s efforts, we might not be where we are today. Luckily, just a handful of years later, the NSF decided that the commercial world could and should provide the main building blocks of the Internet, and the Internet as we know it today was born. Packets flying across commercial backbones are paid for via commercial contracts. The parts that are still funded by a government (any government) are normally only the parts used by universities or military users.

But this author is still going to thank Al Gore for helping create a better Internet back in the early 90’s.

Sir Tim Berners-Lee – The World Wide Web

What can I say? In 1989 Tim Berners-Lee (who was later knighted and is now Sir Tim) invented the World Wide Web and we would not have billions of people using the Internet today without him. Period!

50 Years of The Internet. Work in Progress to a Better Internet
via Reddit

50 Years of The Internet. Work in Progress to a Better Internet
via Reddit

Yeah, let’s clear up that subtle point. Sir Tim invented the World Wide Web (WWW) and Vint Cerf invented the Internet. When folks talk about using one or the other, it’s worth reminding them there is a difference. But I digress!

Sir Tim’s creation is what provides everyday folks with a window into information on the Internet. Before the WWW we had textual interfaces to information; but only if you knew where to look and what to type. Every time we click on a link or press submit to buy something, we should remember that the only reason it’s usable in such a mass and uniform form is Sir Tim’s creation.

Sally Floyd – The subtle art of dropping packets

Random Early Detection (RED) is an algorithm that saved the Internet back in the early 90’s. Built on earlier work by Van Jacobson, it defined a method for dropping packets when a router was overloaded, or more importantly, about to be overloaded. Packet networks, before Van Jacobson’s and Sally Floyd’s work, would congest heavily and slow down. It seemed natural never to throw away data; but between the two inventors of RED, that all changed. Her follow-up work is described in an August 1993 paper.
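The core of the idea fits in a few lines. Below is a simplified Python sketch of the drop decision from Floyd and Jacobson's 1993 paper; the parameter values are purely illustrative, and the real algorithm (see tc-red(8)) has more machinery:

```python
import random

def update_avg(avg, current_queue, weight=0.002):
    # Exponentially weighted moving average of the queue length,
    # so short bursts alone don't trigger drops.
    return (1 - weight) * avg + weight * current_queue

def red_should_drop(avg_queue, min_th=5, max_th=15, max_p=0.02):
    # Below min_th never drop; above max_th always drop; in between,
    # drop with a probability that rises linearly toward max_p.
    if avg_queue < min_th:
        return False
    if avg_queue >= max_th:
        return True
    p = max_p * (avg_queue - min_th) / (max_th - min_th)
    return random.random() < p
```

The crucial insight is dropping a few packets before the queue is actually full, which signals TCP senders to back off early rather than all overflowing the buffer at once.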

50 Years of The Internet. Work in Progress to a Better Internet

Networks have become much more complex since August 1993, yet the RED code still exists and is used in nearly every Unix or Linux kernel today. See the tc-red(8) command and/or the Linux kernel code itself.

It is with great sorrow that we note Sally Floyd passed away in late August. But rest assured, her algorithm may well be used forever, helping keep a better Internet flowing smoothly.

Jay Adelson and Al Avery – The datacenters that interconnect networks

Remember that comment by Al Gore above, saying that the private sector would build the Internet? Back in the late 90’s that’s exactly what happened. Telecom companies were selling capacity to fledgling ISPs. Nationwide IP backbones were being built by the likes of PSI, Netcom, UUnet, Digex, CAIS, ANS, etc. The telcos themselves, like MCI and Sprint (but interestingly not AT&T at the time), were getting into providing Internet access in a big way.

In the US everything was moving very fast. By the mid-90’s there was no way to get a connection anymore from a regional research network for your shiny new ISP. Everything had all gone commercial and the NSF funded parts of the Internet were not available for commercial packets.

The NSF, in its goal to allow commercial networks to build the Internet, had also specified that those networks should interconnect at four locations around the country: New Jersey, Chicago, the Bay Area in California, and the Washington DC area.

50 Years of The Internet. Work in Progress to a Better Internet
Network Access Point via Wikipedia

The NAPs, as they were called, were to provide interconnection between commercial networks, and to give the research networks a way to interconnect with commercial networks and with each other. The NAPs suddenly exploded in usage, near-instantly needing to be bigger. The buildings they were housed in ran out of space, or power, or both! Yet those networks needed homes, interconnections needed a better structure, and the old buildings that were housing the Internet’s routers just didn’t cut it anymore.

Jay and Al had a vision: new, massive datacenters that could securely house the growing, power-hungry Internet. But that was only a small portion of the vision. They realized that if many networks all lived under the same roof, then interconnecting them could indeed build a better Internet. They installed Internet Exchanges and a standardized way of cross-connecting from one network to another. They were carrier-neutral, so that everyone was treated equally. It became known as the “network effect”, and it was a success. The more networks you had under one roof, the more that other networks would want to be housed under those same roofs. The company they created was (and still is) called Equinix. It wasn’t the first company to realize this; but it sure has become one of the biggest and most successful in this arena.

Today, a vast amount of the Internet uses Equinix datacenters and its IXs, along with similar offerings from similar companies. Jay and Al’s vision absolutely paved the way to a better Internet.

Everyone who’s a member of The Internet Society 1992-Today

It turns out that the modern Internet is not all-commercial all-the-time; there is a need for other influences too. Civil society, governments, and academics, along with those commercial entities, should all have a say in how the Internet evolves. This brings into the conversation a myriad of people who have either been members of The Internet Society (ISOC) and/or have worked directly for ISOC over its 27+ years. This is the organization that manages and helps fund the IETF (where protocols are discussed and standardized). ISOC plays a decisive role at The Internet Governance Forum (IGF), and fosters a clear understanding of how the Internet should be used and protected among both the general public and regulators worldwide. ISOC’s involvement with Internet Exchange development (vital as the Internet grows and connects users and content) has been a game changer for many, many countries, especially in Africa.

ISOC has an interesting funding mechanism centered around the dotORG domain. You may not have realized that you were helping the Internet grow when you registered and paid for your .org domain; however, you are!

Over the life of ISOC, the Internet has moved from being the domain of engineers and scientists into something used by nearly everyone, independent of technical skill or, in fact, a full understanding of its inner workings. ISOC’s mission is “to promote the open development, evolution and use of the Internet for the benefit of all people throughout the world”. It has been a solid part of that growth.

Giving voice to everyone on how the Internet could grow and how it should (or should not be) regulated, is front-and-center for every person involved with ISOC globally. Defining both an inclusive Internet and a better Internet is the everyday job for those people.

Kanchana Kanchanasut – Thailand and .TH

In 1988, amongst other things, Professor Kanchana Kanchanasut registered and began operating the country Top Level Domain .TH (the two-letter ISO 3166 code for Thailand). This made Thailand one of the first countries to have its own TLD; something all countries take for granted today.

Also in 1988, five Thai universities got dial-up connections to the Internet because of her work. However, the real breakthrough came when Prof. Kanchanasut’s efforts led to the first leased line interconnecting Thailand to the nascent Internet of the early 90’s. That was 1991, and since then Thailand’s connectivity has exploded. It’s an amazingly well-connected country. Today it boasts a plethora of mobile operators and international undersea and cross-border cables, along with Prof. Kanchanasut’s present-day work spearheading an independent and growing Internet Exchange within Thailand.

In 2013, the “Mother of the Internet in Thailand” as she is affectionately called, was inducted into the Internet Hall of Fame by the Internet Society. If you’re in Thailand, or South East Asia, then she’s the reason why you have a better Internet.

The list continues

In the fifty years since that first packet there have been heroes, both silent and profoundly vocal, who have moved the Internet forward. There’s no way all could be named or called out here; however, you will find many listed if you go looking. Wander through the thousands of RFCs, or check out the Internet Hall of Fame. The Internet today is a better Internet because anyone can be a contributor.

Cloudflare and the better Internet

Cloudflare, and in fact every part of the Internet, would not be where it is today without the groundbreaking work of these people, plus many others unnamed here. This fifty-year effort has moved the needle in such a way that without all of them the runaway success of the Internet would not have been possible!

Cloudflare is just over nine years old (that’s only 18% of this fifty-year period). Gazillions and gazillions of packets have flowed since Cloudflare started providing its services, and we sincerely believe we have done our part with those services to build a better Internet. Oh, and we haven’t finished our work, far from it! We still have a long way to go in helping build a better Internet. And we’re just getting started!

If you’re interested in helping build a better Internet and want to join Cloudflare in our offices in San Francisco, Singapore, London, Austin, Sydney, Champaign, Munich, San Jose, New York or our new Lisbon Portugal offices, then buzz over to our jobs page and come join us! #betterInternet

Fifty Years Ago

Post Syndicated from Guest Author original https://blog.cloudflare.com/fifty-years-ago/

Fifty Years Ago

This is a guest post by Steve Crocker of Shinkuro, Inc. and Bill Duvall of Consulair. Fifty years ago they were both present when the first packets flowed on the Arpanet.

On 29 October 2019, Professor Leonard (“Len”) Kleinrock is chairing a celebration at the University of California, Los Angeles (UCLA).  The date is the fiftieth anniversary of the first full system test and remote host-to-host login over the Arpanet.  Following a brief crash caused by a configuration problem, a user at UCLA was able to log in to the SRI SDS 940 time-sharing system.  But let us paint the rest of the picture.

The Arpanet was a bold project to connect sites within the ARPA-funded computer science research community and to use packet-switching as the technology for doing so.  Although there were parallel packet-switching research efforts around the globe, none were at the scale of the Arpanet project. Cooperation among researchers in different laboratories, applying multiple machines to a single problem and sharing of resources were all part of the vision.  And over the fifty years since then, the vision has been fulfilled, albeit with some undesired outcomes mixed in with the enormous benefits.  However, in this blog, we focus on just those early days.

In September 1969, Bolt, Beranek and Newman (BBN) in Cambridge, MA delivered the first Arpanet IMP (packet switch) to Len Kleinrock’s laboratory at UCLA. The Arpanet incorporated his theoretical work on packet switching and UCLA was chosen as the network measurement site for validation of his theories.  The second IMP was installed a month later at Doug Engelbart’s laboratory at the Stanford Research Institute – now called SRI International – in Menlo Park, California.  Engelbart had invented the mouse and his lab had developed a graphical interface for structured and hyperlinked text.  Engelbart’s vision saw computer users sharing information over a wide-scale network, so the Arpanet was a natural candidate for his work. Today, we have seen that vision travel from SRI to Xerox to Apple to Microsoft, and it is now a part of everyone’s environment.

“IMP” stood for Interface Message Processor; we would now simply say “router.” Each IMP was connected to up to four host computers.  At UCLA the first host was a Scientific Data Systems (SDS) Sigma 7.  At SRI, the host was an SDS 940.  Jon Postel, Vint Cerf and Steve Crocker were among the graduate students at UCLA involved in the design of the protocols between the hosts on the Arpanet, as were Bill Duvall, Jeff Rulifson, and others at SRI (see RFC 1 and RFC 2.)

SRI and UCLA quickly connected their hosts to the IMPs.  Duvall at SRI modified the SDS 940 time-sharing system to allow host to host terminal connections over the net. Charley Kline wrote the complementary client program at UCLA.  These efforts required building custom hardware for connecting the IMPs to the hosts, and programming for both the IMPs and the respective hosts.  At the time, systems programming was done either in assembly language or special purpose hybrid languages blending simple higher-level language features with assembler.  Notable examples were ESPOL for the Burroughs 5500 and PL/I for Multics.  Much of Engelbart’s NLS system was written in such a language, but the time-sharing system was written in assembler for efficiency and size considerations.

Along with the delivery of the IMPs, a deadline of October 31 was set for connecting the first hosts.  Testing was scheduled to begin on October 29 in order to allow a few days for necessary debugging and handling of unanticipated problems.   In addition to the high-speed line that connected the SRI and UCLA IMPs, there was a parallel open, dedicated voice line. On the evening of October 29 Duvall at SRI donned his headset as did Charley Kline at UCLA, and both host-IMP pairs were started. Charley typed an L, the first letter of a LOGIN command.  Duvall, tracking the activity at SRI, saw that the L was received, and that it launched a user login process within the 940. The 940 system was full duplex, so it echoed an “L” across the net to UCLA.  At UCLA, the L appeared on the terminal.  Success! Charley next typed O and received back O.  Charley typed G, and there was silence.  At SRI, Duvall quickly determined that an echo buffer had been sized too small[1], re-sized it, and restarted the system. Charley  typed “LO” again, and received back the normal “LOGIN”.  He typed a confirming RETURN, and the first host-to-host login on the Arpanet was completed.

Len Kleinrock noted that the first characters sent over the net were “LO.”  Sensing the importance of the event, he expanded “LO” to “Lo and Behold”, and used that in the title of the movie called “Lo and Behold: Reveries of the Connected World.”  See imdb.com/title/tt5275828.

Fifty Years Ago
Engelbart’s five finger keyboard and mouse with three buttons. The mouse evolved and became ubiquitous. The five finger keyboard faded.

IMPs continued to be installed on the Arpanet at the rate of roughly one per month over the next two years.  Soon we had a spectacularly large network with more than twenty hosts, and the connections between the IMPs were permanent telephone lines operating at the lightning speed of 50,000 bits per second[2].

Len Kleinrock and IMP #1 at UCLA

Today, all computers come with hardware and software to communicate with other computers.  Not so back then.  Each computer was the center of its own world, and expected to be connected only to subordinate “peripheral” devices – printers, tape drives, etc.  Many even used different character sets.  There was no standard method for connecting two computers together, not even ones from the same manufacturer. Part of what made the Arpanet project bold was the diversity of the hardware and software at the research centers.  Almost all of the hosts at these sites were time-shared computers.  Typically, several people shared the same computer, and the computer processed each user’s computation a little bit at a time.  These computers were large and expensive.  Personal computers were fifteen years in the future, and smart phones were science fiction.  Even Dick Tracy’s fantasy two-way wrist radio envisioned only voice interaction, not instant access to databases and sharing of pictures and videos.

Dick Tracy and his two-way radio.

Each site had to create a hardware connection from the host(s) to the IMP. Further, each site had to add drivers and other code to the operating system of its host(s) so that programs on the host could communicate with the IMP.  The protocols for host to host communication were in their infancy and unproven.

During those first two years when IMPs were being installed monthly, we met with students and researchers at the other sites to develop the first suite of protocols.  The bottom layer was forgettably named the Host-Host protocol[3].  Telnet, for emulating terminal dial-up, and the File Transfer Protocol (FTP) were on the next layer above the Host-Host protocol.  Email started as a special case of FTP and later evolved into its own protocol.  Other networks sprang up and the Arpanet became the seedling for the Internet, with TCP providing a reliable, two-way host to host connection, and IP below it stitching together the multiple networks of the Internet.  But the Telnet and FTP protocols continued for many years and are only recently being phased out in favor of more robust and more secure alternatives.

The hardware interfaces, the protocols and the software that implemented the protocols were the tangible engineering products of that early work.  Equally important was the social fabric and culture that we created.  We knew the system would evolve, so we envisioned an open and evolving architecture.  Many more protocols would be created, and the process is now embodied in the Internet Engineering Task Force (IETF).  There was also a strong spirit of cooperation and openness.  The Request for Comments (RFCs) series of notes were open for anyone to write and everyone to read.  Anyone was welcome to participate in the design of the protocol, and hence we now have important protocols that have originated from all corners of the world.

In October 1971, two years after the first IMP was installed, we held a meeting at MIT to test the software on all of the hosts.  Researchers at each host attempted to login, via Telnet, to each of the other hosts.  In the spirit of Samuel Johnson’s famous quote[4], the deadline and visibility within the research community stimulated frenetic activity all across the network to get everything working.  Almost all of the hosts were able to login to all of the other hosts.  The Arpanet was finally up and running.  And the bakeoff at MIT that October set the tone for the future: test your software by connecting to others.  No need for formal standards certification or special compliance organizations; the pressure to demonstrate your stuff actually works with others gets the job done.

[1] The SDS 940 had a maximum memory size of 65K 24-bit words. The time-sharing system along with all of its associated drivers and active data had to share this limited memory, so space was precious and all data structures and buffers were kept to the minimum possible size. The original host-to-host protocol called for terminal emulation and single character messages, and buffers were sized accordingly. What had not been anticipated was that in a full duplex system such as the 940, multiple characters might be echoed for a single received character. Such was the case when the G of LOG was echoed back as “GIN” due to the command completion feature of the SDS 940 operating system.
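The failure mode in this footnote can be sketched in a few lines. The following is a toy illustration, not the SDS 940’s actual code (which was assembler): a buffer sized on the assumption of one echoed character per received character overflows the moment command completion expands one keystroke into several.

```python
# Toy model of an output "echo buffer" with a hard capacity limit.
class EchoBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []

    def queue_echo(self, chars):
        # Queue characters to be echoed back across the net.
        if len(self.data) + len(chars) > self.capacity:
            raise OverflowError("echo buffer too small")
        self.data.extend(chars)

def complete(received):
    """Stand-in for the 940's command completion: typing the G of LOG
    echoes back 'GIN' to finish the LOGIN command."""
    return {"L": "L", "O": "O", "G": "GIN"}[received]

buf = EchoBuffer(capacity=1)   # sized for a one-for-one echo
buf.queue_echo(complete("L"))  # one character in, one out: fine
buf.data.clear()
buf.queue_echo(complete("O"))  # fine again
buf.data.clear()
try:
    buf.queue_echo(complete("G"))  # 'GIN' is three characters
except OverflowError as err:
    print(err)                     # the unanticipated case
```

Resizing the buffer (the fix Duvall made that evening) corresponds to constructing `EchoBuffer` with a larger capacity.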

[2] “50,000” is not a misprint. The telephone lines in those days were analog, not digital. To achieve a data rate of 50,000 bits per second, AT&T used twelve voice grade lines bonded together and a Western Electric series 303A modem that spread the data across the twelve lines. Several years later, an ordinary “voice grade” line was implemented with digital technology and could transmit data at 56,000 bits per second, but in the early days of the Arpanet 50Kbs was considered very fast. These lines were also quite expensive.

[3] In the papers that described the Host-Host protocol, the term Network Control Program (NCP) designated the software addition to the operating system that implemented the Host-Host protocol. Over time, the term Host-Host protocol fell into disuse in favor of Network Control Protocol, and the initials “NCP” were repurposed.

[4] Samuel Johnson – ‘Depend upon it, sir, when a man knows he is to be hanged in a fortnight, it concentrates his mind wonderfully.’

A Brief History of the Lie Detector

Post Syndicated from Allison Marsh original https://spectrum.ieee.org/tech-history/heroic-failures/a-brief-history-of-the-lie-detector

It’s surprisingly hard to create a real-life Lasso of Truth

When Wonder Woman deftly ensnares someone in her golden lariat, she can compel that person to speak the absolute truth. It’s a handy tool for battling evil supervillains. Had the Lasso of Truth been an actual piece of technology, police detectives no doubt would be lining up to borrow it.

Indeed, for much of the past century, psychologists, crime experts, and others have searched in vain for an infallible lie detector. Some thought they’d discovered it in the polygraph machine. A medical device for recording a patient’s vital signs—pulse, blood pressure, temperature, breathing rate—the polygraph was designed to help diagnose cardiac anomalies and to monitor patients during surgery.

The polygraph was a concatenation of several instruments. One of the first was a 1906 device, invented by British cardiologist James Mackenzie, that measured the arterial and venous pulse and plotted them as continuous lines on paper. The Grass Instrument Co., of Massachusetts, maker of the 1960 polygraph machine pictured above, also sold equipment for monitoring EEGs, epilepsy, and sleep.

The leap from medical device to interrogation tool is a curious one, as historian Ken Alder describes in his 2007 book The Lie Detectors: The History of an American Obsession (Free Press). Well before the polygraph’s invention, scientists had tried to link vital signs with emotions. As early as 1858, French physiologist Étienne-Jules Marey recorded bodily changes as responses to uncomfortable stressors, including nausea and sharp noises. In the 1890s, Italian criminologist Cesare Lombroso used a specialized glove to measure a criminal suspect’s blood pressure during interrogation. Lombroso believed that criminals constituted a distinct, lower race, and his glove was one way he tried to verify that belief.

In the years leading up to World War I, Harvard psychologist Hugo Münsterberg used a variety of instruments, including the polygraph, to record and analyze subjective feelings. Münsterberg argued for the machine’s application to criminal law, seeing both scientific impartiality and conclusiveness.

As an undergraduate, William Moulton Marston worked in Münsterberg’s lab and was captivated by his vision. After receiving his B.A. in 1915, Marston decided to continue at Harvard, pursuing both a law degree and a Ph.D. in psychology, which he saw as complementary fields. He invented a systolic blood pressure cuff and with his wife, Elizabeth Holloway Marston, used the device to investigate the links between vital signs and emotions. In tests on fellow students, he reported a 96 percent success rate in detecting liars.

World War I proved to be a fine time to research the arts of deception. Robert Mearns Yerkes, who also earned a Ph.D. in psychology from Harvard and went on to develop intelligence tests for the U.S. Army, agreed to sponsor more rigorous tests of Marston’s research under the aegis of the National Research Council. In one test on 20 detainees in the Boston Municipal court, Marston claimed a 100 percent success rate in lie detection. But his high success rate made his supervisors suspicious. And his critics argued that interpreting polygraph results was more art than science. Many people, for instance, experience higher heart rate and blood pressure when they feel nervous or stressed, which may in turn affect their reaction to a lie detector test. Maybe they’re lying, but maybe they just don’t like being interrogated.

Marston (like Yerkes) was a racist. He claimed he could not be fully confident in the results on African Americans because he thought their minds were more primitive than those of whites. The war ended before Marston could convince other psychologists of the validity of the polygraph.

Across the country in Berkeley, Calif., the chief of police was in the process of turning his department into a science- and data-driven crime-fighting powerhouse. Chief August Vollmer centralized his department’s command and communications and had his officers communicate by radio. He created a records system with extensive cross-references for fingerprints and crime types. He compiled crime statistics and assessed the efficacy of policing techniques. He started an in-house training program for officers, with university faculty teaching evidentiary law, forensics, and crime-scene photography. In 1916 Vollmer hired the department’s first chemist, and in 1919 he began recruiting college graduates to become officers. He vetted all applicants with a battery of intelligence tests and psychiatric exams.

Against this backdrop, John Augustus Larson, a rookie cop who happened to have a Ph.D. in physiology, read Marston’s 1921 article “Physiological Possibilities of the Deception Test” [PDF]. Larson decided he could improve Marston’s technique and began testing subjects using his own contraption, the “cardio-pneumo-psychogram.” Vollmer gave Larson free rein to test his device in hundreds of cases.

Larson established a protocol of yes/no questions, delivered by the interrogator in a monotone, to create a baseline sample. All suspects in a case were also asked the same set of questions about the case; no interrogation lasted more than a few minutes. Larson secured consent before administering his tests, although he believed only guilty parties would refuse to participate. In all, he tested 861 subjects in 313 cases, corroborating 80 percent of his findings. Chief Vollmer was convinced and helped promote the polygraph through newspaper stories.

And yet, despite the Berkeley Police Department’s enthusiastic support and a growing popular fascination with the lie detector, U.S. courts were less than receptive to polygraph results as evidence.

In 1922, for instance, Marston applied to be an expert witness in the case of Frye v. United States. The defendant, James Alphonso Frye, had been arrested for robbery and then confessed to the murder of Dr. R.W. Brown. Marston believed his lie detector could verify that Frye’s confession was false, but he never got the chance.

Chief Justice Walter McCoy didn’t allow Marston to take the stand, claiming that lie detection was not “a matter of common knowledge.” The decision was upheld by the court of appeals with a slightly different justification: that the science was not widely accepted by the relevant scientific community. This became known as the Frye Standard or the general acceptance test, and it set the precedent for the court’s acceptance of any new scientific test as evidence.

Marston was no doubt disappointed, and the idea of an infallible lie detector seems to have stuck with him. Later in life, he helped create Wonder Woman. The superhero’s Lasso of Truth proved far more effective at apprehending criminals and revealing their misdeeds than Marston’s polygraph ever was.

To this day, polygraph results are not admissible in most courts. Decades after the Frye case, the U.S. Supreme Court, in United States v. Scheffer, ruled that criminal defendants could not admit polygraph evidence in their defense, noting that “the scientific community remains extremely polarized about the reliability of polygraph techniques.”

But that hasn’t stopped the use of polygraphs for criminal investigation, at least in the United States. The U.S. military, the federal government, and other agencies have also made ample use of the polygraph in determining a person’s suitability for employment and security clearances.

Meanwhile, the technology of lie detection has evolved from monitoring basic vital signs to tracking brain waves. In the 1980s, J. Peter Rosenfeld, a psychologist at Northwestern University, developed one of the first methods for doing so. It took advantage of a type of brain activity, known as P300, that is emitted about 300 milliseconds after the person recognizes a distinct image. The idea behind Rosenfeld’s P300 test was that a suspect accused, say, of theft would have a distinct P300 response when shown an image of the stolen object, while an innocent party would not. One of the main drawbacks was finding an image associated with the crime that only the suspect would have seen.

In 2002 Daniel Langleben, a professor of psychiatry at the University of Pennsylvania, began using functional magnetic resonance imaging, or fMRI, to do real-time imaging of the brain while a subject was telling the truth and also lying. Langleben found that the brain was generally more active when lying and suggested that truth telling was the default modality for most humans, which I would say is a point in favor of humanity. Langleben has reported being able to correctly classify individual lies or truths 78 percent of the time. (In 2010, IEEE Spectrum contributing editor Mark Harris wrote about his own close encounter with an fMRI lie detector. It’s a good read.)

More recently, the power of artificial intelligence has been brought to bear on lie detection. Researchers at the University of Arizona developed the Automated Virtual Agent for Truth Assessments in Real-Time, or AVATAR, for interrogating an individual via a video interface. The system uses AI to assess changes in the person’s eyes, voice, gestures, and posture that raise flags about possible deception. According to Fast Company and CNBC, the U.S. Department of Homeland Security has been testing AVATAR at border crossings to identify people for additional screening, with a reported success rate of 60 to 75 percent. The accuracy of human judges, by comparison, is at best 54 to 60 percent, according to AVATAR’s developers.

While the results for AVATAR and fMRI may seem promising, they also show the machines are not infallible. Both techniques compare individual results against group data sets. As with any machine-learning algorithm, the data set must be diverse and representative of the entire population. If the data is poor quality or incomplete or if the algorithm is biased or if the sensors measuring the subject’s physiological response don’t work properly, it’s simply a more high-tech version of Marston’s scientific racism.

Both fMRI and AVATAR pose new challenges to the already contested history of lie detection technology. Over the years, psychologists, detectives, and governments have continued to argue for their validity. There is, for example, a professional organization called the American Polygraph Association. Meanwhile, lawyers, civil libertarians, and other psychologists have decried their use. Proponents seem to have an unwavering faith in data and instrumentation over human intuition. Detractors see many alternative explanations for positive results and cite a preponderance of evidence that polygraph tests are no more reliable than guesswork.

Along the way, sensational crime reporting and Hollywood dramatizations have led the public to believe that lie detectors are a proven technology and also, contradictorily, that master criminals can fake the results.

I think Ken Alder comes closest to the truth when he notes that at its core, the lie detector is really only successful when suspects believe it works.

An abridged version of this article appears in the August 2019 print issue as “A Real-Life Lasso of Truth.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

Chip Hall of Fame: MOS Technology 6581

Post Syndicated from Stephen Cass original https://spectrum.ieee.org/tech-history/silicon-revolution/chip-hall-of-fame-mos-technology-6581

A synthesizer that defined the sound of a generation

1982 was a big year for music. Not only did Michael Jackson release Thriller, the bestselling album of all time, but Madonna made her debut. The year also saw the launch of the Commodore 64 microcomputer. Thanks to the C64, millions of homes were equipped with a programmable electronic synthesizer, one that’s still in vogue.

The C64 became the bestselling computer of all time (some 17 million were sold) largely because it had graphics and sound capabilities that punched way above the system’s price tag: US $600 on release, soon falling to $149. Like many machines from that era, the C64 has a devoted following in the retrocomputing community, and emulators are available that let you run nearly all its software on modern hardware. What’s unusual is that a specific supporting chip inside the C64 has also retained its own dedicated following: the 6581 SID sound chip.

The C64 was developed in 1981 at MOS Technology, by then Commodore’s chipmaking subsidiary. MOS had already had a hit in the microcomputing world with its creation of the 6502 CPU in 1975. That chip—and a small family of variants—was used to power popular home computers and game consoles such as the Apple II and Atari 2600. As recounted in IEEE Spectrum’s March 1985 design case history [PDF] of the C64 by Tekla S. Perry and Paul Wallich, MOS originally intended just to make a new graphics chip and a new sound chip. The idea was to sell them as components to microcomputer manufacturers. But those chips turned out to be so good that MOS decided to make its own computer.

Creation of the sound chip fell to a young engineer called Robert Yannes. He was the perfect choice for the job, motivated by a long-standing interest in electronic sound. Although there were some advanced microcomputer-controlled synthesizers available, including the Super Sound board designed for use with the Cosmac VIP system, the built-in sound generation tech in home computers was relatively crude. Yannes had higher ambitions. “I’d worked with synthesizers, and I wanted a chip that was a music synthesizer,” Yannes told Spectrum in 1985. His big advantage was that MOS had a manufacturing fab on-site. This allowed for cheap and fast experimentation and testing: “The actual design only took about four or five months,” said Yannes.

On a hardware level, what made the 6581 SID stand out was better frequency control of its internal oscillators and, critically, an easy way for programmers to control what’s known as the sound envelope. Early approaches to using computers to generate musical tones (starting with one by Alan Turing himself) produced sound that was either off or on at a fixed intensity, like a buzzer. But most musical instruments don’t work that way: Think of how a piano note can be struck sharply or softly, and how a note can linger before decaying into silence. The sound envelope defines how a note’s intensity rises and falls. Some systems allowed the volume to be adjusted as the note played, but this was awkward to program. Yannes incorporated data registers into the 6581 SID so a developer could define an envelope once and then leave it to the chip to shape each note’s intensity, rather than programming the CPU to send a stream of volume-control commands as the notes played (something few developers bothered to attempt).
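The envelope the SID evaluates in hardware is the classic attack-decay-sustain-release (ADSR) curve. Here is a hedged sketch of that curve in software; the time constants and levels are illustrative parameters, not the SID’s actual register encodings.

```python
def adsr(t, attack, decay, sustain_level, release_start, release):
    """Amplitude (0..1) at time t, in seconds, for a note whose key is
    released at release_start."""
    if t < attack:                          # attack: rise to peak
        return t / attack
    if t < attack + decay:                  # decay: fall to sustain level
        frac = (t - attack) / decay
        return 1.0 - frac * (1.0 - sustain_level)
    if t < release_start:                   # sustain: hold while key is down
        return sustain_level
    frac = min(1.0, (t - release_start) / release)
    return sustain_level * (1.0 - frac)     # release: fade to silence

# A note with a sharp attack and a lingering half-second release:
envelope = [adsr(t / 10, 0.1, 0.2, 0.6, 1.0, 0.5) for t in range(16)]
```

On the SID, the equivalent of those five parameters is written once into per-voice registers, and the chip walks this curve itself on every sample, freeing the CPU entirely.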

The SID chip has three sound channels that can play simultaneously using three basic waveforms, plus a fourth “noise” waveform that produces sounds ranging from a low rumble to hissing static, depending on the frequency. The chip has the ability to filter and modulate the channels to produce an even wider range of sounds. Some programmers discovered they could tease the chip into doing things it was never designed to do, such as speech synthesis. This was perhaps most famously used in Ghostbusters, a 1984 game based on the movie of the same name in which the C64 would utter low-fidelity catchphrases from the movie, such as “He slimed me!”
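Those waveforms are easy to model. The sketch below is illustrative only: the real chip derives each voice’s waveform digitally from a phase accumulator, but the shapes, and the idea of summing three voices into one output, are the same.

```python
import random

# Each function maps a phase in [0, 1) within one cycle to an
# amplitude in [-1, 1].
def sawtooth(phase):
    return 2.0 * phase - 1.0

def triangle(phase):
    return 4.0 * abs(phase - 0.5) - 1.0

def pulse(phase, duty=0.5):
    # duty corresponds to the SID's programmable pulse width
    return 1.0 if phase < duty else -1.0

def noise(_phase):
    # stand-in for the SID's pseudo-random noise generator
    return random.uniform(-1.0, 1.0)

def sample(t, freqs, wave=sawtooth):
    """Mix several voices at time t (seconds), one per frequency in Hz,
    the way the SID sums its three channels into one output."""
    return sum(wave((f * t) % 1.0) for f in freqs) / len(freqs)
```

A three-note chord is then just `sample(t, [261.6, 329.6, 392.0])` evaluated at successive sample times.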

But stunts like speech synthesis aside, the SID chip’s design meant that home computer games could have truly musical soundtracks. Developers started hiring composers to create original works for C64 games—indeed, some titles today are solely remembered because of a catchy soundtrack.

Unlike in modern game development, in which soundtrack creation is technically similar to conventional music recording (up to, and including, using orchestras and choirs), these early composers had to be familiar with how the SID chip was programmed at the hardware level, as well as its behavioral quirks. (Because the chip got to market so quickly, MOS’s documentation of the 6581 SID was notoriously lousy, with Yannes acknowledging to Spectrum in 1985 that “the spec sheet got distributed and copied and rewritten by various people until it made practically no sense anymore.”)

At the time, these composers were generally unknown outside the games industry. Many of them moved on to other things after the home computer boom faded and their peculiar hybrid combination of musical and programming expertise was less in demand. In more recent years however, some of them have been celebrated, such as the prolific Ben Daglish, who composed the music for dozens of popular games.

Daglish (who created my favorite C64 soundtrack, for 1987’s Re-Bounder) was initially bemused that people in the 21st century were still interested in music created for, and by, the SID chip, but he became a popular guest at retrocomputing and so-called chiptunes events before his untimely death in late 2018.

Chiptunes (also known as bitpop) is a genre of original music that leans into the distinctive sound of 1980s computer sound chips. Some composers use modern synthesizers programmed to replicate that sound, but others like to use the original hardware, especially the SID chips (with or without the surrounding C64 system). Because the 6581 SID hasn’t been in production for many years, this has resulted in a brisk aftermarket for old chips—and one that’s big enough that crooks have made fake chips, or reconditioned dead chips, to sell to enthusiasts. Other people have created modern drop-in replacements for the SID chip, such as the SwinSID.

There are several options if you’d like to listen to a classic C64 game soundtrack or a modern chiptune without investing in hardware. You can find many on YouTube, and projects like SOASC= are dedicated to playing tunes on original SID chips and recording the output using modern audio formats. But for a good balance between modern convenience and hard-core authenticity, I’d recommend using a player like Sidplay, which emulates the SID chip and can play music data extracted from original software code. Even after the last SID chip finally burns out, its sound will live on.

An abridged version of this article appears in the July 2019 print issue as “Chip Hall of Fame: SID 6581.”

How NASA Recruited Snoopy and Drafted Barbie

Post Syndicated from Allison Marsh original https://spectrum.ieee.org/tech-history/space-age/how-nasa-recruited-snoopy-and-drafted-barbie

The space agency has long relied on kid-friendly mascots to make the case for space


In the comic-strip universe of Peanuts, Snoopy beat Neil Armstrong to the moon. It was in March 1969—four months before Armstrong would take his famous small step—that the intrepid astrobeagle and his flying doghouse touched down on the lunar surface. “I beat the Russians…I beat everybody,” Snoopy marveled. “I even beat that stupid cat who lives next door!”

The comic-strip dog had begun a formal partnership with NASA the previous year, when Charles Schulz, the creator of Peanuts, and its distributor United Feature Syndicate, agreed to the use of Snoopy as a semi-official NASA mascot.

Snoopy was already a renowned World War I flying ace—again, within the Peanuts universe. Clad in a leather flying helmet, goggles, and signature red scarf, he sat atop his doghouse, reenacting epic battles with his nemesis, the Red Baron. Just as NASA had turned to real-life fighter pilots for its first cohort of astronauts, the space agency also recruited Snoopy.

Two months after the comic-strip Snoopy’s lunar landing, a second, real-world Snoopy buzzed the surface of the moon, as part of Apollo 10. This mission was essentially a dress rehearsal for Apollo 11. The crew was tasked with skimming, or “snooping,” the surface of the moon, so they nicknamed the lunar module “Snoopy.” It logically followed that Apollo 10’s command module was “Charlie Brown.”

On 21 May, as the astronauts settled in for their first night in lunar orbit, Snoopy’s pilot, Eugene Cernan, asked ground control to “watch Snoopy well tonight, and make him sleep good, and we’ll take him out for a walk and let him stretch his legs in the morning.” The next day, Cernan and Tom Stafford descended in Snoopy, stopping some 14,000 meters above the surface.

Since then, Snoopy and NASA have been locked in a mutually beneficial orbit. Schulz, a space enthusiast, ran comic strips about space exploration, and the moon shot in particular, which helped excite popular support for the program. Commercial tie-ins extended well beyond the commemorative plush toy shown at top. Over the years, Snoopy figurines, music boxes, banks, watches, pencil cases, bags, posters, towels, and pins have all promoted a fun and upbeat attitude toward life beyond Earth’s atmosphere.

There’s also a serious side to Snoopy. In the wake of the tragic Apollo 1 fire, which claimed the lives of three astronauts, NASA wanted to promote greater flight safety and awareness. Al Chop, director of public affairs for the Manned Spacecraft Center (now the Lyndon B. Johnson Space Center), suggested using Snoopy as a symbol for safety, and Schulz agreed. 

NASA created the Silver Snoopy Award to honor ground crew who have contributed to flight safety and mission success. The recipient’s prize? A silver Snoopy lapel pin, designed by Schulz and presented by an astronaut, in appreciation for the person’s efforts to preserve astronauts’ lives.

Snoopy was by no means the only popularizer of the U.S. space program. Over the years, there have been GI Joe astronauts, LEGO astronauts, and Hello Kitty astronauts. Not all of these came with the NASA stamp of approval, but even unofficially they served as tiny ambassadors for space.

Of all the astronautical dolls, I’m most intrigued by Astronaut Barbie, of which there have been numerous incarnations over the years. The first was Miss Astronaut Barbie, who debuted in 1965—13 years before women were accepted into NASA’s astronaut classes and 18 years before Sally Ride flew in space.

Miss Astronaut Barbie might have been ahead of her time, but she was also a reflection of that era’s pioneering women. Cosmonaut Valentina Tereshkova became the first woman to go to space on 16 June 1963, when she completed a solo mission aboard Vostok 6. Meanwhile, American women were training for space as early as 1960, through the privately funded Women in Space program. The Mercury 13 endured the same battery of tests that NASA used to train the all-male astronaut corps and were celebrated in the press, but none of them ever went to space.

In 2009, Mattel reissued Miss Astronaut of 1965 as part of the celebration of Barbie’s 50th anniversary. “Yes, she was a rocket scientist,” the packaging declares, “taking us to new fashion heights, while firmly placing her stilettos on the moon.” For the record, Miss Astronaut Barbie wore zippered boots, not high heels.

Other Barbies chose careers in space exploration and always with a flair for fashion. A 1985 Astronaut Barbie modeled a hot pink jumpsuit, with matching miniskirt for attending press conferences. Space Camp Barbie, produced through a partnership between Mattel and the U.S. Space & Rocket Center in Huntsville, Ala., wore a blue flight suit, although a later version sported white and pink. An Apollo 11 commemorative Barbie rocked a red- and silver-trimmed jumpsuit and silver boots and came with a Barbie flag, backpack, and three glow-in-the-dark moon rocks. (Scientific accuracy has never been Mattel’s strong suit, at least where Barbie is concerned.) And in 2013, Mattel collaborated with NASA to create Mars Explorer Barbie, to mark the first anniversary of the rover Curiosity.

More recently, Mattel has extended the Barbie brand to promote real-life role models for girls. In 2018, as part of its Inspiring Women series, the toymaker debuted the Katherine Johnson doll, which pays homage to the African-American mathematician who calculated the trajectory for NASA’s first crewed spaceflight. Needless to say, this Barbie is also clad in pink, with era-appropriate cat-eye glasses, a double strand of pearls, and a NASA employee ID tag.

Commemorative dolls and stuffed animals may be playthings designed to tug at our consumerist heartstrings. But let’s suspend the cynicism for a minute and imagine what goes on in the mind of a young girl or boy who plays with a doll and dreams of the future. Maybe we’re seeing a recruit for the next generation of astronauts, scientists, and engineers.

An abridged version of this article appears in the July 2019 print issue as “The Beagle Has Landed.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

Rediscovering the Remarkable Engineers Behind Olivetti’s ELEA 9003

Post Syndicated from Jean Kumagai original https://spectrum.ieee.org/tech-talk/tech-history/silicon-revolution/rediscovering-the-remarkable-engineers-behind-olivettis-elea-9003

A new graphic novel explores the forgotten history of the ELEA 9003, one of the first transistorized digital computers

The Chinese-Italian engineer Mario Tchou was, by all accounts, brilliant. Born and raised in Italy and educated in the United States, he led the Olivetti company’s ambitious effort to build a completely transistorized mainframe computer in the late 1950s. During Tchou’s tenure, Olivetti successfully launched the ELEA 9003 mainframe and founded one of the first transistor companies. And yet, even in Italy, his story is not well known.

The historical obscurity of such an important figure troubled Ciaj Rocchi and Matteo Demonte, a husband-and-wife team of illustrators based in Milan. And so they created a short graphic novel about Tchou and the Olivetti computer project, as well as a short animation [shown at top]. The graphic novel appeared in the 12 April issue of La Lettura, the Italian cultural magazine, where Demonte and Rocchi both work.

If Tchou’s isn’t exactly a household name, how did the pair come to learn about him? Rocchi says they might have also remained in the dark—if not for the birth of their son in 2007. “We wanted to make sure he knew about his family and where he came from,” Rocchi says. Their family tree includes Demonte’s grandfather, who had emigrated to Milan from China in 1931. “I thought, if I don’t write this down for my son, it will be lost,” Rocchi recalls.

This British Family Changed the Course of Engineering

Post Syndicated from Allison Marsh original https://spectrum.ieee.org/tech-history/dawn-of-electronics/this-british-family-changed-the-course-of-engineering

Charles Parsons invented the modern steam turbine, but his wife and daughter built something just as lasting

The British engineer Charles Parsons knew how to make a splash. In honor of Queen Victoria’s Diamond Jubilee, the British Royal Navy held a parade of vessels on 26 June 1897 for the Lords of the Admiralty, foreign ambassadors, and other dignitaries. Parsons wasn’t invited, but he decided to join the parade anyway. Three years earlier, he’d introduced a powerful turbine generator—considered the first modern steam turbine—and he then built the SY Turbinia to demonstrate the engine’s power.

Arriving at the naval parade, Parsons raised a red pennant and then broke through the navy’s perimeter of patrol boats. With a top speed of almost 34 knots (60 kilometers per hour), Turbinia was faster than any other vessel and could not be caught. Parsons had made his point. The Royal Navy placed an order for its first turbine-powered ship the following year.

Onboard the Turbinia that day was Parsons’s 12-year-old daughter, Rachel, whose wide-ranging interests in science and engineering Parsons and his wife encouraged. From a young age, Rachel Parsons and her brother, Algernon, tinkered in their father’s home workshop, just as Charles had done when he was growing up. Indeed, the Parsons family tree shows generation after generation of engineering inquisitiveness from both the men and the women, each of whom made their mark on the field.

Charles grew up at Birr Castle, in County Offaly, Ireland. His father, William, who became the 3rd Earl of Rosse in 1841, was a mathematician with an interest in astronomy. Scientists and inventors, including Charles Babbage, traveled to Birr Castle to see the Leviathan of Parsonstown, a 1.8-meter (72-inch) reflecting telescope that William built during the 1840s. His wife, Mary, a skilled blacksmith, forged the iron work for the telescope’s tube.

William dabbled in photography, unsuccessfully attempting to photograph the stars. Mary was the real photography talent. Her detailed photos of the famous telescope won the Photographic Society of Ireland’s first Silver Medal.

Charles and his siblings received a traditional education from private tutors. They also had the benefit of a hands-on education, experimenting with the earl’s many steam-powered machines, including a steam-powered carriage. They worked on the Leviathan’s adjustment apparatus and in their mother’s darkroom.

After studying mathematics at Trinity College, Dublin, and St. John’s College, Cambridge, Charles apprenticed at the Elswick Works, a large manufacturing complex operated by the engineering firm W.G. Armstrong in Newcastle upon Tyne, England. It was unusual for someone of his social class to apprentice, and he paid £500 for the opportunity (about US $60,000 today), in the hopes of later gaining a management position.

During his time at the works, Charles refined some engine designs that he’d sketched out while at Cambridge. The reciprocating, or piston, steam engine had by then been around for more than 100 years, itself an improvement on Thomas Newcomen’s earlier but inefficient atmospheric steam engine. In Newcomen’s design, water injected into the cylinder condensed the steam, creating a vacuum that pulled the piston through its stroke, but the injection also wasted heat. Beginning in the 1760s, James Watt and Matthew Boulton made improvements that included adding a separate condenser to eliminate that loss. A later improvement was the double-acting engine, in which the piston could both push and pull. Still, piston steam engines were loud, dirty, and prone to exploding, and Charles saw room for improvement.

His initial design was for a four-cylinder epicycloidal engine, in which the cylinders as well as the crankshaft rotated. One advantage of this unusual configuration was that it could work at high speed with limited vibration. Charles designed it to directly drive a dynamo so as to avoid any connecting belts or pulleys. He applied for a British patent in 1877 at the age of 23.

Charles offered the design to his employer, who declined, but Kitson and Co., a locomotive manufacturer in Leeds, was interested. Charles’s brother Richard Clere Parsons was a partner at Kitson and persuaded him to join the company, which eventually produced 40 of the engines. Charles spent two years there, mostly working on rocket-powered torpedoes that proved unsuccessful.

More successful was his courting of Katharine Bethell, the daughter of a prominent Yorkshire family. Charles was said to have impressed Katharine with his skill at needlework, and they married in 1883.

In 1884, Charles became a junior partner and the head of the electrical section at Clarke, Chapman and Co., a manufacturer of marine equipment in Newcastle upon Tyne. He developed a new turbine engine, which he used to drive an electric generator, also of his own design. [His first prototype, now part of the collection of the Science Museum, London, is shown above.] The turbine generator was 1.73 meters long, 0.4 meters wide, and 0.8 meters high, and it weighed a metric ton.

Charles Parsons’s engine is often considered the first modern turbine. Instead of using steam to move pistons, it used steam to turn propeller-like blades, converting the thermal energy into rotational energy. Parsons’s original design was inefficient, running at 18,000 rpm and producing 7.5 kilowatts—about the power of a small household backup generator today. He made rapid incremental improvements, such as changing the shape of the blades, and the output of his turbines eventually reached 50,000 kW, enough to power up to 50,000 homes today.

In 1889 Charles established C.A. Parsons and Co., in Heaton, a suburb of Newcastle, with the goal of manufacturing his turbo-generator. The only hitch was that Clarke, Chapman still held the patent rights. While the patent issues got sorted out, Charles founded the Newcastle and District Electric Lighting Co., which became the first electric company to rely entirely on steam turbines. It wouldn’t be the last.

During his lifetime, he saw turbine-generated electricity become affordable and readily available to a large population. Even today, most electricity generation relies on steam turbines.

Once Charles had secured the patent rights to his invention, he set about improving the steam turbo-generator, making it more efficient and more compact. He established the Marine Steam Turbine Co., which built the Turbinia in 1894. Charles spent several years refining the mechanics before the ship made its sensational public appearance at the Diamond Jubilee. In 1905, just eight years after the Turbinia’s public debut, the British admiralty decided all future Royal Navy vessels should be turbine powered. The private commercial shipping industry followed suit.

Charles Parsons never stopped designing or innovating, trying his hand at many other ventures. Not all were winners. For instance, he spent 25 years attempting to craft artificial diamonds before finally admitting defeat. More lucrative was the manufacture of optical glass for telescopes and searchlights. In the end, he earned over 300 patents, received a knighthood, and was awarded the Order of Merit.

But Charles was not the only engineer in his very talented household.

When I first started thinking about this month’s column, I wanted to mark the centenary of the founding of the Women’s Engineering Society (WES), one of the oldest organizations dedicated to the advancement of women in engineering. I searched for a suitable museum object that honored female engineers. That proved more difficult than I anticipated. Although the WES maintains extensive archives at the Institution of Engineering and Technology, including a complete digitized run of its journal, The Woman Engineer, it doesn’t have much in the way of three-dimensional artifacts. There was, for example, a fancy rose bowl that was commissioned for the society’s 50th anniversary. But it seemed not quite right to represent women engineers with a purely decorative object.

I then turned my attention to the founders of WES, who included Charles Parsons’s wife, Katharine, and daughter, Rachel. Although Charles was a prolific inventor, neither Katharine nor Rachel invented anything, so there was no obvious museum object linked to them. But inventions aren’t the only way to be a pioneering engineer.

After what must have been a wonderful childhood of open-ended inquiry and scientific exploration, Rachel followed in her father’s footsteps to Cambridge. She was one of the first women to study mechanical sciences there. At the time, though, the university barred women from receiving a degree.

When World War I broke out and Rachel’s brother enlisted, she took over his position as a director on the board of the Heaton Works. She also joined the training division of the Ministry of Munitions and was responsible for instructing thousands of women in mechanical tasks.

As described in Henrietta Heald’s upcoming book Magnificent Women and their Revolutionary Machines (to be published in February 2020 by the crowdfunding publisher Unbound), the war brought about significant demographic changes in the British workforce. More than 2 million women went to work outside the home, as factories ramped up to increase war supplies of all sorts. Of these, more than 800,000 entered the engineering trades.

This upsurge in female employment coincided with a shift in national sentiment toward women’s suffrage. Women had been fighting for the right to vote for decades, and they finally achieved a partial success in 1918, when women over the age of 30 who met certain property and education requirements were allowed to vote. It took another decade before women had the same voting rights as men.

But these political and workplace victories for women were built on shaky ground. The passage of the Sex Disqualification (Removal) Act of 1919 made it illegal to discriminate against women in the workplace. But the Restoration of Pre-War Practices Act, passed the same year, required that women give up their jobs to returning servicemen, unless they happened to work for firms that had employed women in the same role before the war.

These contradictory laws both stemmed from negotiations between Prime Minister David Lloyd George and British trade unions. The unions had vigorously objected to employing women during the war, but the government needed the women to work. And so it came up with the Treasury Agreement of 1915, which stipulated that skilled work could be subdivided and automated, allowing women and unskilled men to take on the resulting tasks. Under those terms, the unions acquiesced to the “dilution” of the skilled male workforce.

And so, although the end of the war brought openings for women in some professions, tens of thousands of women in engineering suddenly found themselves out of work.

The Parsons women fought back, using their social standing to advocate on behalf of female engineers. On 23 June 1919, Katharine and Rachel Parsons, along with several other prominent women, founded the Women’s Engineering Society to resist the relinquishing of wartime jobs to men and to promote engineering as a rewarding profession for both sexes.

Two weeks later, Katharine gave a rousing speech, “Women’s Work in Engineering and Shipbuilding during the War,” at a meeting of the North East Coast Institution of Engineers and Shipbuilders. “Women are able to work on almost every known operation in engineering, from the most highly skilled precision work, measured to [the] micrometer, down to the rougher sort of laboring jobs,” she proclaimed. “To enumerate all the varieties of work intervening between these two extremes would be to make a catalogue of every process in engineering.” Importantly, Katharine mentioned not just the diluted skills of factory workers but also the intellectual and design work of female engineers.

Just as impassioned, Rachel wrote an article for the National Review several months later that positioned the WES as a voice for women engineers:

Women must organize; this is the only royal road to victory in the industrial world. Women have won their political independence; now is the time for them to achieve their economic freedom too. It is useless to wait patiently for the closed doors of the skilled trade unions to swing open. It is better far to form a strong alliance, which, armed as it will be with the parliamentary vote, may be as powerful an influence in safeguarding the interests of women-engineers as the men’s unions have been in improving the lot of their members.

The following year, Rachel was one of the founding members of an all-female engineering firm, Atalanta, in which her mother was a shareholder. The firm specialized in small machinery work, similar to the work Rachel had been overseeing at her father’s firm. Although the business voluntarily shuttered after eight years, the name lived on as a manufacturer of small hand tools and household fixtures.

The WES has had a much longer history. In its first year, it began publishing The Woman Engineer, which still comes out quarterly. In 1923 the WES began holding an annual conference, which has been canceled only twice, both times due to war. Over its 100 years, the organization has worked to secure employment rights for women from the shop floor to management, to guarantee access to formal education, and even to encourage the use of new consumer technologies, such as electrical appliances in the home.

Early members of the WES came from many different branches of engineering. Dorothée Pullinger ran a factory in Scotland that produced the Galloway, an automobile that was entirely designed and built by women for women. Amy Johnson was a world-renowned pilot who also earned a ground engineer’s license. Jeanie Dicks, the first female member of the Electrical Contractors Association, won the contract for the electrification of Winchester Cathedral.

Today the WES continues its mission of supporting women in pursuit of engineering, scientific, and technical careers. Its website gives thanks and credit to early male allies, including Charles Parsons, who supported female engineers. Charles may have earned his place in history due to his numerous inventions, but if you come across his turbine at the Science Museum, remember that his wife and daughter earned their places, too.

An abridged version of this article appears in the June 2019 print issue as “As the Turbine Turns.”


The Last Working Olivetti Mainframe Sits In a Tuscan High School

Post Syndicated from Jean Kumagai original https://spectrum.ieee.org/tech-talk/tech-history/silicon-revolution/when-the-history-of-computing-comes-alive

How an encounter with the ELEA 9003 inspired a tech historian’s career

About 10 years ago, Elisabetta Mori and some friends were doing research for an art exhibit on the theme of “archives of memories.”

“We approached the theme literally, and so we looked for old examples of physical memories—computer memories,” Mori recalls. “We tried to see the oldest computers built in Italy.” At the Museum of Computing Machinery in Pisa, they saw the Calcolatrice Elettronica Pisana, an early digital computer built by the University of Pisa in 1957 with the support of the Olivetti company. But the machine had long ago stopped working.

Then they heard about a working model of the ELEA 9003, Olivetti’s first commercial mainframe, introduced in 1959. They lost no time tracking it down.

This 9003 had originally belonged to a bank in Siena, where it was used for payroll, managing accounts, calculating interest rates, and the like. In 1972, the bank donated the computer to a high school in the Tuscan hill town of Bibbiena. And there it’s been ever since. Today, former Olivetti employees periodically travel to the ISIS High School Enrico Fermi to tend to the machine.

The mainframe’s sleek aluminum modular racks and peripherals occupy a large room, with Olivetti typewriters and calculators spread around the space. The technicians keep spare parts on hand, as well as original manuals and blueprints.

The encounter with the computer changed Mori’s life. She wrote a master’s thesis about it. Now, she is a Ph.D. candidate in the history of computing at Middlesex University in London. Mori’s article, “The Italian Computer: Olivetti’s ELEA 9003 Was a Study in Elegant, Ergonomic Design,” describes the company’s heroic effort to launch the ELEA 9003. [In the photo at top, Mori is seated at the 9003’s console.]

“The machine works, but it is fragile,” Mori says. The computer contains more than 40 kilometers of copper cable wrapped in woven glass fiber. “If you don’t run the computer regularly, it will stop working. If you move it, it will die.”

To forestall that eventuality, a local group called the Associazione Amici dell’Olivetti ELEA 9003 is raising funds to hire and train workers to maintain the computer. You can reach them at [email protected].

“Until I saw it working, I didn’t realize how complex, fascinating, and noisy these early computers were,” Mori says. “I would have missed one big part of the story.”

The Italian Computer: Olivetti’s ELEA 9003 Was a Study in Elegant, Ergonomic Design

Post Syndicated from Elisabetta Mori original https://spectrum.ieee.org/tech-history/silicon-revolution/the-italian-computer-olivettis-elea-9003-was-a-study-in-elegant-ergonomic-design

In 1959, Olivetti introduced one of the first transistorized mainframes and started its own transistor company

“I have made my decision: We are going to scrap the first version of our computer, and we will start again from scratch.” It’s the autumn of 1957, and Mario Tchou, a brilliant young Chinese-Italian electrical engineer, is speaking to his team at the Olivetti Electronics Research Laboratory. Housed in a repurposed villa on the outskirts of Pisa, not far from the Leaning Tower, the lab is filled with vacuum tubes, wires, cables, and other electronics, a startling contrast to the tasteful decorations of the palatial rooms.

On any weekday, some 20 or so physicists, technicians, and engineers would be hard at work there, designing, developing, soldering, conferring. In less than two years—half the time they’d been allotted—they’ve completed their first prototype mainframe, called Macchina Zero (Zero Machine). No other company in Italy has ever built a computer before. They’re understandably proud.

Today, though, is a Sunday, and Tchou has called in his boss and three members of the team to discuss a bold decision, one that he hopes will place Olivetti ahead of every other computer maker in the world.

Macchina Zero, he points out, uses vacuum tubes. And tubes, he says, will soon become obsolete: They are too big, they overheat, they are unreliable, they consume too much power. The company wants to build a cutting-edge machine, and transistors are the computer technology of the future. “Olivetti will launch a fully transistorized machine,” Tchou tells them.

Within a year, the lab would finish a prototype of the new machine. In support of that effort, Olivetti would also launch its own transistor company and strike a strategic alliance with Fairchild Semiconductor. When Olivetti’s first mainframe, the ELEA 9003, is unveiled in 1959, it is an astonishing work of industrial design—modular, technologically advanced, and built to human scale. Olivetti, better known for its typewriters, adding machines, and iconic advertisements, is suddenly a computer company to reckon with.

The fact that most historical accounts largely ignore Olivetti’s role as an early pioneer of computing and transistors may have something to do with the series of tragic events that would transpire after the ELEA 9003’s introduction. But it is a history worth revisiting, because the legacy of Olivetti lives on in some surprising ways.

During World War II, computers were expensive, fragile, and hidden, restricted to military and scientific purposes. But after the war, businesses were quick to adopt computers to address their escalating need for information management. The machines on offer relied on vacuum tubes, punch tape, and punch cards, and they were slow and unreliable. But they were much faster than the manual and mechanical systems they were replacing.

The engineer and entrepreneur Camillo Olivetti founded Olivetti in 1908 as the first typewriter manufacturer in Italy. Production at the company’s factory in Ivrea, near Turin, later expanded to mechanical calculators and other office equipment.

In the 1920s, Camillo’s eldest son, Adriano, became more involved in the family business. Adriano had studied chemical engineering at the Polytechnic University of Turin. Camillo, a socialist, initially employed his son as an unskilled worker in the Olivetti factory. He then sent Adriano to the United States to study industrial methods. In 1926, the Olivettis reorganized the company’s production according to the principles of scientific management. By 1938, Adriano had assumed the presidency of Olivetti.

Adriano believed that the profits of industry should be reinvested for the betterment of society. Under his tenure, the company offered worker benefits that had no equal in Italy at the time, including more equitable pay for women, a complete range of health services, nine months of paid maternity leave, and free childcare. In addition, the Ivrea factory had a large library with 30,000 volumes.

Adriano also established an experimental marketing and advertising department, surrounding himself with smart young designers, architects, artists, poets, photographers, and musicians. The combination of Adriano’s initiatives spurred the company to wider international prominence.

After World War II, Adriano became convinced that electronics was the future of the company, and so he established a joint venture with the French firm Compagnie des Machines Bull. Bull was one of the biggest punch-card equipment manufacturers in Europe, and it had just entered the computer business. The Olivetti-Bull Company became the official reseller of Bull’s products in Italy, and the partnership helped Olivetti survey the domestic market potential for computers.

In 1952, Olivetti founded a computer research center in New Canaan, Conn., at the recommendation of Dino Olivetti, Adriano’s youngest brother. Dino had studied at MIT and was president of the Olivetti Corp. of America. (That same year, Dino contributed to an exhibition devoted to Olivetti products and design at the Museum of Modern Art in New York City.) The lab kept tabs on developments in the United States, where electronics and computers were at the forefront.

Olivetti sought a worthy academic partner for its computer business. After a failed alliance with Rome University in the early 1950s, the company partnered with the University of Pisa in 1955. At the time, the only two computers in the country were a National Cash Register CRC 102A, installed at the Milan Polytechnic, and a Ferranti Mark I*, installed at an applied math research institute in Rome.

The University of Pisa began building a research computer, with Olivetti providing financial support, electronic components, patent licenses, and employees. In exchange, Olivetti’s staff gained valuable experience. While the Pisa project aimed to create a single scientific machine for researchers, Olivetti hoped to develop a series of commercial computers for the business market.

Adriano searched for an expert engineer and manager to set up a computer lab within the company and lead Olivetti’s computer team. He eventually found both in Mario Tchou. Born in Italy in 1924, Tchou was the son of Yin Tchou, a Chinese diplomat stationed in Rome and Vatican City. After studying electrical engineering at the Sapienza University of Rome, Mario received a scholarship to the Catholic University of America, in Washington, D.C., where he obtained a bachelor’s degree in electronic engineering. In 1949, he moved to New York City to get a master’s in physics at the Polytechnic Institute of Brooklyn, and three years later, he became an associate professor of electrical engineering at Columbia University.

Adriano Olivetti met Mario Tchou in New York City in August 1954 and immediately decided he was the perfect choice. Tchou was an expert in digital control systems, and he worked at one of the most advanced electronics and computing research labs in the United States. He was also a native Italian speaker and understood the company’s culture. Adriano and his son Roberto convinced Tchou to move back to Italy and become the leader of their Laboratorio Ricerche Elettroniche, in Pisa.

The lab’s first project, Macchina Zero, went as well as could be expected, but Tchou’s decision in 1957 to switch to transistors involved risks and potential delays. The company would need at least 100,000 transistors and diodes for each installation. But in Italy, as elsewhere, transistors were in short supply. Rather than importing devices from the United States, the company decided to manufacture them in-house. The move would give Olivetti a secure and continuous source of components as well as expertise and insights into the latest developments in the field.

In 1957, with Telettra, an Italian telecommunications company, Olivetti founded the SGS Company (which stands for Società Generale Semiconduttori). SGS soon began producing germanium alloy junction transistors, based on technology licensed from General Electric.

SGS’s next generation of transistors, though, would be silicon, manufactured in partnership with Fairchild Semiconductor. The California startup had been founded the same year as SGS by a group of young scientists and engineers that included Robert Noyce and Gordon Moore. In late 1959, SGS contacted Fairchild through Olivetti’s New Canaan lab, and the following year Fairchild became an equal partner in SGS with Olivetti and Telettra. Olivetti now had access to Fairchild’s pathbreaking technology. That included the planar process, which Fairchild had patented in 1959 and which is still used to make integrated circuits.

The result of Tchou’s push for a transistorized computer was the ELEA 9003, the first commercial computer to be made in Italy. It launched in 1959, and between 1960 and 1964, about 40 of the mainframes were sold or leased to Italian clients, mainly in banking and industry.

ELEA belongs to what historians of computing consider the second generation of computers—that is, machines that used transistors and ferrite-core memories. In this respect, the ELEA 9003 was similar to the IBM 7070 and the Siemens 2002. Core memories were arrays of tiny magnetic rings threaded with copper wire. Each core could be magnetized clockwise or counterclockwise, to represent one bit of information—a 1 or a 0. Olivetti workers sewed the ELEA memories by hand at the Borgolombardo factory, near Milan, where the ELEAs were assembled.

The minimum unit of memory in the ELEA 9003 was the character, which consisted of six bits plus a parity bit. The total memory ranged from 20,000 to 160,000 characters, with a typical installation having about 40,000. Two Olivetti engineers, Giorgio Sacerdoti and Martin Friedman, had previously worked with Ferranti computers. Their background may have influenced some design decisions for the 9003, in particular the computer architecture. However, the Ferranti Mark I* that Sacerdoti worked on in Rome used Williams-Kilburn tubes and vacuum tubes instead of core memory and transistors.
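That character format lends itself to a quick illustration. The short Python sketch below is purely illustrative and not drawn from Olivetti documentation: it assumes even parity, and the function names are invented. But it shows why a seventh hand-sewn core per character was worth the labor: a single flipped bit becomes detectable.

```python
def encode_character(value: int) -> list[int]:
    """Pack a 6-bit value into seven core states: six data bits
    plus a parity bit (even parity assumed for illustration)."""
    if not 0 <= value < 64:
        raise ValueError("an ELEA 9003 character held only six data bits")
    bits = [(value >> i) & 1 for i in range(6)]
    bits.append(sum(bits) % 2)  # parity bit makes the count of 1s even
    return bits

def check_character(bits: list[int]) -> bool:
    """A single flipped core shows up as a parity mismatch."""
    return sum(bits) % 2 == 0

word = encode_character(0b101101)
assert check_character(word)
word[2] ^= 1  # simulate one corrupted core
assert not check_character(word)
```

The real machine checked parity in circuitry, of course; the sketch only mirrors the arithmetic.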

To oversee the aesthetic design of the new computer, Adriano brought in the Italian architect Ettore Sottsass Jr. Assisted by Dutch designer Andries Van Onck, Sottsass focused on the human-machine interface, using human factors and ergonomics to make the computer easier to operate and maintain. For example, he standardized the height of the racks at 150 centimeters, to allow engineers and technicians working on either side to visually communicate with one another, as computers were very noisy in those days.

The ELEA 9003 was housed in a series of modular cabinets. Colored strips identified the contents of each cabinet, such as the power supply, memory, arithmetic logic unit, and the control unit for the peripherals, which included printers and Ampex magnetic tape drives. Some ELEA 9003 installations employed vacuum tubes for the power supplies and tape decks.

To facilitate the testing and repair of circuit boards, Sottsass arranged each rack in three parts: a central section and two wings, which could be opened like a book. He also organized the connection cables in channels above the racks. Typical mainframes of that era had their cables positioned beneath the floor, making maintenance cumbersome and expensive.

The console’s display used a grid of colored cubes, similar to mosaic tiles. Each cube was engraved with a letter or a symbol. Different sections of the display showed the status of the 9003’s components. An operator could use the console’s keyboard to enter instructions, one at a time, for direct execution.

Sottsass’s design for the Olivetti ELEA 9003 was complex but elegant. It was awarded the prestigious Compasso d’Oro (Golden Compass) industrial design prize in 1959.

Olivetti aimed to export the ELEA to the international market. Rather than translating the computer’s commands and abbreviations from Italian into English, French, or German, the company devised a bold solution. It commissioned the Ulm School of Design, one of the most progressive design centers at the time, to develop a system of symbols that would be independent of any one language. Although the resulting sign system was never used in the ELEA series, it prefigures today’s widespread use of icons in computer interfaces.

Olivetti’s big plans for exporting its computers included the acquisition of the U.S. typewriter manufacturer Underwood in 1959. With this move, Olivetti hoped to leverage Underwood’s powerful commercial network to strengthen its sales in the United States. The acquisition, however, depleted the company’s coffers. Worse, Olivetti discovered that Underwood’s manufacturing facilities were outdated and its financial situation bleak.

Then, on 27 February 1960, Adriano Olivetti died from a stroke while traveling by train from Milan to Lausanne. He was 58 years old. The following year, Mario Tchou was killed in a car accident at the age of 37. At the time of his death, Tchou had been spearheading the development of a new generation of Olivetti computers that incorporated silicon components from SGS-Fairchild. With these tragic deaths, Olivetti’s computer division lost its most charismatic and visionary leaders.

The next several years proved tumultuous for the company. Roberto Olivetti tried to keep the computer business going, even appealing to the Italian government for aid. But the government didn’t view electronics and computers as a matter of national interest and so refused to bail out Olivetti’s electronics division. (Nor had the government supported Olivetti’s development of the ELEA, in stark contrast to the U.S. and British governments’ generous support of their domestic computer makers.) Meanwhile, the U.S. government, through its former ambassador to Italy, Clare Boothe Luce, reportedly was pressuring Olivetti to sell its electronics division, which it finally did to General Electric in 1964.

The sale to GE did not include Olivetti’s small-size programmable calculators, which the company continued to develop. The Programma 101 came to market in 1965 and proved an instant hit. [See sidebar, “The Calculator That Helped Land Men On the Moon.”]

Acquiring Olivetti was part of GE’s strategy to enter the European computer market. Olivetti’s French partner, Bull, also faced financial difficulties and was also bought by GE in 1964. GE continued building computers based on Olivetti’s smaller models and sold them as the GE 100 series. The ELEA 4115, for example, became the GE 115. Eventually, GE sold about 4,000 machines in the GE 100 line.

We can’t know how far Olivetti would have taken its computer business had Adriano Olivetti and Mario Tchou lived longer. What we do know is that the electronics division left behind an impressive legacy of design, advanced hardware, and talented engineers.

Olivetti had unquestionably the most elegant computers of its day. Adriano viewed computers as complex artifacts, whose aesthetics, ergonomics, and user experience had to be carefully cultivated in parallel with the technology. He organized every aspect of the company, including the factories, workers, advertising, and marketing, to embrace this holistic approach to design. In his famous 1973 lecture “Good Design Is Good Business,” IBM’s Thomas J. Watson Jr. credited Adriano Olivetti for inspiring IBM’s own overhaul of its corporate aesthetic in the late 1950s.

Olivetti’s computer legacy also lives on through its transistor business. In 1987, SGS merged with the French-owned Thomson Semiconducteurs to form STMicroelectronics, now a multinational manufacturer of microchips.

And the people hired by Olivetti continued to make their mark. Of the many capable engineers and scientists who passed through Olivetti’s doors, one stands out. In 1960, the company hired a 19-year-old named Federico Faggin to work in its electronics lab. During Faggin’s years at Olivetti, he learned about computer architecture and logic and circuit design and helped to build a small experimental computer.

Later, after earning a physics degree from the University of Padua, Faggin worked briefly at SGS-Fairchild in Italy before moving to Fairchild’s R&D lab in Palo Alto, Calif., and then to Intel. Drawing on his experience at Olivetti and SGS, he soon joined the small team that created the Intel 4004, the first commercial microprocessor. And so, although Olivetti’s foray into building mainframe computers suffered a premature death, the effort indirectly contributed to the birth of the microcomputer industry that surrounds us today.

This article appears in the June 2019 print issue as “The Italian Computer.”

About the Author

Elisabetta Mori is a Ph.D. candidate in the history of computing at Middlesex University in London.

The Calculator That Helped Land Men on the Moon

Post Syndicated from Elisabetta Mori original https://spectrum.ieee.org/tech-history/silicon-revolution/the-calculator-that-helped-land-men-on-the-moon

Olivetti’s Programma 101 embodied the company’s holistic approach to technical efficiency, ease of use, and smart design

After the sale of its computer business to General Electric in 1964, Italy’s Olivetti managed to retain control of its small electronic calculators. The most notable of these would be the Programma 101.

Introduced at a Business Equipment Manufacturers Association show in New York in October 1965, this programmable desktop calculator proved an immediate success. Also known as the P101 or the Perottina (after the chief engineer who designed it, Pier Giorgio Perotto), it eventually sold more than 40,000 units, primarily in the United States but also in Europe. NASA bought a number of P101s, which were used by engineers working on the 1969 Apollo 11 moon landing.

Chief among the machine’s selling points was its portability. Roughly the size of an electric typewriter, it could be used in program mode like a computer, with stored instructions, while in manual mode it served as a high-speed calculator. Its memory consisted of a magnetostrictive delay line, which used pulses of sound traveling along a coil of nickel alloy wire to store numeric data and program instructions. This kind of memory was used in several other small computers and calculators, including the Ferranti Sirius, a small business computer, and the Friden EC-130 and EC-132 desktop calculators.

The P101 had a 36-character keyboard, a built-in mechanical printer, and a magnetic card reader/recorder, for storing and retrieving programs. Olivetti supplied a library of commonly used programs. There was no display as such.

The P101 used only high-level instructions, so programming it was extremely simple. As a promotional video proclaimed, “A good secretary can learn to operate it in a matter of days!” The ad showed the P101 being used in a research lab, beside a swimming pool, and even at a betting hall.

At a time when bulky mainframe computers required a team of programmers, engineers, and operators to run, the P101’s compact size, capabilities, and ease of use were remarkable. Like all computers of that era, it wasn’t exactly cheap: The P101 could be leased on a monthly basis, or bought outright for US $3,200 (about $25,000 today). For comparison’s sake, the monthly rent on an IBM System/360 mainframe ranged from $2,700 to $115,000, with purchase prices from $133,000 to $5.5 million.

The calculator’s technical features inspired imitation: Hewlett-Packard reportedly paid Olivetti approximately $900,000 in royalties because of the similarities between the architecture and the magnetic cards of the HP 9100 programmable calculator series and those of the Programma 101.

The P101’s aesthetic and ergonomic design was the work of a talented young Italian architect named Mario Bellini. In contrast to Ettore Sottsass Jr.’s futuristic look for Olivetti’s ELEA 9003 mainframe, Bellini’s P101 is curvy and sensual while still being user friendly. Its rounded edges comfortably supported the user’s wrists and hands. Magnetic cards could be easily inserted into the central slot. On the right-hand side, a green/red light added a touch of color while also alerting the user to any malfunctions. The Programma 101 is now part of the permanent collection at the Museum of Modern Art in New York City.

If you own a Programma 101 and need to get it fixed, don’t despair. A team that includes some of the P101’s original designers, former Olivetti engineers, and volunteers will help you restore and repair it. Their lab is located in the Museo Tecnologicamente in Ivrea, the town near Turin where Olivetti was founded.  

For more on Olivetti’s pioneering computers, see “The Italian Computer: Olivetti’s ELEA 9003 Was a Study in Elegant, Ergonomic Design.”

About the Author

Elisabetta Mori is a Ph.D. candidate in computer history at Middlesex University in the United Kingdom.

In 1983, This Bell Labs Computer Was the First Machine to Become a Chess Master

Post Syndicated from Allison Marsh original https://spectrum.ieee.org/tech-history/silicon-revolution/in-1983-this-bell-labs-computer-was-the-first-machine-to-become-a-chess-master

Belle used a brute-force approach to best other computers and humans

Chess is a complicated game. It’s a game of strategy between two opponents, but with no hidden information and all of the potential moves known by both players at the outset. With each turn, players communicate their intent and try to anticipate the possible countermoves. The ability to envision several moves in advance is a recipe for victory, and one that mathematicians and logicians have long found intriguing.

Despite some early mechanical chess-playing machines—and at least one chess-playing hoax—mechanized chess play remained hypothetical until the advent of digital computing. While working on his Ph.D. in the early 1940s, the German computer pioneer Konrad Zuse used computer chess as an example for the high-level programming language he was developing, called Plankalkül. Due to World War II, however, his work wasn’t published until 1972. With Zuse’s work unknown to engineers in Britain and the United States, Norbert Wiener, Alan Turing, and notably Claude Shannon (with his 1950 paper “Programming a Computer for Playing Chess” [PDF]) paved the way for thinking about computer chess.

Beginning in the early 1970s, Bell Telephone Laboratories researchers Ken Thompson and Joe Condon developed Belle, a chess-playing computer. Thompson is cocreator of the Unix operating system, and he’s also a great lover of chess. He grew up in the era of Bobby Fischer, and as a youth he played in chess tournaments. He joined Bell Labs in 1966, after earning a master’s in electrical engineering and computer science from the University of California, Berkeley.

Joe Condon was a physicist by training who worked in the Metallurgy Division at Bell Labs. His research contributed to the understanding of the electronic band structure of metals, and his interests evolved with the rise of digital computing. Thompson got to know Condon when he and Dennis Ritchie, his Unix collaborator, began working on a game called Space Travel, using a PDP-7 minicomputer that was under Condon’s purview. Thompson and Condon went on to collaborate on numerous projects, including promoting the use of C as the language for AT&T’s switching system.

Belle began as a software approach—Thompson had written a sample chess program in an early Unix manual. But after Condon joined the team, the program morphed into a hybrid computer chess-playing machine, with Thompson handling the programming and Condon designing the hardware.

Belle consisted of three main parts: a move generator, a board evaluator, and a transposition table. The move generator identified the highest-value piece under attack and the lowest-value piece attacking, and it sorted potential moves based on that information. The board evaluator noted the king’s position and its relative safety during different stages of the game. The transposition table contained a memory cache of potential moves, and it made the evaluation more efficient.

Belle employed a brute-force approach. It looked at all of the possible moves a player could make with the current configuration of the board, and then considered all of the moves that the opponent could make. In chess, a turn taken by one player is called a ply. Initially, Belle could compute moves four plies deep. When Belle debuted at the Association for Computing Machinery’s North American Computer Chess Championship in 1978, where it claimed its first title, it had a search depth of eight plies. Belle went on to win the championship four more times. In 1983, it also became the first computer to earn the title of chess “master.”
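The search strategy just described (generate every legal move, evaluate every reply, and cache positions already seen) can be sketched in a few lines of Python. The toy below uses one-pile Nim instead of chess so the example stays self-contained; the function name, the game, and the dictionary cache are illustrative stand-ins, not Belle's actual design, and it searches to the end of the game rather than cutting off at a fixed ply with a board evaluator as Belle did.

```python
def negamax(pile, table=None):
    """Exhaustive game-tree search for one-pile Nim (take 1-3 objects per
    turn; taking the last object wins). Returns +1 if the player to move
    can force a win, -1 otherwise. `table` caches positions already
    evaluated, playing the role of Belle's transposition table."""
    if table is None:
        table = {}
    if pile == 0:
        return -1          # the opponent took the last object: we lost
    if pile in table:
        return table[pile]
    # Brute force: try every legal move, keep the best outcome for us.
    best = max(-negamax(pile - take, table)
               for take in (1, 2, 3) if take <= pile)
    table[pile] = best
    return best
```

In this toy game, positions where the pile is a multiple of four are losses for the player to move, which the search rediscovers by brute force, just as Belle rediscovered chess tactics by exhausting the move tree.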

Computer chess programmers were often treated with hostility when they pitted their systems against human competitors, some of whom were suspicious of potential cheating, while others were simply apprehensive. When Thompson wanted to test out Belle at his local chess club, he took pains to build up personal relationships. He offered his opponents a printout of the computer’s analysis of the match. If Belle won in mixed human/computer tournaments, he refused the prize money, offering it to the next person in line. Belle went on to play weekly at the Westfield Chess Club, in Westfield, N.J., for almost 10 years.

In contrast to human-centered chess competitions, where silence reigns so as not to disturb a player’s concentration, computer chess tournaments could be noisy affairs, with people discussing and debating different algorithms and game strategies. In a 2005 oral history, Thompson remembers them fondly. After a tournament, he would be invigorated and head back to the lab, ready to tackle a new problem.

For a computer, Belle led a colorful life, at one point becoming the object of a corporate practical joke. One day in 1978, Bell Labs computer scientist Mike Lesk, another member of the Unix team, stole some letterhead from AT&T chairman John D. deButts and wrote a fake memo, calling for the suspension of the “T. Belle Computer” project.

At the heart of the fake memo was a philosophical question: Is a game between a person and a computer a form of communication or of data processing? The memo claimed that it was the latter and that Belle therefore violated the 1956 antitrust decision barring the company from engaging in the computer business. In fact, though, AT&T’s top executives never pressured Belle’s creators to stop playing or inventing games at work, likely because the diversions led to economically productive research. The hoax became more broadly known after Dennis Ritchie featured it in a 2001 article, for a special issue of the International Computer Games Association Journal that was dedicated to Thompson’s contributions to computer chess.

In his oral history, Thompson describes how Belle also became the object of international intrigue. In the early 1980s, Soviet electrical engineer, computer scientist, and chess grandmaster Mikhail Botvinnik invited Thompson to bring Belle to Moscow for a series of demonstrations. He departed from New York’s John F. Kennedy International Airport, only to discover that Belle was not on the same plane.

Thompson learned of the machine’s fate after he’d been in Moscow for several days. A Bell Labs security guard who was moonlighting at JFK airport happened to see a Bell Labs box labeled “computer” that was roped off in the customs area. The guard alerted his friends at Bell Labs, and word eventually reached Condon, who lost no time in calling Thompson.

Condon warned Thompson to throw out the spare parts for Belle that he’d brought with him. “You’re probably going to be arrested when you get back,” he said. Why? Thompson asked. “For smuggling computers into Russia,” Condon replied.

In his oral history, Thompson speculates that Belle had fallen victim to the Reagan administration’s rhetoric concerning the “hemorrhage of technology” to the Soviet Union. Overzealous U.S. Customs agents had spotted Thompson’s box and confiscated it, but never alerted him or Bell Labs. His Moscow hosts seemed to agree that Reagan was to blame. When Thompson met with them to explain that Belle had been detained, the head of the Soviet chess club pointed out that the Ayatollah Khomeini had outlawed chess in Iran because it was against God. “Do you suppose Reagan did this to outlaw chess in the United States?” he asked Thompson.

Returning to the States, Thompson took Condon’s advice and dumped the spare parts in Germany. Arriving back home, he wasn’t arrested, for smuggling or anything else. But when he attempted to retrieve Belle at JFK, he was told that he was in violation of the Export Act—Belle’s old, outdated Hewlett-Packard monitor was on a list of banned items. Bell Labs paid a fine, and Belle was eventually returned.

After Belle had dominated the computer chess world for several years, its star began to fade, as more powerful computers with craftier algorithms came along. Chief among them was IBM’s Deep Blue, which captured international attention in 1996 when it won a game against world champion Garry Kasparov. Kasparov went on to win the match, but the ground was laid for a rematch. The following year, after extensive upgrades, Deep Blue defeated Kasparov, becoming the first computer to beat a human world champion in a tournament under regulation time controls.

Photographer Peter Adams brought Belle to my attention, and his story shows the value of being friendly to archivists. Adams had photographed Thompson and many of his Bell Labs colleagues for his portrait series “Faces of Open Source.” During Adams’s research for the series, Bell Labs corporate archivist Ed Eckert granted him permission to photograph some of the artifacts associated with the Unix research lab. Adams put Belle on his wish list, but he assumed that it was now in some museum collection. To his astonishment, he learned that the machine was still at Nokia Bell Labs in Murray Hill, N.J. As Adams wrote to me in an email, “It still had all the wear on it from the epic chess games it had played… :).”

An abridged version of this article appears in the May 2019 print issue as “Cold War Chess.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

Untold History of AI: How Amazon’s Mechanical Turkers Got Squeezed Inside the Machine

Post Syndicated from Oscar Schwartz original https://spectrum.ieee.org/tech-talk/tech-history/dawn-of-electronics/untold-history-of-ai-mechanical-turk-revisited-tktkt

Today’s unseen digital laborers resemble the human who powered the 18th-century Mechanical Turk

The history of AI is often told as the story of machines getting smarter over time. What’s lost is the human element in the narrative, how intelligent machines are designed, trained, and powered by human minds and bodies.

In this six-part series, we explore that human history of AI—how innovators, thinkers, workers, and sometimes hucksters have created algorithms that can replicate human thought and behavior (or at least appear to). While it can be exciting to be swept up by the idea of superintelligent computers that have no need for human input, the true history of smart machines shows that our AI is only as good as we are.

Part 6: Mechanical Turk Revisited

At the turn of the millennium, Amazon began expanding its services beyond bookselling. As the variety of products on the site grew, the company had to figure out new ways to categorize and organize them. Part of this task was removing tens of thousands of duplicate products that were popping up on the website.

Engineers at the company tried to write software that could automatically eliminate all duplicates across the site. Identifying and deleting duplicates seemed to be a simple task, one well within the capacities of a machine. Yet the engineers soon gave up, describing the data-processing challenge as “insurmountable.” This task, which presupposed the ability to notice subtle differences and similarities between pictures and text, actually required human intelligence.

Amazon was left with a conundrum. Deleting duplicate products from the site was a trivial task for humans, but the sheer number of duplicates would require a huge workforce. Coordinating that many workers on one task was not a trivial problem.

An Amazon manager named Venky Harinarayan came up with a solution. His patent described a “hybrid machine/human computing arrangement,” which would break down tasks into small units, or “subtasks” and distribute them to a network of human workers.

In the case of deleting duplicates, a central computer could divide Amazon’s site into small sections—say, 100 product pages for can openers—and send the sections to human workers over the Internet. The workers could then identify duplicates in these small units and send their pieces of the puzzle back. 

This distributed system offered a crucial advantage: The workers didn’t have to be centralized in one place but could instead complete the subtasks on their own personal computers wherever they happened to be, whenever they chose. Essentially, what Harinarayan developed was an effective way to distribute low-skill yet difficult-to-automate work to a broad network of humans who could work in parallel.
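The divide/distribute/combine shape of that arrangement can be sketched in Python. This is a rough illustration, not the patent's actual mechanism: a trivial duplicate-spotting heuristic stands in for the human worker, a thread pool stands in for the network of remote workers, and the function names are invented. It also assumes, as in the can-opener example above, that batches group similar products so that duplicates land in the same batch.

```python
from concurrent.futures import ThreadPoolExecutor

def find_duplicates(pages):
    """The 'subtask' one worker performs on a small batch of product
    pages: flag any title that appears more than once in the batch."""
    seen, dupes = set(), []
    for title in pages:
        if title in seen:
            dupes.append(title)
        seen.add(title)
    return dupes

def dedup_catalog(catalog, batch_size=100, workers=4):
    """Split the catalog into fixed-size batches, farm each batch out to a
    worker in parallel, then merge the partial results back together."""
    batches = [catalog[i:i + batch_size]
               for i in range(0, len(catalog), batch_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(find_duplicates, batches)
    return [dupe for partial in partials for dupe in partial]
```

The central computer only does the cheap parts (splitting and merging); the judgment-requiring middle step is pushed out to many independent workers, which is exactly what made the arrangement attractive for work that was hard to automate.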

The method proved so effective in Amazon’s internal operations that Jeff Bezos decided it could be sold as a service to other companies. Bezos turned Harinarayan’s technology into a marketplace for laborers. There, businesses that had tasks that were easy for humans (but hard to automate) could be matched with a network of freelance workers, who would do the tasks for small amounts of money.

Thus was born Amazon Mechanical Turk, or mTurk for short. The service launched in 2005, and the user base quickly grew. Businesses and researchers around the globe began uploading thousands of so-called “human intelligence tasks” onto the platform, such as transcribing audio or captioning images. These tasks were dutifully carried out by an internationally dispersed and anonymous group of workers for a small fee (one aggrieved worker reported an average fee of 20 cents per task). 

The name of this new service was a wink at the chess-playing machine of the 18th century, the Mechanical Turk invented by the huckster Wolfgang von Kempelen. And just like that faux automaton, inside which hid a human chess player, the mTurk platform was designed to make human labor invisible. Workers on the platform are not represented with names, but with numbers, and communication between the requester and the worker is entirely depersonalized. Bezos himself has called these dehumanized workers “artificial artificial intelligence.”

Today, mTurk is a thriving marketplace with hundreds of thousands of workers around the world. While the online platform provides a source of income for people who otherwise might not have access to jobs, the labor conditions are highly questionable. Some critics have argued that by keeping the workers invisible and atomized, Amazon has made it easier for them to be exploited. A research paper [PDF] published in December 2017 found that workers earned a median wage of approximately US $2 per hour, and only 4 percent earned more than $7.25 per hour.

Interestingly, mTurk has also become crucial for the development of machine-learning applications. In machine learning, an AI program is given a large data set, then learns on its own how to find patterns and draw conclusions. MTurk workers are frequently used to build and label these training data sets, yet their role in machine learning is often overlooked.

The dynamic now playing out between the AI community and mTurk is one that has been ever-present throughout the history of machine intelligence. We eagerly admire the visage of the autonomous “intelligent machine,” while ignoring, or even actively concealing, the human labor that makes it possible.

Perhaps we can take a lesson from the author Edgar Allan Poe. When he viewed von Kempelen’s Mechanical Turk, he was not fooled by the illusion. Instead, he wondered what it would be like for the chess player trapped inside, the concealed laborer “tightly compressed” among cogs and levers in “exceedingly painful and awkward positions.”

In our current moment, when headlines about AI breakthroughs populate our news feeds, it’s important to remember Poe’s forensic attitude. It can be entertaining—if sometimes alarming—to be swept up in the hype over AI, and to be carried away by the vision of machines that have no need for mere mortals. But if you look closer, you’ll likely see the traces of human labor.

This is the final installment of a six-part series on the untold history of AI. Part 5 told a story of algorithmic bias—from the 1980s. 

Eight years, 2000 blog posts

Post Syndicated from Liz Upton original https://www.raspberrypi.org/blog/eight-years-2000-blog-posts/

Today’s a bit of a milestone for us: this is the 2000th post on this blog.

Why does a computer company have a blog? When did it start, who writes it, and where does the content come from? And don’t you have sore fingers? All of these are good questions: I’m here to answer them for you.

The first ever Raspberry Pi blog post

Marital circumstances being what they are, I had a front-row view of everything that was going on at Raspberry Pi, right from the original conversations that kicked the project off in 2009. In 2011, when development was still being done on Eben’s and my kitchen table, we met with sudden and slightly alarming fame when Rory Cellan-Jones from the BBC shot a short video of a prototype Raspberry Pi and blogged about it – his post went viral. I was working as a freelance journalist and editor at the time, but realised that we weren’t going to get a better chance to kickstart a community, so I dropped my freelance work and came to work full-time for Raspberry Pi.

Setting up an instantiation of WordPress so we could talk to all Rory’s readers, each of whom decided we’d promised we’d make them a $25 computer, was one of the first orders of business. We could use the WordPress site to announce news, and to run a sort of devlog, which is what became this blog; back then, many of our blog posts were about the development of the original Raspberry Pi.

It was a lovely time to be writing about what we do, because we could be very open about the development process and how we were moving towards launch in a way that, sadly, is closed to us today. (If we’d blogged about the development of Raspberry Pi 3 in the detail we’d blogged about Raspberry Pi 1, we’d not only have been handing sensitive and helpful commercial information to the large number of competitor organisations that have sprung up like mushrooms since that original launch; but you’d also all have stopped buying Pi 2 in the run-up, starving us of the revenue we need to do the development work.)

Once Raspberry Pis started making their way into people’s hands in early 2012, I realised there was something else that it was important to share: news about what new users were doing with their Pis. And I will never, ever stop being shocked at the applications of Raspberry Pi that you come up with. Favourites from over the years? The paludarium’s still right up there (no, I didn’t know what a paludarium was either when I found out about it); the cucumber sorter’s brilliant; and the home-brew artificial pancreas blows my mind. I’ve a particular soft spot for musical projects (which I wish you guys would comment on a bit more so I had an excuse to write about more of them).

As we’ve grown, my job has grown too, so I don’t write all the posts here like I used to. I oversee press, communications, marketing and PR for Raspberry Pi Trading now, working with a team of writers, editors, designers, illustrators, photographers, videographers and managers – it’s very different from the days when the office was that kitchen table. Alex Bate, our magisterial Head of Social Media, now writes a lot of what you see on this blog, but it’s always a good day for me when I have time to pitch in and write a post.

I’d forgotten some of the early stuff before looking at 2011’s blog posts to jog my memory as I wrote today’s. What were we thinking when we decided to ship without GPIO pins soldered on? (Happily for the project and for the 25,000,000 Pi owners all over the world in 2019, we changed our minds before we finally launched.) Just how many days in aggregate did I spend stuffing envelopes with stickers at £1 a throw to raise some early funds to get the first PCBs made? (I still have nightmares about the paper cuts.) And every time I think I’m having a bad day, I need to remember that this thing happened, and yet everything was OK again in the end. (The backs of my hands have gone all prickly just thinking about it.) Now I think about it, the Xenon Death Flash happened too. We also survived that.

At the bottom of it all, this blog has always been about community. It’s about sharing what we do, what you do, and making links between people all over the world who have this little machine in common. The work you do telling people about Raspberry Pi, putting it into your own projects, and supporting us by buying the product doesn’t just help us make hardware: every penny we make funds the Raspberry Pi Foundation’s charitable work, helps kids on every continent to learn the skills they need to make their own futures better, and, we think, makes the world a better place. So thank you. As long as you keep reading, we’ll keep writing.

The post Eight years, 2000 blog posts appeared first on Raspberry Pi.

Storing Encrypted Credentials In Git

Post Syndicated from Bozho original https://techblog.bozho.net/storing-encrypted-credentials-in-git/

We all know that we should not commit any passwords or keys to the repo with our code (no matter if public or private). Yet, thousands of production passwords can be found on GitHub (and probably thousands more in internal company repositories). Some have tried to fix that by removing the passwords (once they learned it’s not a good idea to store them publicly), but passwords have remained in the git history.

Knowing what not to do is the first and very important step. But how do we store production credentials? Database credentials, system secrets (e.g. for HMACs), access keys for 3rd party services like payment providers or social networks. There doesn’t seem to be an agreed-upon solution.

I’ve previously argued with the 12-factor app recommendation to use environment variables – if you have a few, that might be okay, but when the number of variables grows (as in any real application), it becomes impractical. You can set environment variables via a bash script, but you’d have to store that script somewhere. And in fact, even separate environment variables should be stored somewhere.

This somewhere could be a local directory (risky), a shared storage, e.g. FTP or S3 bucket with limited access, or a separate git repository. I think I prefer the git repository as it allows versioning (Note: S3 also does, but is provider-specific). So you can store all your environment-specific properties files with all their credentials and environment-specific configurations in a git repo with limited access (only Ops people). And that’s not bad, as long as it’s not the same repo as the source code.

Such a repo would look like this:

└─── production
|   |   application.properties
|   |   keystore.jks
└─── staging
|   |   application.properties
|   |   keystore.jks
└─── on-premise-client1
|   |   application.properties
|   |   keystore.jks
└─── on-premise-client2
|   |   application.properties
|   |   keystore.jks

Since many companies are using GitHub or BitBucket for their repositories, storing production credentials on a public provider may still be risky. That’s why it’s a good idea to encrypt the files in the repository. A good way to do that is with git-crypt. Its encryption is “transparent”: it supports diffs and encrypts and decrypts on the fly, so once you set it up, you continue working with the repo as if it weren’t encrypted. There’s even a fork that works on Windows.

You simply run git-crypt init (after you’ve put the git-crypt binary on your OS Path), which generates a key. Then you specify your .gitattributes, e.g. like that:

secretfile filter=git-crypt diff=git-crypt
*.key filter=git-crypt diff=git-crypt
*.properties filter=git-crypt diff=git-crypt
*.jks filter=git-crypt diff=git-crypt

And you’re done. Well, almost. If this is a fresh repo, everything is good. If it is an existing repo, you’d have to clean up your history which contains the unencrypted files. Following these steps will get you there, with one addition – before calling git commit, you should call git-crypt status -f so that the existing files are actually encrypted.

You’re almost done. We should somehow share and backup the keys. For the sharing part, it’s not a big issue to have a team of 2-3 Ops people share the same key, but you could also use the GPG option of git-crypt (as documented in the README). What’s left is to backup your secret key (that’s generated in the .git/git-crypt directory). You can store it (password-protected) in some other storage, be it a company shared folder, Dropbox/Google Drive, or even your email. Just make sure your computer is not the only place where it’s present and that it’s protected. I don’t think key rotation is necessary, but you can devise some rotation procedure.
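Pulling the steps above together, a typical session might look like the following. This is a sketch rather than a transcript: the repository URL, directory names, and backup path are placeholders, and the commands (init, status -f, export-key, unlock) are the ones documented in the git-crypt README.

```shell
cd credentials-repo
git-crypt init                       # generates the symmetric key in .git/git-crypt

# Tell git which files must be encrypted:
cat > .gitattributes <<'EOF'
*.properties filter=git-crypt diff=git-crypt
*.jks filter=git-crypt diff=git-crypt
EOF

git-crypt status -f                  # fix mode: encrypt matching files already present
git add .gitattributes production/ staging/
git commit -m "Add encrypted environment configurations"

# Back up the key (password-protect it before putting it anywhere shared):
git-crypt export-key /secure/backup/git-crypt.key

# A teammate with the key decrypts a fresh clone like so:
git clone git@example.com:ops/credentials-repo.git && cd credentials-repo
git-crypt unlock /secure/backup/git-crypt.key
```

Anyone who clones the repo without the key simply sees opaque encrypted blobs, which is the failure mode you want.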

The git-crypt authors claim it shines when encrypting just a few files in an otherwise public repo, and recommend looking at git-remote-gcrypt otherwise. But since environment-specific configurations often have non-sensitive parts, you may not want to encrypt everything, and I think it’s perfectly fine to use git-crypt even in a separate-repo scenario. And even though encryption is an okay approach to protecting credentials in your source code repo, it’s still not necessarily a good idea to keep the environment configurations in the same repo, especially given that different people/teams manage these credentials. Even in small companies, not all members may have production access.

The outstanding question in this case is how to keep the properties in sync with code changes. Sometimes the code adds new properties that should be reflected in the environment configurations. There are two scenarios here: first, properties that can vary across environments but have sensible defaults (e.g. scheduled job periods), and second, properties that require explicit configuration (e.g. database credentials). The former can have their default values bundled in the code repo, and therefore in the release artifact, with the external files overriding them. The latter should be announced to the people doing the deployment so that they can set the proper values.
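For the first kind, the usual convention (illustrated here with Spring Boot, and with hypothetical property names and values) is to ship a default inside the artifact and let the environment-specific file override it:

```properties
# In the code repo (bundled into the release artifact), e.g.
# src/main/resources/application.properties -- sensible defaults:
job.cleanup.period.seconds=3600

# In the environment config repo, e.g.
# on-premise-client1/application.properties -- explicit values that
# override the bundled defaults (in Spring Boot, the external file can
# be pointed to via spring.config.additional-location at startup):
job.cleanup.period.seconds=7200
db.username=app_user
db.password=<set-by-ops>
```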

The whole process of having versioned environment-specific configurations is actually quite simple and logical, even with encryption added to the picture. And I think it’s a good security practice we should try to follow.

The post Storing Encrypted Credentials In Git appeared first on Bozho's tech blog.

Hiring a Director of Sales

Post Syndicated from Yev original https://www.backblaze.com/blog/hiring-a-director-of-sales/

Backblaze is hiring a Director of Sales. This is a critical role for Backblaze as we continue to grow the team. We need a strong leader who has experience in scaling a sales team and who has an excellent track record for exceeding goals by selling Software as a Service (SaaS) solutions. In addition, this leader will need to be highly motivated, and able to create and develop a highly-motivated, success-oriented sales team that has fun and enjoys what they do.

The History of Backblaze from our CEO
In 2007, after a friend’s computer crash caused her some suffering, we realized that with every photo, video, song, and document going digital, everyone would eventually lose all of their information. Five of us quit our jobs to start a company with the goal of making it easy for people to back up their data.

Like many startups, for a while we worked out of a co-founder’s one-bedroom apartment. Unlike most startups, we made an explicit agreement not to raise funding during the first year. We would then touch base every six months and decide whether to raise or not. We wanted to focus on building the company and the product, not on pitching and slide decks. And critically, we wanted to build a culture that understood money comes from customers, not the magical VC giving tree. Over the course of 5 years we built a profitable, multi-million dollar revenue business — and only then did we raise a VC round.

Fast forward 10 years later and our world looks quite different. You’ll have some fantastic assets to work with:

  • A brand millions recognize for openness, ease-of-use, and affordability.
  • A computer backup service that stores over 500 petabytes of data, has recovered over 30 billion files for hundreds of thousands of paying customers — most of whom self-identify as being the people that find and recommend technology products to their friends.
  • Our B2 service that provides the lowest cost cloud storage on the planet at 1/4th the price Amazon, Google or Microsoft charges. While being a newer product on the market, it already has over 100,000 IT professionals and developers signed up as well as an ecosystem building up around it.
  • A growing, profitable and cash-flow positive company.
  • And last, but most definitely not least: a great sales team.

You might be saying, “sounds like you’ve got this under control — why do you need me?” Don’t be misled. We need you. Here’s why:

  • We have a great team, but we are in the process of expanding and we need to develop a structure that will easily scale and provide the most success to drive revenue.
  • We just launched our outbound sales efforts and we need someone to help develop that into a fully successful program that’s building a strong pipeline and closing business.
  • We need someone to work with the marketing department and figure out how to generate more inbound opportunities that the sales team can follow up on and close.
  • We need someone who will work closely in developing the skills of our current sales team and build a path for career growth and advancement.
  • We want someone to manage our Customer Success program.

So that’s a bit about us. What are we looking for in you?

Experience: As a sales leader, you will strategically build and drive the territory’s sales pipeline by assembling and leading a skilled team of sales professionals. This leader should be familiar with generating, developing and closing software subscription (SaaS) opportunities. We are looking for a self-starter who can manage a team and make an immediate impact of selling our Backup and Cloud Storage solutions. In this role, the sales leader will work closely with the VP of Sales, marketing staff, and service staff to develop and implement specific strategic plans to achieve and exceed revenue targets, including new business acquisition as well as build out our customer success program.

Leadership: We have an experienced team who’s brought us to where we are today. You need to have the people and management skills to get them excited about working with you. You need to be a strong leader and compassionate about developing and supporting your team.

Data driven and creative: The data has to show something makes sense before we scale it up. However, without creativity, it’s easy to say “the data shows it’s impossible” or to find a local maximum. Whether it’s deciding how to scale the team, figuring out what our outbound sales efforts should look like or putting a plan in place to develop the team for career growth, we’ve seen a bit of creativity get us places a few extra dollars couldn’t.

Jive with our culture: Strong leaders affect culture and the person we hire for this role may well shape, not only fit into, ours. But to shape the culture you have to be accepted by the organism, which means a certain set of shared values. We default to openness with our team, our customers, and everyone if possible. We love initiative — without arrogance or dictatorship. We work to create a place people enjoy showing up to work. That doesn’t mean ping pong tables and foosball (though we do try to have perks & fun), but it means people are friendly, non-political, working to build a good service but also a good place to work.

Do the work: Ideas and strategy are critical, but good execution makes them happen. We’re looking for someone who can help the team execute both from the perspective of being capable of guiding and organizing, but also someone who is hands-on themselves.

Additional Responsibilities needed for this role:

  • Recruit, coach, mentor, manage and lead a team of sales professionals to achieve yearly sales targets. This includes closing new business and expanding upon existing clientele.
  • Expand the customer success program to provide the best customer experience possible resulting in upsell opportunities and a high retention rate.
  • Develop effective sales strategies and deliver compelling product demonstrations and sales pitches.
  • Acquire and develop the appropriate sales tools to make the team efficient in their daily work flow.
  • Apply a thorough understanding of the marketplace, industry trends, funding developments, and products to all management activities and strategic sales decisions.
  • Ensure that sales department operations function smoothly, with the goal of facilitating sales and/or closings; operational responsibilities include accurate pipeline reporting and sales forecasts.
  • This position will report directly to the VP of Sales and will be staffed in our headquarters in San Mateo, CA.


Requirements:

  • 7 – 10+ years of successful sales leadership experience as measured by sales performance against goals.
  • Experience in developing skill sets and providing career growth and opportunities through advancement of team members.
  • Background in selling SaaS technologies with a strong track record of success.
  • Strong presentation and communication skills.
  • Must be able to travel occasionally nationwide.
  • BA/BS degree required.

Think you want to join us on this adventure?
Send an email to jobscontact@backblaze.com with the subject “Director of Sales.” (Recruiters and agencies, please don’t email us.) Include a resume and answer these two questions:

  1. How would you approach evaluating the current sales team and what is your process for developing a growth strategy to scale the team?
  2. What are the goals you would set for yourself in the 3 month and 1-year timeframes?

Thank you for taking the time to read this and I hope that this sounds like the opportunity for which you’ve been waiting.

Backblaze is an Equal Opportunity Employer.

The post Hiring a Director of Sales appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Japan’s Directorate for Signals Intelligence

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/05/japans_director.html

The Intercept has a long article on Japan’s equivalent of the NSA: the Directorate for Signals Intelligence. Interesting, but nothing really surprising.

The directorate has a history that dates back to the 1950s; its role is to eavesdrop on communications. But its operations remain so highly classified that the Japanese government has disclosed little about its work, even the location of its headquarters. Most Japanese officials, except for a select few of the prime minister’s inner circle, are kept in the dark about the directorate’s activities, which are regulated by a limited legal framework and not subject to any independent oversight.

Now, a new investigation by the Japanese broadcaster NHK — produced in collaboration with The Intercept — reveals for the first time details about the inner workings of Japan’s opaque spy community. Based on classified documents and interviews with current and former officials familiar with the agency’s intelligence work, the investigation shines light on a previously undisclosed internet surveillance program and a spy hub in the south of Japan that is used to monitor phone calls and emails passing across communications satellites.

The article includes some new documents from the Snowden archive.

The Software Freedom Conservancy on Tesla’s GPL compliance

Post Syndicated from corbet original https://lwn.net/Articles/754919/rss

The Software Freedom Conservancy has put out a blog posting on the history and current status of Tesla’s GPL compliance issues. “We’re thus glad that, this week, Tesla has acted publicly regarding its current GPL violations and has announced that they’ve taken their first steps toward compliance. While Tesla acknowledges that they still have more work to do, their recent actions show progress toward compliance and a commitment to getting all the way there.”