Tag Archives: history

A Brief History of the Lie Detector

Post Syndicated from Allison Marsh original https://spectrum.ieee.org/tech-history/heroic-failures/a-brief-history-of-the-lie-detector

It’s surprisingly hard to create a real-life Lasso of Truth

When Wonder Woman deftly ensnares someone in her golden lariat, she can compel that person to speak the absolute truth. It’s a handy tool for battling evil supervillains. Had the Lasso of Truth been an actual piece of technology, police detectives no doubt would be lining up to borrow it.

Indeed, for much of the past century, psychologists, crime experts, and others have searched in vain for an infallible lie detector. Some thought they’d discovered it in the polygraph machine. A medical device for recording a patient’s vital signs—pulse, blood pressure, temperature, breathing rate—the polygraph was designed to help diagnose cardiac anomalies and to monitor patients during surgery.

The polygraph was a concatenation of several instruments. One of the first was a 1906 device, invented by British cardiologist James Mackenzie, that measured the arterial and venous pulse and plotted them as continuous lines on paper. The Grass Instrument Co., of Massachusetts, maker of the 1960 polygraph machine pictured above, also sold equipment for monitoring EEGs, epilepsy, and sleep.

The leap from medical device to interrogation tool is a curious one, as historian Ken Alder describes in his 2007 book The Lie Detectors: The History of an American Obsession (Free Press). Well before the polygraph’s invention, scientists had tried to link vital signs with emotions. As early as 1858, French physiologist Étienne-Jules Marey recorded bodily changes as responses to uncomfortable stressors, including nausea and sharp noises. In the 1890s, Italian criminologist Cesare Lombroso used a specialized glove to measure a criminal suspect’s blood pressure during interrogation. Lombroso believed that criminals constituted a distinct, lower race, and his glove was one way he tried to verify that belief.

In the years leading up to World War I, Harvard psychologist Hugo Münsterberg used a variety of instruments, including the polygraph, to record and analyze subjective feelings. Münsterberg argued for the machine’s application to criminal law, seeing in it both scientific impartiality and conclusiveness.

As an undergraduate, William Moulton Marston worked in Münsterberg’s lab and was captivated by his vision. After receiving his B.A. in 1915, Marston decided to continue at Harvard, pursuing both a law degree and a Ph.D. in psychology, which he saw as complementary fields. He invented a systolic blood pressure cuff and with his wife, Elizabeth Holloway Marston, used the device to investigate the links between vital signs and emotions. In tests on fellow students, he reported a 96 percent success rate in detecting liars.

World War I proved to be a fine time to research the arts of deception. Robert Mearns Yerkes, who also earned a Ph.D. in psychology from Harvard and went on to develop intelligence tests for the U.S. Army, agreed to sponsor more rigorous tests of Marston’s research under the aegis of the National Research Council. In one test on 20 detainees in the Boston Municipal Court, Marston claimed a 100 percent success rate in lie detection. But his high success rate made his supervisors suspicious. And his critics argued that interpreting polygraph results was more art than science. Many people, for instance, experience a higher heart rate and blood pressure when they feel nervous or stressed, which may in turn affect their reaction to a lie detector test. Maybe they’re lying, but maybe they just don’t like being interrogated.

Marston (like Yerkes) was a racist. He claimed he could not be fully confident in the results on African Americans because he thought their minds were more primitive than those of whites. The war ended before Marston could convince other psychologists of the validity of the polygraph.

Across the country in Berkeley, Calif., the chief of police was in the process of turning his department into a science- and data-driven crime-fighting powerhouse. Chief August Vollmer centralized his department’s command and communications and had his officers communicate by radio. He created a records system with extensive cross-references for fingerprints and crime types. He compiled crime statistics and assessed the efficacy of policing techniques. He started an in-house training program for officers, with university faculty teaching evidentiary law, forensics, and crime-scene photography. In 1916 Vollmer hired the department’s first chemist, and in 1919 he began recruiting college graduates to become officers. He vetted all applicants with a battery of intelligence tests and psychiatric exams.

Against this backdrop, John Augustus Larson, a rookie cop who happened to have a Ph.D. in physiology, read Marston’s 1921 article “Physiological Possibilities of the Deception Test” [PDF]. Larson decided he could improve Marston’s technique and began testing subjects using his own contraption, the “cardio-pneumo-psychogram.” Vollmer gave Larson free rein to test his device in hundreds of cases.

Larson established a protocol of yes/no questions, delivered by the interrogator in a monotone, to create a baseline sample. All suspects in a case were also asked the same set of questions about the case; no interrogation lasted more than a few minutes. Larson secured consent before administering his tests, although he believed only guilty parties would refuse to participate. In all, he tested 861 subjects in 313 cases, corroborating 80 percent of his findings. Chief Vollmer was convinced and helped promote the polygraph through newspaper stories.

And yet, despite the Berkeley Police Department’s enthusiastic support and a growing popular fascination with the lie detector, U.S. courts were less than receptive to polygraph results as evidence.

In 1922, for instance, Marston applied to be an expert witness in the case of Frye v. United States. The defendant, James Alphonso Frye, had been arrested for robbery and then confessed to the murder of Dr. R.W. Brown. Marston believed his lie detector could verify that Frye’s confession was false, but he never got the chance.

Chief Justice Walter McCoy didn’t allow Marston to take the stand, claiming that lie detection was not “a matter of common knowledge.” The decision was upheld by the court of appeals with a slightly different justification: that the science was not widely accepted by the relevant scientific community. This became known as the Frye Standard, or the general acceptance test, and it set the precedent for courts’ acceptance of any new scientific test as evidence.

Marston was no doubt disappointed, and the idea of an infallible lie detector seems to have stuck with him. Later in life, he helped create Wonder Woman. The superhero’s Lasso of Truth proved far more effective at apprehending criminals and revealing their misdeeds than Marston’s polygraph ever was.

To this day, polygraph results are not admissible in most courts. Decades after the Frye case, the U.S. Supreme Court, in United States v. Scheffer, ruled that criminal defendants could not admit polygraph evidence in their defense, noting that “the scientific community remains extremely polarized about the reliability of polygraph techniques.”

But that hasn’t stopped the use of polygraphs for criminal investigation, at least in the United States. The U.S. military, the federal government, and other agencies have also made ample use of the polygraph in determining a person’s suitability for employment and security clearances.

Meanwhile, the technology of lie detection has evolved from monitoring basic vital signs to tracking brain waves. In the 1980s, J. Peter Rosenfeld, a psychologist at Northwestern University, developed one of the first methods for doing so. It took advantage of a type of brain activity, known as the P300 response, which occurs about 300 milliseconds after a person recognizes a distinct image. The idea behind Rosenfeld’s P300 test was that a suspect accused, say, of theft would have a distinct P300 response when shown an image of the stolen object, while an innocent party would not. One of the main drawbacks was finding an image associated with the crime that only the perpetrator would have seen.
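
The comparison at the heart of such a test can be sketched in a few lines. This is purely illustrative: the function name, amplitude values, and the 1.5x threshold below are invented for this example, not taken from Rosenfeld’s actual protocol.

```python
# Illustrative sketch of a P300 "guilty knowledge" comparison.
# The names, numbers, and threshold are invented for this example.

def mean(xs):
    return sum(xs) / len(xs)

def knows_item(probe_uv, irrelevant_uv, ratio=1.5):
    """Flag recognition if the average P300 amplitude for the
    crime-relevant (probe) image is markedly larger than the
    response to irrelevant images (amplitudes in microvolts)."""
    return mean(probe_uv) > ratio * mean(irrelevant_uv)

# A suspect with a strong response to the stolen object:
print(knows_item([9.1, 8.7, 10.2], [3.0, 2.6, 3.4]))  # -> True
# An innocent party responds to all images about equally:
print(knows_item([3.2, 2.9, 3.1], [3.0, 2.6, 3.4]))   # -> False
```

The drawback noted above is visible in the sketch: the logic only works if the probe image is meaningful solely to the guilty party.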

In 2002 Daniel Langleben, a professor of psychiatry at the University of Pennsylvania, began using functional magnetic resonance imaging, or fMRI, to do real-time imaging of the brain while a subject was telling the truth and while lying. Langleben found that the brain was generally more active when lying and suggested that truth telling was the default modality for most humans, which I would say is a point in favor of humanity. Langleben has reported being able to correctly classify individual lies or truths 78 percent of the time. (In 2010, IEEE Spectrum contributing editor Mark Harris wrote about his own close encounter with an fMRI lie detector. It’s a good read.)

More recently, the power of artificial intelligence has been brought to bear on lie detection. Researchers at the University of Arizona developed the Automated Virtual Agent for Truth Assessments in Real-Time, or AVATAR, for interrogating an individual via a video interface. The system uses AI to assess changes in the person’s eyes, voice, gestures, and posture that raise flags about possible deception. According to Fast Company and CNBC, the U.S. Department of Homeland Security has been testing AVATAR at border crossings to identify people for additional screening, with a reported success rate of 60 to 75 percent. The accuracy of human judges, by comparison, is at best 54 to 60 percent, according to AVATAR’s developers.

While the results for AVATAR and fMRI may seem promising, they also show the machines are not infallible. Both techniques compare individual results against group data sets. As with any machine-learning algorithm, the data set must be diverse and representative of the entire population. If the data is of poor quality or incomplete, if the algorithm is biased, or if the sensors measuring the subject’s physiological response don’t work properly, the result is simply a higher-tech version of Marston’s scientific racism.

Both fMRI and AVATAR pose new challenges to the already contested history of lie detection technology. Over the years, psychologists, detectives, and governments have continued to argue for their validity. There is, for example, a professional organization called the American Polygraph Association. Meanwhile, lawyers, civil libertarians, and other psychologists have decried their use. Proponents seem to have an unwavering faith in data and instrumentation over human intuition. Detractors see many alternative explanations for positive results and cite a preponderance of evidence that polygraph tests are no more reliable than guesswork.

Along the way, sensational crime reporting and Hollywood dramatizations have led the public to believe that lie detectors are a proven technology and also, contradictorily, that master criminals can fake the results.

I think Ken Alder comes closest to the truth when he notes that at its core, the lie detector is really only successful when suspects believe it works.

An abridged version of this article appears in the August 2019 print issue as “A Real-Life Lasso of Truth.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

Chip Hall of Fame: MOS Technology 6581

Post Syndicated from Stephen Cass original https://spectrum.ieee.org/tech-history/silicon-revolution/chip-hall-of-fame-mos-technology-6581

A synthesizer that defined the sound of a generation

1982 was a big year for music. Not only did Michael Jackson release Thriller, the bestselling album of all time, but Madonna made her debut. The year also saw the launch of the Commodore 64 microcomputer. Thanks to the C64, millions of homes were equipped with a programmable electronic synthesizer, one that’s still in vogue.

The C64 became the bestselling computer of all time (some 17 million were sold) largely because it had graphics and sound capabilities that punched way above the system’s price tag: US $600 on release, soon falling to $149. Like many machines from that era, the C64 has a devoted following in the retrocomputing community, and emulators are available that let you run nearly all its software on modern hardware. What’s unusual is that a specific supporting chip inside the C64 has also retained its own dedicated following: the 6581 SID sound chip.

The C64 was developed by MOS Technology in 1981. MOS had already had a hit in the microcomputing world with its creation of the 6502 CPU in 1975. That chip—and a small family of variants—was used to power popular home computers and game consoles such as the Apple II and Atari 2600. As recounted in IEEE Spectrum’s March 1985 design case history [PDF] of the C64 by Tekla S. Perry and Paul Wallich, MOS originally intended just to make a new graphics chip and a new sound chip. The idea was to sell them as components to microcomputer manufacturers. But those chips turned out to be so good that MOS decided to make its own computer.

Creation of the sound chip fell to a young engineer called Robert Yannes. He was the perfect choice for the job, motivated by a long-standing interest in electronic sound. Although there were some advanced microcomputer-controlled synthesizers available, including the Super Sound board designed for use with the Cosmac VIP system, the built-in sound generation tech in home computers was relatively crude. Yannes had higher ambitions. “I’d worked with synthesizers, and I wanted a chip that was a music synthesizer,” Yannes told Spectrum in 1985. His big advantage was that MOS had a manufacturing fab on-site. This allowed for cheap and fast experimentation and testing: “The actual design only took about four or five months,” said Yannes.

On a hardware level, what made the 6581 SID stand out was better frequency control of its internal oscillators and, critically, an easy way for programmers to control what’s known as the sound envelope. Early approaches to using computers to generate musical tones (starting with one by Alan Turing himself) produced sound that was either off or on at a fixed intensity, like a buzzer. But most musical instruments don’t work that way: Think of how a piano note can be struck sharply or softly, and how a note can linger before decaying into silence. The sound envelope defines how a note’s intensity rises and falls. Some systems allowed the volume to be adjusted as the note played, but this was awkward to program. Yannes incorporated data registers into the 6581 SID so that a developer could define an envelope once and then leave it to the chip to control the intensity, rather than programming the CPU to send volume-control commands as each note played (something few developers bothered to attempt).
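
The envelope idea can be sketched in a few lines of Python. This illustrates the general attack/decay/sustain/release concept that the SID implemented in hardware; it is not the chip’s actual register model or rate tables.

```python
# Sketch of an attack/decay/sustain/release (ADSR) envelope -- the
# general concept the SID put in hardware, not its actual registers.
# Times are in seconds, levels in 0..1.

def adsr_level(t, attack=0.01, decay=0.1, sustain=0.6,
               release=0.3, note_off=0.5):
    """Envelope amplitude at time t, for a note released at note_off."""
    if t < 0:
        return 0.0
    if t < attack:                        # rise from silence to peak
        return t / attack
    if t < attack + decay:                # fall from peak to sustain
        frac = (t - attack) / decay
        return 1.0 - frac * (1.0 - sustain)
    if t < note_off:                      # hold while the note is down
        return sustain
    if t < note_off + release:            # fade out after release
        frac = (t - note_off) / release
        return sustain * (1.0 - frac)
    return 0.0
```

Setting four parameters once and letting the hardware track the resulting curve is what freed C64 programmers from sending a stream of volume updates from the CPU.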

The SID chip has three sound channels that can play simultaneously using three basic waveforms, plus a fourth “noise” waveform that produces anything from a low rumble to hissing static, depending on the frequency. The chip has the ability to filter and modulate the channels to produce an even wider range of sounds. Some programmers discovered they could tease the chip into doing things it was never designed to do, such as speech synthesis. This was perhaps most famously used in Ghostbusters, a 1984 game based on the movie of the same name, in which the C64 would utter low-fidelity catchphrases from the movie, such as “He slimed me!”
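
Those basic waveforms are simple functions of oscillator phase, and mixing channels is just summing samples. The sketch below illustrates the general idea only; the real chip generates these digitally and derives its noise from a shift register, among other differences.

```python
import random

# Sketch of SID-style waveforms as functions of oscillator phase in
# [0, 1). Illustrative only -- not the chip's actual internals.

def triangle(phase):
    return 1.0 - 4.0 * abs(phase - 0.5)

def sawtooth(phase):
    return 2.0 * phase - 1.0

def pulse(phase, duty=0.5):
    return 1.0 if phase < duty else -1.0

def noise(_phase, _rng=random.Random(0)):
    return _rng.uniform(-1.0, 1.0)

def render(wave, freq_hz, n_samples, sample_rate=44100):
    """Sample a waveform at the given frequency."""
    return [wave((freq_hz * i / sample_rate) % 1.0)
            for i in range(n_samples)]

def mix(*channels):
    """Average several channels' samples into one output stream."""
    return [sum(s) / len(channels) for s in zip(*channels)]
```

For example, `mix(render(triangle, 440, 100), render(sawtooth, 220, 100), render(pulse, 110, 100))` yields a three-voice chord of the kind the SID’s three channels produced in hardware.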

But stunts like speech synthesis aside, the SID chip’s design meant that home computer games could have truly musical soundtracks. Developers started hiring composers to create original works for C64 games—indeed, some titles today are solely remembered because of a catchy soundtrack.

Unlike in modern game development, in which soundtrack creation is technically similar to conventional music recording (up to, and including, using orchestras and choirs), these early composers had to be familiar with how the SID chip was programmed at the hardware level, as well as its behavioral quirks. (Because the chip got to market so quickly, MOS’s documentation of the 6581 SID was notoriously lousy, with Yannes acknowledging to Spectrum in 1985 that “the spec sheet got distributed and copied and rewritten by various people until it made practically no sense anymore.”)

At the time, these composers were generally unknown outside the games industry. Many of them moved on to other things after the home computer boom faded and their peculiar hybrid of musical and programming expertise was less in demand. In more recent years, however, some of them have been celebrated, such as the prolific Ben Daglish, who composed the music for dozens of popular games.

Daglish (who created my favorite C64 soundtrack, for 1987’s Re-Bounder) was initially bemused that people in the 21st century were still interested in music created for, and by, the SID chip, but he became a popular guest at retrocomputing and so-called chiptunes events before his untimely death in late 2018.

Chiptunes (also known as bitpop) is a genre of original music that leans into the distinctive sound of 1980s computer sound chips. Some composers use modern synthesizers programmed to replicate that sound, but others like to use the original hardware, especially the SID chips (with or without the surrounding C64 system). Because the 6581 SID hasn’t been in production for many years, this has resulted in a brisk aftermarket for old chips—and one that’s big enough that crooks have made fake chips, or reconditioned dead chips, to sell to enthusiasts. Other people have created modern drop-in replacements for the SID chip, such as the SwinSID.

There are several options if you’d like to listen to a classic C64 game soundtrack or a modern chiptune without investing in hardware. You can find many on YouTube, and projects like SOASC= are dedicated to playing tunes on original SID chips and recording the output using modern audio formats. But for a good balance between modern convenience and hard-core authenticity, I’d recommend using a player like Sidplay, which emulates the SID chip and can play music data extracted from original software code. Even after the last SID chip finally burns out, its sound will live on.

An abridged version of this article appears in the July 2019 print issue as “Chip Hall of Fame: SID 6581.”

How NASA Recruited Snoopy and Drafted Barbie

Post Syndicated from Allison Marsh original https://spectrum.ieee.org/tech-history/space-age/how-nasa-recruited-snoopy-and-drafted-barbie

The space agency has long relied on kid-friendly mascots to make the case for space


In the comic-strip universe of Peanuts, Snoopy beat Neil Armstrong to the moon. It was in March 1969—four months before Armstrong would take his famous small step—that the intrepid astrobeagle and his flying doghouse touched down on the lunar surface. “I beat the Russians…I beat everybody,” Snoopy marveled. “I even beat that stupid cat who lives next door!”

The comic-strip dog had begun a formal partnership with NASA the previous year, when Charles Schulz, the creator of Peanuts, and its distributor United Feature Syndicate, agreed to the use of Snoopy as a semi-official NASA mascot.

Snoopy was already a renowned World War I flying ace—again, within the Peanuts universe. Clad in a leather flying helmet, goggles, and signature red scarf, he sat atop his doghouse, reenacting epic battles with his nemesis, the Red Baron. Just as NASA had turned to real-life fighter pilots for its first cohort of astronauts, the space agency also recruited Snoopy.

Two months after the comic-strip Snoopy’s lunar landing, a second, real-world Snoopy buzzed the surface of the moon, as part of Apollo 10. This mission was essentially a dress rehearsal for Apollo 11. The crew was tasked with skimming, or “snooping,” the surface of the moon, so they nicknamed the lunar module “Snoopy.” It logically followed that Apollo 10’s command module was “Charlie Brown.”

On 21 May, as the astronauts settled in for their first night in lunar orbit, Snoopy’s pilot, Eugene Cernan, asked ground control to “watch Snoopy well tonight, and make him sleep good, and we’ll take him out for a walk and let him stretch his legs in the morning.” The next day, Cernan and Tom Stafford descended in Snoopy, stopping some 14,000 meters above the surface.

Since then, Snoopy and NASA have been locked in a mutually beneficial orbit. Schulz, a space enthusiast, ran comic strips about space exploration, and the moon shot in particular, which helped excite popular support for the program. Commercial tie-ins extended well beyond the commemorative plush toy shown at top. Over the years, Snoopy figurines, music boxes, banks, watches, pencil cases, bags, posters, towels, and pins have all promoted a fun and upbeat attitude toward life beyond Earth’s atmosphere.

There’s also a serious side to Snoopy. In the wake of the tragic Apollo 1 fire, which claimed the lives of three astronauts, NASA wanted to promote greater flight safety and awareness. Al Chop, director of public affairs for the Manned Spacecraft Center (now the Lyndon B. Johnson Space Center), suggested using Snoopy as a symbol for safety, and Schulz agreed. 

NASA created the Silver Snoopy Award to honor ground crew who have contributed to flight safety and mission success. The recipient’s prize? A silver Snoopy lapel pin, designed by Schulz and presented by an astronaut, in appreciation for the person’s efforts to preserve astronauts’ lives.

Snoopy was by no means the only popularizer of the U.S. space program. Over the years, there have been GI Joe astronauts, LEGO astronauts, and Hello Kitty astronauts. Not all of these came with the NASA stamp of approval, but even unofficially they served as tiny ambassadors for space.

Of all the astronautical dolls, I’m most intrigued by Astronaut Barbie, of which there have been numerous incarnations over the years. The first was Miss Astronaut Barbie, who debuted in 1965—13 years before women were accepted into NASA’s astronaut classes and 18 years before Sally Ride flew in space.

Miss Astronaut Barbie might have been ahead of her time, but she was also a reflection of that era’s pioneering women. Cosmonaut Valentina Tereshkova became the first woman to go to space on 16 June 1963, when she completed a solo mission aboard Vostok 6. Meanwhile, American women were training for space as early as 1960, through the privately funded Women in Space program. The Mercury 13 endured the same battery of tests that NASA used to select the all-male astronaut corps and were celebrated in the press, but none of them ever went to space.

In 2009, Mattel reissued Miss Astronaut of 1965 as part of the celebration of Barbie’s 50th anniversary. “Yes, she was a rocket scientist,” the packaging declares, “taking us to new fashion heights, while firmly placing her stilettos on the moon.” For the record, Miss Astronaut Barbie wore zippered boots, not high heels.

Other Barbies chose careers in space exploration, always with a flair for fashion. A 1985 Astronaut Barbie modeled a hot pink jumpsuit, with matching miniskirt for attending press conferences. Space Camp Barbie, produced through a partnership between Mattel and the U.S. Space & Rocket Center in Huntsville, Ala., wore a blue flight suit, although a later version sported white and pink. An Apollo 11 commemorative Barbie rocked a red- and silver-trimmed jumpsuit and silver boots and came with a Barbie flag, backpack, and three glow-in-the-dark moon rocks. (Scientific accuracy has never been Mattel’s strong suit, at least where Barbie is concerned.) And in 2013, Mattel collaborated with NASA to create Mars Explorer Barbie, to mark the first anniversary of the rover Curiosity.

More recently, Mattel has extended the Barbie brand to promote real-life role models for girls. In 2018, as part of its Inspiring Women series, the toymaker debuted the Katherine Johnson doll, which pays homage to the African-American mathematician who calculated the trajectory for NASA’s first crewed spaceflight. Needless to say, this Barbie is also clad in pink, with era-appropriate cat-eye glasses, a double strand of pearls, and a NASA employee ID tag.

Commemorative dolls and stuffed animals may be playthings designed to tug at our consumerist heartstrings. But let’s suspend the cynicism for a minute and imagine what goes on in the mind of a young girl or boy who plays with a doll and dreams of the future. Maybe we’re seeing a recruit for the next generation of astronauts, scientists, and engineers.

An abridged version of this article appears in the July 2019 print issue as “The Beagle Has Landed.”


Rediscovering the Remarkable Engineers Behind Olivetti’s ELEA 9003

Post Syndicated from Jean Kumagai original https://spectrum.ieee.org/tech-talk/tech-history/silicon-revolution/rediscovering-the-remarkable-engineers-behind-olivettis-elea-9003

A new graphic novel explores the forgotten history of the ELEA 9003, one of the first transistorized digital computers

The Chinese-Italian engineer Mario Tchou was, by all accounts, brilliant. Born and raised in Italy and educated in the United States, he led the Olivetti company’s ambitious effort to build a completely transistorized mainframe computer in the late 1950s. During Tchou’s tenure, Olivetti successfully launched the ELEA 9003 mainframe and founded one of the first transistor companies. And yet, even in Italy, his story is not well known.

The historical obscurity of such an important figure troubled Ciaj Rocchi and Matteo Demonte, a husband-and-wife team of illustrators based in Milan. And so they created a short graphic novel about Tchou and the Olivetti computer project, as well as a short animation [shown at top]. The graphic novel appeared in the 12 April issue of La Lettura, the Italian cultural magazine, where Demonte and Rocchi both work.

If Tchou’s isn’t exactly a household name, how did the pair come to learn about him? Rocchi says they, too, might have remained in the dark—if not for the birth of their son in 2007. “We wanted to make sure he knew about his family and where he came from,” Rocchi says. Their family tree includes Demonte’s grandfather, who had emigrated to Milan from China in 1931. “I thought, if I don’t write this down for my son, it will be lost,” Rocchi recalls.

This British Family Changed the Course of Engineering

Post Syndicated from Allison Marsh original https://spectrum.ieee.org/tech-history/dawn-of-electronics/this-british-family-changed-the-course-of-engineering

Charles Parsons invented the modern steam turbine, but his wife and daughter built something just as lasting

The British engineer Charles Parsons knew how to make a splash. In honor of Queen Victoria’s Diamond Jubilee, the British Royal Navy held a parade of vessels on 26 June 1897 for the Lords of the Admiralty, foreign ambassadors, and other dignitaries. Parsons wasn’t invited, but he decided to join the parade anyway. In 1884 he’d introduced a powerful turbine generator—considered the first modern steam turbine—and a decade later he built the SY Turbinia to demonstrate the engine’s power.

Arriving at the naval parade, Parsons raised a red pennant and then broke through the navy’s perimeter of patrol boats. With a top speed of almost 34 knots (60 kilometers per hour), Turbinia was faster than any other vessel and could not be caught. Parsons had made his point. The Royal Navy placed an order for its first turbine-powered ship the following year.

Onboard the Turbinia that day was Parsons’s 12-year-old daughter, Rachel, whose wide-ranging interests in science and engineering Parsons and his wife encouraged. From a young age, Rachel Parsons and her brother, Algernon, tinkered in their father’s home workshop, just as Charles had done when he was growing up. Indeed, the Parsons family tree shows generation after generation of engineering inquisitiveness from both the men and the women, each of whom made their mark on the field.

Charles grew up at Birr Castle, in County Offaly, Ireland. His father, William, who became the 3rd Earl of Rosse in 1841, was a mathematician with an interest in astronomy. Scientists and inventors, including Charles Babbage, traveled to Birr Castle to see the Leviathan of Parsonstown, a 1.8-meter (72-inch) reflecting telescope that William built during the 1840s. His wife, Mary, a skilled blacksmith, forged the iron work for the telescope’s tube.

William dabbled in photography, unsuccessfully attempting to photograph the stars. Mary was the real photography talent. Her detailed photos of the famous telescope won the Photographic Society of Ireland’s first Silver Medal.

Charles and his siblings enjoyed a traditional education by private tutors. They also had the benefit of a hands-on education, experimenting with the earl’s many steam-powered machines, including a steam-powered carriage. They worked on the Leviathan’s adjustment apparatus and in their mother’s darkroom.

After studying mathematics at Trinity College, Dublin, and St. John’s College, Cambridge, Charles apprenticed at the Elswick Works, a large manufacturing complex operated by the engineering firm W.G. Armstrong in Newcastle upon Tyne, England. It was unusual for someone of his social class to apprentice, and he paid £500 for the opportunity (about US $60,000 today), in the hopes of later gaining a management position.

During his time at the works, Charles refined some engine designs that he’d sketched out while at Cambridge. The reciprocating, or piston, steam engine had by then been around for more than 100 years, itself an improvement on Thomas Newcomen’s earlier but inefficient atmospheric steam engine. Beginning in the 1760s, James Watt and Matthew Boulton made improvements that included adding a separate condenser to eliminate the loss of heat when water was injected into the cylinder. The injected water condensed the steam, creating a vacuum that pulled the piston through its stroke. A later improvement was the double-acting engine, in which the piston could both push and pull. Still, piston steam engines were loud, dirty, and prone to exploding, and Charles saw room for improvement.

His initial design was for a four-cylinder epicycloidal engine, in which the cylinders as well as the crankshaft rotated. One advantage of this unusual configuration was that it could work at high speed with limited vibration. Charles designed it to directly drive a dynamo so as to avoid any connecting belts or pulleys. He applied for a British patent in 1877 at the age of 23.

Charles offered the design to his employer, who declined, but Kitson and Co., a locomotive manufacturer in Leeds, was interested. Charles’s brother Richard Clere Parsons was a partner at Kitson and persuaded him to join the company, which eventually produced 40 of the engines. Charles spent two years there, mostly working on rocket-powered torpedoes that proved unsuccessful.

More successful was his courting of Katharine Bethell, the daughter of a prominent Yorkshire family. Charles was said to have impressed Katharine with his skill at needlework, and they married in 1883.

In 1884, Charles became a junior partner and the head of the electrical section at Clarke, Chapman and Co., a manufacturer of marine equipment in Newcastle upon Tyne. He developed a new turbine engine, which he used to drive an electric generator, also of his own design. [His first prototype, now part of the collection of the Science Museum, London, is shown above.] The turbine generator was 1.73 meters long, 0.4 meters wide, and 0.8 meters high, and it weighed a metric ton.

Charles Parsons’s engine is often considered the first modern turbine. Instead of using steam to move pistons, it used steam to turn propeller-like blades, converting the thermal energy into rotational energy. Parsons’s original design was inefficient, running at 18,000 rpm and producing just 7.5 kilowatts—about the power of a small household backup generator today. He made rapid incremental improvements, such as changing the shape of the blades, and turbines based on his design eventually reached outputs of 50,000 kW, enough to power up to 50,000 homes today.

In 1889 Charles established C.A. Parsons and Co., in Heaton, a suburb of Newcastle, with the goal of manufacturing his turbo-generator. The only hitch was that Clarke, Chapman still held the patent rights. While the patent issues got sorted out, Charles founded the Newcastle and District Electric Lighting Co., which became the first electric company to rely entirely on steam turbines. It wouldn’t be the last.

During his lifetime, he saw turbine-generated electricity become affordable and readily available to a large population. Even today, most electricity generation relies on steam turbines.

Once Charles had secured the patent rights to his invention, he set about improving the steam turbo-generator, making it more efficient and more compact. He established the Marine Steam Turbine Co., which built the Turbinia in 1894. Charles spent several years refining the mechanics before the ship made its sensational public appearance at the Diamond Jubilee. In 1905, just eight years after the Turbinia’s public debut, the British admiralty decided all future Royal Navy vessels should be turbine powered. The private commercial shipping industry followed suit.

Charles Parsons never stopped designing or innovating, trying his hand at many other ventures. Not all were winners. For instance, he spent 25 years attempting to craft artificial diamonds before finally admitting defeat. More lucrative was the manufacture of optical glass for telescopes and searchlights. In the end, he earned over 300 patents, received a knighthood, and was awarded the Order of Merit.

But Charles was not the only engineer in his very talented household.

When I first started thinking about this month’s column, I wanted to mark the centenary of the founding of the Women’s Engineering Society (WES), one of the oldest organizations dedicated to the advancement of women in engineering. I searched for a suitable museum object that honored female engineers. That proved more difficult than I anticipated. Although the WES maintains extensive archives at the Institution of Engineering and Technology, including a complete digitized run of its journal, The Woman Engineer, it doesn’t have much in the way of three-dimensional artifacts. There was, for example, a fancy rose bowl that was commissioned for the society’s 50th anniversary. But it seemed not quite right to represent women engineers with a purely decorative object.

I then turned my attention to the founders of WES, who included Charles Parsons’s wife, Katharine, and daughter, Rachel. Although Charles was a prolific inventor, neither Katharine nor Rachel invented anything, so there was no obvious museum object linked to them. But inventions aren’t the only way to be a pioneering engineer.

After what must have been a wonderful childhood of open-ended inquiry and scientific exploration, Rachel followed in her father’s footsteps to Cambridge. She was one of the first women to study mechanical sciences there. At the time, though, the university barred women from receiving a degree.

When World War I broke out and Rachel’s brother enlisted, she took over his position as a director on the board of the Heaton Works. She also joined the training division of the Ministry of Munitions and was responsible for instructing thousands of women in mechanical tasks.

As described in Henrietta Heald’s upcoming book Magnificent Women and their Revolutionary Machines (to be published in February 2020 by the crowdfunding publisher Unbound), the war brought about significant demographic changes in the British workforce. More than 2 million women went to work outside the home, as factories ramped up to increase war supplies of all sorts. Of these, more than 800,000 entered the engineering trades.

This upsurge in female employment coincided with a shift in national sentiment toward women’s suffrage. Women had been fighting for the right to vote for decades, and they finally achieved a partial success in 1918, when women over the age of 30 who met certain property and education requirements were allowed to vote. It took another decade before women had the same voting rights as men.

But these political and workplace victories for women were built on shaky ground. The passage of the Sex Disqualification (Removal) Act of 1919 made it illegal to discriminate against women in the workplace. But the Restoration of Pre-War Practices Act, passed the same year, required that women give up their jobs to returning servicemen, unless they happened to work for firms that had employed women in the same role before the war.

These contradictory laws both stemmed from negotiations between Prime Minister David Lloyd George and British trade unions. The unions had vigorously objected to employing women during the war, but the government needed the women to work. And so it came up with the Treasury Agreement of 1915, which stipulated that skilled work could be subdivided and automated, allowing women and unskilled men to take on the resulting tasks. Under those terms, the unions acquiesced to the “dilution” of the skilled male workforce.

And so, although the end of the war brought openings for women in some professions, tens of thousands of women in engineering suddenly found themselves out of work.

The Parsons women fought back, using their social standing to advocate on behalf of female engineers. On 23 June 1919, Katharine and Rachel Parsons, along with several other prominent women, founded the Women’s Engineering Society to resist the relinquishing of wartime jobs to men and to promote engineering as a rewarding profession for both sexes.

Two weeks later, Katharine gave a rousing speech, “Women’s Work in Engineering and Shipbuilding during the War” [PDF] at a meeting of the North East Coast Institution of Engineers and Shipbuilders. “Women are able to work on almost every known operation in engineering, from the most highly skilled precision work, measured to [the] micrometer, down to the rougher sort of laboring jobs,” she proclaimed. “To enumerate all the varieties of work intervening between these two extremes would be to make a catalogue of every process in engineering.” Importantly, Katharine mentioned not just the diluted skills of factory workers but also the intellectual and design work of female engineers.

Just as impassioned, Rachel wrote an article for the National Review several months later that positioned the WES as a voice for women engineers:

Women must organize; this is the only royal road to victory in the industrial world. Women have won their political independence; now is the time for them to achieve their economic freedom too. It is useless to wait patiently for the closed doors of the skilled trade unions to swing open. It is better far to form a strong alliance, which, armed as it will be with the parliamentary vote, may be as powerful an influence in safeguarding the interests of women-engineers as the men’s unions have been in improving the lot of their members.

The following year, Rachel was one of the founding members of an all-female engineering firm, Atalanta, in which her mother was a shareholder. The firm specialized in small machinery work, similar to the work Rachel had been overseeing at her father’s firm. Although the business voluntarily shuttered after eight years, the name lived on as a manufacturer of small hand tools and household fixtures.

The WES has had a much longer history. In its first year, it began publishing The Woman Engineer, which still comes out quarterly. In 1923 the WES began holding an annual conference, which has been canceled only twice, both times due to war. Over its 100 years, the organization has worked to secure employment rights for women from the shop floor to management, guarantee access to formal education, and even encourage the use of new consumer technologies, such as electrical appliances in the home.

Early members of the WES came from many different branches of engineering. Dorothée Pullinger ran a factory in Scotland that produced the Galloway, an automobile that was entirely designed and built by women for women. Amy Johnson was a world-renowned pilot who also earned a ground engineer’s license. Jeanie Dicks, the first female member of the Electrical Contractors Association, won the contract for the electrification of Winchester Cathedral.

Today the WES continues its mission of supporting women in pursuit of engineering, scientific, and technical careers. Its website gives thanks and credit to early male allies, including Charles Parsons, who supported female engineers. Charles may have earned his place in history due to his numerous inventions, but if you come across his turbine at the Science Museum, remember that his wife and daughter earned their places, too.

An abridged version of this article appears in the June 2019 print issue as “As the Turbine Turns.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

The Last Working Olivetti Mainframe Sits In a Tuscan High School

Post Syndicated from Jean Kumagai original https://spectrum.ieee.org/tech-talk/tech-history/silicon-revolution/when-the-history-of-computing-comes-alive

How an encounter with the ELEA 9003 inspired a tech historian’s career

About 10 years ago, Elisabetta Mori and some friends were doing research for an art exhibit on the theme of “archives of memories.”

“We approached the theme literally, and so we looked for old examples of physical memories—computer memories,” Mori recalls. “We tried to see the oldest computers built in Italy.” At the Museum of Computing Machinery in Pisa, they saw the Calcolatrice Elettronica Pisana, an early digital computer built by the University of Pisa in 1957 with the support of the Olivetti company. But the machine had long ago stopped working.

Then they heard about a working model of the ELEA 9003, Olivetti’s first commercial mainframe, introduced in 1959. They lost no time tracking it down.

This 9003 had originally belonged to a bank in Siena, where it was used for payroll, managing accounts, calculating interest rates, and the like. In 1972, the bank donated the computer to a high school in the Tuscan hill town of Bibbiena. And there it’s been ever since. Today, former Olivetti employees periodically travel to the ISIS High School Enrico Fermi to tend to the machine.

The mainframe’s sleek aluminum modular racks and peripherals occupy a large room, with Olivetti typewriters and calculators spread around the space. The technicians keep spare parts on hand, as well as original manuals and blueprints.

The encounter with the computer changed Mori’s life. She wrote a master’s thesis about it. Now, she is a Ph.D. candidate in the history of computing at Middlesex University in London. Mori’s article, “The Italian Computer: Olivetti’s ELEA 9003 Was a Study in Elegant, Ergonomic Design,” describes the company’s heroic effort to launch the ELEA 9003. [In the photo at top, Mori is seated at the 9003’s console.]

“The machine works, but it is fragile,” Mori says. The computer contains more than 40 kilometers of copper cable wrapped in woven glass fiber. “If you don’t run the computer regularly, it will stop working. If you move it, it will die.”

To forestall that eventuality, a local group called the Associazione Amici dell’Olivetti ELEA 9003 is raising funds to hire and train workers to maintain the computer. You can reach them at [email protected].

“Until I saw it working, I didn’t realize how complex, fascinating, and noisy these early computers were,” Mori says. “I would have missed one big part of the story.”

The Italian Computer: Olivetti’s ELEA 9003 Was a Study in Elegant, Ergonomic Design

Post Syndicated from Elisabetta Mori original https://spectrum.ieee.org/tech-history/silicon-revolution/the-italian-computer-olivettis-elea-9003-was-a-study-in-elegant-ergonomic-design

In 1959, Olivetti introduced one of the first transistorized mainframes and started its own transistor company

“I have made my decision: We are going to scrap the first version of our computer, and we will start again from scratch.” It’s the autumn of 1957, and Mario Tchou, a brilliant young Chinese-Italian electrical engineer, is speaking to his team at the Olivetti Electronics Research Laboratory. Housed in a repurposed villa on the outskirts of Pisa, not far from the Leaning Tower, the lab is filled with vacuum tubes, wires, cables, and other electronics, a startling contrast to the tasteful decorations of the palatial rooms.

On any weekday, some 20 or so physicists, technicians, and engineers would be hard at work there, designing, developing, soldering, conferring. In less than two years—half the time they’d been allotted—they’ve completed their first prototype mainframe, called Macchina Zero (Zero Machine). No other company in Italy has ever built a computer before. They’re understandably proud.

Today, though, is a Sunday, and Tchou has called in his boss and three members of the team to discuss a bold decision, one that he hopes will place Olivetti ahead of every other computer maker in the world.

Macchina Zero, he points out, uses vacuum tubes. And tubes, he says, will soon become obsolete: They are too big, they overheat, they are unreliable, they consume too much power. The company wants to build a cutting-edge machine, and transistors are the computer technology of the future. “Olivetti will launch a fully transistorized machine,” Tchou tells them.

Within a year, the lab would finish a prototype of the new machine. In support of that effort, Olivetti would also launch its own transistor company and strike a strategic alliance with Fairchild Semiconductor. When Olivetti’s first mainframe, the ELEA 9003, is unveiled in 1959, it is an astonishing work of industrial design—modular, technologically advanced, and built to human scale. Olivetti, better known for its typewriters, adding machines, and iconic advertisements, was suddenly a computer company to reckon with.

The fact that most historical accounts largely ignore Olivetti’s role as an early pioneer of computing and transistors may have something to do with the series of tragic events that would transpire after the ELEA 9003’s introduction. But it is a history worth revisiting, because the legacy of Olivetti lives on in some surprising ways.

During World War II, computers were expensive, fragile, and hidden, restricted to military and scientific purposes. But after the war, businesses were quick to adopt computers to address their escalating need for information management. The machines on offer relied on vacuum tubes, punch tape, and punch cards, and they were slow and unreliable. But they were much faster than the manual and mechanical systems they were replacing.

The engineer and entrepreneur Camillo Olivetti founded Olivetti in 1908 as the first typewriter manufacturer in Italy. Production at the company’s factory in Ivrea, near Turin, later expanded to mechanical calculators and other office equipment.

In the 1920s, Camillo’s eldest son, Adriano, became more involved in the family business. Adriano had studied chemical engineering at the Polytechnic University of Turin. Camillo, a socialist, initially employed his son as an unskilled worker in the Olivetti factory. He then sent Adriano to the United States to study industrial methods. In 1926, the Olivettis reorganized the company’s production according to the principles of scientific management. By 1938, Adriano had assumed the presidency of Olivetti.

Adriano believed that the profits of industry should be reinvested for the betterment of society. Under his tenure, the company offered worker benefits that had no equal in Italy at the time, including more equitable pay for women, a complete range of health services, nine months of paid maternity leave, and free childcare. In addition, the Ivrea factory had a large library with 30,000 volumes.

Adriano also established an experimental marketing and advertising department, surrounding himself with smart young designers, architects, artists, poets, photographers, and musicians. The combination of Adriano’s initiatives spurred the company to wider international prominence.

After World War II, Adriano became convinced that electronics was the future of the company, and so he established a joint venture with the French firm Compagnie des Machines Bull. Bull was one of the biggest punch-card equipment manufacturers in Europe, and it had just entered the computer business. The Olivetti-Bull Company became the official reseller of Bull’s products in Italy, and the partnership helped Olivetti survey the domestic market potential for computers.

In 1952, Olivetti founded a computer research center in New Canaan, Conn., at the recommendation of Dino Olivetti, Adriano’s youngest brother. Dino had studied at MIT and was president of the Olivetti Corp. of America. (That same year, Dino contributed to an exhibition devoted to Olivetti products and design at the Museum of Modern Art in New York City.) The lab kept tabs on developments in the United States, where electronics and computers were at the forefront.

Olivetti sought a worthy academic partner for its computer business. After a failed alliance with Rome University in the early 1950s, the company partnered with the University of Pisa in 1955. At the time, the only two computers in the country were a National Cash Register CRC 102A, installed at the Milan Polytechnic, and a Ferranti Mark I*, installed at an applied math research institute in Rome.

The University of Pisa began building a research computer, with Olivetti providing financial support, electronic components, patent licenses, and employees. In exchange, Olivetti’s staff gained valuable experience. While the Pisa project aimed to create a single scientific machine for researchers, Olivetti hoped to develop a series of commercial computers for the business market.

Adriano searched for an expert engineer and manager to set up a computer lab within the company and lead Olivetti’s computer team. He eventually found both in Mario Tchou. Born in Italy in 1924, Tchou was the son of Yin Tchou, a Chinese diplomat stationed in Rome and Vatican City. After studying electrical engineering at the Sapienza University of Rome, Mario received a scholarship to the Catholic University of America, in Washington, D.C., where he obtained a bachelor’s degree in electronic engineering. In 1949, he moved to New York City to get a master’s in physics at the Polytechnic Institute of Brooklyn, and three years later, he became an associate professor of electrical engineering at Columbia University.

Adriano Olivetti met Mario Tchou in New York City in August 1954 and immediately decided he was the perfect choice. Tchou was an expert in digital control systems, and he worked at one of the most advanced electronics and computing research labs in the United States. He was also a native Italian speaker and understood the company’s culture. Adriano and his son Roberto convinced Tchou to move back to Italy and become the leader of their Laboratorio Ricerche Elettroniche, in Pisa.

The lab’s first project, Macchina Zero, went as well as could be expected, but Tchou’s decision in 1957 to switch to transistors involved risks and potential delays. The company would need at least 100,000 transistors and diodes for each installation. But in Italy, as elsewhere, transistors were in short supply. Rather than import devices from the United States, the company decided to manufacture them in-house. The move would give Olivetti a secure and continuous source of components, as well as expertise and insight into the latest developments in the field.

In 1957, with Telettra, an Italian telecommunications company, Olivetti founded the SGS Company (which stands for Società Generale Semiconduttori). SGS soon began producing germanium alloy junction transistors, based on technology licensed from General Electric.

SGS’s next generation of transistors, though, would be silicon, manufactured in partnership with Fairchild Semiconductor. The California startup had been founded the same year as SGS by a group of young scientists and engineers that included Robert Noyce and Gordon Moore. In late 1959, SGS contacted Fairchild through Olivetti’s New Canaan lab, and the following year Fairchild became an equal partner in SGS with Olivetti and Telettra. Olivetti now had access to Fairchild’s pathbreaking technology. That included the planar process, which Fairchild had patented in 1959 and which is still used to make integrated circuits.

The result of Tchou’s push for a transistorized computer was the ELEA 9003, the first commercial computer to be made in Italy. It launched in 1959, and between 1960 and 1964, about 40 of the mainframes were sold or leased to Italian clients, mainly in banking and industry.

ELEA belongs to what historians of computing consider the second generation of computers—that is, machines that used transistors and ferrite-core memories. In this respect, the ELEA 9003 was similar to the IBM 7070 and the Siemens 2002. Core memories were arrays of tiny magnetic rings threaded with copper wire. Each core could be magnetized clockwise or counterclockwise, to represent one bit of information—a 1 or a 0. Olivetti workers sewed the ELEA memories by hand at the Borgolombardo factory, near Milan, where the ELEAs were assembled.

The minimum unit of memory in the ELEA 9003 was the character, which consisted of six bits plus a parity bit. The total memory ranged from 20,000 to 160,000 characters, with a typical installation having about 40,000. Two Olivetti engineers, Giorgio Sacerdoti and Martin Friedman, had previously worked with Ferranti computers. Their background may have influenced some design decisions for the 9003, in particular the computer architecture. However, the Ferranti Mark I* that Sacerdoti worked on in Rome used Williams-Kilburn tubes and vacuum tubes instead of core memory and transistors.
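The character format described above—six data bits plus a parity bit—can be sketched in a few lines of Python. This is purely an illustrative model: the article doesn’t say whether the 9003 used even or odd parity (even parity is assumed here), and the function names are invented for this example.

```python
def parity_bit(char6: int) -> int:
    """Return the even-parity bit for a 6-bit character (0-63).

    The parity bit is chosen so that the total number of 1s in the
    7-bit word comes out even.
    """
    assert 0 <= char6 < 64
    return bin(char6).count("1") % 2

def encode(char6: int) -> int:
    """Pack a 6-bit character and its parity bit into a 7-bit word."""
    return (char6 << 1) | parity_bit(char6)

def check(word7: int) -> bool:
    """Valid words have an even 1-count; any single flipped bit fails."""
    return bin(word7).count("1") % 2 == 0
```

A single corrupted bit—say, one core misread in a 40,000-character memory—makes the 1-count odd, so `check` immediately flags the character as bad.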

To oversee the aesthetic design of the new computer, Adriano brought in the Italian architect Ettore Sottsass Jr. Assisted by Dutch designer Andries Van Onck, Sottsass focused on the human-machine interface, using human factors and ergonomics to make the computer easier to operate and maintain. For example, he standardized the height of the racks at 150 centimeters, to allow engineers and technicians working on either side to visually communicate with one another, as computers were very noisy in those days.

The ELEA 9003 was housed in a series of modular cabinets. Colored strips identified the contents of each cabinet, such as the power supply, memory, arithmetic logic unit, and the control unit for the peripherals, which included printers and Ampex magnetic tape drives. Some ELEA 9003 installations employed vacuum tubes for the power supplies and tape decks.

To facilitate the testing and repair of circuit boards, Sottsass arranged each rack in three parts: a central section and two wings, which could be opened like a book. He also organized the connection cables in channels above the racks. Typical mainframes of that era had their cables positioned beneath the floor, making maintenance cumbersome and expensive.

The console’s display used a grid of colored cubes, similar to mosaic tiles. Each cube was engraved with a letter or a symbol. Different sections of the display showed the status of the 9003’s components. An operator could use the console’s keyboard to enter instructions, one at a time, for direct execution.

Sottsass’s design for the Olivetti ELEA 9003 was complex but elegant. It was awarded the prestigious Compasso d’Oro (Golden Compass) industrial design prize in 1959.

Olivetti aimed to export the ELEA to the international market. Rather than translating the computer’s commands and abbreviations from Italian into English, French, or German, the company devised a bold solution. It commissioned the Ulm School of Design, one of the most progressive design centers at the time, to develop a system of symbols that would be independent of any one language. Although the resulting sign system was never used in the ELEA series, it prefigures today’s widespread use of icons in computer interfaces.

Olivetti’s big plans for exporting its computers included the acquisition of the U.S. typewriter manufacturer Underwood in 1959. With this move, Olivetti hoped to leverage Underwood’s powerful commercial network to strengthen its sales in the United States. The acquisition, however, depleted the company’s coffers. Worse, Olivetti discovered that Underwood’s manufacturing facilities were outdated and its financial situation bleak.

Then, on 27 February 1960, Adriano Olivetti died from a stroke while traveling by train from Milan to Lausanne. He was 58 years old. The following year, Mario Tchou was killed in a car accident at the age of 37. At the time of his death, Tchou had been spearheading the development of a new generation of Olivetti computers that incorporated silicon components from SGS-Fairchild. With these tragic deaths, Olivetti’s computer division lost its most charismatic and visionary leaders.

The next several years proved tumultuous for the company. Roberto Olivetti tried to keep the computer business going, even appealing to the Italian government for aid. But the government didn’t view electronics and computers as a matter of national interest and so refused to bail out Olivetti’s electronics division. (Nor had the government supported Olivetti’s development of the ELEA, in stark contrast to the U.S. and British governments’ generous support of their domestic computer makers.) Meanwhile, the U.S. government, through its former ambassador to Italy, Clare Boothe Luce, reportedly was pressuring Olivetti to sell its electronics division, which it finally did to General Electric in 1964.

The sale to GE did not include Olivetti’s small-size programmable calculators, which the company continued to develop. The Programma 101 came to market in 1965 and proved an instant hit. [See sidebar, “The Calculator That Helped Land Men On the Moon.”]

Acquiring Olivetti’s electronics division was part of GE’s strategy to enter the European computer market. Olivetti’s French partner, Bull, also faced financial difficulties and was also bought by GE in 1964. GE continued building computers based on Olivetti’s smaller models and sold them as the GE 100 series. The ELEA 4115, for example, became the GE 115. Eventually, GE sold about 4,000 machines in the GE 100 line.

We can’t know how far Olivetti would have taken its computer business had Adriano Olivetti and Mario Tchou lived longer. What we do know is that the electronics division left behind an impressive legacy of design, advanced hardware, and talented engineers.

Olivetti had unquestionably the most elegant computers of its day. Adriano viewed computers as complex artifacts, whose aesthetics, ergonomics, and user experience had to be carefully cultivated in parallel with the technology. He organized every aspect of the company, including the factories, workers, advertising, and marketing, to embrace this holistic approach to design. In his famous 1973 lecture “Good Design Is Good Business,” IBM’s Thomas J. Watson Jr. credited Adriano Olivetti for inspiring IBM’s own overhaul of its corporate aesthetic in the late 1950s.

Olivetti’s computer legacy also lives on through its transistor business. In 1987, SGS merged with the French-owned Thomson Semiconducteurs to form STMicroelectronics, now a multinational manufacturer of microchips.

And the people hired by Olivetti continued to make their mark. Of the many capable engineers and scientists who passed through Olivetti’s doors, one stands out. In 1960, the company hired a 19-year-old named Federico Faggin to work in its electronics lab. During Faggin’s years at Olivetti, he learned about computer architecture as well as logic and circuit design, and he helped to build a small experimental computer.

Later, after earning a physics degree from the University of Padua, Faggin worked briefly at SGS-Fairchild in Italy before moving to Fairchild’s R&D lab in Palo Alto, Calif., and then to Intel. Drawing on his experience at Olivetti and SGS, he soon joined the small team that created the Intel 4004, the first commercial microprocessor. And so, although Olivetti’s foray into building mainframe computers suffered a premature death, the effort indirectly contributed to the birth of the microcomputer industry that surrounds us today.

This article appears in the June 2019 print issue as “The Italian Computer.”

About the Author

Elisabetta Mori is a Ph.D. candidate in the history of computing at Middlesex University in London.

The Calculator That Helped Land Men on the Moon

Post Syndicated from Elisabetta Mori original https://spectrum.ieee.org/tech-history/silicon-revolution/the-calculator-that-helped-land-men-on-the-moon

Olivetti’s Programma 101 embodied the company’s holistic approach to technical efficiency, ease of use, and smart design

After the sale of its computer business to General Electric in 1964, Italy’s Olivetti managed to retain control of its small electronic calculators. The most notable of these would be the Programma 101.

Introduced at a Business Equipment Manufacturers Association show in New York in October 1965, this programmable desktop calculator proved an immediate success. Also known as the P101 or the Perottina (after the chief engineer who designed it, Pier Giorgio Perotto), it eventually sold more than 40,000 units, primarily in the United States but also in Europe. NASA bought a number of P101s, which were used by engineers working on the 1969 Apollo 11 moon landing.

Chief among the machine’s selling points was its portability. Roughly the size of an electric typewriter, it could be used in program mode like a computer, with stored instructions, while in manual mode it served as a high-speed calculator. Its memory consisted of a magnetostrictive delay line, which used pulses of sound traveling along a coil of nickel alloy wire to store numeric data and program instructions. This kind of memory was used in several other small computers and calculators, including the Ferranti Sirius, a small business computer, and the Friden EC-130 and EC-132 desktop calculators.

The P101 had a 36-character keyboard, a built-in mechanical printer, and a magnetic card reader/recorder, for storing and retrieving programs. Olivetti supplied a library of commonly used programs. There was no display as such.

The P101 used only high-level instructions, so programming it was extremely simple. As a promotional video proclaimed, “A good secretary can learn to operate it in a matter of days!” The ad showed the P101 being used in a research lab, beside a swimming pool, and even at a betting hall.

At a time when bulky mainframe computers required a team of programmers, engineers, and operators to run, the P101’s compact size, capabilities, and ease of use were remarkable. Like all computers of that era, it wasn’t exactly cheap: The P101 could be leased on a monthly basis, or bought outright for US $3,200 (about $25,000 today). For comparison’s sake, the monthly rent on an IBM System/360 mainframe ranged from $2,700 to $115,000, with purchase prices from $133,000 to $5.5 million.

The calculator’s technical features inspired imitation: Hewlett-Packard reportedly paid Olivetti approximately $900,000 in royalties because of the similarities between the architecture and the magnetic cards of the HP 9100 programmable calculator series and those of the Programma 101.

The P101’s aesthetic and ergonomic design was the work of a talented young Italian architect named Mario Bellini. In contrast to Ettore Sottsass Jr.’s futuristic look for Olivetti’s ELEA 9003 mainframe, Bellini’s P101 is curvy and sensual while still being user friendly. Its rounded edges comfortably supported the user’s wrists and hands. Magnetic cards could be easily inserted into the central slot. On the right-hand side, a green/red light added a touch of color while also alerting the user to any malfunctions. The Programma 101 is now part of the permanent collection at the Museum of Modern Art in New York City.

If you own a Programma 101 and need to get it fixed, don’t despair. A team that includes some of the P101’s original designers, former Olivetti engineers, and volunteers will help you restore and repair it. Their lab is located in the Museo Tecnologicamente in Ivrea, the town near Turin where Olivetti was founded.  

For more on Olivetti’s pioneering computers, see “The Italian Computer: Olivetti’s ELEA 9003 Was a Study in Elegant, Ergonomic Design.”

About the Author

Elisabetta Mori is a Ph.D. candidate in computer history at Middlesex University in the United Kingdom.

In 1983, This Bell Labs Computer Was the First Machine to Become a Chess Master

Post Syndicated from Allison Marsh original https://spectrum.ieee.org/tech-history/silicon-revolution/in-1983-this-bell-labs-computer-was-the-first-machine-to-become-a-chess-master

Belle used a brute-force approach to best other computers and humans

Chess is a complicated game. It’s a game of strategy between two opponents, but with no hidden information and all of the potential moves known by both players at the outset. With each turn, players communicate their intent and try to anticipate the possible countermoves. The ability to envision several moves in advance is a recipe for victory, and one that mathematicians and logicians have long found intriguing.

Despite some early mechanical chess-playing machines—and at least one chess-playing hoax—mechanized chess play remained hypothetical until the advent of digital computing. While working on his Ph.D. in the early 1940s, the German computer pioneer Konrad Zuse used computer chess as an example for the high-level programming language he was developing, called Plankalkül. Due to World War II, however, his work wasn’t published until 1972. With Zuse’s work unknown to engineers in Britain and the United States, Norbert Wiener, Alan Turing, and notably Claude Shannon (with his 1950 paper “Programming a Computer for Playing Chess” [PDF]) paved the way for thinking about computer chess.

Beginning in the early 1970s, Bell Telephone Laboratories researchers Ken Thompson and Joe Condon developed Belle, a chess-playing computer. Thompson is cocreator of the Unix operating system, and he’s also a great lover of chess. He grew up in the era of Bobby Fischer, and as a youth he played in chess tournaments. He joined Bell Labs in 1966, after earning a master’s in electrical engineering and computer science from the University of California, Berkeley.

Joe Condon was a physicist by training who worked in the Metallurgy Division at Bell Labs. His research contributed to the understanding of the electronic band structure of metals, and his interests evolved with the rise of digital computing. Thompson got to know Condon when he and his Unix collaborator, Dennis Ritchie, began developing a game called Space Travel, using a PDP-7 minicomputer that was under Condon's purview. Thompson and Condon went on to collaborate on numerous projects, including promoting the use of C as the language for AT&T's switching system.

Belle began as a software approach—Thompson had written a sample chess program in an early Unix manual. But after Condon joined the team, the program morphed into a hybrid computer chess-playing machine, with Thompson handling the programming and Condon designing the hardware.

Belle consisted of three main parts: a move generator, a board evaluator, and a transposition table. The move generator identified the highest-value piece under attack and the lowest-value piece attacking, and it sorted potential moves based on that information. The board evaluator noted the king’s position and its relative safety during different stages of the game. The transposition table contained a memory cache of potential moves, and it made the evaluation more efficient.

Belle employed a brute-force approach. It looked at all of the possible moves a player could make with the current configuration of the board, and then considered all of the moves that the opponent could make. In chess, a turn taken by one player is called a ply. Initially, Belle could compute moves four plies deep. When Belle debuted at the Association for Computing Machinery's North American Computer Chess Championship in 1978, where it claimed its first title, it had a search depth of eight plies. Belle went on to win the championship four more times. In 1983, it also became the first computer to earn the title of chess "master."
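The brute-force idea can be illustrated with a generic fixed-depth game-tree search. The sketch below is purely illustrative: it plays a toy stick-taking game rather than chess, and none of it is Belle's actual code. It only shows how "plies deep" translates into recursion depth.

```python
# Illustrative brute-force game-tree search, in the spirit of Belle's approach.
# The "game" is a toy: players alternately remove 1-3 sticks, and whoever takes
# the last stick loses. One ply = one player's turn.

def legal_moves(sticks):
    return [m for m in (1, 2, 3) if m <= sticks]

def search(sticks, plies, maximizing):
    """Return +1 if the maximizing player wins with best play, -1 if it
    loses, or 0 if the position can't be resolved within the given depth."""
    if sticks == 0:
        # The previous player took the last stick and lost, so the side
        # to move here has already won.
        return 1 if maximizing else -1
    if plies == 0:
        return 0  # search horizon reached; a real engine would evaluate here
    scores = [search(sticks - m, plies - 1, not maximizing)
              for m in legal_moves(sticks)]
    return max(scores) if maximizing else min(scores)

# From 4 sticks the side to move wins (take 3, leaving the opponent 1 stick);
# from 5 sticks every move loses against best play.
```

An eight-ply search, like the one Belle debuted with, simply means this kind of recursion runs eight levels deep before a board evaluator takes over at the horizon.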

Computer chess programmers were often treated with hostility when they pitted their systems against human competitors, some of whom were suspicious of potential cheating, while others were simply apprehensive. When Thompson wanted to test out Belle at his local chess club, he took pains to build up personal relationships. He offered his opponents a printout of the computer’s analysis of the match. If Belle won in mixed human/computer tournaments, he refused the prize money, offering it to the next person in line. Belle went on to play weekly at the Westfield Chess Club, in Westfield, N.J., for almost 10 years.

In contrast to human-centered chess competitions, where silence reigns so as not to disturb a player’s concentration, computer chess tournaments could be noisy affairs, with people discussing and debating different algorithms and game strategies. In a 2005 oral history, Thompson remembers them fondly. After a tournament, he would be invigorated and head back to the lab, ready to tackle a new problem.

For a computer, Belle led a colorful life, at one point becoming the object of a corporate practical joke. One day in 1978, Bell Labs computer scientist Mike Lesk, another member of the Unix team, stole some letterhead from AT&T chairman John D. deButts and wrote a fake memo, calling for the suspension of the “T. Belle Computer” project.

At the heart of the fake memo was a philosophical question: Is a game between a person and a computer a form of communication or of data processing? The memo claimed that it was the latter and that Belle therefore violated the 1956 antitrust decision barring the company from engaging in the computer business. In fact, though, AT&T’s top executives never pressured Belle’s creators to stop playing or inventing games at work, likely because the diversions led to economically productive research. The hoax became more broadly known after Dennis Ritchie featured it in a 2001 article, for a special issue of the International Computer Games Association Journal that was dedicated to Thompson’s contributions to computer chess.

In his oral history, Thompson describes how Belle also became the object of international intrigue. In the early 1980s, Soviet electrical engineer, computer scientist, and chess grandmaster Mikhail Botvinnik invited Thompson to bring Belle to Moscow for a series of demonstrations. Thompson departed from New York's John F. Kennedy International Airport, only to discover that Belle was not on the same plane.

Thompson learned of the machine’s fate after he’d been in Moscow for several days. A Bell Labs security guard who was moonlighting at JFK airport happened to see a Bell Labs box labeled “computer” that was roped off in the customs area. The guard alerted his friends at Bell Labs, and word eventually reached Condon, who lost no time in calling Thompson.

Condon warned Thompson to throw out the spare parts for Belle that he’d brought with him. “You’re probably going to be arrested when you get back,” he said. Why? Thompson asked. “For smuggling computers into Russia,” Condon replied.

In his oral history, Thompson speculates that Belle had fallen victim to the Reagan administration’s rhetoric concerning the “hemorrhage of technology” to the Soviet Union. Overzealous U.S. Customs agents had spotted Thompson’s box and confiscated it, but never alerted him or Bell Labs. His Moscow hosts seemed to agree that Reagan was to blame. When Thompson met with them to explain that Belle had been detained, the head of the Soviet chess club pointed out that the Ayatollah Khomeini had outlawed chess in Iran because it was against God. “Do you suppose Reagan did this to outlaw chess in the United States?” he asked Thompson.

Returning to the States, Thompson took Condon's advice and dumped the spare parts in Germany. Arriving back home, he wasn't arrested, for smuggling or anything else. But when he attempted to retrieve Belle at JFK, he was told that he was in violation of the Export Act—Belle's old, outdated Hewlett-Packard monitor was on a list of banned items. Bell Labs paid a fine, and Belle was eventually returned.

After Belle had dominated the computer chess world for several years, its star began to fade, as more powerful computers with craftier algorithms came along. Chief among them was IBM’s Deep Blue, which captured international attention in 1996 when it won a game against world champion Garry Kasparov. Kasparov went on to win the match, but the ground was laid for a rematch. The following year, after extensive upgrades, Deep Blue defeated Kasparov, becoming the first computer to beat a human world champion in a tournament under regulation time controls.

Photographer Peter Adams brought Belle to my attention, and his story shows the value of being friendly to archivists. Adams had photographed Thompson and many of his Bell Labs colleagues for his portrait series “Faces of Open Source.” During Adams’s research for the series, Bell Labs corporate archivist Ed Eckert granted him permission to photograph some of the artifacts associated with the Unix research lab. Adams put Belle on his wish list, but he assumed that it was now in some museum collection. To his astonishment, he learned that the machine was still at Nokia Bell Labs in Murray Hill, N.J. As Adams wrote to me in an email, “It still had all the wear on it from the epic chess games it had played… :).”

An abridged version of this article appears in the May 2019 print issue as “Cold War Chess.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

Untold History of AI: How Amazon’s Mechanical Turkers Got Squeezed Inside the Machine

Post Syndicated from Oscar Schwartz original https://spectrum.ieee.org/tech-talk/tech-history/dawn-of-electronics/untold-history-of-ai-mechanical-turk-revisited-tktkt

Today’s unseen digital laborers resemble the human who powered the 18th-century Mechanical Turk

The history of AI is often told as the story of machines getting smarter over time. What’s lost is the human element in the narrative, how intelligent machines are designed, trained, and powered by human minds and bodies.

In this six-part series, we explore that human history of AI—how innovators, thinkers, workers, and sometimes hucksters have created algorithms that can replicate human thought and behavior (or at least appear to). While it can be exciting to be swept up by the idea of superintelligent computers that have no need for human input, the true history of smart machines shows that our AI is only as good as we are.

Part 6: Mechanical Turk Revisited

At the turn of the millennium, Amazon began expanding its services beyond bookselling. As the variety of products on the site grew, the company had to figure out new ways to categorize and organize them. Part of this task was removing tens of thousands of duplicate products that were popping up on the website.

Engineers at the company tried to write software that could automatically eliminate all duplicates across the site. Identifying and deleting duplicates seemed to be a simple task, one well within the capacities of a machine. Yet the engineers soon gave up, describing the data-processing challenge as “insurmountable.” This task, which presupposed the ability to notice subtle differences and similarities between pictures and text, actually required human intelligence.

Amazon was left with a conundrum. Deleting duplicate products from the site was a trivial task for humans, but the sheer number of duplicates would require a huge workforce. Coordinating that many workers on one task was not a trivial problem.

An Amazon manager named Venky Harinarayan came up with a solution. His patent described a “hybrid machine/human computing arrangement,” which would break down tasks into small units, or “subtasks,” and distribute them to a network of human workers.

In the case of deleting duplicates, a central computer could divide Amazon’s site into small sections—say, 100 product pages for can openers—and send the sections to human workers over the Internet. The workers could then identify duplicates in these small units and send their pieces of the puzzle back. 

This distributed system offered a crucial advantage: The workers didn't have to be centralized in one place but could instead complete the subtasks on their own personal computers wherever they happened to be, whenever they chose. Essentially, what Harinarayan developed was an effective way to distribute low-skill yet difficult-to-automate work to a broad network of humans who could work in parallel.
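The core idea of the patent, carving a big job into subtasks, distributing them, and merging the answers, can be sketched in a few lines. Everything below is a hypothetical illustration: the function names and the simulated "worker" are mine, not Amazon's.

```python
# Hedged sketch of a hybrid machine/human computing arrangement: a coordinator
# splits the job into subtasks, each subtask goes to a worker (simulated here
# by a function; on mTurk it would be a person), and the results are merged.

def make_subtasks(pages, size):
    """Split a list of product pages into fixed-size subtasks."""
    return [pages[i:i + size] for i in range(0, len(pages), size)]

def simulated_worker(chunk):
    """Stand-in for a human: flag pages whose title repeats within the chunk."""
    seen, duplicates = set(), set()
    for page in chunk:
        if page in seen:
            duplicates.add(page)
        seen.add(page)
    return duplicates

def merge_results(per_subtask_results):
    """Combine each worker's answers into one duplicate report."""
    duplicates = set()
    for result in per_subtask_results:
        duplicates |= result
    return duplicates

catalog = ["can opener A", "can opener B", "can opener A", "toaster X"]
subtasks = make_subtasks(catalog, 100)  # e.g., 100 product pages per subtask
found = merge_results(simulated_worker(t) for t in subtasks)
```

A real system also has to handle duplicates that straddle subtask boundaries, disagreement between workers, and payment, which is where the engineering difficulty actually lies.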

The method proved so effective in Amazon's internal operations that Jeff Bezos decided this system could be sold as a service to other companies. Bezos turned Harinarayan's technology into a marketplace for laborers. There, businesses that had tasks that were easy for humans (but hard to automate) could be matched with a network of freelance workers, who would do the tasks for small amounts of money.

Thus was born Amazon Mechanical Turk, or mTurk for short. The service launched in 2005, and the user base quickly grew. Businesses and researchers around the globe began uploading thousands of so-called “human intelligence tasks” onto the platform, such as transcribing audio or captioning images. These tasks were dutifully carried out by an internationally dispersed and anonymous group of workers for a small fee (one aggrieved worker reported an average fee of 20 cents per task). 

The name of this new service was a wink at the chess-playing machine of the 18th century, the Mechanical Turk invented by the huckster Wolfgang von Kempelen. And just like that faux automaton, inside which hid a human chess player, the mTurk platform was designed to make human labor invisible. Workers on the platform are not represented with names, but with numbers, and communication between the requester and the worker is entirely depersonalized. Bezos himself has called these dehumanized workers “artificial artificial intelligence.”

Today, mTurk is a thriving marketplace with hundreds of thousands of workers around the world. While the online platform provides a source of income for people who otherwise might not have access to jobs, the labor conditions are highly questionable. Some critics have argued that by keeping the workers invisible and atomized, Amazon has made it easier for them to be exploited. A research paper [PDF] published in December 2017 found that workers earned a median wage of approximately US $2 per hour, and only 4 percent earned more than $7.25 per hour.

Interestingly, mTurk has also become crucial for the development of machine-learning applications. In machine learning, an AI program is given a large data set, then learns on its own how to find patterns and draw conclusions. MTurk workers are frequently used to build and label these training data sets, yet their role in machine learning is often overlooked.

The dynamic now playing out between the AI community and mTurk is one that has been ever-present throughout the history of machine intelligence. We eagerly admire the visage of the autonomous “intelligent machine,” while ignoring, or even actively concealing, the human labor that makes it possible.

Perhaps we can take a lesson from the author Edgar Allan Poe. When he viewed von Kempelen’s Mechanical Turk, he was not fooled by the illusion. Instead, he wondered what it would be like for the chess player trapped inside, the concealed laborer “tightly compressed” among cogs and levers in “exceedingly painful and awkward positions.”

In our current moment, when headlines about AI breakthroughs populate our news feeds, it’s important to remember Poe’s forensic attitude. It can be entertaining—if sometimes alarming—to be swept up in the hype over AI, and to be carried away by the vision of machines that have no need for mere mortals. But if you look closer, you’ll likely see the traces of human labor.

This is the final installment of a six-part series on the untold history of AI. Part 5 told a story of algorithmic bias—from the 1980s. 

Eight years, 2000 blog posts

Post Syndicated from Liz Upton original https://www.raspberrypi.org/blog/eight-years-2000-blog-posts/

Today’s a bit of a milestone for us: this is the 2000th post on this blog.

Why does a computer company have a blog? When did it start, who writes it, and where does the content come from? And don’t you have sore fingers? All of these are good questions: I’m here to answer them for you.

The first ever Raspberry Pi blog post

Marital circumstances being what they are, I had a front-row view of everything that was going on at Raspberry Pi, right from the original conversations that kicked the project off in 2009. In 2011, when development was still being done on Eben's and my kitchen table, we met with sudden and slightly alarming fame when Rory Cellan-Jones from the BBC shot a short video of a prototype Raspberry Pi and blogged about it – his post went viral. I was working as a freelance journalist and editor at the time, but realised that we weren't going to get a better chance to kickstart a community, so I dropped my freelance work and came to work full-time for Raspberry Pi.

Setting up an instantiation of WordPress so we could talk to all Rory’s readers, each of whom decided we’d promised we’d make them a $25 computer, was one of the first orders of business. We could use the WordPress site to announce news, and to run a sort of devlog, which is what became this blog; back then, many of our blog posts were about the development of the original Raspberry Pi.

It was a lovely time to be writing about what we do, because we could be very open about the development process and how we were moving towards launch in a way that, sadly, is closed to us today. (If we'd blogged about the development of Raspberry Pi 3 in the detail we'd blogged about Raspberry Pi 1, we'd not only have been handing sensitive and helpful commercial information to the large number of competitor organisations that have sprung up like mushrooms since that original launch; but you'd also all have stopped buying Pi 2 in the run-up, starving us of the revenue we need to do the development work.)

Once Raspberry Pis started making their way into people’s hands in early 2012, I realised there was something else that it was important to share: news about what new users were doing with their Pis. And I will never, ever stop being shocked at the applications of Raspberry Pi that you come up with. Favourites from over the years? The paludarium’s still right up there (no, I didn’t know what a paludarium was either when I found out about it); the cucumber sorter’s brilliant; and the home-brew artificial pancreas blows my mind. I’ve a particular soft spot for musical projects (which I wish you guys would comment on a bit more so I had an excuse to write about more of them).

As we’ve grown, my job has grown too, so I don’t write all the posts here like I used to. I oversee press, communications, marketing and PR for Raspberry Pi Trading now, working with a team of writers, editors, designers, illustrators, photographers, videographers and managers – it’s very different from the days when the office was that kitchen table. Alex Bate, our magisterial Head of Social Media, now writes a lot of what you see on this blog, but it’s always a good day for me when I have time to pitch in and write a post.

I’d forgotten some of the early stuff before looking at 2011’s blog posts to jog my memory as I wrote today’s. What were we thinking when we decided to ship without GPIO pins soldered on? (Happily for the project and for the 25,000,000 Pi owners all over the world in 2019, we changed our minds before we finally launched.) Just how many days in aggregate did I spend stuffing envelopes with stickers at £1 a throw to raise some early funds to get the first PCBs made? (I still have nightmares about the paper cuts.) And every time I think I’m having a bad day, I need to remember that this thing happened, and yet everything was OK again in the end. (The backs of my hands have gone all prickly just thinking about it.) Now I think about it, the Xenon Death Flash happened too. We also survived that.

At the bottom of it all, this blog has always been about community. It’s about sharing what we do, what you do, and making links between people all over the world who have this little machine in common. The work you do telling people about Raspberry Pi, putting it into your own projects, and supporting us by buying the product doesn’t just help us make hardware: every penny we make funds the Raspberry Pi Foundation’s charitable work, helps kids on every continent to learn the skills they need to make their own futures better, and, we think, makes the world a better place. So thank you. As long as you keep reading, we’ll keep writing.

The post Eight years, 2000 blog posts appeared first on Raspberry Pi.

Storing Encrypted Credentials In Git

Post Syndicated from Bozho original https://techblog.bozho.net/storing-encrypted-credentials-in-git/

We all know that we should not commit any passwords or keys to the repo with our code (no matter if public or private). Yet, thousands of production passwords can be found on GitHub (and probably thousands more in internal company repositories). Some have tried to fix that by removing the passwords (once they learned it’s not a good idea to store them publicly), but passwords have remained in the git history.

Knowing what not to do is the first and very important step. But how do we store production credentials? Database credentials, system secrets (e.g. for HMACs), access keys for 3rd party services like payment providers or social networks. There doesn't seem to be an agreed-upon solution.

I’ve previously argued with the 12-factor app recommendation to use environment variables – if you have a few that might be okay, but when the number of variables grows (as in any real application), it becomes impractical. And you can set environment variables via a bash script, but you’d have to store it somewhere. And in fact, even separate environment variables should be stored somewhere.

This somewhere could be a local directory (risky), a shared storage, e.g. FTP or S3 bucket with limited access, or a separate git repository. I think I prefer the git repository as it allows versioning (Note: S3 also does, but is provider-specific). So you can store all your environment-specific properties files with all their credentials and environment-specific configurations in a git repo with limited access (only Ops people). And that’s not bad, as long as it’s not the same repo as the source code.

Such a repo would look like this:

└─── production
|   |   application.properties
|   |   keystore.jks
└─── staging
|   |   application.properties
|   |   keystore.jks
└─── on-premise-client1
|   |   application.properties
|   |   keystore.jks
└─── on-premise-client2
|   |   application.properties
|   |   keystore.jks

Since many companies are using GitHub or BitBucket for their repositories, storing production credentials on a public provider may still be risky. That’s why it’s a good idea to encrypt the files in the repository. A good way to do it is via git-crypt. The encryption is “transparent”: files are encrypted and decrypted on the fly, and diffs still work, so once you set it up, you continue working with the repo as if it weren’t encrypted. There’s even a fork that works on Windows.

You simply run git-crypt init (after you’ve put the git-crypt binary on your PATH), which generates a key. Then you specify your .gitattributes, e.g. like this:

secretfile filter=git-crypt diff=git-crypt
*.key filter=git-crypt diff=git-crypt
*.properties filter=git-crypt diff=git-crypt
*.jks filter=git-crypt diff=git-crypt

And you’re done. Well, almost. If this is a fresh repo, everything is good. If it is an existing repo, you’d have to clean up your history which contains the unencrypted files. Following these steps will get you there, with one addition – before calling git commit, you should call git-crypt status -f so that the existing files are actually encrypted.

You’re almost done. We should somehow share and back up the key. For the sharing part, it’s not a big issue to have a team of 2-3 Ops people share the same key, but you could also use the GPG option of git-crypt (as documented in the README). What’s left is to back up your secret key (it’s generated in the .git/git-crypt directory). You can store it (password-protected) in some other storage, be it a company shared folder, Dropbox/Google Drive, or even your email. Just make sure your computer is not the only place where it’s present and that it’s protected. I don’t think key rotation is necessary, but you can devise some rotation procedure.

The git-crypt authors say the tool shines when you need to encrypt just a few files in an otherwise public repo, and they recommend looking at git-remote-gcrypt for encrypting an entire repository. But as often there are non-sensitive parts of environment-specific configurations, you may not want to encrypt everything. And I think it’s perfectly fine to use git-crypt even in a separate repo scenario. And even though encryption is an okay approach to protect credentials in your source code repo, it’s still not necessarily a good idea to have the environment configurations in the same repo. Especially given that different people/teams manage these credentials. Even in small companies, maybe not all members have production access.

The outstanding question in this case is: How do you sync the properties with code changes? Sometimes the code adds new properties that should be reflected in the environment configurations. There are two scenarios here – first, properties that could vary across environments but can have default values (e.g. scheduled job periods), and second, properties that require explicit configuration (e.g. database credentials). The former can have the default values bundled in the code repo and therefore in the release artifact, allowing external files to override them. The latter should be announced to the people who do the deployment so that they can set the proper values.
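The first scenario – defaults bundled in the release artifact, overridden by the environment-specific file – can be sketched as follows. This is a minimal illustration only; the key names and helper functions are made up for the example, not taken from any real deployment.

```python
# Minimal sketch of layered configuration: bundled defaults, overridden by an
# environment-specific properties file, plus keys that must be set explicitly.
# Key names and helpers are illustrative, not from the original post.

BUNDLED_DEFAULTS = {"job.period.seconds": "3600"}  # safe to default
REQUIRED_KEYS = ("db.username", "db.password")     # must be configured per env

def parse_properties(text):
    """Parse simple key=value lines; blank lines and # comments are ignored."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props

def effective_config(env_file_text):
    config = dict(BUNDLED_DEFAULTS)                 # start from bundled defaults
    config.update(parse_properties(env_file_text))  # the environment file wins
    missing = [k for k in REQUIRED_KEYS if k not in config]
    if missing:
        # Deployment should fail loudly instead of running half-configured.
        raise ValueError(f"missing required properties: {missing}")
    return config
```

Failing fast on missing required keys is what turns "announce it to the people who do the deployment" from a convention into something the release process actually enforces.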

The whole process of having versioned environment-specific configurations is actually quite simple and logical, even with the encryption added to the picture. And I think it’s a good security practice we should try to follow.

The post Storing Encrypted Credentials In Git appeared first on Bozho's tech blog.

Hiring a Director of Sales

Post Syndicated from Yev original https://www.backblaze.com/blog/hiring-a-director-of-sales/

Backblaze is hiring a Director of Sales. This is a critical role for Backblaze as we continue to grow the team. We need a strong leader who has experience in scaling a sales team and who has an excellent track record for exceeding goals by selling Software as a Service (SaaS) solutions. In addition, this leader will need to be highly motivated, as well as able to create and develop a highly motivated, success-oriented sales team that has fun and enjoys what they do.

The History of Backblaze from our CEO
In 2007, after a friend’s computer crash caused her some suffering, we realized that with every photo, video, song, and document going digital, everyone would eventually lose all of their information. Five of us quit our jobs to start a company with the goal of making it easy for people to back up their data.

Like many startups, for a while we worked out of a co-founder’s one-bedroom apartment. Unlike most startups, we made an explicit agreement not to raise funding during the first year. We would then touch base every six months and decide whether to raise or not. We wanted to focus on building the company and the product, not on pitching and slide decks. And critically, we wanted to build a culture that understood money comes from customers, not the magical VC giving tree. Over the course of 5 years we built a profitable, multi-million dollar revenue business — and only then did we raise a VC round.

Fast forward 10 years later and our world looks quite different. You’ll have some fantastic assets to work with:

  • A brand millions recognize for openness, ease-of-use, and affordability.
  • A computer backup service that stores over 500 petabytes of data, has recovered over 30 billion files for hundreds of thousands of paying customers — most of whom self-identify as being the people that find and recommend technology products to their friends.
  • Our B2 service that provides the lowest cost cloud storage on the planet at 1/4th the price Amazon, Google or Microsoft charges. While being a newer product on the market, it already has over 100,000 IT and developers signed up as well as an ecosystem building up around it.
  • A growing, profitable and cash-flow positive company.
  • And last, but most definitely not least: a great sales team.

You might be saying, “sounds like you’ve got this under control — why do you need me?” Don’t be misled. We need you. Here’s why:

  • We have a great team, but we are in the process of expanding and we need to develop a structure that will easily scale and provide the most success to drive revenue.
  • We just launched our outbound sales efforts and we need someone to help develop that into a fully successful program that’s building a strong pipeline and closing business.
  • We need someone to work with the marketing department and figure out how to generate more inbound opportunities that the sales team can follow up on and close.
  • We need someone who will work closely in developing the skills of our current sales team and build a path for career growth and advancement.
  • We want someone to manage our Customer Success program.

So that’s a bit about us. What are we looking for in you?

Experience: As a sales leader, you will strategically build and drive the territory’s sales pipeline by assembling and leading a skilled team of sales professionals. This leader should be familiar with generating, developing and closing software subscription (SaaS) opportunities. We are looking for a self-starter who can manage a team and make an immediate impact of selling our Backup and Cloud Storage solutions. In this role, the sales leader will work closely with the VP of Sales, marketing staff, and service staff to develop and implement specific strategic plans to achieve and exceed revenue targets, including new business acquisition as well as build out our customer success program.

Leadership: We have an experienced team that has brought us to where we are today. You need to have the people and management skills to get them excited about working with you. You need to be a strong leader, passionate about developing and supporting your team.

Data-driven and creative: The data has to show something makes sense before we scale it up. However, without creativity, it’s easy to say “the data shows it’s impossible” or to find a local maximum. Whether it’s deciding how to scale the team, figuring out what our outbound sales efforts should look like, or putting a plan in place to develop the team for career growth, we’ve seen a bit of creativity get us places a few extra dollars couldn’t.

Jive with our culture: Strong leaders affect culture and the person we hire for this role may well shape, not only fit into, ours. But to shape the culture you have to be accepted by the organism, which means a certain set of shared values. We default to openness with our team, our customers, and everyone if possible. We love initiative — without arrogance or dictatorship. We work to create a place people enjoy showing up to work. That doesn’t mean ping pong tables and foosball (though we do try to have perks & fun), but it means people are friendly, non-political, working to build a good service but also a good place to work.

Do the work: Ideas and strategy are critical, but good execution makes them happen. We’re looking for someone who can help the team execute both from the perspective of being capable of guiding and organizing, but also someone who is hands-on themselves.

Additional Responsibilities needed for this role:

  • Recruit, coach, mentor, manage and lead a team of sales professionals to achieve yearly sales targets. This includes closing new business and expanding upon existing clientele.
  • Expand the customer success program to provide the best customer experience possible resulting in upsell opportunities and a high retention rate.
  • Develop effective sales strategies and deliver compelling product demonstrations and sales pitches.
  • Acquire and develop the appropriate sales tools to make the team efficient in their daily work flow.
  • Apply a thorough understanding of the marketplace, industry trends, funding developments, and products to all management activities and strategic sales decisions.
  • Ensure that sales department operations function smoothly, with the goal of facilitating sales and/or closings; operational responsibilities include accurate pipeline reporting and sales forecasts.
  • This position will report directly to the VP of Sales and will be staffed in our headquarters in San Mateo, CA.


Requirements:

  • 7 – 10+ years of successful sales leadership experience as measured by sales performance against goals.
  • Experience in developing skill sets and providing career growth opportunities through advancement of team members.
  • Background in selling SaaS technologies with a strong track record of success.
  • Strong presentation and communication skills.
  • Must be able to travel occasionally nationwide.
  • BA/BS degree required.

Think you want to join us on this adventure?
Send an email to jobscontact@backblaze.com with the subject “Director of Sales.” (Recruiters and agencies, please don’t email us.) Include a resume and answer these two questions:

  1. How would you approach evaluating the current sales team and what is your process for developing a growth strategy to scale the team?
  2. What are the goals you would set for yourself in the 3 month and 1-year timeframes?

Thank you for taking the time to read this and I hope that this sounds like the opportunity for which you’ve been waiting.

Backblaze is an Equal Opportunity Employer.

The post Hiring a Director of Sales appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Japan’s Directorate for Signals Intelligence

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/05/japans_director.html

The Intercept has a long article on Japan’s equivalent of the NSA: the Directorate for Signals Intelligence. Interesting, but nothing really surprising.

The directorate has a history that dates back to the 1950s; its role is to eavesdrop on communications. But its operations remain so highly classified that the Japanese government has disclosed little about its work ­ even the location of its headquarters. Most Japanese officials, except for a select few of the prime minister’s inner circle, are kept in the dark about the directorate’s activities, which are regulated by a limited legal framework and not subject to any independent oversight.

Now, a new investigation by the Japanese broadcaster NHK — produced in collaboration with The Intercept — reveals for the first time details about the inner workings of Japan’s opaque spy community. Based on classified documents and interviews with current and former officials familiar with the agency’s intelligence work, the investigation shines light on a previously undisclosed internet surveillance program and a spy hub in the south of Japan that is used to monitor phone calls and emails passing across communications satellites.

The article includes some new documents from the Snowden archive.

The Software Freedom Conservancy on Tesla’s GPL compliance

Post Syndicated from corbet original https://lwn.net/Articles/754919/rss

The Software Freedom Conservancy has put out a blog posting on the history and current status of Tesla’s GPL compliance issues. “We’re thus glad that, this week, Tesla has acted publicly regarding its current GPL violations and has announced that they’ve taken their first steps toward compliance. While Tesla acknowledges that they still have more work to do, their recent actions show progress toward compliance and a commitment to getting all the way there.”

EC2 Instance Update – C5 Instances with Local NVMe Storage (C5d)

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/ec2-instance-update-c5-instances-with-local-nvme-storage-c5d/

As you can see from my EC2 Instance History post, we add new instance types on a regular and frequent basis. Driven by increasingly powerful processors and designed to address an ever-widening set of use cases, the size and diversity of this list reflects the equally diverse group of EC2 customers!

Near the bottom of that list you will find the new compute-intensive C5 instances. With a 25% to 50% improvement in price-performance over the C4 instances, the C5 instances are designed for applications like batch and log processing, distributed and/or real-time analytics, high-performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding. Some of these applications can benefit from access to high-speed, ultra-low-latency local storage. For example, video encoding, image manipulation, and other forms of media processing often necessitate large amounts of I/O to temporary storage. While the input and output files are valuable assets and are typically stored as Amazon Simple Storage Service (S3) objects, the intermediate files are expendable. Similarly, batch and log processing runs in a race-to-idle model, flushing volatile data to disk as fast as possible in order to make full use of compute resources.

New C5d Instances with Local Storage
In order to meet this need, we are introducing C5 instances equipped with local NVMe storage. Available for immediate use in 5 regions, these instances are a great fit for the applications that I described above, as well as others that you will undoubtedly dream up! Here are the specs:

Instance Name | vCPUs | RAM     | Local Storage       | EBS Bandwidth   | Network Bandwidth
c5d.large     | 2     | 4 GiB   | 1 x 50 GB NVMe SSD  | Up to 2.25 Gbps | Up to 10 Gbps
c5d.xlarge    | 4     | 8 GiB   | 1 x 100 GB NVMe SSD | Up to 2.25 Gbps | Up to 10 Gbps
c5d.2xlarge   | 8     | 16 GiB  | 1 x 225 GB NVMe SSD | Up to 2.25 Gbps | Up to 10 Gbps
c5d.4xlarge   | 16    | 32 GiB  | 1 x 450 GB NVMe SSD | 2.25 Gbps       | Up to 10 Gbps
c5d.9xlarge   | 36    | 72 GiB  | 1 x 900 GB NVMe SSD | 4.5 Gbps        | 10 Gbps
c5d.18xlarge  | 72    | 144 GiB | 2 x 900 GB NVMe SSD | 9 Gbps          | 25 Gbps

Other than the addition of local storage, the C5 and C5d share the same specs. Both are powered by 3.0 GHz Intel Xeon Platinum 8000-series processors, optimized for EC2 and with full control over C-states on the two largest sizes, giving you the ability to run two cores at up to 3.5 GHz using Intel Turbo Boost Technology.

You can use any AMI that includes drivers for the Elastic Network Adapter (ENA) and NVMe; this includes the latest Amazon Linux, Microsoft Windows (Server 2008 R2, Server 2012, Server 2012 R2 and Server 2016), Ubuntu, RHEL, SUSE, and CentOS AMIs.

Here are a couple of things to keep in mind about the local NVMe storage:

Naming – You don’t have to specify a block device mapping in your AMI or during the instance launch; the local storage will show up as one or more devices (/dev/nvme*1 on Linux) after the guest operating system has booted.

Encryption – Each local NVMe device is hardware encrypted using the XTS-AES-256 block cipher and a unique key. Each key is destroyed when the instance is stopped or terminated.

Lifetime – Local NVMe devices have the same lifetime as the instance they are attached to, and do not stick around after the instance has been stopped or terminated.
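As a sketch of what that looks like in practice, the following script lists any NVMe namespace block devices visible to the guest and shows, as comments, the usual format-and-mount steps. The device names and mount point are illustrative assumptions; actual numbering varies by instance type and distribution.

```shell
#!/bin/sh
# Look for NVMe namespace block devices (e.g. /dev/nvme0n1, /dev/nvme1n1).
# On a c5d instance the local instance-store volume appears here after boot;
# on a machine with no NVMe devices the loop reports that none were found.
found=0
for dev in /dev/nvme*n1; do
  [ -e "$dev" ] || continue
  echo "found NVMe device: $dev"
  found=1
done
[ "$found" -eq 1 ] || echo "no NVMe devices found"

# Typical next steps on a fresh instance-store volume (require root,
# so shown here as comments only; device name is an assumption):
#   mkfs -t xfs /dev/nvme1n1
#   mkdir -p /mnt/scratch
#   mount /dev/nvme1n1 /mnt/scratch
```

Remember that data on these volumes shares the lifetime of the instance, so anything worth keeping should be flushed to EBS or S3 before the instance is stopped or terminated.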

Available Now
C5d instances are available in On-Demand, Reserved Instance, and Spot form in the US East (N. Virginia), US West (Oregon), EU (Ireland), US East (Ohio), and Canada (Central) Regions. Prices vary by Region, and are just a bit higher than for the equivalent C5 instances.


PS – We will be adding local NVMe storage to other EC2 instance types in the months to come, so stay tuned!

Pirate IPTV Service Gave Customer Details to Premier League, But What’s the Risk?

Post Syndicated from Andy original https://torrentfreak.com/pirate-iptv-service-gave-customer-details-to-premier-league-but-whats-the-risk-180515/

In a report last weekend, we documented what appear to be the final days of pirate IPTV provider Ace Hosting.

From information provided by several sources including official liquidation documents, it became clear that a previously successful and profitable Ace had succumbed to pressure from the Premier League, which accused the service of copyright infringement.

The company had considerable funds in the bank – £255,472.00 to be exact – but it also had debts of £717,278.84, including £260,000 owed to HMRC and £100,000 to the Premier League as part of a settlement agreement.

Information received by TF late Sunday suggested that £100K was the tip of the iceberg as far as the Premier League was concerned and in a statement yesterday, the football outfit confirmed that was the case.

“A renowned pirate of Premier League content to consumers has been forced to liquidate after agreeing to pay £600,000 for breaching the League’s copyright,” the Premier League announced.

“Ace IPTV, run by Craig Driscoll and Ian Isaac, was selling subscriptions to illegal Premier League streams directly to consumers which allowed viewing on a range of devices, including notorious Kodi-type boxes, as well as to smaller resellers in the UK and abroad.”

Sources familiar with the case suggest that while Ace Hosting Limited didn’t have the funds to pay the Premier League the full £600K, Ace’s operators agreed to pay (and have already paid, to some extent at least) what were essentially their own funds to cover amounts above the final £100K, which is due to be paid next year.

But that’s not the only thing that’s been handed over to the Premier League.

“Ace voluntarily disclosed the personal details of their customers, which the League will now review in compliance with data protection legislation. Further investigations will be conducted, and action taken where appropriate,” the Premier League added.

So, the big question now is how exposed Ace’s former subscribers are.

The truth is that only the Premier League knows for sure but TF has been able to obtain information from several sources which indicate that former subscribers probably aren’t the Premier League’s key interest and even if they were, information obtained on them would be of limited use.

According to a source with knowledge of how a system like Ace’s works, there is a separation of data which appears to help (at least to some degree) with the subscriber’s privacy.

“The system used to manage accounts and take payment is actually completely separate from the software used to manage streams and the lines themselves. They are never usually even on the same server so are two very different databases,” he told TF.

“So at best the only information that has voluntarily been provided to the [Premier League], is just your email, name and address (assuming you even used real details) and what hosting package or credits you bought.”

While this information is bad enough, the action against Ace is targeted, in that it focuses on the Premier League’s content and how Ace (and therefore its users) infringed on the football outfit’s copyrights. So, proving that subscribers actually watched any Premier League content would be an ideal position but it’s not straightforward, despite the potential for detailed logging.

“The management system contains no history of what you watched, when you watched it, when you signed in and so on. That is all contained in a different database on a different server.

“Because every connection is recorded [on the second server], it can create some two million entries a day and as such most providers either turn off this feature or delete the logs daily, as having so many entries slows down the system used for actual streams,” he explains.

Our source says that this data would likely have been the first to be deleted and is probably “long gone” by now. However, even if the Premier League had obtained it, it’s unlikely they would be able to do much with it due to data protection laws.

“The information was passed to the [Premier League] voluntarily by ACE which means this information has been given from one entity to another without the end users’ consent, not part of the [creditors’ voluntary liquidation] and without a court order to support it. Data Protection right now is taken very seriously in the EU,” he notes.

At this point, it’s probably worth noting that while the word “voluntarily” has been used several times to explain the manner in which Ace handed over its subscribers’ details to the Premier League, the same word can be used to describe the manner in which the £600K settlement amount will be paid.

No one forces someone to pay or hand something over, that’s what the courts are for, and the aim here was to avoid that eventuality.

Other pieces of information culled from various sources suggest that PayPal payment information, limited to amounts only, was also handed over to the Premier League. And, perhaps most importantly (and perhaps predictably) as far as former subscribers are concerned, the football group was more interested in Ace’s upwards supplier chain (the ‘wholesale’ stream suppliers used, for example) than those buying the service.

Finally, while the Premier League is now seeking to send a message to customers that these services are risky to use, it’s difficult to argue with the assertion that it’s unsafe to hand over personal details to an illegal service.

“Ace IPTV’s collapse also highlighted the risk consumers take with their personal data when they sign up to illegal streaming services,” Premier League notes.

TF spoke with three IPTV providers who all confirmed that they don’t care what names and addresses people use to sign up with and that no checks are carried out to make sure they’re correct. However, one concedes that in order to run as a business, this information has to be requested and once a customer types it in, it’s possible that it could be handed over as part of a settlement.

“I’m not going to tell people to put in dummy details, how can I? It’s up to people to use their common sense. If they’re still worried they should give Sky their money because if our backs are against the wall, what do you think is going to happen?” he concludes.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Pirate IPTV Service Goes Bust After Premier League Deal, Exposing Users

Post Syndicated from Andy original https://torrentfreak.com/pirate-iptv-service-goes-bust-after-premier-league-deal-exposing-users-180913/

For those out of the loop, unauthorized IPTV services offering many thousands of unlicensed channels have been gaining in popularity in recent years. They’re relatively cheap, fairly reliable, and offer acceptable levels of service.

They are, however, a huge thorn in the side of rightsholders who are desperate to bring them to their knees. One such organization is the UK’s Premier League, which has been disrupting IPTV services over the past year, hoping they’ll shut down.

Most have simply ridden the wave of blocks but one provider, Ace Hosting in the UK, showed signs of stress last year, revealing that it would no longer sell new subscriptions. There was little doubt in most people’s minds that the Premier League had gotten uncomfortably close to the IPTV provider.

Now, many months later, the amazing story can be told. It’s both incredible and shocking and will leave many shaking their heads in disbelief. First up, some background.

Doing things ‘properly’ – incorporation of a pirate service…

Considering how most operators of questionable services like to stay in the shade, it may come as a surprise to learn that Ace Hosting Limited is a proper company. Incorporated and registered at Companies House on January 3, 2017, Ace has two registered directors – family team Ian and Judith Isaac.

In common with several other IPTV operators in the UK who are also officially registered with the authorities, Ace Hosting has never filed any meaningful accounts. There’s a theory that the corporate structure is basically one of convenience, one that allows for the handling of large volumes of cash while limiting liability. The downside, of course, is that people are often more easily identified, in part due to the comprehensive paper trail.

Thanks to what can only be described as a slow-motion train wreck, the Ace Hosting debacle is revealing a bewildering set of circumstances. Last December, when Ace said it would stop signing up new members due to legal pressure, a serious copyright threat had already been filed against it.

Premier League v Ace Hosting

Documents seen by TorrentFreak reveal that the Premier League sent legal threats to Ace Hosting on December 15, 2017, just days before the subscription closure announcement. Somewhat surprisingly, Ace apparently felt it could pay the Premier League a damages amount and keep on trading.

But in early March 2018, with the Premier League threatening Ace with all kinds of bad things, the company made a strange announcement.

“The ISPs in the UK and across Europe have recently become much more aggressive in blocking our service while football games are in progress,” Ace said in a statement.

“In order to get ourselves off of the ISP blacklist we are going to black out the EPL games for all users (including VPN users) starting on Monday. We believe that this will enable us to rebuild the bypass process and successfully provide you with all EPL games.”

It seems doubtful that Ace really intended to thumb its nose at the Premier League but it had continued to sell subscriptions since receiving threats in December, so all things seemed possible. But on March 24 that all changed, when Ace effectively announced its closure.

Premier League 1, Ace Hosting 0

“It is with sorrow that we announce that we are no longer accepting renewals, upgrades to existing subscriptions or the purchase of new credits. We plan to support existing subscriptions until they expire,” the team wrote.

“EPL games including highlights continue to be blocked and are not expected to be reinstated before the end of the season.”

Indeed, just days later the Premier League demanded a six-figure settlement sum from Ace Hosting, presumably to make a lawsuit disappear. It was the straw that broke the camel’s back.

“When the proposed damages amount was received it was clear that the Company would not be able to cover the cost and that there was a very high probability that even with a negotiated settlement that the Company was insolvent,” documents relating to Ace’s liquidation read.

At this point, Ace says it immediately ceased trading but while torrent sites usually shut down and disappear into the night, Ace’s demise is now a matter of record.

Creditors – the good, the bad, and the ugly

On April 11, 2018, Ace’s directors contacted business recovery and insolvency specialists Begbies Traynor (Central) LLP to obtain advice on the company’s financial position. Begbies Traynor was instructed by Ace on April 23 and on May 8, Ace Hosting director Ian Isaac determined that his company could not pay its debts.

First the good news. According to an official report, Ace Hosting has considerable cash in the bank – £255,472.00 to be exact. Now the bad news – Ace has debts of £717,278.84, the details of which are intriguing, to say the least.

First up, Ace has ‘trade creditors’ to whom it owes £104,356. The vast majority of this sum is a settlement Ace agreed to pay to the Premier League.

“The directors entered into a settlement agreement with the Football Association Premier League Limited prior to placing the Company into liquidation as a result of a purported copyright infringement. However, there is a residual claim from the Football Association Premier League Limited which is included within trade creditors totaling £100,000,” Ace’s statement of affairs reads.

Bizarrely (given the nature of the business, at least) Ace also owes £260,000 to Her Majesty’s Revenue and Customs (HMRC) in unpaid VAT and corporation tax, which is effectively the government’s cut of the pirate IPTV business’s labors.

Former Ace Hosting subscriber? Your cash is as good as gone

Finally – and this is where things get a bit sweaty for Joe Public – there are 15,768 “consumer creditors”, split between ‘retail’ and ‘business’ customers of the service. Together they are owed a staggering £353,000.

Although the documentation isn’t explicit, retail customers appear to be people who have purchased an Ace IPTV subscription that still had time to run when the service closed down. Business customers seem likely to be resellers of the service, who purchased ‘credits’ and didn’t get time to sell them before Ace disappeared.

The poison chalice here is that those who are owed money by Ace can actually apply to get some of it back, but that could be extremely risky.

“Creditor claims have not yet been adjudicated but we estimate that the majority of customers who paid for subscription services will receive less than £3 if there is a distribution to unsecured creditors. Furthermore, customer details will be passed to the relevant authorities if there is any suggestion of unlawful conduct,” documentation reads.

We spoke with a former Ace customer who had this to say about the situation.

“It was generally a good service notwithstanding their half-arsed attempts to evade the EPL block. At its heart there were people who seemed to know how to operate a decent service, although the customer-facing side of things was not the greatest,” he said.

“And no, I won’t be claiming a refund. I went into it with my eyes fully open so I don’t hold anyone responsible, except myself. In any case, anyone who wants a refund has to complete a claim form and provide proof of ID (LOL).”

The bad news for former subscribers continues…potentially

While it’s likely that most people will forgo their £3, the bad news isn’t over for subscribers. Begbies Traynor is warning that the liquidators will decide whether to hand over subscribers’ personal details to the Premier League and/or the authorities.

In any event, sometime in the next couple of weeks the names and addresses of all subscribers will be made “available for inspection” at an address in Wiltshire for two days, meaning that any interested parties could potentially gain access to sensitive information.

The bottom line is that Ace Hosting is in the red to the tune of £461,907 and will eventually disappear into the bowels of history. Whether its operators will have to answer for their conduct remains to be seen, but it seems unimaginable at this stage that things will end well.

Subscribers probably won’t get sucked in but in a story as bizarre as this one, anything could yet happen.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.