The Consumer Electronics Hall of Fame: Sony PlayStation Portable

Post Syndicated from Brian Santo original https://spectrum.ieee.org/consumer-electronics/gadgets/the-consumer-electronics-hall-of-fame-sony-playstation-portable

The PlayStation Portable (PSP) was the labradoodle of the electronic games industry, an odd crossbreed, muscular and fun, but also kind of pricey. The PSP, a combination game system and media player, wasn’t the most popular handheld system, but it was a crackling cauldron of surprisingly advanced tech for its time.

Handheld game players had been around for decades. The first ones were dedicated to playing a single game; a popular early example was Mattel’s Football, introduced in 1977. Handheld players never went away entirely, but mostly languished until Nintendo came up with a spectacularly winning formula in 2001. The company had already established its credentials with larger console systems, and when it introduced the Game Boy Advance handheld, it already had not just popular games but popular game franchises. The company successfully migrated its Mario, Zelda, and Pokémon series to the Game Boy Advance, which became hugely popular as a result.

In the console market, Sony and Microsoft had been chipping away at Nintendo’s dominance for years with PlayStation and Xbox, respectively. But as late as 2003, nobody had managed to provide any real competition to Nintendo in handheld systems.

If anyone could give Nintendo a run for its money there, it was probably Sony. In 1995, Sony had pushed into the U.S. and European console markets with the original PlayStation by offering games with superior graphics, coupled with outstanding multimedia capabilities. (For example, later PlayStation consoles would be able to play Blu-ray Discs.) It could use the same playbook for a handheld system. The best time to catch a well-established competitor is when it is transitioning from one generation of technology to the next. In 2003, Nintendo was known to be preparing its follow-on to the Game Boy Advance. That year, Sony announced the PSP.

Nintendo got its next portable, the DS, to the market in November 2004. The PSP debuted in Japan in December of 2004; distribution to North America began a few months later. The Nintendo DS embodied several innovations, the most notable of which was a second LCD screen (this one a touch screen) that worked in tandem with the first. But where the DS was basically just a good handheld game system, the PSP was a new hybrid: a combination game system and media player.

And it was a beast. Designed by a team led by Shinichi Ogasawara at Sony, the PSP debuted as the most powerful handheld on the market, with remarkable graphics capabilities. It was based on a pair of MIPS R4000 microprocessors from MIPS Technologies, which ran at a then-impressive 333 megahertz. One functioned as the central processing unit and the other as a media processor. There was also a dedicated graphics processor, which Sony called the Graphics Core, that ran at 166 MHz. It could handle 24-bit color graphics, about eight bits better than most comparable units at the time. It also had one of the largest screens in the handheld-gaming world, a 480- by 272-pixel thin-film-transistor LCD that measured 4.3 inches on the diagonal. The original version had 32 megabytes of onboard memory, considered inadequate by some, so Sony doubled it in subsequent models. It had a slot for a Memory Stick Duo, a Sony proprietary flash-memory format, for additional storage (the Memory Stick Duo was soon renamed Memory Stick Pro Duo). It had stereo speakers built in, a headphone jack, and an embedded microphone.

For a handheld in that time period, it had an immense array of connectivity options, including Wi-Fi, Bluetooth, IrDA (an infrared-based protocol), and USB. The system software included a clunky but usable browser for Web surfing. It could connect to Sony’s PlayStation. Starting in 2006, it came with access to the PlayStation Network, a portal for games, video, and music that Sony launched that year. The network let users buy games and get instant access to them, and it also let them play against each other online.

The PSP could store and play music, in either the popular MP3 format or Sony’s proprietary ATRAC format. But music would always be a subsidiary capability, partly because in those days Apple owned the music-player market with its iPod player and iTunes software. Also, where other handheld game systems generally used cartridges, the PSP used a Sony-proprietary optical format called Universal Media Disc (UMD). Discs were sold containing a game or a movie, but UMD was not a recordable format. So users had to buy prerecorded media on UMDs or put their own music and video on a Memory Stick Pro Duo, a process that turned out to be annoyingly clunky for music and exasperatingly difficult for video.

In sporadic efforts to exploit the PSP’s impressive media-player features, Sony kept adding new potential uses. In 2008, the company produced a GPS-enabled accessory that turned the PSP into a navigation device for driving. In 2009, Sony secured rights to distribute electronic versions of some of the best-selling comic books around the world. The Digital Comics Reader never really took off (Sony shut it down in 2012), but it was emblematic of the varied and creative uses Sony envisioned for the PSP platform.

The PSP also had major game franchises to draw on, including the Final Fantasy, God of War, and Grand Theft Auto series, all of which sustained the platform’s popularity. We should also pause here to pay tribute to Jak and Daxter, a quirky and mostly acclaimed fantasy series with a small but die-hard following (including this author). The point is that Sony did attract third-party game developers, which was critical, but Nintendo was consistently more successful at doing so, giving it the edge in the number of titles available.

In the final analysis, the technical dazzle of the PSP was undercut somewhat by the proprietary formats, notably the UMD, which Sony devised specifically for the PSP and which was never used on any other device. The company stopped manufacturing the PSP in 2014 and the UMD not long after; by then the PSP’s successor, the PlayStation Vita, had been around for 2 years. Sony ended production of the Vita in March 2019.

Astoundingly, Sony still produces Memory Sticks, but the format was long ago eclipsed by the SD and microSD card formats. Recalling the old Betamax misfire, Digital Audio Tape, MiniDisc, and the forlorn ATRAC audio-compression algorithm, it’s hard to escape the conclusion that Sony has been plagued by really bad decisions in the realm of formats.

At its introduction, the PSP was priced at US $249 in the United States, almost $50 more than the Nintendo DS. It was not an unreasonable price, given all the relatively advanced tech that went into a PSP. Even so, PSP accessories, such as the expensive Memory Stick Pro Duo cards, could make the price disparity greater.

Despite all the drawbacks (higher cost, fewer titles, proprietary formats), Sony still sold over 80 million PSPs in the nine years it was in production, from late 2004 to 2014. The total sales figure was only a bit more than half of what the Nintendo DS sold between 2004 and 2013. But it was a pretty good run for a powerful device that offered a very compelling preview of the same sorts of things people would be doing with smartphones nearly a decade in the future.

The Consumer Electronics Hall of Fame: Pure Evoke-1 DAB Digital Radio

Post Syndicated from Brian Santo original https://spectrum.ieee.org/consumer-electronics/gadgets/the-consumer-electronics-hall-of-fame-pure-evoke1-dab-digital-radio

The replacement of analog media with digital formats began with recorded music in the early 1980s, and then progressed inexorably through recorded video, still photography, broadcast radio, and broadcast television. In Europe, digital broadcast radio began as a small research project in 1981, at the Institut für Rundfunktechnik, in Munich. At roughly the same time, Japan began exploring digital television. The contrast between the two ostensibly similar projects is important: When consumer and broadcast companies in various countries began pursuing digital-TV technology, developing standards became as much a political process as it was a technological one. Radio, however, escaped that politicization, in large part because regional markets were content to adopt their own standards.

That was a plus early on, because it let developers sidestep the long and contentious standards wars that bogged down other technologies. But it was probably a net negative in the long run, because it left global markets fragmented. Europe stuck with its own digital-radio system, Digital Audio Broadcasting (DAB), which European stations began using in 1995, while North American countries elected to pursue an entirely different technology. The American system, called HD Radio, is a proprietary technology created by iBiquity Digital Corp. The Federal Communications Commission selected HD Radio as the standard for the United States in 2002; to this day, any company that wants to market an HD Radio set or operate a radio station that broadcasts in the HD format has to pay fees to DTS Inc., which acquired iBiquity in 2015.

There were many good reasons for making broadcast radio digital. Most of them stem from one giant advantage: spectral efficiency. Any modulated radio signal has sidebands, swaths of spectrum above and below the carrier frequency that are unavoidably created by the modulation process. These sidebands are particularly pronounced for FM, and they reduce an FM signal’s spectral efficiency, which is defined as the amount of information the signal can carry in a given amount of bandwidth. Thanks to data compression and other techniques, digital radio signals can have several times the spectral efficiency of analog FM ones. In both DAB and HD Radio, this much higher spectral efficiency translates into a huge amount of extra capacity. It’s used to provide not just higher audio quality and supplementary program data but also, in some cases, a couple of additional high-quality stereo radio channels, all within the bandwidth formerly occupied by a single station’s lower-quality analog signal.
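
To put rough numbers on that claim, here is a back-of-envelope sketch in Python. The figures (a DAB ensemble occupying about 1.536 MHz and carrying roughly 1.1 megabits per second of usable payload, plus typical per-service audio bit rates) are commonly cited approximations assumed for illustration, not official specifications.

```python
# Back-of-envelope comparison of analog FM and DAB capacity.
# All figures are rough, commonly cited approximations, not official specs.

ENSEMBLE_BANDWIDTH_HZ = 1.536e6   # spectrum occupied by one DAB multiplex ("ensemble")
ENSEMBLE_PAYLOAD_BPS = 1.1e6      # usable bit rate after error coding (approximate)
FM_CHANNEL_BANDWIDTH_HZ = 200e3   # typical spectrum allocated to one analog FM station

MP2_STEREO_BPS = 128e3            # original DAB audio service (MPEG-1 Layer II)
AAC_STEREO_BPS = 64e3             # DAB+ audio service (HE-AAC) of roughly similar quality

print(f"Payload spectral efficiency: {ENSEMBLE_PAYLOAD_BPS / ENSEMBLE_BANDWIDTH_HZ:.2f} b/s/Hz")
print(f"Analog FM stations fitting in the same spectrum: ~{ENSEMBLE_BANDWIDTH_HZ / FM_CHANNEL_BANDWIDTH_HZ:.0f}")
print(f"Stereo services per DAB ensemble:  ~{ENSEMBLE_PAYLOAD_BPS // MP2_STEREO_BPS:.0f}")
print(f"Stereo services per DAB+ ensemble: ~{ENSEMBLE_PAYLOAD_BPS // AAC_STEREO_BPS:.0f}")
```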

The first radio station to adopt DAB was NRK Klassisk, in Norway, which began digital broadcast operations on 1 June 1995. (Twenty-two years later, Norway would become the first country to cease all analog radio broadcasting in favor of digital.) Several other DAB stations, in England, Germany, and Sweden, followed within a few months. It nevertheless took several years for DAB to catch on. Part of the reason for the lag was the early radio sets themselves, which were clunky and expensive. In fact, it would be years before a company called Pure Digital would kick-start the DAB market with a compact, high-performance, and yet inexpensive radio called the Evoke-1.

The story starts with an English company with a different name: Imagination Technologies. It was then, as now, a developer of integrated circuits for video and audio processing. In 2000, it began selling a digital tuner IC but couldn’t find any buyers among the big radio manufacturers. The IC’s biggest advocate inside the company was CEO Hossein Yassaie, an Iranian-born engineer who had earned a Ph.D. at the University of Birmingham, in England, in the application of charge-coupled devices in sonar. He helped design and build a complete radio receiver that incorporated the chip to demonstrate its capabilities.

That radio, which bore the VideoLogic label, looked like almost every other rack stereo component at the time. It was a black rectangular box with a faceplate that was short and wide. The faceplate shared the same simple, elegant styling common to most audio equipment of the day.

In 2001, Imagination Technologies’ radio operation, which by this point was also called VideoLogic, introduced what the company claims was the world’s first portable digital radio. This one carried yet another new brand name: Pure. From a visual standpoint, the Pure DRX-601EX was very different from its predecessor, with a retro-tinged style. It stood upright, like an old-school lunchbox, and it had a blonde-oak finish. The triangular faceplate had a chrome finish evocative of a manta ray and took up only part of the front surface. Much of the front of the radio was a grille with rakishly slanted openings covering the speakers. The curved handle on top was solid, but nonetheless a visual reference to the plastic straps on the inexpensive transistor radios of the 1960s. It was striking visually, and it sounded great, but, at £499, it was not cheap.

What does all this have to do with the Evoke-1? We’re getting there. Later that same year, 2001, VideoLogic got involved in a project with the BBC, which had been operating several DAB radio channels since 1995 and was eager to increase its audiences for them. The Digital Radio Development Bureau (DRDB), a trade organization that promoted DAB and was partly funded by the BBC, seized on the 100th anniversary of Marconi’s first transatlantic wireless transmission as a hook. With that in mind, the DRDB worked with VideoLogic to create a special commemorative-edition DAB radio that would sell for only £99. The radio hit the market late in 2001 and sold out rapidly. There was clearly growing demand for digital radio in the United Kingdom.

Emboldened by the brisk sales, in 2002, VideoLogic introduced the Pure Evoke-1. It shared the DRX-601EX’s distinctive combination of blonde wood and brushed chrome, but with this model, the company seemed to tip the retro/modern balance toward the former. There was more brushed chrome on the face, and the single speaker was covered with a perforated screen reminiscent of the grilles on ’60s transistor radios. The company also sold a separate speaker that could be plugged in to provide true stereo sound.

A year later it was still the only DAB radio on the market available for less than £100, according to a review in Radio Now, which praised the Evoke-1 for sounding “amazing” for a portable. “The radio is pretty sensitive—if there’s a hint of signal the Evoke-1 will find it through its telescopic aerial,” the review went on. The Evoke-1 ran only on mains power (the subsequent Evoke-2 would have built-in stereo speakers and also run on batteries).

The availability of affordable Evoke-1 radios helped DAB finally take off in the United Kingdom. In 2003, the company claimed it was selling more than twice as many Evoke-1s as any competing portable radio. In the ensuing years, hundreds of stations in Europe would switch to DAB, expanding the market. There are now over 40 countries with radio stations broadcasting in either the DAB standard or its follow-on, DAB+. The company said its cumulative DAB radio sales exceeded 5 million units as of 2014.

Along the way, Imagination’s radio unit ditched the name VideoLogic and began calling itself Pure. Pure was successful selling its DAB radio ICs to other radio manufacturers, but eventually sales of its own Evoke radios began to decline. The Pure operation started losing money in 2009. In 2016, shortly after Yassaie stepped down as CEO, Imagination Technologies sold Pure to an Austrian investment operation called AVenture for a mere £2.6 million. Pure not only continues to make DAB+ radios, it claims it is still one of the leading sellers in the category.

And Yassaie? “In recognition of his services to technology and innovation,” he was knighted by Elizabeth II, Queen of the United Kingdom, on New Year’s Day in 2013. At the time, Imagination Technologies had a market capitalization of £1 billion and employed more than 1,300 people.

U.S. radio stations, by the way, began adopting HD Radio in 2003. In contrast to Europe, digital radio in the United States has been largely ignored by consumers, despite the existence of thousands of HD stations in the country. Proponents have attempted to popularize it by getting HD Radio receivers factory-installed in cars, with some success. This year, the percentage of cars with factory-installed HD Radio reached 50 percent. By way of comparison, though, satellite radio (Sirius XM) is installed in 75 percent of cars sold in the United States.

The Consumer Electronics Hall of Fame: Motorola T250 Talkabout Walkie-Talkies

Post Syndicated from Brian Santo original https://spectrum.ieee.org/consumer-electronics/gadgets/the-consumer-electronics-hall-of-fame-motorola-t250-talkabout-walkietalkies

Walkie-talkies have been around since the 1940s, but until the late 1990s the available models were generally powerful units used in the military and law enforcement or children’s toys with severely limited range. That all changed in 1996 with the advent in the United States of the Family Radio Service, a personal-radio system using spectrum in the ultrahigh-frequency band. Motorola was reportedly first to market with FRS walkie-talkies in 1997, and it was the leading seller for at least the next three years—a brief but intense heyday for walkie-talkies as a popular consumer product. Moto’s iconic T250 Talkabouts were the most popular model during that stretch.

Portable two-way radios were invented in the late 1930s, almost simultaneously by engineers working separately in Canada and in the United States. Like so many technological innovations, walkie-talkies were first employed in the military, in this case by the Allies in World War II, starting around 1943. The very first versions of the technology were already referred to as “walkie-talkies.” They earned the “walkie” half of the nickname inasmuch as the electronics, based on vacuum-tube circuits powered by carbon-zinc or lead-acid batteries, were just small enough and light enough to fit in a backpack. The most important of these early models was the SCR-300 [PDF], introduced in 1943 by the Galvin Manufacturing Co. (the precursor of Motorola) for the U.S. Armed Forces. It weighed either 14.5 or 17 kilograms (32 or 38 pounds), depending on the battery it was using. Galvin began shipping smaller, handheld models a year later, though to distinguish them from the backpack models these were referred to as “handie-talkies.” The introductory appellation was the one that stuck, though.

Running through the history of walkie-talkies are decades-long efforts in the United States and other countries to find a suitable frequency band where using them wouldn’t interfere with other radio equipment. Starting in 1960, the U.S. Federal Communications Commission required a license for commercial use of mobile radio equipment used on land (as opposed to at sea). The FCC would later designate a category called General Mobile Radio Service (GMRS). Public-safety and emergency-response agencies soon began relying on GMRS radios, including tabletop sets and walkie-talkies, which also became useful for marine communications and some commercial applications in mining and the like.

Around the same time, the FCC also allocated radio spectrum for unlicensed walkie-talkie use. This spectrum was close to Citizens Band frequencies, and as both consumer walkie-talkies and CB radios grew in popularity in the 1970s, they interfered with each other more and more often, according to a brief account published by walkie-talkie manufacturer Midland. To stop walkie-talkies from interfering with CB radio, the FCC moved walkie-talkie usage to 49 megahertz, a band also open to unlicensed use. By the late 1980s and early ’90s, more and more companies were using those unlicensed bands for yet other radio-based products such as cordless phones and baby monitors, and interference among all those products—including walkie-talkies—again became a problem, according to the Midland account.

The countdown to FRS began in 1994, when Radio Shack (a subsidiary of Tandy Corp.) brought a proposal to the FCC that would once again change the frequency for unlicensed land-mobile radios, along with implementing several other ideas aimed at expanding their use. “Working with Tandy Corp., Motorola was a leading advocate for the creation of the Family Radio Service in the U.S. and was primarily responsible for developing the technical and operational rules ultimately adopted by the Federal Communications Commission,” a spokesman for Motorola Solutions told IEEE Spectrum in an email.

Two years later, in establishing FRS, the FCC set aside a total of 14 channels in two groups, one clustered slightly above 462.5 MHz and the other slightly above 467.5 MHz. Other countries soon followed suit. The European Union in 1997 approved something called Private Mobile Radio, or PMR446 (as its frequency band is 446 MHz). Australia, China, and many other countries similarly cleared spectrum for unlicensed land-mobile radio use.

The bands vary from country to country, but they’re all in the ultrahigh-frequency range. For compact mobile radios, UHF has several advantages. One is that the relatively high frequencies mean lots of available bandwidth, which translates into channels that can be well separated from one another, with little adjacent-channel interference. Another is that short wavelengths mean small antennas.

FRS in the United States originally permitted a maximum power output of half a watt. Though manufacturers would often claim ranges of 40 kilometers (25 miles) or more, those figures were wildly inflated. Under real-world conditions, a half watt of power typically translates to a range of perhaps 1 to 4 km, depending on various conditions, including the presence or absence of obstructions. GMRS radios, by contrast, can put out as much as 5 watts, letting them consistently reach 30 to 50 km, again depending on local conditions.

FRS radios use narrow-band frequency modulation, with channels spaced at 25-kilohertz intervals. Since 2017, all FRS channels have coincided with ones used by GMRS radios. Though interference between the two is possible, users of unlicensed FRS equipment and licensed GMRS gear have coexisted mostly peacefully, with the FCC occasionally tinkering with the rules to keep the peace.
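
For reference, the sketch below generates the commonly published original 14-channel FRS plan. The exact frequencies are an assumption drawn from public channel charts rather than from this article.

```python
# Commonly published original (pre-2017) FRS channel plan: 14 channels,
# 25 kHz spacing within each of the two UHF groups. Frequencies in MHz.
# Treat these values as assumptions taken from public channel charts.

def frs_channels():
    low_group = [462.5625 + 0.025 * n for n in range(7)]    # channels 1-7
    high_group = [467.5625 + 0.025 * n for n in range(7)]   # channels 8-14
    return low_group + high_group

for number, freq in enumerate(frs_channels(), start=1):
    print(f"FRS channel {number:2d}: {freq:.4f} MHz")
```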

The combination of frequency modulation and the ultrahigh-frequency band made FRS walkie-talkies immensely more reliable and clear sounding than any personal radios that came before. Even today, under some circumstances, FRS walkie-talkie communications can be clearer and more reliable than cellphones.

Motorola’s participation in defining the rules that would govern the new FRS category helped it get to the market first, in 1997, according to an article in the Los Angeles Times in 2000. That L.A. Times piece reported that Motorola had been the sales leader in the walkie-talkie market ever since. Other manufacturers of FRS radios at the time included Audiovox, Cobra, Icom, Kenwood, Midland, and Radio Shack.

Prices of FRS sets ranged from $40 or $50 to several hundred dollars for a pair, and many manufacturers also sold them in packs of four, for family use. Sources from the time put the price of a pair of T250s at under $100; $85 was a commonly cited figure. That made them far more expensive than the toy walkie-talkies that were common before FRS, but considerably cheaper than the most expensive two-way handheld radios available at the time.

Motorola’s T250s had superb industrial design. They were squat, about 17 centimeters long including the antenna, and also light, at about 200 grams (just over 7 ounces). The most popular color was bright yellow, and the case was edged with rubbery black grips. It turned out they were as rugged as they looked, which appealed to the outdoor workers and adventurers who needed them and also to city dwellers who longed for outdoor adventure.

T250s had jacks for earphones and external microphones (the Talkabout Remote Speaker Microphone was sold separately) that allowed users to operate the devices hands-free in something called VOX mode. T250s had a backlit LCD screen, and they ran on three AA batteries. Motorola followed up the T250 with the T280, a version that had a rechargeable nickel-metal-hydride battery.

That same L.A. Times article also mentions that the demand for FRS radios had taken off in 1999. Another article reported that available models grew from 14 in 1997 to 73 in 1999. In general, the introduction of cellular telephony helped amplify demand for two-way wireless. At the time, cellular telephony wasn’t really new, but its popularity wasn’t yet universal, even in developed countries. Many people considered walkie-talkies a viable alternative to cellphones. There were obvious drawbacks: notably, the short range and the occasional lack of privacy associated with walkie-talkies—everyone using the same channel could hear everyone else’s communications, if they were close enough to each other. And yet some people regarded these as acceptable trade-offs for the convenience and the lack of subscription fees. And because cellular coverage was far from ubiquitous, many used walkie-talkies as adjuncts to their cell service, for example in remote areas or when traveling.

And manufacturers did find ways of adding a measure of privacy. Though FRS radios before 2017 were limited to 14 channels, some models, including the T250, could in effect expand the number with a feature sometimes referred to as “privacy codes.” They were part of a squelching scheme that filtered out signals from other users of the same channel who weren’t using the code. Multiplying the number of channels by the number of privacy codes on the T250 rendered 532 combinations; in practice, if a pair of T250 users wanted to keep their conversation private, they were likely to be able to do so.
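
The arithmetic behind that 532 figure is simple. The per-channel count of 38 codes below is inferred from the article’s own total (532 divided by 14) and should be treated as an assumption.

```python
# Worked arithmetic for the "532 combinations" figure. The 38 "privacy codes"
# (CTCSS squelch tones) is inferred from 532 / 14 and treated as an assumption.
channels = 14
privacy_codes = 38
print(channels * privacy_codes)   # -> 532
```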

Other features identified in an article in The New York Times published in 2000 included notification tones when someone else was sending, channel scanners that would identify the busiest or least-used channels, and rechargeable batteries, as well as folding antennas, altimeters, FM radio, and GPS.

The late ’90s heyday of walkie-talkies was relatively brief. Cellular telephony got more popular and more technologically advanced in the 2000s, and that drastically reduced both the usefulness of and the necessity for a separate wireless two-way communications medium. Nevertheless, in 2017, the FCC gave FRS a bit of a boost when it added more channels, bringing the total to 22, and increased, from 0.5 W to 2 W, the maximum permitted power for channels 1 to 7 and 15 to 22. The extra power could mean ranges of perhaps 4 km or more, depending on conditions. As part of the same rules revision, the FCC also made the FRS and GMRS bands entirely overlapping. Only GMRS’s license requirement and the permitted power levels now differentiate the two.

Today, people still buy walkie-talkies for various reasons, among them the fact that cell coverage is not now and never will be truly ubiquitous, especially in remote areas on land and at sea. Motorola Solutions continues to produce Talkabout walkie-talkies—the product line now consists of a dozen different models, ranging in price from $35 to $120 a pair. Market information is lacking, surprisingly so. But the Talkabout series is often acclaimed as the most successful line of walkie-talkies ever.

The Consumer Electronics Hall of Fame: Boss TU-12 Guitar Tuner

Post Syndicated from Brian Santo original https://spectrum.ieee.org/consumer-electronics/gadgets/the-consumer-electronics-hall-of-fame-boss-tu12-guitar-tuner

The market for portable, battery-powered instrument tuners was pioneered by Korg, with its introduction of the WT-10 guitar and bass tuner in 1975. Though it wasn’t as accurate as previous tuners, no one really cared. Mechanical strobe tuners, around since the 1930s, could be quite accurate, but they required wall power, cost a lot of money, and could weigh as much as 15 kilograms. So back in the day, many guitarists would simply use tuning forks, pitch pipes, pianos, or even records to tune their instruments. Some players were even content with simply having the strings in “relative tune”—in other words, being at the proper intervals from each other without bothering to make sure that the E string was vibrating at the proper absolute frequency. So when an electronic tuner came along, the issue wasn’t how much more convenient it was than the old methods; it was the difference between having some convenience and having none whatsoever.

In comparison with wind and keyboard instruments, guitars are especially apt to drift out of tune, even during a performance. So guitarists in particular were delighted to have something simple they could bring on stage with them. The inaccuracy of the WT-10 wasn’t too bad, but it was bad enough to inspire other companies to improve on it. One of those was Boss, which introduced the TU-12 chromatic tuner in 1983, a model that decades later is still occasionally referred to as an industry standard.

Boss claims that at its introduction, the TU-12 was the first automatic chromatic tuner. Let’s unpack that. “Automatic” in this usage meant the device automatically sensed and displayed the current pitch as it was picked up via a cable from an electric guitar or, for an acoustic instrument, detected by an input microphone. (However, thanks to more recent inventions, the phrase “automatic guitar tuner” now generally refers to a device that is installed on a guitar’s headstock and that physically turns the tuning pegs until one or more strings are in tune.)

The distinction between chromatic tuners and nonchromatic tuners is that a chromatic tuner can detect all 12 tones of a chromatic scale: not just CDEFGAB, but also the half tones: C-sharp, D-sharp, F-sharp, G-sharp, and A-sharp (or, if you prefer, D-flat, E-flat, G-flat, A-flat, and B-flat). Most nonchromatic guitar tuners, on the other hand, can tune only to the six specific notes in conventional guitar tuning: the strings in a typical six-string guitar are tuned to EADGBE. So the advantage of a chromatic tuner is that it can tune to the nearest semitone. If it detects a pitch close to A-sharp, it will tune for that note, rather than tuning for A or B. This scheme offers a few important advantages. One of them is that it allows guitarists to tune their guitars when playing with alternate string tunings; DGDGBD is only one example among many. The TU-12 offers both options—there’s a chromatic as well as a nonchromatic mode.
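
The core calculation a chromatic tuner performs can be sketched in a few lines: find the nearest equal-tempered semitone to the detected pitch and report how far off it is, in cents (hundredths of a semitone). The sketch below is a generic illustration assuming A4 = 440 Hz; it is not Boss’s implementation.

```python
import math

A4 = 440.0  # Hz; assumed standard concert pitch reference
NOTE_NAMES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

def nearest_note(freq_hz):
    """Return the nearest equal-tempered note name and the offset from it
    in cents (1 cent = 1/100 of a semitone)."""
    semitones_from_a4 = 12 * math.log2(freq_hz / A4)
    nearest = round(semitones_from_a4)
    cents_off = 100 * (semitones_from_a4 - nearest)
    return NOTE_NAMES[nearest % 12], cents_off

# A guitar's low E string should sit near 82.41 Hz; a slightly flat one might read:
name, cents = nearest_note(81.5)
print(f"{name}, {cents:+.0f} cents")   # -> E, about -19 cents
```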

The TU-12 has a lighted meter that shows how close the detected note is to the nearest tone or half tone, a tuning guide, and a built-in microphone for tuning acoustic instruments. Of course, as with other tuners, electric instruments can be plugged in. The tuner can run on batteries or wall current.

A nice and unusual feature of the TU-12 is that musicians can manually select the pitches they want to tune to. For guitarists this means being able to select which specific string they want to tune.

The TU-12 also has a setting for instruments with very high frequency ranges, like flutes and piccolos. And there’s another setting for those with low frequency ranges, such as bass and baritone guitars.

Not only was the TU-12 accurate, and not only did it have a list of features important to musicians, it is regarded as one of the most rugged tuners ever built. Rock stars like to complain about how grueling touring is, and no one really pays attention to such whining. But it’s worth noting that stage equipment does in fact have to stand up to being banged, dropped, kicked, and doused repeatedly in alcoholic beverages. The Boss TU-12 has a reputation for standing up to all that abuse and remaining accurate.

Among the pro guitarists who have endorsed it in the past or were still using it as of 2019 are Steve Vai, The Edge (U2), Ace Frehley (Kiss), and John Squire (The Stone Roses).

Guitarists tend to include a tuner on their pedalboard, the board on the floor that holds the foot pedals triggering the various sound effects they use when they play. A feature that performing musicians wanted was the ability to mute their guitars completely, so that the audience (and bandmates) couldn’t hear the guitar as it was being tuned during a performance. Eventually, Boss, Korg, Peterson, and other tuner manufacturers introduced models that could mute the guitar while it was being tuned. Although players can simply turn down the volume knob on the guitar to use a tuner, the lack of a mute feature is the one thing that took the shine off the TU-12.

Boss still makes tuners, its lineup now spanning eight models. Over the years there were several follow-ons to the TU-12, including the TU-12H, which has an extended range, and the TU-12BW, for brass and woodwinds. The most recent successor, the TU-12EX, was introduced in 2009. It has an even greater range than the TU-12H—8½ octaves—allowing it to be used with pretty much any musical instrument. It also provides flat tuning, in which the guitar’s lowest E string is actually tuned to E-flat, with the tuning of the other five strings adjusted accordingly. Flat tuning is favored by some guitarists and is said to be more accommodating to vocalists. The TU-12EX also has the Accu-Pitch feature, which emits a beep when the note played is in tune. With a retail price under US $100, the unit is not quite 15 centimeters long (six inches), weighs 142 grams (5 ounces), and can tune to within one-hundredth of a semitone, known as “1 cent.”

The Consumer Electronics Hall of Fame: Nest Thermostat

Post Syndicated from Brian Santo original https://spectrum.ieee.org/consumer-electronics/gadgets/the-consumer-electronics-hall-of-fame-nest-thermostat

In 2009, engineer Tony Fadell was building his green dream vacation home near Lake Tahoe, in California. There were heat pumps, photovoltaic panels, expensive windows, and the best insulation money could buy. But when it came time to choose thermostats, Fadell was appalled at the options. The basic choice was the classic 1953 Honeywell Round thermostat, designed by Henry Dreyfuss, and its imitators. On the more contemporary end of the scale were digital thermostats that were expensive, comically difficult to program, and couldn’t be managed with a smartphone app.

This, at the Dawn of the Age of Phone Apps, struck Fadell as very odd. Why hadn’t anyone designed a simple-to-operate thermostat that could be remotely controlled with a smartphone? Home-automation systems and gadgets had been around for years, but most were aimed at people who could shell out thousands of dollars for almost entirely bespoke systems assembled and installed by specialists. There were window shades that would raise and lower at the touch of a button, lighting that would turn on or off based on some customized sensor setup, and maybe a multiroom audio installation that could be used and reconfigured with a remote control. For some reason, the quintessential automated-home owner at the time seemed to be a villain in a James Bond film.

By 2010, there wasn’t a technologist left on Earth who didn’t see a connected world coming—the term Internet of Things had been coined more than a decade earlier. But even by 2010, there weren’t that many people who had much experience building connected devices and systems.

Fadell was one of them. He was already famous in tech circles, starting with his stint in the 1990s at General Magic, which developed an operating system and personal digital assistants that are credited with helping to foment the smartphone revolution. Fadell then worked at Philips, where he helped develop networked services that would run on handheld devices, and finally spent years leading the team behind Apple’s wildly successful iPod product line. He left Apple in 2008.

Seeing an opportunity, Fadell founded Nest Labs in 2010 with a former engineering colleague from Apple, Matt Rogers. And the thing about the Nest thermostat was that from the very beginning, it was going to be only the beginning. One of Nest’s earliest investors, Randy Komisar from Kleiner Perkins Caufield & Byers, told Fortune that during Nest’s first pitch requesting funding, he had been deeply disappointed when Fadell revealed a prototype of a thermostat, and not just because it was carved out of Styrofoam. It was just a thermostat—one of the single most boring categories of gizmo in modern technology. Fadell talked to Komisar for a while about popularizing connected thermostats before letting the other shoe drop: Nest would do the same thing “for every unloved product in the home.”

“Then I got it,” Komisar told Fortune. “Nest was a Trojan horse into the home. In 48 hours we had a check for Tony.”

Within a few years, Nest’s product line expanded beyond the thermostats to include a security and monitoring camera, a doorbell, a door lock, and a smoke-and-carbon-monoxide detector. But in the beginning, Nest had to survive with just its thermostat. The company had to make a strategic choice—a bet, actually.

At the time, there was a commonly held view that automated homes would work best by supporting various automation products through a home gateway, or hub. The supposed advantage of a gateway was that it would represent a single point for Internet access and distribution within the home, as well as a central point for managing all connected devices used in the home. Through a standardized hub, broadband providers and other companies could sell devices and services for such applications as home security.

Nest was not interested in connecting to a hub. As long as its thermostats (and the products it planned to follow them with) were Wi-Fi enabled, the smartphones that millions of people already carried around with them could control the devices directly. There was no need to connect to some other intermediary unit. At the time, though, companies as diverse as ADT, Comcast, and Staples, among many others, were interested in providing gateway hubs. It could have gone either way.

The developers of TiVo had considered building a gateway more than 10 years earlier and couldn’t find anyone interested, which is one of the reasons they built the TiVo DVR instead. Consumers had no enthusiasm for the idea back then, and by the time Nest was ready to build its thermostat, consumers still had no interest. It seemed that these big and powerful American tech companies were all chasing an illusion and had been doing so for years.

Nest introduced its thermostat at the end of 2011, with a price of US $249. Several publications ran teardowns of the Nest. One by EE News found a Texas Instruments Sitara microprocessor, a power-management and USB chip, Flash memory, a 32Mx16 SDRAM, and a Murata Wi-Fi module. EE News also found a Zigbee network coprocessor. Zigbee is a low-power, low-data-rate indoor network standard that, in 2010, had few users. Nest was clearly keeping its networking options open.

Where programmable thermostats always had a bewildering constellation of buttons and switches, the Nest has essentially one—its outer ring, which is a control dial used mainly to turn the target temperature up or down. The dial ring was fitted with the same sort of sensor used in optical mice; the sensor read markings inside the thermostat’s housing. The front face of the round unit is itself a button; click it to get a menu, and then rotate the dial to cycle through options. Think about it—the guy who ran Apple’s iPod business designed the Nest thermostat as if it were a giant iPod click wheel. Upon first encountering a Nest thermostat, most people already had a pretty good idea how to use it. Though users can physically manipulate the thermostat to program it, the company’s user studies indicate that people are more likely to use the Nest app on their smartphone.

The Nest’s most revolutionary feature is that it employs software enabling it to learn from user behavior and adjust accordingly. For example, it’s equipped with both short-range and long-range infrared sensors, so it can detect when people approach the device. That’s useful because when people approach the Nest, it’s usually to turn the thermostat up or down. After being manually adjusted enough times—in as little as a week, the company says—the thermostat begins to pick up the patterns and turn itself up or down accordingly at the appropriate times.
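
Nest has never published the details of its learning algorithm, but the general idea (log the manual adjustments, spot the recurring time-of-day pattern, then start applying it automatically) can be illustrated with a deliberately simplified sketch. Everything below, from the class name to the median heuristic and the sample threshold, is invented for illustration and is not Nest’s actual method.

```python
from collections import defaultdict
from statistics import median

class ToyLearningThermostat:
    """Toy illustration only: record manual setpoint changes by hour of day and,
    once a time slot has enough samples, apply the learned setpoint automatically."""

    def __init__(self, min_samples=5):
        self.history = defaultdict(list)   # hour of day -> manual setpoints seen
        self.min_samples = min_samples

    def record_manual_adjustment(self, hour, setpoint_c):
        self.history[hour].append(setpoint_c)

    def scheduled_setpoint(self, hour):
        samples = self.history.get(hour, [])
        if len(samples) >= self.min_samples:
            return median(samples)   # learned target for this time slot
        return None                  # not enough evidence yet; stay manual

thermostat = ToyLearningThermostat()
for day in range(7):                         # a week of turning it down at 10 p.m.
    thermostat.record_manual_adjustment(22, 17.0)
print(thermostat.scheduled_setpoint(22))     # -> 17.0 once the pattern is established
```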

Those infrared sensors can also detect when people enter or leave the room, a feature that enables it to make a room warmer when it’s inhabited and cooler when it’s not. It’s one of the factors behind Nest’s claim that its thermostats are more energy efficient than nonprogrammable thermostats. The only strategy that might be more efficient is allowing utilities to manage home heating and cooling systems, and with Nest’s networking capabilities, it’s not only possible, it is in fact already being used that way. For example, the Texas utility Reliant Energy, in Houston, offers a plan to remotely manage its customers’ heating systems to reduce their costs; Reliant equips those who sign up with a Nest thermostat.

Nest got an additional boost early on when the two biggest home-improvement chains in the United States—Home Depot and Lowe’s—chose to carry Nest products. The company has never shared sales numbers, but it is commonly reported that the device has been shipping in the millions per year.

Ironically, considering that one of Nest’s first big decisions was to refuse to be a part of a standardized home-automation hub, Nest might yet go in that direction. To understand why, start with the fact that Google was an early investor in Nest. In January 2014, it bought the company outright, for $3.2 billion. Fadell left the company in 2016 and decamped to Paris, where he started an early-stage investment firm called Future Shape that plows some of his vast proceeds from the Nest sale into startup companies. That same year, Google released an artificial-intelligence-powered virtual-assistant app called Google Assistant. Coupled with a voice-activated speaker, called Google Home, the package can act as a smart hub, controlling multiple compatible devices—such as thermostats, security systems, cameras, and lighting. Sure enough, in May of 2019, Google announced a plan to bridge its Google Assistant products with its Nest product line. Some analysts wondered why it took Google three years to integrate the two.

The Consumer Electronics Hall of Fame: TiVo

Post Syndicated from Brian Santo original https://spectrum.ieee.org/consumer-electronics/gadgets/the-consumer-electronics-hall-of-fame-tivo

The TiVo is the least successful, most significant consumer electronics device ever. When it came out in early 1999, it was one of the first two digital video recorders (DVRs), the other being ReplayTV’s machine, which was introduced at almost the same time. ReplayTV (founded by Anthony Wood, the same guy who later founded Roku) was sued over some of the functions it offered and never recovered from the legal entanglement. TiVo survives to this day. Nevertheless, the DVR is recognized as the defining device of early 21st-century television viewing, while “TiVo” is fondly recalled in some circles as having been a verb for a few years.

In the early 1990s, TiVo cofounders Jim Barton and Mike Ramsay both worked at Silicon Graphics Inc. (SGI), where they were each hip-deep in the digitalization of video. Ramsay was selling SGI workstations to Hollywood studios for advanced graphics effects. Barton was on the SGI team that supported Time Warner Cable’s Full Service Network interactive TV experiment, which ran from November 1994 to May 1997. After they both left SGI, they met up and decided to create a company together.

Because of their backgrounds, they both knew that video digitalization was creating opportunities for startups. At the time, people were batting around the concept of a home gateway that would manage and direct data flowing into a house. Barton himself had pondered it in connection with the Full Service Network experiment. The upshot was that it was clear there were going to be multiple sources of digital content coming into the home, and it would probably be useful to have a single central device to manage all the traffic in and out, distributing signals to the appropriate places—video to the TV in the den, Internet to the computer in the home office. And someday all sorts of electronic devices might be connected—refrigerators and power meters and who knew what else. Barton and Ramsay decided to make a home gateway.

But as they talked to their contacts about the idea, the responses weren’t enthusiastic. In fact, 20 years later there still isn’t much of a market for that kind of a gateway. “Then someone told us—I don’t even remember who—but they said, ‘What I’d really like to be able to do is pause the TV and go to the bathroom.’ And I realized: I know how to do that!” Barton says in an interview.

There was already high-end digital video production equipment that stored data on hard drives. The idea was to base a consumer device on a hard drive—and also allocate buffer space on the drive. With a buffer, the device would be able to play and record simultaneously. You could do that with a video cassette recorder, too, but all sorts of so-called trick plays that were impossible with a single VCR were eminently possible with a single hard drive. Including pausing to let people go to the bathroom.
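
The buffer idea is easy to picture as a ring buffer in which a write position tracks the live broadcast while a read position lags behind it; “pause” simply stops the read position from advancing. The sketch below is a toy model of that behavior, not TiVo’s actual design.

```python
from collections import deque

class TimeShiftBuffer:
    """Toy ring buffer: the broadcast keeps being written at the head while the
    playback cursor lags behind, so pausing just stops advancing the cursor."""

    def __init__(self, capacity_frames):
        self.frames = deque(maxlen=capacity_frames)  # oldest frames fall off the back
        self.play_cursor = 0                         # index of the next frame to show

    def write(self, frame):
        """Called continuously as the live broadcast arrives."""
        if len(self.frames) == self.frames.maxlen and self.play_cursor > 0:
            self.play_cursor -= 1    # the buffer slid forward underneath the cursor
        self.frames.append(frame)

    def read(self):
        """Called only while the viewer is watching (not while paused)."""
        if self.play_cursor < len(self.frames):
            frame = self.frames[self.play_cursor]
            self.play_cursor += 1
            return frame
        return None                  # the viewer has caught up to the live broadcast

buf = TimeShiftBuffer(capacity_frames=300)
for i in range(10):
    buf.write(f"frame {i}")          # viewer is "paused" while frames keep arriving
print(buf.read())                    # resuming playback picks up where they left off
```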

Hard drives had a characteristic that was going to be a problem for TiVo, however. Every time data is stored, the drive needs to locate physical storage space for it. But as time goes on and more data gets recorded and erased and recorded over, free spaces get smaller and smaller. The drive ends up breaking new incoming streams into smaller and smaller segments and scattering those snippets more widely all over the disk’s surface. The bottom line is that data gets fragmented, and accessing it becomes more time consuming.

Data fragmentation wasn’t a major problem for computers running spreadsheets or word processing programs, but it was going to be one hell of a problem when dealing with video. Disk drives mount their read heads at the tip of a delicate arm that skitters above the surface of the storage disk seeking each bit of requested data, and the mechanism simply wasn’t fast enough to reconstitute fragmented video data in a steady-enough stream for the video to play smoothly. TiVo needed to build a system that recorded and played back data in a controlled manner, according to Ted Malone, currently TiVo’s vice president of consumer products and services. (Malone is in his second stint with the company; he was originally hired when the company was developing its first DVR.)

In an interview, Malone discloses that TiVo hired a team of engineers away from National Semiconductor to build an application-specific integrated circuit that would sit between the computing system and the hard drive to “basically unfragment in real time”—that is, manage write and read requests to minimize the scattering of data and, therefore, the seeking. This real-time defragmentation ensured that the DVR would continue to operate smoothly over the life of the system, Malone says. Eventually, drives would get so fast that fragmentation was no longer a problem, but that was years away.
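
One way to picture what such a controller does is an allocation policy that refuses to scatter a recording across small free holes and instead always hands out one large contiguous run of blocks, so later reads stay sequential. The sketch below is purely illustrative; it is not a description of TiVo’s ASIC.

```python
# Toy illustration of allocating storage in large contiguous extents so a
# video stream is not scattered across small free holes on the disk.

MIN_EXTENT_BLOCKS = 1024   # refuse to carve extents smaller than this

def allocate_extent(free_extents, blocks_needed):
    """free_extents is a list of (start_block, length) tuples; returns one
    contiguous (start_block, length) run for the recording stream."""
    start, length = max(free_extents, key=lambda e: e[1])   # pick the largest free region
    if length < max(blocks_needed, MIN_EXTENT_BLOCKS):
        raise RuntimeError("no contiguous region large enough; reclaim space first")
    free_extents.remove((start, length))
    remainder = length - blocks_needed
    if remainder:
        free_extents.append((start + blocks_needed, remainder))  # keep the leftover whole
    return (start, blocks_needed)

free = [(0, 512), (600, 64), (2048, 8192)]   # a fragmented free list, in blocks
print(allocate_extent(free, 2048))           # -> (2048, 2048): one sequential run
print(free)                                  # the small holes are left untouched
```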

The system would also need an encoder, an integrated circuit that would take incoming analog video and convert it to digital, for storage on the hard disk. To keep the cost down, it would have to be a single-chip encoder, Barton says, but no such thing existed. The only encoders available were multichip products for encoding into a compression format called MPEG-1 (MPEG stands for Moving Picture Experts Group).

“Just a few months after we set up the company, Sony announced an MPEG-2 encoder,” Barton recalls. It was a single chip at a reasonable price. At about the same time, IBM debuted its PowerPC processor, and TiVo decided to use that as the machine’s central processor. The company sourced its drives from Maxtor. “We got it all to a cost that was not where we’d like it to be, but it was close to a consumer price point,” Barton adds.

The other big technological decision was to use the Linux operating system rather than the most obvious choice, Windows from Microsoft. “That’s not talked about a lot—TiVo was the first consumer electronics product to run on Linux,” Barton declares.

It was a risky choice. Linux had its fans in the computer industry, but it was still new then and regarded with suspicion. Nevertheless, to TiVo it seemed like the safer choice. “We just didn’t want to have to wait on Microsoft to fix things,” Barton explains.

The upside of using Linux—an open-source OS—was that TiVo could modify it if necessary. The downside was that modifying it was definitely going to be necessary: at the time, Linux couldn’t do what TiVo wanted it to do—function as a real-time OS. “There was RT Linux [real-time Linux], but it was very thin, it didn’t have a lot of capabilities, and it was too complicated to work with,” Malone says. “TiVo made the decision to take regular Linux and ‘fix’ it so it was fast enough to do what we wanted it to do.”

Barton adds, “I remember early on, we were talking to some people at Sony, we were discussing us becoming an OEM [original equipment manufacturer] for them. We had a meeting in this big conference room. One whole wall was windows, and on the other wall there was a TV playing. They were pounding the table, yelling at us, ‘You can’t do this with Linux! It’s not a real-time operating system!’ I showed them the TV playing behind me and told them, ‘That’s Linux.’ It all proved out in the end, but yeah, it was a big deal back then.”

Malone explains that engineers elsewhere in the industry appreciated the improvements TiVo made in Linux. That may be so, but TiVo restricted access to its version of the code, and that rubbed some Linux adherents the wrong way. Some called the licensing approach that TiVo took “tivoization”; it was not meant as a compliment. Then Linus Torvalds, the godfather of Linux, allowed that there might be a legitimate rationale for tivoization, and the whole controversy blew over.

The TiVo was introduced in 1999. The first model had a maximum storage capacity of 14 hours, and it sold for US $499. That was followed by a dual-drive 30-hour model for $999. Each TiVo came with a distinctively peanut-shaped remote control that was a model for ease of use. Customers also had to pay a one-time fee of $199 for a lifetime subscription or subscribe to the TiVo service for $9.95 per month or $99 a year. That got subscribers access to a networked service that included a well-designed program guide and a “Wish List” that allowed viewers to set their TiVos to record things in advance, for example, every “Seinfeld” episode, or all televised St. Louis Cardinals games, or a Star Wars marathon.

TiVo had other advantages over competing video recording technologies, too. Let’s say a VCR owner programmed her machine to record a show. Let’s further say that she decided to watch the show while it was still being broadcast—but she missed the beginning. With a VCR, she had only two options. She could start viewing the live broadcast in progress or she could wait until the broadcast (and the simultaneous recording of that broadcast) was over and then watch the recorded version from the beginning. Okay, sure, middle-class problems. But bear in mind that many popular shows started at 9 or 10 p.m. or later. So if she wanted to watch the whole show after it was recorded, starting from the beginning, it would mean staying up into the wee hours.

With DVRs, viewers getting to their TVs however many minutes late could start watching from the beginning immediately. Viewers could also pause a live program, and then restart it right where they left off. They could perform instant replay—go back to an earlier point in a show and watch a scene over (and over) again, then resume watching the rest of the program live in progress. TiVo viewers also quickly figured out they could exploit those same capabilities to fast-forward through commercials in not-quite-live TV. That’s the feature that makes a video recorder of any type popular.

Once one of these features was employed, the viewer thereafter was really watching the recorded (or buffered) version. At that point the word “live” was obviously inaccurate, but for viewers, the experience was pretty much the same as watching live TV. In aggregate those features represented a huge leap in time-shifting capabilities.

The fact that the TiVo was networked allowed the company to log viewing behavior at a level of detail never before possible. When Janet Jackson experienced a “wardrobe malfunction” that bared most of her right breast for 0.5 seconds while singing with Justin Timberlake during Super Bowl XXXVIII’s halftime show, TiVo was able to state precisely how many times TiVo users had replayed the moment. (It was a big number indeed.)

The people who used TiVos tended to get evangelical about the device. TiVo’s problem, Barton and Malone agree, is that it was (and still is) incredibly difficult to get potential customers to use a TiVo long enough to appreciate that it’s more than just a VCR with additional storage.

That perception also made it hard to justify the subscription fee, which could seem duplicative. Well over 60 million Americans were already paying a monthly subscription for cable TV. Why pay for two TV subscriptions? Worse still, even if you were willing to pay for two TV subscriptions, TiVo didn’t work with cable. You couldn’t record cable-TV programs.

The company sold 48,000 TiVo DVRs in 1999. Thereafter, TiVo’s sales averaged a few hundred thousand per year at most. Subscribership from direct sales peaked at 1.7 million in 2008; it’s now at about 1 million. The TiVo box never quite took off, but the concept caught on in a very big way.

When TiVo first started, the two direct-broadcast-satellite (DBS) companies, Dish Network and DirecTV, were also new, and they needed something to differentiate themselves from their well-established cable rivals. DirecTV decided the DVR would be a good differentiator; it collaborated with TiVo on a combination satellite receiver and DVR that became popular with DirecTV subscribers. To compete with DirecTV, Dish Network was compelled to develop a similar receiver/DVR too, which it sourced from its parent company, EchoStar.

TiVo wanted to collaborate with cable companies in the same way it was working with DirecTV, but cable operators wanted nothing to do with TiVo. TiVo was shut out of the enormous cable market. The friction between TiVo and the cable industry got complicated and bitter.

With limited prospects for new markets, TiVo had few business options. Among its assets was a rich patent portfolio covering its numerous innovations; in 2004 TiVo sued EchoStar for patent infringement and won a $500 million judgment. The company then embarked on a campaign of suing others for patent infringement, going after DVR manufacturers and service providers alike. Cable companies had eventually responded to DBS competition by getting their set-top box vendors to integrate hard drives and DVR functionality into their set-top boxes too; Internet Protocol TV providers (these were mostly traditional phone companies) had done likewise. There were a lot of potential patent infringers for TiVo to go after. And it did.

By one estimate, TiVo made $1.6 billion enforcing its patents through 2013. Having established an apparently unassailable patent position, TiVo then began to license its technology. Today, there are roughly 60 cable and Internet Protocol TV service providers—mostly midsize companies—scattered all around the world that use set-top boxes integrated with TiVo-branded DVR functionality. Many of them bear the label “powered by TiVo.”

As of 2019, TiVo was still making and marketing its own DVRs, and they still weren’t selling all that well. It doesn’t matter. It was TiVo technology, not the TiVo box itself, that was a triumphal success. The real legacy of the TiVo box was the DVR technology that let people watch TV on their own terms.

The Consumer Electronics Hall of Fame: Panasonic PV-1563 VCR

Post Syndicated from Brian Santo original https://spectrum.ieee.org/consumer-electronics/gadgets/the-consumer-electronics-hall-of-fame-panasonic-pv1563-vcr

The word disruptive gets used a little too casually sometimes, but the video cassette recorder was massively so. The introduction of consumer VCRs in the 1970s freed TV viewers from the strict schedules that movie theaters and broadcast networks had no choice but to set. The concept of “time shifting” originated with the VCR, and in the movie and TV industries, time shifting changed everything.

The consumer VCR market lasted for roughly 25 years before the machines were superseded by digital formats, mainly DVD players and digital video recorders (DVRs). During that quarter-century span, scores of manufacturers cranked out hundreds of models with cumulative unit sales in the hundreds of millions. The market was too fractured and evolved too quickly for any single model to have dominated. Nevertheless, the Panasonic PV-1563 was emblematic, one of the better models from among the best-selling VCRs in 1986, a year in which technological innovations significantly improved VCR recording and playback performance.

When the consumer VCR market began in earnest in the 1970s, viewers could record only the show playing on their screens. So at the beginning, time shifting wasn’t yet the dominant rationale for having a VCR. In those days, viewers were apt to use a VCR to record a show while they were watching it so they could watch it again later. It was an expensive proposition. In 1975 the average VCR cost between US $1,000 and $1,400—at least $4,800 in 2019 dollars. It was far too costly for all but the most affluent and enthusiastic TV viewers to buy, mostly to rewatch shows.

The market was almost immediately bogged down by one of the most notorious format wars in consumer electronics history: Betamax (“Beta”) vs. Video Home System (VHS). Sony created the Beta format and introduced it in 1975; VHS was devised by JVC and introduced in 1976. Many consumers hesitated, wondering which one they should buy, knowing that if they chose wrong they’d soon have a very expensive piece of electronic junk lying around. The two formats ran neck and neck for a few years, but by 1981 VHS machines were outselling Beta and by a big enough margin to be declared the winner.

Yes, Beta was technically superior. It still lost. Get over it.

Oddly, Sony kept manufacturing Betamax tapes until 2016, for a market that by then must have been difficult to detect without a microscope. By 2016, not only had VHS long since vanquished Beta, it had itself been vanquished by DVD, which itself had been eclipsed by home streaming.

Getting back to VCRs: In 1980, only about one in 100 U.S. households had one. But that was the first year of a decade-long run of accelerating sales growth. In 1988, for example, VCR penetration soared to nearly 60 percent. Every year, manufacturers kept driving down the price of VCRs, while adding features that made time shifting easier. People loved time shifting for two reasons: You could record a show and watch it whenever you wanted, of course. Perhaps just as alluringly, with fast-forwarding, you could skip commercials.

Not incidentally, VCRs also sparked a boom in amateur filming. Sony and JVC both introduced VCR-friendly camcorders in 1983, ushering in a new era in home moviemaking. Recording and playback were much easier with videocassettes than with film. With Beta or VHS tapes, there was no waiting for film processing. You just ejected the cassette from your camcorder and popped it into your VCR and it was ready for viewing.

As much as consumers loved VCRs, film and TV producers hated them. Viewers were gleefully skipping commercials, which deeply dismayed advertisers. The motion-picture studios had also become convinced that home taping was eating into movie theater profits and was enabling widespread film piracy—essentially, the theft of their content. In 1982, Jack Valenti, president of the Motion Picture Association of America (MPAA), testified before the U.S. Congress in a speech that reached dizzying heights of hyperbolic hyperventilation. Speaking before the House Subcommittee on Courts, Civil Liberties, and the Administration of Justice, Valenti thundered: “I say to you that the VCR is to the American film producer and the American public as the Boston strangler is to the woman home alone.”

The simmering feud boiled over in court after Universal City Studios sued Sony, claiming that home recording was copyright infringement. A Universal win would certainly have hobbled the VCR industry and might even have killed it. The case went all the way to the U.S. Supreme Court, which heard arguments in 1983 and handed down its verdict in early 1984: Home recording was “fair use” and therefore permissible. Universal—and the entire film industry—lost.

The ruling both validated the machine itself and solidified the legality of the sale and rental of prerecorded content, which the motion-picture industry eventually embraced as a very lucrative additional business. Renting VHS tapes of major motion pictures soon became a sizable industry. In the United States, Blockbuster LLC was founded in 1985 and quickly dominated the market. Kim’s Video and Music became a mecca for Manhattan cinephiles, a crowd that loved browsing through motion-picture history with films filed alphabetically by director. From 1984 to 1985—in the middle of that 10-year stretch of booming sales—U.S. sales of VCR players increased 50 percent to about 11.5 million units.

In 1985, the price for the typical VCR also dropped roughly 15 percent, to the $200–$400 range (there were a few models that were cheaper, and many that were more expensive). Buyers had their choice of multiple models each from roughly 70 brands, among them Akai, Fisher, GE, Hitachi, JVC, Magnavox (Philips), Marantz, Mitsubishi, NEC, Panasonic, Philco (Philips), Quasar, RCA, Sanyo, Sharp, Sylvania (Philips again), Toshiba, and Zenith. In reality, many of them, including ones sold at various times under the brand names GE, Magnavox, NEC, Philips, RCA, Sanyo, and Toshiba, were manufactured by Matsushita Electric Industrial Co. in Japan. Matsushita was Panasonic’s parent firm; it produced all of the VCRs (and many other consumer electronics goods) that carried the brand names Panasonic and, after 1974, Quasar.

Also in 1985, VCR remote controls became available, and manufacturers introduced freeze-frame and search features. In 1986, a watershed year, 12 million players were sold in the United States. By then, RCA and Panasonic were the two leading VCR brands, with 14 percent and 12 percent of the market, respectively. No one else was even close to being in the double digits.

That was the year that most manufacturers began releasing models that incorporated a new technology standard called HQ, which stood for “high quality.” The addition of HQ justified the first-ever price increases in VCRs.

HQ was an umbrella term for several different technologies. A review of the new HQ VCRs in a contemporary issue of Popular Science described HQ as “a series of four circuits: extended white clip, to increase picture sharpness; luminance noise reduction, to lower overall picture snow; chrominance noise reduction, to remove snow from color; and detail enhancement, to increase picture detail.” White clip and detail enhancement were used only during recording.

HQ video was visibly superior, the review said, while acknowledging it was impossible to determine which of the four constituent technologies contributed the most to the improvement.

Every leading manufacturer in 1986 produced multiple models adopting the new HQ features in various combinations. Some models incorporated all four HQ features. RCA and a few other makers were skeptical that chrominance noise reduction had much benefit and omitted it.

Panasonic’s PV-1563 was one of at least 11 models released by Panasonic in 1986. All of them, including the PV-1563, incorporated only two of the four HQ features: white clip and detail enhancement. In addition, the 1563 had other high-end touches, including wireless remote control, on-screen display and programming, and auto rewind. Like other premium VCRs of the time, it had four heads. Standard machines had two heads. Each of the heads was angled to sense or record (depending on whether the machine was in play mode or record mode) the magnetic information in adjacent tracks, or “fields,” on the magnetic tape as it passed by the head. VHS machines also had multiple recording speeds, known as SP, LP, and EP. Four-head machines added a second pair of heads that were wider, specifically to accommodate SP, the fastest speed. The benefit was that with four-head models, the slow-motion and freeze-frame features were much clearer and sharper. For the first time, slo-mo and frozen images were mostly free from video artifacts, notably horizontal white bands of interference.

Another major advance that debuted on VCRs in 1986, including the PV-1563, was Multichannel Television Sound, or MTS. Up until the mid-1980s, television was basically a monaural technology. MTS allowed it to go stereo. It did so by adding a second channel to the monaural sound channel, which was then standard. For purposes of moving to stereo, the monaural channel continued to be the sum of the audio from the original left and right signals. The added channel carried the left signal minus the right signal. When the signal from the second channel was added to the monaural signal (left plus right), you got the left stereo channel. When it was subtracted, you got the right stereo channel. The system, developed by Zenith and standardized by the U.S. Electronic Industries Association, was incorporated into the NTSC TV standard starting around 1984.
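The arithmetic behind that sum-and-difference matrixing is simple; in idealized form (with the factor of one-half that the description above glosses over made explicit):

\[
\tfrac{1}{2}\bigl[(L+R) + (L-R)\bigr] = L,
\qquad
\tfrac{1}{2}\bigl[(L+R) - (L-R)\bigr] = R .
\]

Because the (L + R) channel is exactly the old mono signal, existing monaural sets kept working unchanged, which is what made the scheme backward compatible.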

It is a measure of the technological ferment in the VCR industry in 1986 that the VCR nostalgia website Vintage Electronics characterizes the 1563’s combination of features and performance as merely “slightly ahead of the times in that era.” Of Panasonic’s 11 new models that year, the 1563, which listed at $775, was situated right in the middle of the range in terms of price.

By 1989, 65 percent of U.S. households had a VCR, according to the U.S. Bureau of the Census. That was the year the VCR market experienced its first-ever decline in sales, though. At the time, the suspected causes included market saturation and the growing popularity of cable TV. There were storage-media formats that competed with VCRs, most notably the LaserDisc, but they were all failures until DVDs came along. Introduced in Japan in 1996, the DVD format gradually gained in popularity worldwide, and DVD players began outselling VCRs around 2001.

The last VCR manufacturer threw in the towel on VCRs in 2016, but the VCR’s influence persists to this day. The digital video recorder, or DVR, performs the same basic functions as the VCR, plus a few more. Meanwhile, the film and video industries have been and are still continuously reorganizing themselves around the principle of time shifting, most recently through the medium of broadband streaming. The VCR is gone, but its legacy of liberation will endure as long as people stare at screens, be they small, medium, or large.

The Consumer Electronics Hall of Fame: Fitbit

Post Syndicated from Brian Santo original https://spectrum.ieee.org/consumer-electronics/gadgets/the-consumer-electronics-hall-of-fame-fitbit

As early as the 1940s, people became enchanted with the idea of electronic devices so small you could wear them like a wristwatch. During the heyday of vacuum electronics, the Dick Tracy comic strip in the United States envisioned wrist-worn radios and televisions, examples of what we’d call “wearables” today. Then, in the 1970s, electronic wristwatches took the market by storm, around the same time that handheld calculators were also becoming popular. Combining the two seemed like a good idea, at least to some people. So manufacturers tried it, with some limited commercial success starting in the late 1970s.

As time and technology progressed, the engineering work naturally moved on from calculators to computers you could wear on your wrist. But for decades the computer watch remained stubbornly unpopular. Among the companies that tried and failed early on were Casio, Epson, Sony, IBM, and Microsoft.

The first computer-powered wearable to achieve huge, breakout success proved most futurists almost completely wrong regarding form, features, and function. The Fitbit, when it was introduced in 2009, had a rudimentary LED screen capable of displaying only numbers. Nor was it worn on the wrist; it clipped to a pocket or waistband. And it tracked fitness metrics—steps taken, distance traveled, and (estimated) calories burned.

It was a modest start for a company that would come to dominate the nascent field and, in November 2019, be bought by Alphabet’s Google in a blockbuster deal. The US $2.1 billion acquisition was seen by analysts as an attempt by Google to shore up its position in the burgeoning wearables market, where rival Apple has enjoyed substantial success with its watch and associated fitness-tracking apps for the iPhone.

Early on, there was hardly a hint that such a lofty perch would be attained. The Fitbit grew out of a quest to make a wearable that was something other than a computer watch, and also out of the zeitgeist. Fitness has of course been in the public consciousness for a very long time, going through erratic cycles, like the hemlines on skirts. In the 1960s a walking craze took place in Japan; it might be the earliest instance of the 10,000-step ethos. In the 1970s, jogging (now known as “running”) became popular in America and then spread to other nations; around the same time, mall walking became a thing among older Americans. These activities caused sales of pedometers to surge, and other forms of fitness tracking benefited as well. Dieters had long tracked estimates of calories ingested and burned through programs such as Weight Watchers (founded in 1963). And the President’s Council on Sports, Fitness and Nutrition, which dates back to 1956, always encouraged people to keep logs of their fitness activities.

Counting steps, tracking calories, and logging miles run, cycled, or swum dovetailed precisely with another long-term societal trend: measuring and quantifying everything that can be measured and quantified. That’s the line running through such disparate modern activities as setting recommended daily allowances for vitamins and minerals, predicting the number of transistors on a die, and closely tracking the performance of professional athletes.

Measuring everything has led to analysis. For many businesses, collecting and analyzing data long ago became an imperative. For some individuals, meanwhile, doing so has become an obsession.

By the early 2000s, the basic technological elements for a fitness tracker were readily available. Accelerometers, which are used to track motion, had become accurate, easy to fabricate, and cheap. There were multiple options for transferring data from the tracker, including physical ones, like USB ports and cables, as well as wireless options like Bluetooth and Wi-Fi.
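Those elements are enough to sketch the core of a tracker. What follows is a minimal, purely illustrative Python sketch of step counting from raw accelerometer samples; the threshold, the minimum gap between steps, and the sample values are all hypothetical, and a real tracker relies on far more sophisticated filtering and calibration.

```python
# Minimal sketch of accelerometer-based step counting (illustrative only;
# the threshold, gap, and sample data are hypothetical, not any vendor's algorithm).
import math

def count_steps(samples, threshold=11.0, min_gap=5):
    """Count peaks in acceleration magnitude that exceed a threshold.

    samples   - list of (x, y, z) accelerometer readings, in m/s^2
    threshold - magnitude above which a peak is treated as a step
    min_gap   - minimum number of samples between successive steps
    """
    steps = 0
    last_step = -min_gap
    for i, (x, y, z) in enumerate(samples):
        magnitude = math.sqrt(x * x + y * y + z * z)
        if magnitude > threshold and i - last_step >= min_gap:
            steps += 1
            last_step = i
    return steps

if __name__ == "__main__":
    # Two simulated strides: gravity (about 9.8 m/s^2) plus a brief vertical jolt.
    idle = [(0.0, 0.0, 9.8)] * 10
    jolt = [(0.0, 0.0, 13.0)]
    print(count_steps(idle + jolt + idle + jolt + idle))  # prints 2
```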

Fitbit was founded in 2007 as Healthy Metrics Research by James Park, now the CEO, and Eric Friedman, chief technology officer. Park and Friedman have been open about how difficult the development of the Fitbit was. They’ve acknowledged starting out with so little experience in manufacturing that one of their early demos consisted of a circuit board inside a balsa wood box. Going into a subsequent demo, they discovered that a problem with the antenna had rendered the device inoperable; Park fixed it by attaching a piece of foam to the circuit board. Park is on record citing seven distinct occasions when a combination of unanticipated manufacturing difficulties and capital shortfalls nearly killed the startup.

Fitbit’s exceedingly fitful progress allowed competitors to get to market first. Apple, collaborating with Nike, got there in 2006. Their Nike+ included a chip that fit into a receptacle that Nike built into a special line of sneakers. The chip tracked pace, distance traveled, and other metrics, and connected wirelessly with a receiver module that users plugged into an iPod. Other competitors at the time included Philips, with its DirectLife Activity Monitor, which showed your daily progress toward your preset goals as a series of glowing LED dots. Another one was BodyMedia, a Massachusetts startup that produced a series of armbands, at least one of which was marketed as the Bodybugg by a company called Apex Fitness.

Fitbit, meanwhile, persevered and got its first device to market toward the end of 2009. That first Fitbit looked like a futuristic black clothespin. The Fitbit linked wirelessly with a PC to transfer data. It measured steps and distance traveled, estimated calories burned, and reported hours active versus hours sedentary. In marketing materials, the company also said customers could wear it at night so they could gauge the extent to which they tossed and turned—presumably an indication of the quality of their sleep. It was priced at $99; the company sold about 5,000 units by the end of the year and reportedly had an additional 20,000 orders logged.

The original Fitbit was an ugly little thing, but its biggest advantage may have been what it lacked: a fatal flaw. All of Fitbit’s rivals had at least one characteristic that proved to be a drawback. The Philips and Bodybugg products were too bulky, whereas the Fitbit was sleek and light. The Nike+ kit was complicated, and it also required an iTunes subscription, whereas Fitbit was simple to use and provided free access to its Web portal and analytical software.

That website was key. The developers made an effort to make it easy not just to log fitness data but also to keep track of food intake. For example, they gathered nutritional data from many of the biggest chain restaurants in the United States. Fitbit users could simply identify what they’d eaten and the system would automatically fill in values for calorie intake. The process was familiar to anyone who had experience with Weight Watchers, except it was automated.

Partly because of these advantages, Fitbit was already emerging as a market favorite when it got a big boost from the U.S. electronics chain Best Buy, which began stocking its products in some of its stores in 2010. More big retailers around the world followed. The company built momentum. Subsequent competitors, including Garmin, Jawbone, Misfit, and Xiaomi, got into the market too late to catch up.

One of Fitbit’s next models was indeed worn on the wrist. The company also kept gradually adding features. In 2011 it produced a model called the Ultra that also incorporated an altimeter, a digital clock, and a stopwatch.

The computer watch might have taken off on its own, but in retrospect it seems pretty obvious that fitness trackers helped pave the way. It wasn’t until a few years after fitness trackers became common that computer watches—we now call them smart watches—finally began succeeding in the marketplace, in 2012. Early on, makers of smart watches tried to integrate fitness functions, but for whatever reason, consumers wouldn’t be charmed by a combo device for at least a few more years.

In 2014, Fitbit had 77 percent of the market for “full-body activity trackers,” according to the market research company NPD Group, as well as the No. 1 fitness app in Apple’s App Store.

Around 2015, fitness trackers began being subsumed into smart watches. For reasons nobody has ever quite fathomed, people finally became receptive to the concept. Smart watches with fitness-tracking capability naturally began eating into the market for stand-alone trackers. Around the same time, it became apparent that there were simply too many companies making wearables, and some of them began to consolidate. Jawbone bought BodyMedia, maker of the Bodybugg wearable, in 2013. Fossil bought Misfit in 2015. Fitbit bought Pebble in 2016 and Vector Watch in 2017. That same year, Fitbit introduced its first Fitbit-branded smart watch.

Even though fitness tracking is being subsumed into smart watches, the market for stand-alone fitness trackers was nonetheless projected to keep growing in 2019, to $3.3 billion in global sales, according to the market research firm Statista. Fitbit was still the clear leader in the market as recently as 2016 (the last year for which Statista publicly released numbers); back then it had a 47 percent share, followed by Xiaomi, with 27 percent. No one else had anywhere near double digits.

Looking at the global market for all types of wearables (smart watches, fitness trackers, and other devices), as of 2018 Fitbit ranked behind only Xiaomi and Apple, the dominant smart-watch vendors. Late in 2019, with the power of Google behind it, Fitbit seemed poised for a run at the top.

The Consumer Electronics Hall of Fame: Casio F-91W Wristwatch

Post Syndicated from Brian Santo original https://spectrum.ieee.org/consumer-electronics/gadgets/the-consumer-electronics-hall-of-fame-casio-f91w-wristwatch

The Casio F-91W digital wristwatch is a triumph of adequacy. It’s plain, utilitarian, and utterly dependable. When it was introduced in 1989 it sold for about US $20. Today—yes, it is still being manufactured 30 years after its introduction—you can get one for a little more than $10.

The best mechanical watches, some of which cost a lot more than $100,000, are accurate to within about 2 seconds a day. The average watch with a quartz movement, such as the F-91W, is accurate to within 1 second a day, which is the accuracy that Casio claims for the F-91W. It’s more accurate than anyone looking to spend a sawbuck on a watch should reasonably expect.

When it was introduced, it had a basic set of features, which included a daily alarm, an auto calendar, day/date display, and a stopwatch that does splits and marks time in hundredths of a second. It has three buttons, and if you’re reasonably bright you probably won’t need the manual to figure out what they do. But don’t ignore the manual [PDF], because it has one of the more entertainingly assertive notes concerning operator error in the consumer electronics industry:

“THERE IS NO WAY unit components can be damaged or malfunction, due to misoperation of buttons. If confusing information appears on the display it means entry sequence was incorrect. Please read the manual and try again.”

Well, okay then. Also, with newer versions, if you hold down the button on the right-hand side of the watch for several seconds, the display spells out “Casio,” a feature that supposedly indicates that the watch is a legitimate Casio product, instead of a counterfeit. (Strange but true: Even a $10 watch is vulnerable to counterfeit.) The F-91W’s LCD screen is clearly readable in the daytime, and at night there’s a greenish backlight that’s entirely adequate.

From the very first model three decades ago, none of that has changed. Nor has Casio ever changed the design. The common version of the watch has always had a black plastic case and a stainless-steel back. There have been versions with plastic cases produced in a color other than black, and also models with all-metal cases. But that’s it as far as fancy embellishment goes.

At precisely one-third of an inch thick, the F-91W is relatively thin and flat, compared to the hunky wristwatches popular today. It’s neither ugly enough to be shunned nor attractive enough to qualify as stylish. These days the watch gets lauded for its retro style, but that’s hard to credit. Can any item’s style be called “retro” when it simply hasn’t changed in three decades? Casio describes the design as “tried and true.”

It is water resistant. The battery lasts about seven years. Customers commend its resin strap for its longevity. The watch earns admiration for being durable and reliable. Granted, when applied to a wristwatch, “durable” and “reliable” are considerably fainter praise than “sexy” and “glamorous.” But did we mention that it costs $10? Furthermore, there are photos of a pre-presidential Barack Obama wearing one.

The watch does have something of a checkered past. In the mid-1990s, the F-91W became briefly notorious as “the terrorist watch.” Extremists in Pakistan and the Philippines learned how to use the F-91W as a timer for improvised explosive devices. The watch endured the bad publicity. More than 20 years later, it’s a bit of trivia that has faded from the consciousness of the watch-buying public.

Casio has never provided sales figures for its watches, nor has it revealed its manufacturing output. That said, a commonly cited estimate holds that the company has been producing roughly 3 million F-91Ws a year. Production levels have certainly varied over the years, but that 3 million figure, multiplied by 30—the number of years the watch has been available—gives a ballpark estimate: 90 million F-91Ws shipped. Even if that’s too high by millions, Casio has sold an astounding number of F-91Ws in a market with hundreds, maybe thousands of manufacturers and countless different models all told.

Today, the Casio F-91W is typically the first- or second-best-selling watch on Amazon (the “Casual Sport” version of the F-91W also ranks in the top 50). And it is still consistently included in reviewers’ lists of the best digital watches, invariably in the category of “best value.”

The Consumer Electronics Hall of Fame: Apple iPhone

Post Syndicated from Brian Santo original https://spectrum.ieee.org/consumer-electronics/gadgets/the-consumer-electronics-hall-of-fame-apple-iphone

In 2007 Apple introduced its first cellular handset with the advertising slogan “Apple reinvents the phone.” Unlike most such slogans, this one was not a ridiculous exaggeration. It was a statement of fact. Though the iPhone incorporated a lot of features that had already been pioneered by other phone manufacturers, it was still a paragon of functionality, coupled with a user interface that was innovative for its simplicity and elegance. Crucially, the phone fit comfortably into Apple’s nearly seamless environment of products and complementary services, notably iTunes and the App Store.

From Apple’s inception, cofounder Steve Jobs had been intent on changing the way people used electronics. The first Macintosh computers famously incorporated the most cutting-edge user interface technology available. Jobs was ousted from Apple in 1985, but after a 12-year exile he returned and began to apply the same philosophy of user-interface simplicity to a series of consumer electronics items that would include the iPod, iPhone, and iPad. That imperative basically took the form of minimizing the number of hardware buttons and also relying on the user’s index finger as a pointing and input tool.

As for buttons, Jobs wanted just one of them—if Apple engineers could figure out how to pull it off. There had to be at least one, if for no other reason than to turn the darn thing on and off.

The iPhone wasn’t developed from scratch but rather began with the iPod MP3 music player, the first generation of which was introduced in 2001. Apple engineers had devised a click wheel for the iPod, a big step toward a one-button user interface. The click wheel wasn’t a single button, but it sort of resembled one, at least from a distance. It had a central button within a ring that had four more buttons embedded within it, one at each of the cardinal points.

As for finger input—that was a tougher nut to crack. Touch-sensitive screens had been around for decades, but for a long time they were seldom used because they were sluggish and imprecise. To make them more responsive, the display industry developed screens in the 1990s that would work with specialized electronic pens. One of the pioneering instances of stylus input was Apple’s own Newton personal digital assistant, introduced in 1993 and discontinued in 1998. By the early 2000s, several companies were making headway with triple-layer capacitive multitouch touch-screen technology, which held out the promise, at least, of precise input. But at the end of 2004, when work on the iPhone began, that technology was just barely on the verge of being ready for commercialization.

If you had responsive finger-based input, you wouldn’t need multiple physical buttons. Put a virtual keyboard on the screen and there would be no reason to have a physical keyboard, which many smartphones had in those days. Such an arrangement would also mean that the screen could occupy almost all of the face of the phone. For Apple, specifically, use of a good, responsive touch screen meant the click wheel could finally be reduced to Jobs’s vision of a single-button device. He decided the first iPhone would have such a screen, as would the next version of the iPod. To make that happen, in 2005, Apple bought FingerWorks, one of the companies working on triple-layer capacitive touch screens.

The original device, introduced in 2007, was built around a 620-megahertz Arm microprocessor. Some of the supporting chips came from Skyworks Solutions and Marvell Technology Group. The touch screen measured 3.5 inches on the diagonal, and it had a resolution of 480 x 320 pixels. Connectivity options included GSM/EDGE (basically 2G wireless), Wi-Fi 802.11b/g, and Bluetooth. There was a music player and a 2-megapixel camera, which was pretty good in those days. Not long after it was introduced, the iPhone also began incorporating some other features that had already been introduced by other phone manufacturers, including support for games, apps, and the Multimedia Messaging Service (a form of text messaging).

Apple also included a headphone jack in the original iPhone that was so deeply recessed into the upper edge of the phone that it forced customers to purchase new headphones or an adapter for their old ones. It was a harbinger. With almost every new generation of iPhone, Apple has made one thing or another obsolete, forcing its famously loyal customers to buy some additional new thing—headphones, an adapter, a power cord. Although no one could have guessed it at the time, the original iPhone with its clunky headphone jack set the tone for Apple’s future reputation in smartphones: at once boldly trailblazing and bafflingly irritating.

Apple was not the first smartphone maker to introduce a touch screen. That distinction went to LG, which publicly announced it would have a phone with a capacitive touch screen early in 2007. The phone, called the Prada, was created with the Italian luxury fashion house of the same name. Jobs announced the iPhone shortly after; he likened it to an iPod that was also a telephone with Internet connectivity. LG got the Prada to market in May 2007, edging out Apple by just a few weeks. The iPhone version with 4 gigabytes of storage sold for US $499; the 8-GB version was $599. (The iPod Touch music player, also sporting a touch screen and a nearly identical appearance to the iPhone but with no telephone, followed shortly after in September.)

Reviewers and customers agreed: Apple had indeed reinvented the phone. It was simple, it was attractive, it was elegant. It was the little black dress of consumer electronics. Music was easy to enjoy and purchase through the iTunes app, which had already been a hit with the iPod. There were many, many more apps to come. Smartphones, essentially pocket-size computers, were becoming favored platforms for applications developers, especially game designers. Apple made it easy to develop, sell, and download apps with its App Store. The iPhone wasn’t just a product; it was an experience.

Still, the original iPhone sold a total of only 6.1 million units. One problem was that it worked only with 2G networks, and in 2007 2G was already giving way to 3G. The original model was replaced just a year later, in 2008, with a 3G version. From that point on, Apple retroactively referred to the original iPhone as the iPhone 2G.

Apple has never been the sales leader in smartphones. In recent years that’s been Samsung, which sells something over 300 million smartphones a year. Apple’s emphasis has always been on prestige products. Its legendarily loyal devotees pay more for them because of Apple’s reputation for exceptional quality and superior product design. And it’s not that sales were unimpressive—quite the contrary. The iPhone 6 and 6+ together sold roughly 220 million units, behind only the Nokia 1100 (250 million units) as the best-selling cellphone model of all time. By early 2019, Apple had achieved cumulative iPhone sales in the neighborhood of 1.6 billion (an exact figure is unavailable because Apple stopped announcing iPhone sales figures in 2018).

Jony Ive became one of the most prominent industrial designers in the world for his work on the iPhone, iMac, iPod, iPad, and MacBook; he was even knighted in his native England. He made worldwide headlines in the spring of 2019 when he announced he was leaving Apple.

The Consumer Electronics Hall of Fame: Yamaha HP-1 Headphones

Post Syndicated from Brian Santo original https://spectrum.ieee.org/consumer-electronics/gadgets/the-consumer-electronics-hall-of-fame-yamaha-hp1-headphones

The 1970s were heady years for fans of recorded music. Microphones, tape recorders, record players, radios, speakers, amplifiers…nearly every category of gear used to record and reproduce electronic sound was improving by leaps and bounds. Meanwhile, recording engineers were continuously outdoing each other with creative new studio techniques. It was amid this splendid ferment that engineers at Yamaha began designing a set of headphones equal to the artistry of the most ambitious musicians and the studio wizards those musicians were working with.

By the early 1970s stereo recording had progressed far beyond such simple techniques as assigning bass and rhythm guitar to the right channel and lead guitar and drums to the left channel. The apotheosis of multichannel recording was Pink Floyd’s Dark Side of the Moon, which began blowing minds in 1973. To listen to Dark Side—no, to, like, really hear it, man—you had to listen with headphones. There had been almost nothing like it before. It sounded as though the bass lines were physically flying loops around you. Keyboard runs emerged from one direction, moved through you, and kept going in the other direction. Vocalists appeared in front of you and then receded into distant guitar chords.

Thanks in part to Dark Side, in the 1970s having a great pair of headphones became, for the first time, a necessity not just for audiophiles but for any ardent music fan. So a great pair of “cans” at a reasonable price was practically guaranteed to sell.

Around 1974, Yamaha made two interesting decisions about its next set of headphones, the set that would get the designation HP-1. One was to pursue an unconventional technology to achieve a balance of high performance and affordable cost. The other was to hire a celebrity designer to style the product. The first was almost commonplace. The second was far less so. Doing both at the same time was, and is, rare.

Most headphones and full-size speakers, then as now, were based on dynamic drivers, which go back to the 1920s. They use an electromagnet coil to vibrate a cone-shaped diaphragm that produces sound waves. High-end speaker manufacturers at the time were experimenting with alternatives, and there was growing enthusiasm for planar, or full-range, speakers. Headphone makers, meanwhile, were busy working on a third type—electrostatic, or condenser, speakers. Electrostatic headphones can provide great sound, but they’re relatively expensive.

Yamaha’s plan was to find an approach to building headphones that would combine the best attributes of the various speaker types. The company’s engineers decided to create a fourth type, which they called full-drive magnetic headphones. The approach they envisioned would be “very close to condenser models, an ideal method for combining the characteristics of the condenser and the ease of use of the dynamic model,” according to an online Yamaha technology retrospective.

The idea was not new. Prior to the HP-1, a few products had hit the market trying to make use of essentially the same concept. No one had been successful, however, according to that retrospective on the HP-1. The reason, the company said, was simple: “Basically, such products were difficult to manufacture, and not much benefit was found to offset the difficulty.” Yamaha engineers figured out how to get a lot more of the benefit.

Yamaha’s headphones used a polyester diaphragm just 12 micrometers thick, exactly the same as the thickness of the tape in a C90 cassette, according to the Yamaha retrospective. A similarly thin diaphragm is also used in electrostatic (condenser) speakers. However, the Yamaha approach embedded a conductive trace on top of this thin diaphragm, similar in function to the voice coil in a dynamic speaker. The diaphragm was placed within an array of permanent magnets that created a very even magnetic field, called an isodynamic magnetic field.

In operation, the music signal flowed through the conductive trace on the diaphragm, creating a magnetic field that interacted with the isodynamic field, causing the diaphragm to move and produce sound waves. Yamaha’s big breakthrough was figuring out a way to economically print the conductive coil directly on the diaphragm. Yamaha’s references to how this was accomplished are inconsistent; they variously describe the method as printing and as “photo-etching.” Either way, the coil was a “250μ interval spiral” with five concentric rings, according to the retrospective.
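The underlying physics is the ordinary force on a current-carrying conductor in a magnetic field. In idealized form, ignoring the spiral geometry of the real trace, the driving force is

\[
F(t) = B \, i(t) \, \ell ,
\]

where B is the flux density of the isodynamic field, i(t) is the audio signal current flowing through the printed trace, and ℓ is the effective length of trace sitting in that field. Because B is nearly uniform across the diaphragm, the force, and hence the motion of the diaphragm, tracks the signal current linearly.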

The company engaged industrial designer Mario Bellini to design the product. Bellini had already designed typewriters for Olivetti and audio systems for several companies, including Brionvega, in Italy. Bellini, who was still active in 2019 at age 84, would subsequently design a camera for Fuji; vehicle interiors for Fiat and Lancia; notable buildings in Paris, Bologna, Tokyo, North Carolina, and Berlin; an enormous residential complex in the Virgin Islands; and so many other celebrated items that he is acknowledged today as one of the most accomplished industrial designers of his era. 

Judging from the retrospective, Yamaha was still clearly enthralled with the Bellini design decades after its release:

Even more than simply the beauty of its appearance, the contact of the ear pads with the ear, twin headbands sharing in supporting the entire headset on the head, and its light, airy fit thanks to a universal joint connection that allowed the ear pad section and headband to move freely, must have strongly impressed the meaning of the concept of industrial design upon the audiophiles of the time….

“From the catalog of the time, you can see the development team’s enthusiasm to try to create something from zero that had never previously been seen anywhere in the world,” the retrospective’s HP-1 history concludes. 

The enthusiasm was merited. In devising a method to manufacture the diaphragms economically, Yamaha had finally succeeded where other audio companies had failed. Yamaha called its headphone technology “orthodynamic.” Today, the technology is typically referred to as planar magnetic or, less commonly, isodynamic.

The HP-1 headphones were priced at about US $200 upon release in 1976, or $900 in 2019 dollars. Pricey, but not outrageously so. They were immediately praised for both their sound quality and their design. HP-1s are still well regarded by those who can get their hands on working models today.

A Yamaha spokesman declined to release sales figures. The company is still producing headphones, mostly of the earbud variety, but there are still a few over-the-ear models in its lineup.

Dark Side of the Moon remained ranked in the Billboard Top 200 for 741 consecutive weeks (over 14 years), in part because of its reputation for providing a stupendous headphone experience. With subsequent sales spikes, its cumulative residence on the charts is well in excess of 900 weeks. No other album comes close to that record, and you better believe that pun was intended.

The Consumer Electronics Hall of Fame: Texas Instruments’ Speak & Spell

Post Syndicated from Brian Santo original https://spectrum.ieee.org/consumer-electronics/gadgets/the-consumer-electronics-hall-of-fame-texas-instruments-speak-spell

Before the Speak & Spell was introduced, hardly anybody other than the four engineers who developed it thought it should be introduced. Texas Instruments management saw little value in the project. When the four described it to one of the nation’s top experts in spelling education, he essentially advised them to kill it. When they test-marketed it, parents were dismissive.

But Paul Breedlove, Gene Frantz, Larry Brantingham, and Richard Wiggins were having fun building it, and didn’t want to stop. They somehow forgot to mention to TI management that they had encountered surprising antipathy to the idea, even where one might have expected warm endorsement. Fortunately for them, though, it just so happened the Speak & Spell was the only consumer-targeted project at TI at that particular moment that was within its budget (two of the others were a CB radio and a home computer).

Well, almost within budget. In an interview, Frantz recalls that it’s just possible the team may have neglected to properly expense a few things to keep the official cost tally low. But these things happen, don’t they?

The basic idea for an educational spelling toy was Breedlove’s. Breedlove just happened to have been in several right places, all at the right times.

Breedlove and Frantz had participated in the development of Little Professor, a calculator-with-a-twist that TI introduced in 1976. Educators at the time considered the use of calculators in the classroom to be cheating. TI, then and now one of the world’s biggest makers of calculators, tried to get over that objection and crack the education market by creating a calculator that presented equations lacking their solutions; students had three chances to type in the correct answer.
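In outline, the interaction was a simple drill loop. Here is a minimal, purely illustrative sketch of that flow; the prompts, the scoring, and the number of rounds are hypothetical and are not taken from the actual firmware of the toy.

```python
# A minimal, illustrative sketch of a Little Professor-style drill:
# show a problem without its solution and allow three attempts at the answer.
import random

def drill(rounds=5, tries=3):
    score = 0
    for _ in range(rounds):
        a, b = random.randint(0, 9), random.randint(0, 9)
        for _ in range(tries):
            answer = input(f"{a} + {b} = ")      # the problem, minus its solution
            if answer.strip() == str(a + b):
                print("Correct!")
                score += 1
                break
            print("Try again.")
        else:
            print("The answer was", a + b)       # reveal it after three misses
    print(f"Score: {score} out of {rounds}")

if __name__ == "__main__":
    drill()
```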

Breedlove also had a daughter who was learning to spell, and it occurred to him that there was a basic similarity between teaching math and teaching spelling. With math, you present an equation and the student has to solve it; with spelling, you enunciate a word and the student has to spell it. A spelling toy would differ in that the words would have to be spoken aloud. That would obviously require technology that converted stored words to audible speech. Such technology did exist then, but it was costly and would have made any product in which it was used far too expensive for the consumer market.

But Breedlove, who had recently done a stint in TI’s speech-research lab, knew that workers in that lab, and probably elsewhere, were on the verge of creating affordable speech-synthesis tech. Even so, there was yet another hurdle, which was that the speech-synthesis capabilities for this spelling toy would require an amount of memory then considered ridiculous.

He didn’t let it deter him. Instead, he started assembling a team by recruiting Frantz to be the system designer. Breedlove knew Wiggins from the speech-processing lab; conveniently, Wiggins was between projects. Wiggins was good friends with Brantingham, an IC architect, and suggested him for the team. Breedlove then went to TI management to get funding.

They were never going to get funding from the regular R&D budget, Frantz explains, because the risk for the proposed product was too high and the potential return was too low. But TI had a second source of funding for “wild hair” ideas. Alas, the spelling device was too risky for even that. The last-resort option was something called the idea program. All you had to do was convince a senior TI technologist to back it. Breedlove found such a person to sign off on his proposal, and the team got US $25,000 to work with (roughly $100,000 in 2019 dollars).

They started work in early 1977. Their design called for three new chips: a controller, “huge” ROMs, and a single-chip speech synthesizer, which did not then exist. Initially designated TMC0281, the synthesizer chip would be the world’s first.

The Speak & Spell would ultimately incorporate a pair of 128-kilobit ROM chips. The state of the art at the time was 16 kb.

The synthesizer was based on a technique for coding speech called linear predictive coding (LPC). LPC let the device generate the speech signal for an entire word from a relatively small amount of data, much smaller than what would have been required to simply store digitized recordings of the words themselves. The idea to use LPC, which was cutting edge at the time, came from Wiggins, whose specialty was voice-processing algorithms, and who held a doctorate in applied math from Harvard and had previously worked at the Mitre Corp. Wiggins was acquainted with some of the pioneers of LPC, which was then starting to find commercial applications after being developed in the 1960s at Bell Labs in New Jersey, and at Nagoya University and NTT in Japan.
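For a rough sense of how LPC squeezes a word into so little memory, here is a minimal, purely illustrative sketch of all-pole LPC synthesis in Python. The coefficients, excitation, and gain below are hypothetical, and the actual synthesizer chip used a considerably more elaborate implementation; the point is only that a short list of stored predictor coefficients plus a simple excitation can regenerate many samples of signal.

```python
# Minimal sketch of LPC-style synthesis (illustrative only, not the TMC0281 design).
# An all-pole filter driven by a simple excitation reconstructs a speech-like
# signal from a handful of stored coefficients instead of raw samples.

def lpc_synthesize(coeffs, excitation, gain=1.0):
    """Generate samples via s[n] = gain * e[n] + sum_k a[k] * s[n-1-k]."""
    history = [0.0] * len(coeffs)   # most recent output samples, newest first
    out = []
    for e in excitation:
        s = gain * e + sum(a * h for a, h in zip(coeffs, history))
        out.append(s)
        history = [s] + history[:-1]
    return out

if __name__ == "__main__":
    # Hypothetical values: a crude "voiced" frame driven by a periodic pulse train.
    coeffs = [1.3, -0.6]                                  # stored predictor coefficients
    pulses = [1.0 if n % 80 == 0 else 0.0 for n in range(400)]
    samples = lpc_synthesize(coeffs, pulses, gain=0.5)
    print(len(samples), "samples synthesized from 2 coefficients")
```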

The synthesizer chip ran digital signal processing logic, although it was not itself a DSP. Wiggins and Brantingham specified and designed the chip entirely by themselves and without any approvals from anyone, according to Frantz. TI management had neglected to fit the project into any particular reporting structure, and that suited the four team members just fine. In Frantz’s estimation, the lack of “help” from management gave Wiggins and Brantingham the freedom they needed to figure out a successful design. “They made the appropriate compromises to get that puppy to work,” Frantz says.

The team also had a choice of process technologies. In addition to complementary metal-oxide-semiconductor (CMOS) technology, its predecessors were available. Negative-channel metal-oxide-semiconductor (NMOS) was still widely used in those days. Much less common, and much slower, was positive-channel metal-oxide-semiconductor (PMOS).

“NMOS was high-performance at the time,” Frantz explains. “We did it in PMOS, which was…not high-performance. So we not only designed the first speech-synthesis device, we did it in the worst technology you could pick. And the reason for that was if you were doing a consumer product, you couldn’t afford NMOS.” The ROMs were done in PMOS as well.

When the Speak & Spell hit the market in 1978, it was instantly popular with the only constituency that really mattered—kids. And it appeared to be an effective means of helping youngsters learn to spell. Most important, the Speak & Spell was instrumental in overcoming the resistance to using electronic devices in education.

Texas Instruments wouldn’t produce its first digital signal processor chip for another five years, in 1983. But the implementation of DSP logic in the Speak & Spell was such an important breakthrough that the toy was designated an IEEE Milestone in 2009. It also helped launch TI toward a leadership position in DSP that the company continues to hold more than four decades later.

The Consumer Electronics Hall of Fame: Zircon StudSensor

Post Syndicated from Brian Santo original https://spectrum.ieee.org/consumer-electronics/gadgets/the-consumer-electronics-hall-of-fame-zircon-studsensor

Humans use tools. It’s what sets us apart from all the animals (except for dolphins, crows, ravens, Egyptian vultures, most primates, polar bears, elephants, sea otters, some mongooses, octopuses and a few rodents). For most of human history, hand tools were entirely mechanical. Electric hand tools began to become popular in the 1930s. It took another half century for the first electronic hand tool to arrive: the StudSensor from Zircon Corp., which was introduced in 1980. With the StudSensor, Zircon single-handedly opened an entirely new market segment to the electronics industry: the hardware business.

It was in the 1860s that people first began building frame houses using standard-size wooden boards called studs. Those framing studs hold up the walls, which in turn conceal the studs. Originally, the walls were typically plaster over lath, and now they’re usually drywall. To attach anything heavy to a wall, such as cabinetry or a shelf, it’s generally a bad idea to mount it right on the wall. Better to go through the wall and mount the heavy objects on the rigid wooden studs. For as long as there have been studs, however, the problem has been locating exactly where the heck they are behind the walls.

The first stud finders seem to have been introduced around the turn of the 20th century. They were magnets on pivots. Users would run the device across a wall, watching closely to see if the magnet was attracted to something, which would be one or more nailheads in a stud. A great deal of patience was required, but in the end it was usually possible to determine the exact location of a stud hidden in a wall.

Jump forward to the swingin’ ’70s. Robert Franklin, an engineer with Beckman Instruments (now part of Beckman Coulter), in Silicon Valley, would tinker away on side projects in his spare time. He was convinced he could build a better stud finder, and he set out to do it in true Silicon Valley fashion: in his garage. He was doing this at the same time and only a few miles away from where Steve Jobs and Steve Wozniak were working on a garage project of their own.

Franklin received a patent for his stud finder in 1977. The patent describes using a sensor with a capacitor plate to detect changes in the dielectric constant of a wall; basically, a change in the charge on the capacitor indicates the presence of a stud. The device would note the range over which the dielectric constant changed, and the center of that range indicated the center of the stud. Unlike the old magnetic finders, it didn’t depend on the builder having been conscientious enough to hammer the nails into the center of the stud.
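The principle is easy to picture in a few lines. Below is a minimal, purely illustrative sketch; the readings, positions, and threshold margin are hypothetical and have nothing to do with any actual Zircon firmware. The idea is simply that the sensor value rises as the plate passes over the stud, and the midpoint of the raised region marks the center.

```python
# Minimal sketch of the capacitive stud-finding principle (illustrative only;
# readings, positions, and the threshold margin are hypothetical).

def find_stud_center(readings, positions, margin=1.10):
    """Return the position midway across the region where capacitance rises.

    readings  - sensor values taken as the device slides along the wall
    positions - corresponding positions along the wall, in centimeters
    margin    - relative rise over the bare-wall baseline that counts as 'stud'
    """
    baseline = min(readings)                      # calibrate against bare wall
    threshold = baseline * margin
    over = [p for p, r in zip(positions, readings) if r >= threshold]
    if not over:
        return None                               # no dielectric change detected
    return (over[0] + over[-1]) / 2               # center of the raised region

if __name__ == "__main__":
    positions = [i * 0.5 for i in range(20)]      # a 10 cm sweep in 0.5 cm steps
    readings = [100, 100, 101, 100, 108, 115, 118, 117, 114, 107,
                101, 100, 100, 101, 100, 100, 100, 100, 100, 100]
    print("Estimated stud center at", find_stud_center(readings, positions), "cm")
```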

Franklin arranged meetings with several of the big tool companies (most of which are still in business) to sell his stud finder. Not one was interested. He went home and put his prototype on a shelf.

A few years later Franklin was out socializing when he got into a conversation with a stranger. The fellow’s name was Chuck Stauss, and as they chatted they discovered they were neighbors. Turned out Stauss ran a consumer electronics company called Zircon. Product cycles in consumer electronics are short, so Stauss was always looking for the next new thing. Franklin mentioned his stud finder. Soon after meeting, the two were in business together.

It took them a while to get the StudSensor right. The basic concept was simple, but contemporary technology was only barely up to the task. It was “tricky” to build, recalls John Stauss, the current CEO of Zircon and the son of Chuck Stauss.

The capacitive sensor had to react to “infinitesimal changes in voltage and current,” the younger Stauss recalls. “We used discrete components—off-the-shelf parts—and some of them were stretched to the limits of their capabilities. Some ICs would work, while other manufacturers with [ICs that had] identical specs wouldn’t work because the tolerances were different. So the manufacturing issues became very, very critical for us early on, which was also why, in those early days, we began to manufacture it ourselves—so we could control the quality.”

On the plus side, that maddening inconsistency made it nearly impossible for other companies to knock off the design simply by taking the StudSensor’s circuit board and trying to match it part for part.

Zircon introduced the StudSensor in 1980. Sales began briskly, thanks to an existing business relationship Zircon had—it was already providing computer games to Sears. Sears’s Craftsman line of tools was then the most popular in the United States, and after Sears agreed to stock the StudSensor, sales took off.  

The next trick was to get it placed on shelves at hardware stores. It was terra incognita—at the time there wasn’t a single electronic tool in the hardware universe. In 1982, the hardware market in the United States was still made up of a few regional companies and a lot of local chains and countless thousands of mom and pop stores.

By then the younger Stauss was on the payroll and making sales calls. “I remember talking to this classic Chicago buyer, sitting behind this desk, chomping a cigar—literally chomping a cigar. He looked at the StudSensor and said, ‘It’s a toy. It’s a fad. It will be over in six months. Why should I buy it?’ I told him Sears is selling out of it. He said, ‘Oh, yeah? Let me take a look at that again.’

“We sold it to every hardware store in the country,” Stauss claims. “When we got into the hardware channel, it was a dream come true. Product life cycles were measured in decades. Our customers had all been around for 100 years, and they paid their bills like clockwork. We were the only truly electronic innovators in this playing field.”

You might figure that once you’ve got a stud finder, you wouldn’t need another. But Zircon, which takes pride in its engineering, kept improving the technology. For example, it eventually upgraded to a custom, mixed-mode analog/digital IC. “That was very tricky then, and it’s still tricky,” Stauss explains. That kept copycats at bay too. Franklin’s patents ran out in 1998, but Zircon continues to innovate, has introduced other products for the builders market, and is still one of the market leaders in an increasingly crowded field. Today, electronic stud finders have taken their place alongside hammers, screwdrivers, pliers, and rulers on just about every handyman’s list of must-have tools.

The Consumer Electronics Hall of Fame: PhoneMate 400

Post Syndicated from Brian Santo original https://spectrum.ieee.org/consumer-electronics/gadgets/the-consumer-electronics-hall-of-fame-phonemate-400

The PhoneMate 400 was the first commercial residential telephone answering machine. When it was introduced, in 1971, it unshackled people from their telephones, metaphorically speaking. That was a big deal because in those days, telephones were still literally tethered to walls.

The PhoneMate 400 also played a role in one of the most consequential legal battles in the history of electronics, according to the memoir of Mark Brooks, the company’s CEO at the time. In his book 85 Is the New 65, Brooks claims that if it weren’t for the PhoneMate, the development of the Internet as we know it would likely have been delayed by at least a few years.

In the mid-1960s, Brooks worked for a company called NovaTech, where he helped develop the idea for a residential answering machine. NovaTech got bought out by investors who weren’t interested in the consumer electronics market, however. As Brooks tells the story, the new management let him depart and take the idea with him.

He founded PhoneMate in 1968 and began work on the first residential answering machine. “Unlike the ultra-sophisticated solid-state electronics devices that came along later, the PhoneMate basic design was based on an old fashioned, primitive really, Japanese reel-to-reel tape recorder,” Brooks wrote in his memoir.

“The PhoneMate was built around a standard off-the-shelf box made of pressed sawdust covered with wood grain plastic,” he continued. “The size of the box was determined by the minimum amount of space necessary to contain two small reel-to-reel tape recorders and the necessary electronics to make it into a telephone answering machine.”

PhoneMate contracted with Asahi Corp., in Japan, to manufacture the device. The original machine, introduced in 1971 (or possibly 1970; Brooks is inconsistent), weighed about four and a half kilograms (10 pounds), could record 20 messages before having to be rewound, and sold for about US $300. At roughly the same time, the company introduced another model designated the 800—essentially the 400 “with some bells and whistles.” (These were followed by the 9000, “with the ability to call in from another phone, sound a beeper, and retrieve stored messages by remote control,” according to Brooks.)

Before there were answering machines, if you were anticipating a call that you didn’t want to miss, you had to stay close enough to your phone to hear it ring—and close enough to get to the phone within seven or eight rings. These are of course arbitrary numbers that somehow everyone had settled on as being the proper number of times to let a telephone ring before assuming the recipient of the call wasn’t going to answer. If you didn’t pick up in time, there was no way in those days to know who was calling. If you were out, you had no way of knowing if anyone had called at all.

The PhoneMate 400 changed all that. With the PhoneMate, people expecting a call no longer had to wait by their phones. The arc of the product’s rise and near-disappearance tells the story of how integral the basic function became to telephony. In the mid-1980s, millions of answering machines were sold each year; by the 1990s, tens of millions. By the early 2000s, the market for answering machines had started shrinking fast, but only because the function was shifted into the network, in the form of messaging services that eventually came free with nearly every telephony account. Even so, the machines have never quite disappeared entirely. In 2019, stand-alone machines could be bought for as little as $23, and some phones come with answering machines built in.

But it was the PhoneMate 400 that started the world down the path toward voicemail. Its additional significance is rooted in the iron grip that AT&T had on the entire telephone industry in the United States prior to being broken up in 1984.

Until 1968, AT&T, which built and controlled most of the telephone networks in the United States, would not allow anyone else to attach any electronic device directly to any of its wires anywhere. So hardly anybody tried to. An exception was Carterfone, an obscure company with an obscure device that bore the company’s name. The Carterfone allowed someone on a radio to connect to AT&T’s network and speak to someone using a telephone. The company petitioned the FCC to be allowed to attach its products to AT&T’s phone network. In 1968, the FCC ordered AT&T to permit the attachment of any electronic device that did not harm its network.

The so-called Carterfone decision was on the books, but for a while, anyway, it seemed to be irrelevant because the product soon failed commercially. Furthermore, AT&T subsequently made it difficult for every other company that wanted to attach a product to its network by arguing that those products harmed its network. There weren’t many such products; hardly anyone else had the resources to challenge one of the largest, richest, and most influential companies in corporate history.

The most notable exception was PhoneMate. With the PhoneMate 400, Brooks had an increasingly popular product and, perhaps even more important, he had an attitude. AT&T, he wrote, “forbade the connection of the PhoneMate to their system. I took the attitude ‘Take your prohibition and put it where the sun doesn’t shine,’ and we forged ahead.”

According to Brooks, AT&T informed PhoneMate customers that they must also install a piece of safety equipment called an Authorized Protective Connecting Module (APCM). He said the APCM never existed—that the claim was essentially a bullying tactic meant to intimidate PhoneMate’s customers. The tactic may have suppressed some PhoneMate sales, but Brooks said that not one paying customer ever came to PhoneMate looking for an APCM. They just plugged their PhoneMates right in, as his company recommended, thereby defying AT&T.

Brooks wrote that his company hired Hogan & Hartson (since merged into Hogan Lovells), sued AT&T, and won, with an argument based on the Carterfone decision and bolstered by technical evidence that the PhoneMate in no way harmed AT&T’s network.

Carterfone is universally cited as the landmark decision that cleared the way for modems—and therefore computers—to be attached to the phone network, thus paving the way for the rise of the Internet more than 20 years later. Brooks’s view is that the rights established by the Carterfone decision were barely of academic interest until PhoneMate deliberately made them a practical issue and prevailed.

Many of Brooks’s broad claims are difficult to confirm now, but his story aligns well with that era’s timeline. The flood of products that could attach to AT&T’s networks—phones from other manufacturers, fax machines, modems, etcetera—did not happen immediately after the Carterfone decision but some years later, subsequent to the introduction of the first PhoneMate products.

PhoneMate dominated the answering machine market for a few years. The company is identified in a 1982 article in The New York Times as the largest supplier of answering machines. The paper quotes PhoneMate’s estimate that the entire market that year was worth $140 million. Forty percent of sales were of machines costing less than $140. In the intervening years, several other makers got into the business, solid-state electronics were adopted, and the tape format progressed from reel-to-reels to cassettes to microcassettes. (Today’s models are based on ICs, of course, and digital storage.)

Consumer electronics giants, such as Panasonic and Sony, followed PhoneMate into the answering-machine market. Competition and the dynamics of solid-state electronics drove prices down; answering machines became commodity products. PhoneMate was bought by its supplier, Asahi, in 1989. Then, in 1991, Casio acquired a 58 percent stake in Asahi. But even with Casio’s resources behind it, PhoneMate still couldn’t keep up with larger rivals. By the time The New York Times ran a follow-up in 1991, Panasonic and AT&T itself were the two biggest manufacturers by far in a market worth $900 million. PhoneMate was a distant third.

“Although we had won the battle to connect to the telecommunications network, we could not even begin to compete with the giants,” Brooks wrote.

The Consumer Electronics Hall of Fame: SiriusXM Satellite Radio System

Post Syndicated from Brian Santo original https://spectrum.ieee.org/consumer-electronics/gadgets/the-consumer-electronics-hall-of-fame-siriusxm-satellite-radio-system

Listening to the radio and driving are as inextricably linked as peanut butter and jelly. Even if you’ve never cruised a boulevard with the top down and the radio blaring, you’ve probably heard about how great it is in a Beach Boys song (which you probably heard on the radio while driving in your car).

Even so, right from the beginning, listening to the radio in a car had its frustrations. Satellite radio was conceived as the answer to all the things that were irritating about terrestrial broadcast radio.

Complaint No. 1? Advertisements. Conventional, broadcast radio is ostensibly free, so long as you’re willing to subject yourself to the shouty ads that commercial radio stations survive on. Another irritation: the lack of variety. In most countries, commercial radio is dominated by cheesy pop music along with, in parts of the United States, big pockets of talk shows, country, salsa, or Mexican formats such as banda and mariachi. If you’re into classical, jazz, folk, or indie rock, good luck finding a station. Yet another problem is the relatively limited geographical coverage of most stations. On a long trip, it usually seems like the moment you start warming to a station you’re exiting its coverage map. Furthermore, depending on where you are in the world, the number of stations might be few. Some places are completely out of reach of any signals at all.

Satellite radio, on the other hand, is subscription based, so there are no ads. There are scores of different channels, so there’s something for everybody. Partial to ’80s hair bands? The Grateful Dead? “Emotionally driven alt rock”? There are channels for that. Moreover, every single channel is available everywhere there is service. And if there’s sky above you, you’re going to get a signal.

Those may have been the justifications for creating the Sirius Satellite Radio system, but they weren’t the reason, according to Robert Briskman, one of Sirius Radio’s cofounders.

Briskman’s experience with satellite technology goes way back. He got a job with NASA in 1959, the year after it was founded, and then worked for the Communications Satellite Corp. (Comsat) and after that with Geostar Corp. In 1990, one of Briskman’s friends founded a company to compete with cable TV providers. The startup planned to broadcast residential TV services directly from an orbital satellite. Briskman offered his technical expertise.

“I was helping Eddy Hartenstein bring DirecTV to the home, and I suggested adding radio channels to the video channels,” Briskman told IEEE Spectrum. “He said, ‘You know, Rob, people at home look at television, and they listen to radio in the car.’ And I said, ‘You’re right, Eddy, why don’t we do it in the car?’ So he said, ‘That’s technically impossible.’ Those were fighting words to me.”

Satellite communication is contingent on maintaining line of sight (LOS) between satellite and receiver. What made satellite radio seem impossible was that vehicles were inevitably going to pass under bridges, drive under tree canopies or into parking garages, enter tunnels, or become isolated in deep canyons of the natural or urban variety.

The challenge facing Briskman was to design satellites and complementary receivers that would somehow maintain LOS as much as possible. If you were to drive into an underground garage and park, the signal would be lost, and nothing could be done about that. But if LOS was lost only briefly, there had to be some way to make sure the receiver could keep on playing until it could pick up the signal again.

Briskman, an IEEE Life Fellow, says it took him seven or eight years and at least five patents’ worth of technology to make the impossible a reality.

A big part of the solution to maintaining a LOS connection between receiver and satellite was satellite diversity. “It just means putting two satellites up there radiating the same signal, but you put them in different parts of the sky. If the car is blocked from one satellite, it hopefully has a clear line of sight to the other,” Briskman explains.

In cities with tall buildings, the satellites are supplemented by terrestrial repeaters, land-based antennas that beam signals directly into urban canyons. Those measures meant motorists listening to Sirius were almost always going to get a signal, but “almost always” wouldn’t be good enough. There are hundreds of thousands of underpasses in the United States alone, Briskman noted, and going under any one of them could block both satellites.

“The solution to that was another patent, for satellite-time diversity,” Briskman says. “It simply means having two satellites radiating the same signal, but we delay one for, say, roughly 5 seconds.”

The receiver contains a 5-second buffer, so that if the satellites are blocked, the receiver plays out the buffered data. If you’re out of sight of a satellite for more than 5 seconds, you get an outage, but 5 seconds turned out to be enough to get most Sirius subscribers through most underpasses without program interruption.
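
The mechanics of that time-diversity buffer are easier to see in a toy simulation. The sketch below is a deliberately simplified model in Python, not anything resembling Sirius’s actual firmware: one feed arrives in real time, its twin arrives 5 seconds late, and the receiver plays everything 5 seconds behind, so only a blockage longer than the buffer produces an audible gap.

```python
# Toy model of satellite-time diversity. One satellite feed delivers each
# second of audio as it airs ("early"); the other delivers the same audio
# 5 seconds later ("late"). The receiver plays 5 seconds behind real time,
# so short blockages are bridged by the buffer. Numbers are illustrative.

BUFFER_S = 5  # playout delay / buffer depth, in seconds

def simulate(blocked, duration=60):
    """blocked: set of seconds during which BOTH satellites are lost.
    Returns the list of audio seconds that could not be played (outages)."""
    buffered = set()   # audio seconds already captured from the early feed
    gaps = []
    for t in range(duration):
        if t not in blocked:
            buffered.add(t)              # early feed delivers second t now
        play_t = t - BUFFER_S            # the audio second due for playback
        have_late = t not in blocked     # late feed carries second play_t now
        if play_t >= 0 and play_t not in buffered and not have_late:
            gaps.append(play_t)
    return gaps

print(simulate(blocked=set(range(20, 24))))  # 4-s underpass -> [] (no outage)
print(simulate(blocked=set(range(20, 28))))  # 8-s blockage -> [20, 21, 22]
```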

The receiver is where incoming signals are decoded and decrypted. The Sirius radio is unusual, Briskman says, because it actually has three independent receivers, one each for the two satellites and one for terrestrial repeaters. To deal with three separate receivers, he devised what he called a maximal-ratio combiner. If there are two or three strong signals from the receivers, this circuit puts them in phase and—as the name suggests—combines them. Conversely, if one or two of the signals is bad, it suppresses them.
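
Maximal-ratio combining is a standard diversity technique in communications textbooks, and the textbook form—given here only as a general illustration, not as Sirius’s specific circuit—captures the behavior Briskman describes: each branch is co-phased and weighted by how clean it is.

```latex
% Textbook maximal-ratio combining across the three receiver branches.
% r_i = h_i s + n_i : branch i's signal, with channel gain h_i and noise power N_i.
y \;=\; \sum_{i=1}^{3} w_i\, r_i ,
\qquad
w_i \;=\; \frac{h_i^{\ast}}{N_i} ,
\qquad
\mathrm{SNR}_{\mathrm{out}} \;=\; \sum_{i=1}^{3} \mathrm{SNR}_i .
```

Multiplying by the conjugate gain is what puts the branches in phase; dividing by the noise power is what suppresses a degraded branch.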

And the Sirius radio actually comprises two separate units. The first, which houses the receivers, sits in the vehicle cabin; it’s the part that looks like a typical dashboard radio with a display screen. The other is mounted on the roof or the trunk. It holds the antenna and a low-noise receiver. It takes the signals coming in from the satellites (at slightly different frequencies in the 2300-megahertz band), amplifies them, downconverts them to around 75 MHz, and sends them to the unit in the cabin.
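
As a rough sanity check on that frequency plan (the exact satellite frequencies aren’t given here, so the 2,320-MHz figure below is only illustrative), the downconversion is a single mixing step:

```latex
% Single-stage downconversion in the antenna unit; input frequency is illustrative.
f_{\mathrm{out}} \;=\; f_{\mathrm{sat}} - f_{\mathrm{LO}}
\quad\Longrightarrow\quad
f_{\mathrm{LO}} \;\approx\; 2{,}320\ \mathrm{MHz} - 75\ \mathrm{MHz} \;=\; 2{,}245\ \mathrm{MHz}.
```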

The omnidirectional antenna is fairly small, about 30 millimeters wide, or roughly the size of a U.S. half-dollar coin. Briskman is to this day exasperated that automakers typically insist on packaging his satellite radio antenna with a GPS antenna, which makes the combined antenna unit much bigger than he would like.

Briskman recalls that the implementation of both the satellites and the radio receivers went smoothly. The part of the development that was most taxing and required the most iterations was designing the user interface, a process exacerbated by every different carmaker having different notions of how it should work. “You wouldn’t think you’d have to spend a lot of time on something like that, but we had to. It was worthwhile, though, because the customers appreciated the ease with which you could operate the radio,” he says.

In those heady early days, Sirius had a rival, called XM. Sirius was the first to secure a broadcast license, but XM beat it to the market, going live in September of 2001. Sirius began broadcasting in February of 2002. XM originally got placement with GM, Honda, and Toyota. Meanwhile, BMW, Chrysler, and Ford signed on with Sirius.

The satellite radio market was tough, however. XM would eventually file for Chapter 11 bankruptcy protection. The two companies argued that the only way for both to survive was to merge, and they got permission from regulatory authorities to do so in 2008. Since the merger, the combination has had enough subscribers to remain profitable.

SiriusXM kept improving its technology. The chipset in Sirius’s original radio was a two-IC set, with the chips manufactured using 160-nanometer design rules. The company reduced that pair to one chip at 130 nm in 2004. The postmerger entity kept at it; in 2014 the chip was scaled down again to 40 nm, Briskman notes.  

Hartenstein was named chairman of the XM half of the company in 2009; he remains a board member of the combined SiriusXM. Briskman still cruises the highway in his BMW with the top down (“I’m a convertible guy”) blasting Siriusly Sinatra (Ch. 71) and ’40s Junction (Ch. 73). SiriusXM radios are now installed in three-quarters of all automobiles manufactured around the world.

The Consumer Electronics Hall of Fame: McIntosh MR 78 Tuner

Post Syndicated from Brian Santo original https://spectrum.ieee.org/consumer-electronics/gadgets/the-consumer-electronics-hall-of-fame-mcintosh-mr-78-tuner

McIntosh’s MR 78 was a trailblazer, the first FM tuner precise enough to tune in the weaker of two radio signals broadcast on very close frequencies.

That seems like a narrow qualifier, but it was a remarkable technical advance. Why was it such a big deal? Like many of the other items in our Consumer Electronics Hall of Fame, including the Fuzzbuster, the Walkman, and the Cobra 138XLR CB radio, the McIntosh MR 78 satisfied a need that was at once technological and cultural.

When broadcast radio began as a business in the 1920s, all radio used amplitude modulation (AM). Frequency modulation (FM) was invented by Edwin H. Armstrong in the 1930s, but FM radio stations did not begin appearing until the 1940s. Music on FM sounds better, but that didn’t make that much of a difference until the 1960s, when mono recording gave way to stereo. FM could support stereo starting in 1961, but AM wouldn’t until the 1970s. Music programming finally began shifting to FM stations.

Top 40 stations on the AM dial still dominated the ratings. But as the 1960s progressed, musicians were writing songs that were progressively longer and more unconventional, which disqualified them from AM’s regimented Top 40 format. A growing number of FM stations started popping up to play this increasingly popular music.

Listeners had no reason to tune in to the Top 40 station from the next town over because every Top 40 station was playing almost the same 40 songs as every other. But suppose listeners wanted to hear something by an artist that didn’t fit that format—at least 99 percent of all music released, surely. Then they had to search for a station that would play that artist. So for listeners with adventurous taste in music, searching for distant stations—spinning the dial—became an imperative in those pre-Internet days.

A signal that is strong, because it is higher-powered, physically closer, or both, will of course drown out a weaker signal on the same channel. From the 1920s through the 1950s in the United States that interference was almost never a problem, because radio stations had always had generous space in terms of both listening area and assigned frequency.

But by the early 1970s, the airwaves were getting crowded and interference was becoming more common. The number of stations in the United States had jumped from 5,134 in 1963 to 6,530 in 1970, and the likelihood that two stations might be near each other both physically and on the radio dial had gone up. People were spending more time listening to the radio, and many of them were trying to tune in stations with signals that were weaker, either because they were far away or were transmitting at lower power (such as many college stations).

And contemporary tuners weren’t up to the task. These included McIntosh’s MR 71, the Fisher FM-1000, and the Marantz 10B. “They all had the same wide-bandwidth signal issue,” says Ron Cornelius, a salesman who sold many of those systems at the time (he was hired by McIntosh directly in 1993). Radio stations are allowed to broadcast in 0.2-megahertz-wide bands centered on their assigned frequencies, but even the best tuners at the time were imprecise and tuned in bands wider than that—the “wide-bandwidth signal issue” Cornelius refers to.
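
The arithmetic of the U.S. FM band shows how little room each station actually has, and why a tuner that admits more than its assigned slice inevitably lets in the neighbors:

```latex
% U.S. FM broadcast band, 88.0 to 108.0 MHz, divided into 200-kHz channels.
\frac{108.0\ \mathrm{MHz} - 88.0\ \mathrm{MHz}}{0.2\ \mathrm{MHz}} \;=\; 100 \ \text{channels}.
```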

Because radio stations were at first well spaced, they never had any pressing need to be scrupulous about remaining strictly within their assigned 0.2 MHz. In fact, the imprecision of tuners was an incentive for some stations to transmit signals that exceeded the edges of their assigned bands; it was occasionally a deliberate attempt to match the wider bands that tuners used.

When the airwaves began to get crowded, however, imprecise tuners began to be problematic. They were inherently unable to filter out adjacent signals, a problem exacerbated by those stations slopping out past their allotted 0.2 MHz.

Richard Modafferi was a senior engineer at McIntosh Laboratories from 1968 to 1974. His first stab at a new tuner that could do better was the MR 77. That model, introduced in 1970, was acclaimed among audiophiles—and it was the subject of an article by Modafferi in the November 1970 issue of IEEE Transactions on Broadcast and Television Receivers. And then Modafferi outdid himself with the MR 78 tuner, which came out in 1972.

The MR 78’s distinction was a new intermediate frequency filter, the culmination of a design that Modafferi had begun working on [PDF] as a graduate student at the Newark College of Engineering (now part of the New Jersey Institute of Technology) in the mid-1960s. He dubbed it the Rimo filter (a concatenation of the first two letters of his first and last name). In a superheterodyne radio receiver, which all FM receivers are, the incoming signal is mixed with that of a local oscillator to produce an intermediate frequency (IF) that is lower than that of the carrier frequency of the received signal. There are several reasons for doing this. A really big one is that the lower value of this IF makes it easier to filter out nearby sources of interference. Basically, Modafferi’s Rimo took a leap beyond existing IF filters by applying a computer to fine-tune the filter’s frequency response, and also by minimizing “intermodulation distortion,” which plagued IF filters at the time.
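
In symbols, the mixing step is just a frequency difference; FM receivers conventionally use a 10.7-MHz IF, so a station at, say, 98.1 MHz (an arbitrary example) would be mixed down like this:

```latex
% Superheterodyne downconversion; 10.7 MHz is the conventional FM IF,
% and the 98.1-MHz station is an arbitrary example.
f_{\mathrm{IF}} \;=\; \left| f_{\mathrm{RF}} - f_{\mathrm{LO}} \right| ,
\qquad
\left| 98.1\ \mathrm{MHz} - 108.8\ \mathrm{MHz} \right| \;=\; 10.7\ \mathrm{MHz}.
```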

Regarding the Rimo, “It is the correct width to let just one FM station through,” McIntosh states in the owner’s manual for the MR 78 [PDF]. “The excellent selectivity of the MR 78 (210 kHz wide at 60dB down) permits tuning stations that are impossible to receive on ordinary tuners.” Touting the great “mathematical complexity” of the new filter, the manual also asserts that “an IBM 1130 high speed computer spent eighteen minutes on the mathematics for the design of the IF filter. It would have taken an engineer, working twenty-four hours a day, seven days a week, and working error-free three-hundred years to perform the same mathematical calculations.”

One of the more eye-popping disparities between the Rimo filter’s performance and the performance of competing products was in delay distortion. “All other IF filters have delay distortion, as much as 100%,” the owner’s manual rhapsodized. “The MR 78 filter has less than 1.0% delay distortion from antenna input to discriminator output,” it added.

In addition, the company built a new FM detector, also designed by Modafferi, to work with the new filter. The new filter, the new FM detector, and other improvements added up to the MR 78 having the narrowest IF bandwidth ever achieved in a stereo tuner, the company claimed.

People paid for that level of performance. The original list price for the MR 78 was US $1,699, which translates to a whopping $10,000 in today’s dollars. Even then, the price was roughly 10 times as much as that of the average tuner. Today, when an MR 78 pops up on eBay in decent shape, it generally fetches more than $1,000.
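
That “today’s dollars” figure is a straightforward consumer-price-index adjustment. Using approximate U.S. CPI-U annual averages (about 41.8 for 1972 and about 255.7 for 2019, around when this series appeared), the arithmetic works out to roughly:

```latex
% Rough CPI adjustment; index values are approximate annual averages.
\$1{,}699 \times \frac{255.7}{41.8} \;\approx\; \$1{,}699 \times 6.1 \;\approx\; \$10{,}400.
```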

The high price notwithstanding, audio salesman Cornelius says the MR 78 “was very popular at the time as it was a demonstrable solution to a common problem. With a large FM antenna, you could receive stations from one to many hundreds of miles away.”

He adds that the MR 78 solidified McIntosh’s position in the highest echelon of tuner manufacturers. It had an immediate effect on the rest of the market too; it forced all the competitors to scramble to match it. “They all had to develop ‘super tuners,’ usually FM-only, to compete,” he recalls.

Editor’s note: A fascinating brief history of the Rimo filter was published as a letter, from Modafferi himself, to the editor of Audio magazine in May 1977. The complete issue is available online in PDF format.

The Consumer Electronics Hall of Fame: Epson Stylus Color

Post Syndicated from Brian Santo original https://spectrum.ieee.org/consumer-electronics/gadgets/the-consumer-electronics-hall-of-fame-epson-stylus-color

In technology, as in life, timing is everything. Case in point: Epson’s Stylus Color, the world’s first high-resolution color inkjet printer, a dazzling innovation that came along in 1994, at precisely the time it was needed. Digital cameras were just beginning to appear, and within a few years millions of consumers would be accumulating digital photographs that they wanted to print at home.

The first inkjet patent was awarded in 1951, but it took three decades to improve the technology sufficiently to merit mass-market adoption. Manufacturers persisted for so long because they knew if the approach could be perfected, it held the promise of much more accurate printing. The perpetual problem was applying ink with precision. For a long time, it was a challenge to print anything that wouldn’t immediately splotch and smear.

Seiko Epson has a good pedigree in the printer market. It had produced an early electronic printer in 1964, and introduced what was considered the first small digital printer, the EP-101, in 1968. At the time, the company was a Seiko subsidiary known as Shinshu Seiki. In the 1970s it started branding its printers with the Epson label, and then adopted the name as its corporate moniker in 1982. It sold mostly dot-matrix printers through the 1970s and early 1980s.

In 1984, Hewlett-Packard introduced the first inkjet printer, the ThinkJet, beating Epson to the market by a few months. Epson’s response was the SQ-2000. Both were black-and-white printers.

The breakthrough for Epson was something it called Micro Piezo technology. In the mid-1980s, Epson was developing a novel printhead. It had a few dozen tiny reservoirs, each filled with black ink (later versions would add more reservoirs, for colored inks). Each reservoir had a minuscule nozzle with a valve. A plate capped each reservoir. The idea was to push on the plates, deforming them so that they pressed into the reservoirs. Well-controlled pressure squeezed microscopic drops of ink out of the nozzles and onto the paper.

What Epson needed was a precise means to push on the plates. It’s not clear if Epson had the idea from the start to use piezoelectric elements with this printhead—the company told IEEE Spectrum no one associated with the development of its early inkjet printers still worked there. But there is no question that engineers in Epson’s printer division were familiar with the technology.

To cut to the chase, at some point Epson engineers decided to use piezoelectric materials in the printheads as the squeezing mechanism. They put a piezoelectric crystal on the back of every plate capping each ink reservoir in the printhead they were developing. When voltage was applied to any individual crystal, it contracted, deforming the plate and pushing out a tiny amount of ink. Precise control of the voltage led to precise control of the deformations, in turn providing unprecedented control of the amount of ink Epson was able to emit for each dot.
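
To make the cause-and-effect concrete, here is a deliberately simplified sketch of a piezo drive stage in Python. It assumes, purely for illustration, that droplet volume grows linearly with drive voltage above a threshold; Epson’s real waveforms, geometries, and constants are more involved and aren’t public at this level of detail.

```python
# Toy model of a piezo printhead drive: droplet volume as a simple function
# of the drive-pulse voltage. All constants are invented for illustration.

V_MIN, V_MAX = 5.0, 30.0   # hypothetical usable drive-voltage range, in volts
PL_PER_VOLT = 0.5          # hypothetical picoliters ejected per volt above threshold

def droplet_volume_pl(drive_voltage: float) -> float:
    """Return the droplet volume (picoliters) ejected by one drive pulse."""
    if drive_voltage < V_MIN:
        return 0.0                       # below threshold, the plate barely deflects
    clipped = min(drive_voltage, V_MAX)  # the drive circuit limits the swing
    return (clipped - V_MIN) * PL_PER_VOLT

# Finer control of the voltage means finer gradations of ink per dot.
for volts in (5, 10, 20, 30):
    print(f"{volts:>2} V -> {droplet_volume_pl(volts):4.1f} pL")
```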

That level of control led to another great advantage for Epson.

Other printers minimized smearing and blotting by using inks that had been formulated to be thicker than the inks used in other types of printers. These inks were too viscous to flow well at room temperature, but flowed readily when heated. So the common approach was to use them in combination with a head that incorporated a heating element. The downside was that the heating gradually degraded the printheads, meaning they had to be replaced periodically.

A benefit of Epson’s new printhead was that it could use almost any ink, including less expensive inks that were more viscous at room temperature. This was possible because the high degree of control meant that ink could be dispensed so sparingly that there wasn’t enough of it to be smeared on the paper. Because Epson could use inks that did not need to be warmed, it could forgo having a heating element in its printheads. Epson’s heads weren’t subject to thermal degradation, and therefore lasted a long time.

That meant Epson could build a printer in which the printhead was permanent. The only thing that would need to be replaced would be the ink wells. The wells, or cartridges, could be simple, easy to produce, easy to replace, and cheap.

Epson’s first printer with the Micro Piezo technology was the Stylus 800 (the MJ-500 in Japan), introduced in 1993. The Stylus 800 had 48 nozzles in a 4-by-12 configuration and could print at 360 dots per inch (dpi).

The following year, Epson released its Stylus Color (P860A). The Micro Piezo technology printhead was refined for the Stylus Color; this one had 64 nozzles in a 16-by-4 configuration. It had a palette of 16 million colors. Epson had also developed new inks that dried very quickly, contributing to better printing control and better prints.

Photographs printed on the Stylus 800, with its 360 dpi, had decent resolution. But photos printed on the Stylus Color, which printed at 720 dpi, were obviously superior to anything any competing printer could produce.
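
The visual difference wasn’t subtle, because dot density scales with the square of the linear resolution:

```latex
% Doubling the linear resolution quadruples the dots per unit area.
\left( \frac{720\ \mathrm{dpi}}{360\ \mathrm{dpi}} \right)^{2} = 4
\qquad \text{(four times as many dots per square inch).}
```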

Epson also boasted about a means of controlling the printhead precisely enough to nearly eliminate a common problem with printers of the time: visible streaks and gaps, known as horizontal banding, between successive passes of the printhead. And to top it all off, the Stylus Color printed faster than other printers. Between the speed and the more economical inks, it was also considerably less costly to own and operate.

Bundled with drivers for both Windows and Macintosh computers, the Stylus Color was an immediate and major success. Epson said it sold some 300,000 units in Japan, where it carried the model number MJ-700V2C, a record then for a printer. The company says it had similar success in other markets around the world.

The beauty of inkjet printers is that their key component, the ink cartridge, is easy to produce and to work with. For that reason and others, inkjets are relatively inexpensive—roughly one-half the price of laser printers, the next most popular type today. The low price and high quality of the printing combine to make inkjets the best-selling type of printer to this day. Epson remains a significant force in the market with its still-unique Micro Piezo technology, which it has continued to refine. The company claims it can now apply droplets as small as 1.5 picoliters (a picoliter is a trillionth of a liter).

The Consumer Electronics Hall of Fame: BRK First Alert Smoke Alarm

Post Syndicated from Brian Santo original https://spectrum.ieee.org/consumer-electronics/gadgets/the-consumer-electronics-hall-of-fame-brk-first-alert-smoke-alarm

The race to create the modern residential smoke alarm began in earnest in the mid-1960s, not long after Duane D. Pearsall founded a company called Statitrol to sell heating and ventilation equipment. In 1963, a chain-smoking Statitrol technician exhaled cigarette smoke toward an ion generator, causing an ionization detector to spike. Similar phenomena had been reported as early as the 1930s. But this time, something big would come of it: Statitrol engineers immediately began devising a smoke detector that would exploit the phenomenon.

As is so often the case with technology, the pioneer, in this case Statitrol, would not be the one to reap long-lasting commercial success. In the end, most of that went to its main rival, BRK.

The construction of an ionization-type smoke detector involves placing a small amount of radioactive material (typically americium) in a tiny chamber between a pair of electrical plates. As the radioactive material decays, it releases energetic particles that ionize air molecules and set up a current between the charged plates. Smoke that enters impedes the current, tripping the alarm.
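
The trip logic itself is simple enough to caricature in a few lines. The sketch below is a toy model in Python, not any manufacturer’s circuit or firmware: it simply flags an alarm when the measured chamber current falls below a fixed fraction of a clean-air baseline, with both numbers invented for illustration.

```python
# Toy model of an ionization smoke detector's trip logic. Smoke entering the
# chamber reduces the ion current between the plates; the alarm fires when
# the current drops below a set fraction of the clean-air baseline.
# All values are illustrative, not taken from any real detector.

BASELINE_PA = 100.0    # clean-air chamber current, in picoamperes (hypothetical)
TRIP_FRACTION = 0.7    # alarm when current falls below 70 percent of baseline

def should_alarm(measured_pa: float) -> bool:
    """Return True if the measured chamber current indicates smoke."""
    return measured_pa < TRIP_FRACTION * BASELINE_PA

for current_pa in (100.0, 85.0, 60.0):
    print(f"{current_pa:5.1f} pA -> alarm: {should_alarm(current_pa)}")
```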

Ionization detectors were a big improvement over the previous technology, which detected heat rather than smoke and typically didn’t trigger an alarm until the area where the detector was mounted was aflame. Smoke, of course, typically spreads far faster than flames, so an ionization detector is much more likely to sound the alarm in advance of the spread of flames, providing critical additional seconds or minutes for escape.

The technological implementations weren’t sophisticated, but using radioactive material in a consumer item was a big deal at the time. The Atomic Energy Commission (now the Nuclear Regulatory Commission) had to approve all commercial use of radioactive elements. The AEC okayed smoke detectors for commercial properties in 1963, but didn’t green-light them for residential use until 1969.

Statitrol may have had a head start, but BRK claims to have beaten it to market with a battery-operated residential smoke detector. According to BRK, a model it commercialized in 1969 was the first such product to earn approval from Underwriters Laboratories (UL), the world’s most prominent safety-certifying agency.

The whole point of that product seems to have been to secure a UL listing, however. And there were yet more regulatory hurdles to get over. Residential building codes had provisions only for smoke detectors connected to mains power. It wasn’t until 1974 that the National Fire Protection Association revised its NFPA 74 standard to support the installation of battery-operated models. BRK may have had a battery-operated model in 1969, but as of 1975 it was still pitching its AC models in ads.

In 1975, Statitrol began mass-producing what is widely regarded as the first practical battery-operated ionization-type smoke detector for the residential market, the SmokeGard 700. By the end of 1975, a little under 10 percent of all U.S. homes had a smoke detector.

That year, the National Bureau of Standards (now the National Institute of Standards and Technology, or NIST) began a project to evaluate smoke-alarm technology. That project, known as the Indiana Dunes tests, concluded in 1976. Even before it was finished, the results were conclusive: smoke detectors would save lives.

Around the same time, several technological advances were being incorporated into smoke detectors, “including solid-state circuitry…more effective sensing and alarm sounding, and the option of AA batteries in some models,” according to the Safe Community Project, a nonprofit dedicated to fire safety and run by active and retired firefighters and medics.

By the early 1970s, with smoke detectors obviously on the cusp of being a big business, GE, Gillette, and Norelco entered the market. In 1976, BRK commercialized its first battery-operated residential detector to carry the First Alert brand name.

The timing couldn’t have been better, because the U.S. government and independent safety agencies were all then touting the results of the Indiana Dunes study. In 1976, the National Fire Protection Association Life Safety Code “required” that all U.S. homes have smoke alarms (technically this was a recommendation; the Life Safety Code is very influential but does not carry the force of law).

Though the new market had spawned half a dozen competitors, at least, BRK enjoyed two big advantages. It had been doing business with electrical contractors all along, so it had a channel into the home-builder market. As important, if not more so, was the distribution deal it had signed in 1974 with Sears, then the single most important national retailer of tools and do-it-yourself equipment in the United States.

With regulations and technology coming together, sales of smoke detectors took off, going from what NIST calls “token usage” through 1975 to reach millions of homes in just a few years. The share of U.S. residences with smoke alarms doubled to 20 percent by the end of 1976, and by 1990 roughly 90 percent of all U.S. homes had at least one. Since 1992 the figure has been roughly 95 percent. BRK First Alert was an early success, and it’s been among the sales leaders ever since.

The number of deaths in fires has been on a continuous downward trend for at least 100 years. The pace of improvement slowed through the 1950s and 1960s, however, before the death rate began falling sharply again, thanks in large part to smoke detectors. The rate was cut in half between 1975 and 2000. According to the Consumer Product Safety Commission, a U.S. government agency, there are about 300,000 residential fires each year, leading to approximately 2,000 deaths. Two-thirds of those deaths occur in homes lacking smoke alarms or with nonfunctional alarms.

BRK is now a subsidiary of Newell Brands, in Hoboken, N.J. And Statitrol? It was sold by founder Duane Pearsall to Emerson Electric in 1977. It is no longer in business.

The Consumer Electronics Hall of Fame: Canon EOS 300D

Post Syndicated from Brian Santo original https://spectrum.ieee.org/consumer-electronics/gadgets/the-consumer-electronics-hall-of-fame-canon-eos-300d

The Canon EOS 300D camera was the first to bring high-quality digital single-lens-reflex technology to the masses.

By the beginning of 2003, it was obvious that a good, affordable digital single-lens reflex (DSLR) was inevitable, but still probably at least a year away. So when Canon introduced the 300D in August of that year, it was a surprise in terms of both timing and cost. It was quite a capable camera at a very reasonable price.

The EOS 300D, known as the Digital Rebel in the United States and the Kiss Digital in Japan, was the direct descendant of another Canon model, the 10D, which was introduced in February of 2003. The 10D, in turn, was an upgrade from its predecessor, the D60, the very first Canon DSLR. The D60, with a 6.3-megapixel sensor, had been introduced a full year earlier, in February of 2002. The D60 and the 10D seemed to establish a logical schedule for Canon: a new DSLR model every year.

The 10D, intended for the professional market, had a solid body made of a magnesium alloy. Like the D60 it had a resolution of 6.3 megapixels (roughly 3,000 by 2,000). It included image technology Canon had developed and had begun using in 2002 in various cameras other than the D60. Canon called this technology Digic, an acronym for “Digital Imaging Integrated Circuit.” Originally, Digic included three chips: an image processor, a video processor, and a camera-control IC. Canon boasted at the time that its sensors were notable for low noise and low power consumption.
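
For reference, the pixel dimensions commonly cited for that sensor are 3,072 by 2,048, which squares with the 6.3-megapixel figure:

```latex
3{,}072 \times 2{,}048 \;=\; 6{,}291{,}456 \;\approx\; 6.3\ \text{megapixels}.
```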

The 10D was basically a Digic-driven replacement for the D60. Upon introduction, it was immediately praised for its picture quality.

That was the technology progression. The pricing progression was something else again. Canon announced the D60, its second DSLR, in February 2002 with a list price of US $2,199 for a kit that included a battery and charger, some cables and software and other goodies but no lens, and $1,999 without those items. A year later, in February of 2003, Canon brought out the 10D for US $1,999, also without a lens. Then, only 6 months after that, Canon unexpectedly released yet another new 6.3-megapixel DSLR with the price slashed by half. Without a lens it was $899; another hundred bucks got you an 18–55-mm f3.5 “kit” lens. That was the 300D.

Canon built the 300D around the same 6.3-megapixel CMOS image sensor and the image processor as the 10D. Among the most obvious differences, the 300D was noticeably smaller and lighter.

The 10D’s viewfinder was based on a pentaprism, basically a block of glass that has, you guessed it, five sides. The 300D’s was based on a pentamirror, an assembly of mirrors that is naturally lighter than a solid pentaprism. Also, where the 10D had a durable housing, the 300D’s body was mostly plastic, though nicely styled to fit in with Canon’s popular Rebel line of 35-mm film cameras.

Features that the 300D and 10D had in common included an illuminated LCD control panel and a separate, 1.8-inch color LCD display for captured images. The 300D lacked a few features of its predecessor, including some dedicated controls for autofocus and exposure. These were missed mainly by professionals, who weren’t in the target market anyway.

These details notwithstanding, it is still a mystery how Canon managed to take almost all of the components that constituted a 10D, and in roughly six months, reconfigure and repackage them into a unit it could sell profitably for half the price. (Canon did not respond to requests for an interview.)

What Canon has said publicly is that the Digital Rebel 300D was one of its most successful introductions in 2003. In its annual report for that year, it disclosed that sales exceeded expectations, and that if it hadn’t designed and manufactured its own semiconductors—the Digic CMOS image sensors and controller chips—it would have run out of both.

By establishing the sub-$1,000 DSLR, Canon made what had been professional-grade digital photography affordable for nonpros. In so doing it created the “prosumer” market, a niche previously identified by electronics manufacturers in other consumer categories. Considering its price and performance, contemporary reviewers mostly agreed it was the best camera value on the market. It was the start of the DSLR’s ascendance to the top of the high-performance camera market, a perch it only now seems poised to relinquish to a newer, fast-rising category: full-frame mirrorless cameras.

The Consumer Electronics Hall of Fame: Amazon Echo Dot

Post Syndicated from Brian Santo original https://spectrum.ieee.org/consumer-electronics/gadgets/the-consumer-electronics-hall-of-fame-amazon-echo-dot

There have been products that have established a new product category—there was nothing like the phonograph before Thomas Edison invented it. There have been products that were vast improvements within an existing category—Sony’s Trinitron represented a huge technological leap in color-TV technology. But there aren’t many precedents for what Amazon did with its Echo product line. It hijacked an existing product category—smart speakers—and helped turn it into something else: a gateway to the Internet of Things, pointing the way toward a truly “smart” home.

In the 2000s, Logitech and Sonos were among the companies that created the market for smart speakers for home use. The option to connect via Bluetooth led to a new kind of smart speaker, a lightweight, standalone speaker that could be used in the home but was more typically paired with a mobile phone and used to play music or podcasts on the go. Some of the leading makers of such Bluetooth smart speakers are Bose, JBL, and UE.

Amazon’s transformation of the smart-speaker market was a two-step process. It took the first step in 2014 with its first Echo, designed to look like a run-of-the-mill entry in the smart-speaker category. None of the individual features of that Echo were new. What set it apart was the way it seamlessly combined them.

This first Echo had a speaker with audio performance comparable to that of other high-quality smart speakers. But unlike those speakers, it connected through Wi-Fi to the Internet, and through the Internet to the cloud. It also came with a voice-activated digital assistant, called Alexa. Before that first Echo, the “smart” in “smart speakers” meant that the speaker in question could, via Bluetooth, be connected wirelessly and controlled by a remote device.

Voice assistants were nothing new, either. But the combination of Alexa with Echo was subtly different from other devices with voice-activated assistants. Apple’s Siri (introduced in 2011) and Google Now (a 2012 precursor of Google Assistant) were typically accessed through mobile devices—smartphones and tablets. Users invoked them when they had those devices in hand, or close at hand: “Siri, where is the nearest Chinese restaurant?” Siri and Google Now were part of the mobile experience, and, crucially, not the home experience.

The second step in Amazon’s transformation of the smart-speaker category was the introduction of the Echo Dot, in 2016. It demonstrated that the voice-activated assistant, Alexa, wasn’t merely a supplemental feature, it was the point of the product line. The Echo Dot is minimally functional as a speaker (in fact, it has a port into which a better-quality speaker can be plugged).

As an electronic device, it is mostly unremarkable. In a package like an oversize ice-hockey puck, it integrates a Texas Instruments Digital Media Processor (the DM3725), SDRAM from Micron Technology, a 4-GB flash memory from Samsung, and a Wi-Fi/Bluetooth chip from Qualcomm. TI also supplied the power-management chip and DAC (digital-to-analog converter). As an object, it is unobtrusive and well designed, with a light that glows pleasantly around the circumference of the top surface.

Like the original smart speakers, the Echo Dot was built to mostly stay put in a room. It is neither carried nor picked up; users talk to it from a distance, for example across a room.

The incorporation of a group of features already found in different combinations in previous products might seem like a weak distinction. But by combining them in the Echo Dot, Amazon arrived at something new: an exclusively hands-free home-based device that lets people perform useful tasks and access a growing universe of services, including buying stuff online. From Amazon.

If there were any question the Echo Dot was its own separate thing, it was laid to rest when Google and Apple both decided they needed something to take on Echo Dot, which resulted in Google’s Home and Apple’s HomePod product lines. Lacking a tidy way to describe the category they define, all three (and the increasing number of similar products) are still typically lumped into the “smart speaker” category. They’re distinctly different, however, because they truly are smart, the embodiments of evolving, network-based artificial intelligences such as Alexa, Google Assistant, and Siri.

In recent years, the bits of information that come out of major companies are rare and sporadic, like the photons emitted by black holes. Sales and engineering figures are hard to come by from corporate behemoths like Amazon. Early in 2019, an Amazon executive said 100 million “Alexa-enabled” devices had been sold, one of the few (if not only) sales-related numbers ever released about Echo products.

Note that phrase, “Alexa-enabled.” That of course includes Echo products, but Alexa is also resident on a range of other products from other manufacturers, including Facebook’s Portal and Portal+, Harman Kardon’s Allure, Bose’s Home Speaker, and others. Meanwhile, JBL, Klipsch, and others have speakers with Google Assistant built in. The growth of the smart-speaker category pioneered by the Echo Dot continues. While it is unclear what “100 million Alexa-enabled devices” precisely means for Echo sales, it strongly suggests that Echoes are the best-selling smart speakers by far.

Today, people routinely use smart speakers to access information, listen to music, purchase goods, and summon taxis, among literally thousands of other activities. As more and more homes add automated, networked systems such as baby monitors, door locks and home-security systems, heating and cooling, smart lighting systems, and multiroom audio setups, network-based virtual assistants will be used to control them all. The Echo Dot, with Alexa, was the first to blaze that trail.