Tag Archives: consumer-electronics

COVID’s Unlikely Offspring: The Rise of Smartwatch as Illness Detector

Post Syndicated from Brian T. Horowitz original https://spectrum.ieee.org/tech-talk/consumer-electronics/gadgets/covid-byproduct-smartwatch-increasingly-illness-detector

Smartwatches and wearable devices have come a long way beyond counting steps. They’re getting better at detecting illness, and now they can even spot signs of COVID-19 before you are aware that you’re sick. Could a few data points gathered from consumer tech on your wrist really give doctors the data they need to diagnose you with a serious illness like COVID-19? 

Research is still in its early stages, but the past several months have seen a number of efforts to improve the smartwatch’s illness-detection capabilities. And it now looks like these tools will likely outlast the present pandemic.

Scripps Research has introduced an app called MyDataHelps as part of a study that tracks changes to a person’s sleep, activity level or resting heart rate. Fitbit is also building an algorithm that can detect COVID-19 before a person experiences symptoms. Meanwhile, Stanford Medicine researchers have developed a smartwatch alert system that can work on any wearable device, including Fitbit, Apple Watch and Garmin watches.

Michael Snyder, professor and chair of the Department of Genetics and director of the Stanford Center for Genomics and Personalized Medicine at Stanford University, says watches can pick up signals of respiratory illnesses, even in asymptomatic cases. As COVID-19 hit, Snyder’s research ramped up “full blast,” he said.

The alert system Snyder’s team ultimately developed is a “distributed system up on Google Cloud that reads people in real time,” he said. “And when it’s following their heart rate, if we see it jump up, we ping them back.” At that point researchers tell participants to stay home because they may be infected. The app uses an algorithm that warns patients with yellow or red alerts. Snyder says these alerts can help reduce COVID-19 transmission rates. In fact, 70% of the time the signal flags a COVID-19 infection before or at the time it is otherwise detected, he said. Snyder explained that just as a car dashboard monitors car health, a smartwatch will track a person’s physiology.
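
Stanford’s actual method isn’t spelled out in the article, but the general shape of such an alert is easy to sketch: compare the latest resting heart rate against a personal baseline and escalate from yellow to red as the deviation grows. In the minimal sketch below, the z-score framing, thresholds, and function name are illustrative assumptions, not the published algorithm.

```python
from statistics import mean, stdev

def classify_rhr(baseline, latest, yellow_z=2.0, red_z=3.0):
    """Flag an elevated resting heart rate against a personal baseline.

    baseline: recent nightly resting-heart-rate readings (bpm)
    latest:   the newest reading to evaluate
    Thresholds here are illustrative, not Stanford's parameters.
    """
    mu, sigma = mean(baseline), stdev(baseline)
    z = (latest - mu) / sigma if sigma > 0 else 0.0
    if z >= red_z:
        return "red"      # large elevation: advise staying home
    if z >= yellow_z:
        return "yellow"   # mild elevation: keep watching
    return "ok"

# Example: a 7 bpm jump over a stable ~60 bpm baseline trips a red alert
history = [59, 61, 60, 58, 62, 60, 61, 59, 60, 61]
print(classify_rhr(history, 67))   # -> "red"
```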

“People like us will build algorithms that will do the interpretation, like pathologists when they see an imaging section for cancer diagnosis. They count on the pathologist to tell them what’s going on,” Snyder said. “That’s the way this will work.”

Another study from Snyder’s group, published in a 2018 issue of the journal Personalized Medicine, examined how wearable devices like smartwatches could reveal evidence of inflammation or predict cardiometabolic health.

People with respiratory conditions see their heart rate jump when they are breathing harder, Snyder noted. Another indicator that triggers alarms in Snyder’s algorithm is skin temperature. He has also developed a signal to detect signs of diabetes from drier skin. Snyder’s team is also working on expanding the types of data it can study—and the frequency with which it samples that data. Higher-frequency or higher-resolution sampling increases the sensitivity of a data set.

However, Snyder added, “You don’t want to oversample, because you’ll just drain your battery on the watch.”

Mount Sinai Studies How Wearables Indirectly Detect Inflammation

In another research project, Mount Sinai Health System is using the Apple Watch to detect inflammation based on changes in heart rate. Rob Hirten, assistant professor of medicine at the Icahn School of Medicine at Mount Sinai, is a gastroenterologist and had been using wearable devices to predict flares of inflammation from Crohn’s disease and ulcerative colitis. 

“When you have inflammation develop in the body in particular, you can see fluctuations in your nervous system function,” Hirten explained. When the COVID-19 pandemic hit, Hirten and his team applied the research to detect increased risk of COVID infection in healthcare workers. The custom application Hirten designed allowed his team to collect physiologic data for analysis and ask short questions about symptoms.

“What we’re able to then do through our app is take the data from the phone and collect it into our research portal so that we’re actually able to analyze the data that’s being collected by the Apple Watch normally,” Hirten said. 

Smartwatches and Biomarkers

At Duke University, Jessilyn Dunn, assistant professor of biomedical engineering, biostatistics, and bioinformatics, and her team are testing the role of smartwatches as another tool to detect physical signs of COVID-19. Dunn collaborated with Snyder at Stanford prior to the pandemic. With one of her doctoral students studying in Wuhan, China, and providing firsthand information on the start of the health crisis, Dunn knew COVID would be an opportunity to do some research on illness detection with smartwatches.

“We had daily electronic surveys asking people about their symptoms, testing status, those sorts of general questions to get a feel if people are sick or not, and then pulling in their smartwatch data,” Dunn said. “We could pull in data from pretty much any consumer wearable device that is compatible with an Apple phone or an Android phone.”

Dunn’s team has partnered with organizations in North Carolina to distribute Apple Watches and Garmin watches to communities of color to balance the demographics of the study. 

“We’ve seen that COVID is disproportionately affecting communities of color, and so to only develop what we call digital biomarkers on the people who already own smartwatches just seemed like it was missing the mark,” Dunn said. “Equitable digital biomarker development” has been a focus of her study to overcome bias that exists in machine learning, she said.  

“We’re going to need to collect data in people who have certain types of illnesses and to be able to build these digital biomarkers,” Dunn said. The biomarkers could distinguish between an illness like diabetes and respiratory illness. In addition to heart rate, Dunn is using algorithms that can monitor blood oxygen (SpO2) levels.

The Future of Wearable Illness Detection

So can a few noisy alarms from someone’s wrist really tell you if you are sick? 

“In my lab, we’re always a little bit skeptical of making sure that technologies are properly validated,” Dunn said. She noted that consumer devices such as smartwatches were made primarily for entertainment or hobbies and less for medical purposes. Still, Dunn is optimistic that smartwatches will be able to help detect when people are sick, though oversight of the data will be key.

“It’s really important that there is appropriate oversight set up for this sort of digital health arena,” Dunn said. “[We need to make] sure that when we’re getting information from these devices that it’s trustworthy.” 

Hirten said his team is planning to integrate additional physiological markers into their algorithms to predict additional diseases. Combining heart rate along with blood-oxygen levels and steps increases the predictive ability for medical professionals, he noted. 

“What we’re learning from the COVID infections by using these wearable devices I think will translate in the future over the coming years as we take what we’ve learned here and start applying it to other conditions and diseases to try to impact them,” Hirten said. 

Snyder says his group’s heart rate alert studies have established they can detect signs of illness.

“It’s really clear your heart rate jumps up when you’re ill,” Snyder said. The next step will be to expand the data types studied to cover other illnesses beyond the COVID-19 pandemic.

How Your Smart Phone Can See You Sweat

Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/view-from-the-valley/consumer-electronics/portable-devices/how-your-smart-phone-can-see-you-sweat

Into this stay-at-home era of DIY personal training comes the first sweat-monitoring patch intended for broad consumer use. Rolling out this week from PepsiCo is the Gx Sweat Patch, developed by startup Epicore Biosystems in partnership with PepsiCo subsidiary Gatorade. It measures the rate of perspiration and the sodium chloride concentration in that sweat.

The aim? To track sweat loss during physical activity and heat stress and use that information on a personalized basis to recommend exactly how much and how often the athlete should drink to properly replace fluids and electrolytes to avoid dehydration and impaired performance. In the future, its developers predict, information from the patch will be used to help athletes determine their optimal diet and sleep patterns.

The patch will go on sale today, in sporting goods stores and online, at a suggested price of US $24.99 for a pack of two. It represents Gatorade’s first move into the world of digital products and apps for athletes.

Epicore has been testing its flexible, stretchable, single-use patch on athletes for some time. The device routes sweat through microfluidic channels cut into stacks of thin-film polymers. In one of the microchannels, used to track sweat rate and volume, the excreted sweat is dyed orange to make it visible as it moves through the pathways. In the other, chemical reagents react with the chloride in the sweat and turn it purple, with the intensity of the purple color corresponding to the concentration of the chloride ions detected.

After wearing the device on the inner left forearm for the duration of a workout, the user scans the patch with a smartphone. Then the Gx app uses that image, along with previously input data like weight, sex, workout type, and the environment, to create what the company calls a “sweat profile” and make recommendations about the individual’s fluid intake.
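
Epicore hasn’t published how the Gx app maps color to concentration, but colorimetric assays are typically read against a calibration curve. The sketch below is a minimal illustration of that step; the calibration points, the 1:1 chloride-to-sodium assumption, and both function names are made up for this example.

```python
import numpy as np

# Hypothetical calibration curve: purple-channel intensity (from the scanned
# patch image) versus chloride concentration in mmol/L. These points are
# invented for illustration, not Epicore's data.
CAL_INTENSITY = np.array([0.05, 0.20, 0.40, 0.60, 0.80])
CAL_CHLORIDE  = np.array([10.0, 25.0, 45.0, 70.0, 100.0])

def chloride_from_intensity(intensity):
    """Interpolate chloride concentration from measured color intensity."""
    return float(np.interp(intensity, CAL_INTENSITY, CAL_CHLORIDE))

def sweat_sodium_loss(chloride_mmol_l, sweat_volume_l):
    """Estimate sodium lost (mg), assuming sweat chloride tracks sodium 1:1
    (an illustrative simplification); sodium is ~23 mg per mmol."""
    return chloride_mmol_l * sweat_volume_l * 23.0

print(chloride_from_intensity(0.5))   # ~57.5 mmol/L
print(sweat_sodium_loss(57.5, 0.8))   # ~1058 mg sodium over the workout
```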

In recent clinical trials involving 60 players in the National Basketball Association’s G-League, PepsiCo and Epicore tested the patch, worn on the left forearm of each athlete, against an absorbent pad worn on the athlete’s right forearm. The researchers compared the snapshot reading from the patch against laboratory tests conducted on the sweat collected by the pad, demonstrating comparable results.

Previously, the Gatorade Sports Science Institute and Epicore published the results of a similar large-scale trial of the patch on more than 300 bicyclists, track and field athletes, and others exercising in real-world conditions, outside of laboratory environments.

Microfluidic technologies have been used in many applications including DNA chips, point of care diagnostics, and inkjet printing. These devices tend to be rigid, and not suitable for use in a wearable.

“The Gx Sweat patch is the first soft, conformal, and skin-interfaced microfluidic patch that has entered the consumer health and fitness arena,” says Epicore co-founder and CEO Roozbeh Ghaffari.

After getting a sample patch from PepsiCo this past weekend, I can attest that it is indeed comfortable to wear. It feels like the kind of sticker store clerks sometimes hand out to children, and, a few minutes after attaching it to my skin, I no longer felt it. The accompanying app was simple enough to use, and the scan feature found and photographed the patch almost as soon as I got my arm into the camera’s view. I took a long, brisk walk, hoping I would work up enough of a sweat to get a reading; however, the cool February weather worked against me, and I literally came up dry. I’ll try it again on a warmer day, or when I can get back into a gym post-pandemic.

Augmented Reality Contact Lens Startup Develops Apps With Early Adopters-to-Be

Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/view-from-the-valley/consumer-electronics/audiovideo/startup-mojo-vision-has-the-earliest-adopters-of-augmented-reality-contact-lenses-in-its-sights

Last year, startup Mojo Vision unveiled an early prototype of a contact lens that contains everything it needs to augment reality—an image sensor, display, motion sensors, and wireless radios—all safe and comfortable enough to tuck into your eye.

These days, the company’s engineers are “charging hard on development” says Steve Sinclair, Mojo Vision’s senior vice president of product and marketing. While most of Mojo Vision’s hundred employees are working at home, the pandemic only minimally affected the company’s ability to use its laboratories and clean room facilities as needed. The company’s engineers, for instance, are helping a vendor of motion sensors thin its dies for better wearability, partnering with a battery manufacturer to build custom batteries, and refining their own designs of displays, image sensors, and power management electronics. And Mojo last year signed an agreement with Japanese contact lens manufacturer Menicon to fine-tune the materials and coatings of the lens itself.

About a dozen of the company’s employees have worn early prototypes of its lenses. The next generation of prototypes, now under development, is expected to be ready for such testing later this year.

While commercial release is still a few years out, requiring FDA approval as a medical device, development of the first generation of applications is well underway. Sinclair says the company long anticipated that the earliest adopters of its AR contacts would be the visually impaired, but exactly what applications would be useful wasn’t fully clear.

Ashley Tuan, Mojo Vision’s vice president of medical devices, has a personal interest in the technology—her father has limited vision, due to a rare retinal degeneration disease.

With a Ph.D. in vision science, Tuan has also studied the biology. People with low vision, she says, “can’t see fine detail because photoreceptors have died. That is typically solved with magnification, though people generally don’t like to carry a magnifier around. They also have an issue with contrast sensitivity, that is, the ability to sense light grey, for example, from a white background. This can stop them from going into unfamiliar surroundings because they don’t feel safe. Most of us subconsciously use shadows to identify something coming up, something that we might trip over. In studies, even a slight reduction in contrast sensitivity stops people from going outside.”

Mojo Vision is developing apps to address these issues. “Enhancing contrast is easy to do with our technology,” Tuan says. “Because we are projecting an image onto the retina, we can easily increase the contrast of that image.

“We can also do edge detection, which I see as being one step above contrast enhancement, by highlighting the edges of objects with light.”
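
Tuan doesn’t detail Mojo’s image pipeline, but a linear contrast stretch is one generic way to do the kind of enhancement she describes: remap the frame so that its dim grays span the full range the display can project. The percentile bounds below are arbitrary assumptions, not Mojo Vision’s values.

```python
import numpy as np

def stretch_contrast(gray, low_pct=2, high_pct=98):
    """Linearly remap a grayscale frame so the low/high percentiles hit
    full black and white. A generic technique, not Mojo's pipeline."""
    lo, hi = np.percentile(gray, [low_pct, high_pct])
    out = (gray.astype(np.float32) - lo) / max(hi - lo, 1e-6)
    return np.clip(out, 0.0, 1.0)

# A washed-out frame (values bunched between 0.4 and 0.6) becomes full-range,
# making a light-gray object stand out from a white background.
frame = np.random.uniform(0.4, 0.6, size=(480, 640))
enhanced = stretch_contrast(frame)
print(frame.min(), frame.max(), "->", enhanced.min(), enhanced.max())
```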

Mojo Vision has these tools implemented in prototypes. In the future, Tuan expects the technology will evolve to adjust what it is highlighting according to the context—a street sign, perhaps, or the facial expressions of someone talking to the wearer.

Magnification, with the current prototype, is triggered manually, by the user zooming with the image sensor. In the future, the company expects to make magnification context dependent as well, with the system determining when a wearer is looking at the menu in a restaurant and therefore needs magnification, or looking for the bathroom, and instead needs the letters of the sign and the frame of the door brightened.

Mojo Vision’s development team is now trying to take these basic ideas for apps and turn them into something that they hope will be immediately useful for their early adopters. To do that, they turned to a Silicon Valley resource, the Palo Alto Vista Center’s Corporate Partners Program.

Says Alice Turner, Vista Center’s director of community and corporate relations: “We have been working with tech companies in the product design realm for about three years, about eight to ten companies so far, including Facebook, Microsoft, Samsung, and other companies that haven’t announced products yet. I know partnering with us results in a better product, better suited, and the population that it is addressing will be better served.”

“Mojo Vision,” she says, “is a huge success story for our partnership program.”

The tech tools that eventually come to market through this program help some of Vista Center’s clients. But the program also provides more immediate benefits; the fees it charges the companies provide a steady source of revenue for the nonprofit.

When brought in on a product development project by a tech company, Turner taps into Vista Center’s client database, some 3400 individuals representing a wide range of vision impairments and demographics. She selects people who are appropriate matches for the technology under development, confirms that they have some basic tech skills and are comfortable talking freely about their condition and experiences, and sets up one-on-one meetings (virtual in these pandemic times) and focus groups during which the developers can get feedback for anything from an idea to a rough demo to a working prototype.

David Hobbs, director of product management at Mojo Vision, explained that, for Mojo, the process started with interviewing to find out more about problems that the technology could potentially solve. After identifying the problems, the team brought a subset of Vista’s clients in for a deeper dive.

“For example,” he said, “when we are trying to understand how to cross a crosswalk, we may talk for hours about different crosswalk situations.”

Then the Mojo Vision design team builds prototype software for use with customized virtual reality headsets. The Vista clients test the prototype software and give feedback. The company will do clinical trials with lens prototypes after it gets FDA Breakthrough Device approval.

“We are learning,” Hobbs said, “that vision is uniquely intimate. Everyone sees differently, so finding a way to provide the information that someone wants in the way they want it is really challenging. And different scenarios require different levels of detail and context.”

While this development process continues, the Mojo Vision developers have already learned from the tests conducted so far.

“When we looked at edge detection,” Hobbs said, “we were just taking an image and, wherever there was a lot of difference between two pixels, drawing a line. We created detailed models of the world in this way. Our bias was that all of this detail was valuable.

“But the feedback we got was that all these lines were a lot of noise,” he continued. “It turns out that there is a level of information that we can provide that can help people on their journey without being overwhelming.”
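
As a rough sketch of the approach Hobbs describes—pixel differences drawn as lines, then thinned to fight the noise problem—here is a naive edge detector with a single tunable threshold. It is illustrative only, not Mojo Vision’s code.

```python
import numpy as np

def edge_overlay(gray, threshold=0.15):
    """Naive pixel-difference edge detector with a threshold to suppress
    the weak edges that testers experienced as 'a lot of noise'.
    gray: 2-D array of floats in [0, 1]. Returns a boolean edge mask."""
    dy = np.abs(np.diff(gray, axis=0))[:, :-1]   # vertical neighbor diffs
    dx = np.abs(np.diff(gray, axis=1))[:-1, :]   # horizontal neighbor diffs
    strength = np.maximum(dx, dy)
    return strength > threshold   # raise the threshold to keep only strong edges

# Example: a dark doorway on a light wall yields one clean vertical edge
frame = np.ones((4, 6)) * 0.9
frame[:, 3:] = 0.1
print(edge_overlay(frame).astype(int))
```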

Hobbs expects many of the AR applications developed for the visually impaired to evolve into technology useful for everyone. “Fulfilling the needs of the most demanding users provides a lot of capabilities for general users as well. Take the ability to see in the dark. Going into a dark stairwell and having enhanced vision flip on would be valuable to many.”

Keysight University Live from the Lab: A Sneak Peek of Keysight’s New Test Gear, Tips from Industry Gurus, and Chances to Win Test Gear

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/keysight-university-live-from-the-lab-a-sneak-peek-of-keysights-new-test-gear-tips-from-industry-gurus-and-chances-to-win-test-gear

Become an Engineering Legend

Keysight test gear

Join Keysight University Live from the Lab for a sneak peek at new test gear, tips from industry gurus, and chances to win test gear.

CES 2021: FEMA’s Emergency Alert System Coming to a Game or Gadget Near You?

Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/view-from-the-valley/consumer-electronics/gadgets/ces-2021-fema-wants-its-emergency-alert-system-talk-to-your-favorite-game-system-or-gadget

It was easy for exhibitors to get lost in the virtual shuffle at CES 2021, where the digital exhibit hall simply displayed the logos of the 1900-plus exhibitors. To get any detail about a particular presenter, you had to search for their booth and click into it, wading through videos, PowerPoints, PDFs, and images to try to figure out exactly what was on display.

Among the obscured were the exhibitors from the U.S. Federal Emergency Management Agency (FEMA). If anyone even noticed the agency’s logo—then wondered what FEMA was doing at CES—they likely surmised that it was simply there in case of a literal emergency and passed it by.

At a real-world show, though, their exhibit would likely have caught the eye of the curious. It would have contained kiosks, large electronic billboards, and even Braille readers showing off the diversity of devices that are part of FEMA’s Integrated Public Alert and Warning System (IPAWS).

And, had a CES attendee stopped to look, the IPAWS folks would have explained their program to help consumer products designers build emergency alert capability into just about any system that includes a display.

IPAWS is the organization that sends wireless emergency alerts, like Amber Alerts and weather warnings, to mobile phones. It also manages the emergency alert system that triggers the audio warnings that interrupt radio and television broadcasts. In the past year, I’ve mostly seen phone alerts in the form of public health announcements of new shelter-in-place orders and, in one case, a tornado warning that gave me time to pull off the road before the weather got particularly crazy.

IPAWS acts as what it calls a “redistributor,” that is, it takes alerts from counties and other agencies and passes them on to devices designed to display them. Besides mobile phones, said IPAWS program analyst Justin Singer, such devices today include digital billboards, Braille readers, and public tourism kiosks.

But, says Singer, getting alerts in front of the people who could benefit from them gets more challenging when people spend more and more time in front of a diversity of displays. And he is hoping that the consumer electronics industry will help them meet this challenge.

“We are relying on the industry to take this on as a project and implement our technology in their products. We don’t have the ability to build products ourselves and we don’t want to regulate anybody. We just want to get alerts to as many people as possible through as many media as possible,” Singer said.

Take gamers. “Gamers are inherently cut off. I don’t want to interrupt their games, but if I can get a little alert on their screen displaying a tornado warning, say, maybe I can get them to move to the basement,” he continued. “Virtual reality would be really important; there you are really cut off. I’m trying to get smart mirror companies to see the light, too.”
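
IPAWS distributes alerts in the OASIS Common Alerting Protocol (CAP) XML format. As a sketch of what a game or smart-mirror integration might involve, the snippet below reduces a hand-written, illustrative CAP 1.2 alert to the kind of one-line banner Singer imagines on a gamer’s screen; the sample document and function are assumptions, not an official IPAWS API.

```python
import xml.etree.ElementTree as ET

CAP_NS = {"cap": "urn:oasis:names:tc:emergency:cap:1.2"}

# An illustrative CAP alert, trimmed to a few fields a HUD would need.
SAMPLE_ALERT = """<?xml version="1.0"?>
<alert xmlns="urn:oasis:names:tc:emergency:cap:1.2">
  <info>
    <event>Tornado Warning</event>
    <urgency>Immediate</urgency>
    <area><areaDesc>Travis County, TX</areaDesc></area>
  </info>
</alert>"""

def overlay_text(cap_xml):
    """Reduce a CAP alert to a one-line banner a game HUD could draw."""
    info = ET.fromstring(cap_xml).find("cap:info", CAP_NS)
    event = info.findtext("cap:event", namespaces=CAP_NS)
    urgency = info.findtext("cap:urgency", namespaces=CAP_NS)
    area = info.findtext("cap:area/cap:areaDesc", namespaces=CAP_NS)
    return f"[{urgency}] {event} - {area}"

print(overlay_text(SAMPLE_ALERT))  # [Immediate] Tornado Warning - Travis County, TX
```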

Singer has tried to reach out to Microsoft’s Xbox team but has yet to connect with any interested engineers. He says he did manage to have a CES conversation with representatives of Sony. But trying to catch the eye of product designers at a virtual show has proven difficult. Singer would like to tell developers that IPAWS offers design help, and can also give them access to a laboratory in Maryland that would allow them to test their products on a closed system.

And, he’d tell them, “If you build it, we will show it off in our CES booth next year.”

Potential developers can find out more about the program by contacting [email protected].

CES 2021: My Top 3 Gadgets of the Show—and 3 of the Weirdest

Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/view-from-the-valley/consumer-electronics/gadgets/ces-2021-my-top-3-gadgets-of-the-showand-3-of-the-weirdest

CES 2021, the all-digital show I attended through a computer screen this week, had some 1900 virtual booths and several peripheral product showcases, and it kept my email inbox jammed with a constant stream of product announcements. It had new TV displays, robots promising to be your best friend, and gadgets aimed at making life in the pandemic world a little easier.

And among all that were a few concepts for new products that, in my eyes, each showcased its own touch of genius. Of course, genius varies in the eyes of the consumer. A great product, after all, is not only unique and clever, but it also fills a real need. And needs are personal. With that caveat—and with the reminder that I have yet to try out or even touch any of these products personally—here are the CES products that most lit up my world this week, in no particular order, along with three that I found unique in a different way.

First, in the “why didn’t someone think of this before” category:

JLab’s JBuds Frames Wireless Audio

Audio “buds” for your glasses instead of your ears? Why haven’t I seen this before? I have yet to find an earbud, wireless or otherwise, that I find truly comfortable and that stays on when I’m doing my daily walk. And over-the-ear headphones are too much of everything. A few years ago I was excited by the launch of Aftershokz headphones that go behind instead of in or over the ear, but found that the vibrations going through my head tended to make me queasy.

So JBuds Frames—tiny Bluetooth speakers with microphones that clip onto the frame of your glasses instead of tucking into your ears—got my attention. These days, I wear glasses everywhere, though often switch out to sunglasses for that walk. JLab’s press release says the speakers come with an assortment of silicone sleeves that will let them adjust to a variety of glasses frames. A spokesperson I queried said that at 11.7 grams each, they are light enough to not change the feel or fit of my glasses noticeably. The company promises eight hours of playtime and 100 hours of standby time on a two-hour charge. JBuds Frames are also water resistant. JLab says the Frames will start shipping to customers in the spring, priced at around $50. I’m looking forward to trying them, and I’m hopeful that I won’t be disappointed when I do.

Samsung’s Galaxy Upcycling at Home

Samsung announced plans to release a line of software designed to encourage consumers to repurpose smart phones as IoT devices instead of tossing them into a drawer or the trash. The software, to be released under a program it calls Galaxy Upcycling at Home, will allow old phones to be used as baby monitors, light controllers, and other smart home gadgets. DIY’ers have been repurposing phones this way for a long time, but making it simple for everyday consumers to do so is game-changing.

Wazyn’s smart sliding door adapter

I’ve had traditional flap-style pet doors in the past, and I know that raccoons aren’t frustrated by electronically controlled locks. The creatures just pry them open. Plus, you also have to cut a hole in your door to install the things. So I was intrigued by Wazyn’s demo of its $400 gadget that turns a sliding door into an automatic or remotely controlled door. The device, the company says, is never permanently installed and doesn’t involve cutting a hole in anything. It can be controlled by a motion sensor that detects the arrival of a pet, which then sends an alert to your phone or smart speaker, at which point you can tell it to open the door. It can also be set to open automatically—and to be turned off automatically to keep those raccoons out at night. All it requires is a smartphone or Alexa. So I’ve got sliding doors, I’ve got Alexa… all I need is a new cat.

And in the “hmmmm, who exactly would want this?” category:

Incipio’s Organicore phone cases

I know it’s tough for phone case manufacturers to distinguish themselves. You can make these gadget covers stronger, more colorful, and branded by famous designers, but it’s still hard to make one line of phone cases stand out from all the other ones. So you can imagine the designers at Incipio in a Zoom brainstorming session, during a time when many of us are at home literally watching our grass grow, coming up with the company’s latest twist on a phone case. “Let’s make it compostable!” suggested someone, leading to Incipio’s $40 Organicore phone case. The company advises that composting in a residential bin will take two to three years; I can’t imagine pushing aside an old phone case every time I turn my compost for that long.

Neuvana’s Xen vagus nerve stimulating earbuds

These are stressful times to be sure, times when all of us are looking for ways to reduce our anxiety. But I’m not convinced that zapping my ears with electrical signals is going to make me happier than pandemic baking.

Neuvana is hoping that at least some of us are looking to try new stress-reduction technology. The company says its $330 Xen earbuds send an electrical signal through the ear to the vagus nerve, “bringing on feelings of calm, boosted mood, and better sleep.” I’m not questioning the power of vagus nerve stimulation—there’s a lot of research underway involving treatments for epilepsy and depression—just whether this is something I would actually want to do at home.

Ninu’s AI-guided perfume customizer

“Embark on a perfume fusion journey guided by AI perfume master Pierre,” stated Ninu’s press release. It took me back…back to Disneyland, where, as a teenager, I paid a few dollars for the thrill of having a parfumier with a bad French accent create a custom scent just for me. So I get that the idea of a custom scent can capture the imagination. But do I really need a perfume system that uses an app and AI and can “change the scent with every spray”? Pricing for Ninu’s cartridge-based system is not yet available.

CES 2021: Consumer Electronics Makers Pivot to Everything Covid

Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/view-from-the-valley/consumer-electronics/gadgets/ces-2021-consumer-electronics-makers-pivot-to-everything-covid

It’s been ten months since the coronavirus pandemic changed everything—plenty of time to design, prototype, and manufacture products designed for consumers looking to navigate the new reality more safely, comfortably, and efficiently. And more than enough time to rebrand some existing products as exactly what a consumer needs to weather these challenging times.

So I wandered the virtual show floor of CES 2021 and the peripheral press-targeted events to find these Covid gadgets. Here are my top picks, in no particular order.

Tech-packed face masks

I’m sure there were many more variants of the high-tech face mask than I managed to find in the virtual halls. Those I spotted included:

Binatone’s $50 MaskFone, an N95 mask with built in wireless earbuds, uses a microphone under the mask to eliminate mask-muffle from phone conversations.

Razer’s Project Hazel mask comes with a charging box that uses UV light to disinfect while the mask charges. The N95 mask includes clear panels and a light, so whoever you’re talking to can see your mouth move day or night (helpful for understanding speech for all, not just for those with hearing loss). There’s also an internal microphone and external amplifier for voice projection across social distances, plus built-in air conditioning. This is still a concept product with no pricing available.

AirPop’s $150 Active+ mask monitors air quality and breathing, tracking breaths during different activities and flagging the user when the filter needs replacing. A Bluetooth radio connects the mask to smartphones for data analysis.

Personal air purifiers

I’m not convinced that the average consumer will be as likely to toss a personal air purifier in their tote or backpack as they are to carry a canister of disinfecting wipes, even though these two products are about the same size. But plenty of gadget makers think there is a market for the personal air purifier. They don’t agree, however, on their choice of air purification technology. LuftQi, for example, uses UVA LEDs in its $150 Luft Duo; NS Nanotech picked far-UVC light for its $200 air purifier. And Dadam Micro’s $130 Puripot M1 uses titanium dioxide and visible wavelength light.

Lexon’s Oblio desktop phone sanitizer

Lexon combined a wireless charger and a UV-C sanitizer into an $80 desktop appliance that looks like a pencil holder; there’s no reason why this gadget couldn’t disinfect pencils as well.

Panasonic’s car entertainment systems

The moment that Covid tech jumped the shark might have been when Panasonic Automotive President Scott Kirchner, in introducing the company’s automotive entertainment systems, pitched the technologies as relevant because “our vehicles have become second homes” from which we celebrate birthdays and attend performances and political rallies. Panasonic’s latest in-car technology, he said, can drive 11 displays, and distribute audio seat by seat or throughout the cabin.

NanoScent’s Covid diagnostics technology

Talk about a pivot! Startup NanoScent has built an odor sensor that, coupled with machine learning, it has been developing for use in detecting gas leaks, cow pregnancies, and nutritional status. Now the company aims to use its technology to detect the coronavirus. The company says that the proliferation of virus cells among the microorganisms that inhabit the noses of Covid patients produces what it believes to be a distinct smell. It has run two clinical trials, one in Israel and one in the United Arab Emirates, with 3420 total patients.

Yale’s smart delivery box

Yale, the lock company, addressed the problem of no-contact doorstep delivery security with its Smart Delivery Box. Users place the chest wherever deliveries generally take place, weighting or tethering it to prevent theft. It sits there unlocked until it is opened, then, after a delivery person places items inside and closes it, it locks until the owner unlocks it with a smartphone. The $230 to $330 lockbox (depending on style and features) can also be managed via WiFi.


CES 2021: A Countertop Chocolate Factory Could Be This Year’s Best Kitchen Gadget

Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/view-from-the-valley/consumer-electronics/gadgets/ces-2021-a-countertop-chocolate-factory-could-be-this-years-best-kitchen-gadget

Sheltering-in-place orders sent many of us into the kitchen, baking and pickling and tackling ambitious cooking projects that maybe hadn’t captured as much widespread public interest pre-pandemic. So I wasn’t surprised that design engineers at consumer products companies spent 2020 thinking about high-tech kitchen gadgets.

The wave of kitchen tech introduced at CES 2021 includes a countertop chocolate factory that, if priced right, will likely be a top holiday gift in 2021. It also includes yet another attempt to apply Keurig’s pod concept as well as a spoon I’m not exactly sure I want in my mouth.

Unfortunately, with an all-digital CES this year, I was unable to get my hands on any of these gadgets—or to taste their creations. Instead, I viewed live-streamed demos or recorded pitches. And since some of the best food-tech ideas don’t necessarily produce the best tasting foods, the jury is very much out. But here are my picks for at least the most mouth-watering kitchen gadgets from CES 2021.

CocoTerra’s countertop automated chocolate factory

The process of making chocolate from scratch has always seemed magical, even without Willy Wonka involved, and I’ve never missed a chance to visit a chocolate factory. I’ve seen enough to know that getting from cocoa bean to chocolate bar has many steps involving friction and heating and cooling. And so while chocolate making might seem like the perfect pandemic project, it’s a little too complicated to try at home. Which is why CocoTerra’s chocolate-making appliance jumped out at me. Founder Nate Saal, who previously worked in software engineering at various tech companies, explained in a live-streamed demo that the company’s recipes suggest different combinations of cocoa nibs, sugar, cocoa butter, and milk powder. It takes about two hours for the gadget to grind, heat, cool, spin, stir, and mold the chocolate. And, as a big selling point for me, the countertop appliance is compact, approximately 10 inches in diameter and 13 inches tall. Saal pointed out that he designed the gadget to use user-measured ingredients, not pods, to open up the possibilities of using cocoa beans from different sources. Pricing is not yet available.

ColdSnap’s rapid ice-cream maker

ColdSnap’s 90-second countertop ice-cream freezer didn’t excite me as much as CocoTerra’s chocolate factory, as it’s a pod-based system—and there have been many bad pod ideas since Keurig introduced the world to coffee pods. Not only am I thinking that the single-serving pods, at $2.50 or more each, are pricier than an equivalent amount of premium ice cream, but also I’m skeptical that the product will taste as good. Rather, ColdSnap seems like a gadget that would quickly go from countertop to garage. The device, which the company says can make smoothies and frozen cocktails as well as ice cream, did win a CES Innovation Award, however. ColdSnap is expected to retail at $500 to $1000.

PantryChic’s automated ingredient dispensing system

PantryChic’s creators jumped onto two trends from the early days of stay-at-home orders—pantry reorganization (think matching canisters) and baking. They came up with a system that accurately measures flour and other dry ingredients by weight, automatically converting cups to the gram equivalent when necessary. Users store ingredients in PantryChic’s clear, smart canisters, identifying the type of ingredient when they fill each canister. Then the gadget will recognize the ingredient when the canister locks onto the dispenser. For someone who bakes constantly and prefers the precision of weighed ingredients, perhaps this gadget makes sense. But the company’s visuals suggest rice, cereal, and beans be dispensed by the device as well as flour and sugar—and that’s really not going to happen in a normal kitchen. The starter system—the countertop dispenser and two small canisters—is $350; additional canisters are $40 to $45.

TasteBooster’s SpoonTek flavor-enhancing spoon

TasteBooster’s founders Ken and Cameron Davidov have been developing products that use a mild electric current produced by the human body for several years. Their latest, SpoonTek, aims to use that current to “excite the taste buds.” The user places a finger on an electrode on the spoon handle, scoops up food, and completes the circuit by touching the tongue to the bowl of the spoon. The founders say the system will allow the health-conscious to use less salt and will also eliminate bitter aftertastes, enabling users to enjoy foods they may previously have found unpleasant. The spoons are priced at $29 each, less in quantity, on Indiegogo.

HyperLychee’s Skadu electric pot scrubber

Finally, we get to cleanup, and a power pot scrubber. Think cordless drill with scrubbing pads and other attachments. There just may be a market for it among people who do the kitchen cleanup in their households and are fans of power tools. The Skadu is $70 on Indiegogo.

CES 2021: What Is Mini-LED TV?

Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/view-from-the-valley/consumer-electronics/audiovideo/ces-2021-what-is-miniled-tv

CES 2021, this year’s fully virtual consumer electronics show, kicked off on Monday with Media Day and a flurry of announcements from the largest consumer electronics manufacturers. For these companies, the center of the consumer electronics world is the television—the bigger the screen the better. People generally don’t replace their TVs as often as they do their mobile devices, so TV manufacturers are constantly looking for a new display technology or feature that will make that TV on the store shelf seem a lot better than the TV in the family room. Some of these efforts have been more successful than others—3D displays, for example, never caught on.

This year, the TV manufacturers’ tech news coalesced around mini-LED technology. LG pitched its “quantum nanocell mini-LED,” a technology it somehow turned into the acronym QNED.

TCL touted its “OD Zero” mini-LED.

And Hisense, Samsung, and others are also unveiling mini-LED televisions at the show.

To understand what mini-LED is—and isn’t—and why it improves the TV picture, it helps to know a bit about what came before it.

First, to be clear, mini-LED isn’t a new display technology so much as a new backlight. The picture itself is generated by a liquid crystal display (LCD); how that evolved is an entirely different story.

Originally, LCDs were lit by fluorescent tubes running behind the screens. Then, as LEDs became available at mass-market prices, they replaced the fluorescent tubes, and the LCD TV came to be called the LED TV (a misrepresentation that still drives me a little crazy). LEDs have several advantages over fluorescent tubes, including energy efficiency, size, and the ability to be turned off and on quickly.

The first LED TVs used just dozens of LEDs, either arrayed on the edges or behind the LCD panel, but the arrays quickly grew in complexity, and companies introduced what is called “local dimming.” With this technology (in which groups of LEDs are turned down or even off in the darkest areas of the TV picture), contrast, a big contributor to picture quality, increases significantly.
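
Conceptually, a local-dimming controller divides each frame among the backlight zones and dims the LEDs sitting behind dark regions. The sketch below reduces that to its simplest form—one drive level per zone, set to the zone’s peak luminance. Real TVs layer temporal filtering and halo mitigation on top of this idea, and the zone counts here are arbitrary, not any product’s specification.

```python
import numpy as np

def backlight_zones(luma, zones=(8, 12)):
    """Crude local-dimming pass: drive each backlight zone at the peak
    luminance of the pixels it sits behind.
    luma: 2-D array in [0, 1]; zones: (rows, cols) of the LED grid."""
    h, w = luma.shape
    zh, zw = h // zones[0], w // zones[1]
    grid = luma[: zones[0] * zh, : zones[1] * zw]
    blocks = grid.reshape(zones[0], zh, zones[1], zw)
    return blocks.max(axis=(1, 3))   # one drive level per zone

# A mostly black frame with one bright patch lights only a few zones,
# which is where the contrast gain over a uniform backlight comes from.
frame = np.zeros((480, 600))
frame[100:140, 300:360] = 1.0
print((backlight_zones(frame) > 0).sum(), "of", 8 * 12, "zones lit")
```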

Recalls Aaron Drew, director of product development for TCL North America: “We were an early proponent of local dimming in the U.S. market. We had the first TV with what we called contrast control zones in 2016. That array had nearly 100 zones with a total number of LEDs in the hundreds.

“There is no industry definition of mini-LED. For us, I would say we introduced our first backlight using mini-LEDs, that had just over 25,000 LEDs and nearly 1000 contrast control zones.”

Drew says that he’s happy to see other brands join TCL with mini-LED product announcements, but points out that TCL’s new display technology is interesting not just because it uses mini-LEDs, but because the company has figured out a way to eliminate the need to maintain space between the LEDs and the LCD panel; in traditional designs, he says, a little space is required to allow lenses to distribute the light evenly.

“We have a way to precisely control the distribution of the light without a globe shape lens and optical depth,” Drew says.

And that reduced optical depth (OD) feature gives the TCL technology the “OD Zero” tag.

This latest generation of TCL LED TVs contains tens of thousands of LEDs and thousands of contrast control zones, the company indicated in its announcement.

Over at LG, the Q of its QNED acronym refers to the quantum dot color film that most LED TVs use today to convert some of the blue LED light into the green and red wavelengths used in an RGB picture. The N, for NanoCell, also refers to that quantum dot layer. The company apparently dropped the L from LED to avoid confusion with its OLED TVs.

LG says these QNED TVs will have almost 30,000 LEDs and 2500 local dimming zones.

None of the mini-LED TV announcements have included pricing to date.

Is mini-LED technology different enough to send the average consumer running to the store to replace a TV they currently own? Without actually seeing these new displays in person—the huge downside of a virtual trade show—it’s impossible to tell. My guess, however, is no. But it is different enough to make a mini-LED TV display look better on a store shelf than a non-mini-LED TV parked next to it, so it’s not surprising that everybody is jumping into this pool.

Yet to come to the consumer market, and likely to make a much bigger difference in picture quality, is the so-called micro-LED. These use LED components that are small enough to act as pixels themselves, not as backlights for an LCD. The upshot: They lose no brightness to filters and can be turned off individually for true blacks—and they actually deserve to be called LED TVs. While some companies have announced micro-LED displays, these are expensive and gigantic—in the over-100-inch screen size category—and aimed at commercial markets only. Samsung did announce a 110-inch micro-LED model at CES 2021 that will be available in March, but it’s hard to see where such an expansive TV display would fit in most homes. Micro-LEDs will have to get even smaller (again, there is no official measurement of “micro”) before the prices and the screen sizes make sense for consumers.

And now about those rollable displays. TCL in its online press conference also demonstrated flexible OLED displays; one was in the form of a phone that rolls out to extend the display (LG also teased a rollable phone). TCL’s other rollable display came in the form of a scroll about the size of a folded compact umbrella. It unrolls to a 17-inch display. Had there been an in-person CES audience, these would definitely have sparked gasps and rustles in the crowd, but they are likely a long way from appearing on store shelves.


How to Find the Ideal Replacement When a Research Team Member Leaves

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/consumer-electronics/audiovideo/how-to-find-the-ideal-replacement-when-an-academic-team-member-leaves

Two brilliant minds are always better than one. Same goes for three, four, and so on. Of course the greater the number of team members, the greater the chance one or more will end up departing from the team mid-project—leaving the rest of the group casting about for the ideal replacement. 

Research, thankfully, can answer how best to replace a research team member. The new model not only helps identify a replacement who has the right skill set, but it also accounts for social ties within the group. The researchers say their model “outperforms existing methods when applied to computer science academic teams.”

The model is described in a study published December 8 in IEEE Intelligent Systems.

Feng Xia is an associate professor in the Engineering, IT and Physical Sciences department at Federation University Australia who co-authored the study. He notes that each member within an academic collaboration plays an important contributing role and has a certain degree of irreplaceability within the team. “Meanwhile, turnover rate has also increased, leading to the fact that collaborative teams are [often] facing the problem of the member’s absence. Therefore, we decided to develop this approach to minimize the loss,” he says.

Xia’s previous research has suggested that members with stable collaboration relationships can improve team performance, yielding higher quality output. Therefore, Xia and his colleagues incorporated ways of accounting for familiarity among team members into their model, including relationships between just two members (pair-wise familiarity) and multiple members (higher-order familiarity).

“The main idea of our technique is to find the best alternate for the absent member,” explains Xia. “The recommended member is supposed to be the best choice in context of familiarity, skill, and collaboration relationship.”
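
The paper’s actual formulation isn’t reproduced in the article, but a toy version of the idea—rank candidates by skill overlap with the departed member plus average pair-wise familiarity with the remaining team—looks something like this. The weights, the familiarity cap, and the data structures are illustrative assumptions, not the published model.

```python
from collections import Counter

def replacement_score(candidate, departed, team, coauth, w_skill=0.5, w_fam=0.5):
    """Toy stand-in for the model: combine skill overlap with the departed
    member and familiarity with the remaining team. `coauth` counts prior
    collaborations between pairs of scholars."""
    skills = len(candidate["skills"] & departed["skills"]) / max(len(departed["skills"]), 1)
    fam = sum(coauth[frozenset((candidate["name"], m["name"]))] for m in team)
    fam /= max(len(team), 1)                # average pair-wise familiarity
    return w_skill * skills + w_fam * min(fam / 5.0, 1.0)   # cap the familiarity term

departed = {"name": "eva", "skills": {"nlp", "graphs", "python"}}
team = [{"name": "ana"}, {"name": "bo"}]
coauth = Counter({frozenset(("carl", "ana")): 4, frozenset(("carl", "bo")): 2})
carl = {"name": "carl", "skills": {"nlp", "graphs"}}
print(round(replacement_score(carl, departed, team, coauth), 3))  # 0.633
```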

The researchers used two large datasets to develop and test their model. They explored 42,999 collaborative relationships between 15,681 scholars using the CiteSeerX dataset, which captures data within the field of computer science. Another 436,905 collaborative relationships between 252,439 scholars were explored through the MAG (Microsoft Academic Graph) dataset, which contains scientific records and related information covering multiple disciplines.

Testing of the model reveals that it is effective at finding a good replacement member for teams. “Teams that choose the recommended candidates achieved better team performance and less communication costs,” says Xia. These results mean that the replacement members had good communication with the others, and illustrate the importance of familiarity among collaborators.

The researchers aim to make their model freely available on platforms such as GitHub. Now they are working towards models for team recognition, team formation, and team optimization. “We are also building a complete team data set and a self-contained team optimization online system to help form research teams,” says Xia.

This Is the Year for Apple’s AR Glasses—Maybe

Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/consumer-electronics/gadgets/this-is-the-year-for-apples-ar-glassesmaybe

Apple didn’t invent the portable music player, although I challenge you to name one of the approximately 50 digital-music gadgets that preceded the iPod. Apple didn’t invent the smartphone either—it just produced the first one that made people line up overnight to buy it.

And Apple isn’t first out of the gate with augmented-reality (AR) glasses, which use built-in sensors, processors, and displays to overlay information on the world as you look at it. Google introduced its Glass in 2013, but it generated more controversy and criticism than revenues. More recently, Magic Leap promised floating elephants and delivered file sharing. And Epson has been quietly selling its Moverio AR glasses for niche applications like closed captioning for theatergoers and video monitoring for drone pilots, while steering clear of the consumer market. The point is, although they were pioneering, none of these efforts managed to put augmented reality into comfortable, useful, affordable glasses that appealed to an ordinary person.

And now comes Apple. For years, Apple has been filing patents for AR and virtual-reality (VR) technology, acquiring related startups, and hiring AR experts from the Jet Propulsion Laboratory, Magic Leap, Oculus, and others. The company has been tilling this soil for quite a while, and speculation has for years been intense about when all this cultivation would bear fruit. Though Apple has carefully shrouded its AR efforts since their origins around 2015, a few signs, such as a declaration from a legendary Apple leaker, suggest that an unveiling could come as soon as March of this year.

It’s a giant project for Apple. Some analysts suggest it could give the company a jump on a market that could swell from US $7.6 billion to $29.5 billion over the next five years. Published reports indicate that Apple has around 1,000 people working on the effort. And now, after working on various designs for years, those engineers have likely made dozens and dozens of prototypes, according to Benedict Evans, an analyst who also produces an influential newsletter on technology. Before long, we’ll find out whether Apple can do for AR glasses what it did for portable music players, smartphones, and smartwatches.

“It’s the threshold moment that all of the AR community have been waiting for,” says David Rose, a researcher in the MIT Media Lab and former CEO of ­Ambient Devices. “AR glasses hold so much promise for learning, and ­navigating, and simply getting someone to see through your eyes. The uses are mind-blowing…. You could see a city through the eyes of an architect or an urban planner; find out about the history of a place; how something was made; or how the landscape you are seeing could be made more sustainable in the future.”

Rumors of a 2021 launch flared up last May, when Jon Prosser, who hosts the YouTube Channel Front Page Tech and has made a career out of reporting leaks from Apple and others, said that an announcement of what he expected to be called Apple Glass would likely come at a March 2021 event. Prosser predicted displays for both eyes, a gesture-control system, and a $500 price point. Other pundits have chimed in with different release dates and specifications. But 2021 remains the popular favorite, at least for an unveiling.

What technology will be packed inside Apple’s first generation of AR glasses? It depends on the experience Apple has chosen to provide, and for this, there are two main possibilities. One is simply displaying information about what’s in front of the wearer via text or icons that appear in a corner of the visual field and effectively appear attached to the glasses. In other words, the text doesn’t change as you swivel your head. The alternative is placing data or graphics so that they appear to be attached to or overlaid upon objects or people in the environment. With this setup, if you swivel your head, the data moves out of your vision as the objects do and new data appears that’s relevant to the new objects swerving into your field of view. This latter scheme is harder to pull off but more in line with what people expect when they think about AR.

Evans is betting on that second approach. “If they were just going to do a head-up display, they could have done it already for $100,” he points out.

Evans isn’t making a guess as to whether Apple will launch AR glasses in 2021 or later, but when they do, he says, it won’t be as a prototype, or an experiment aimed at a niche market, like Magic Leap or HoloLens. “Apple sells things that they think have a reason for a normal person to buy. It will be a consumer product and have a mass-market price. There will be stuff to develop further, but it won’t be $2,000 and weigh 3 kilos.”

Evans expects the first version will include eye tracking, so the glasses can tell what part of the broader field of view is attracting the user’s attention, along with inertial sensors to monitor head motion. Head gestures may well be part of the interface, and it will likely have a lidar sensor on board, enabling the glasses to create a depth map of the wearer’s surroundings. In fact, Apple’s top-of-the-line tablet and phone, the iPad Pro and iPhone 12 Pro, incorporate lidar for tracking motion and calculating distances to objects in a scene. “It’s pretty obvious,” Evans says, “that lidar in the iPad is a building block” for the glasses.

One big question about the glasses’ display, Evans says, is whether it will take a new approach to presenting an image that can be visible in daylight. The most common approach to date has been using a microLED to project the image onto the glass; in daylight conditions this approach requires that the added-in graphics be limited to the brightest of colors. Recent rumors suggest that Apple will use Sony’s OLED microdisplay as a source for the projected image. But although the luminance of OLED displays is impressive, MIT’s Rose says, rendering a full spectrum of color in daylight will still be challenging.

The glasses will contain a visible-light camera—or two—to collect images of people and places for analysis. The main function of that camera won’t be to record video, because the backlash against Google Glass made that function pretty much a nonstarter. Rather, the purpose of the camera will be to simply enable the software to know what the wearer is seeing in order to provide the contextual information.

“Apple will try hard not to use words like ‘video camera,’” says Rose. “Rather, they will call it, say, a ‘full-spectrum sensor,’” he adds. “Lifelogging as a use case has become pretty abhorrent to our society.” If an option to store video clips does exist, Apple will likely design the glasses to prominently warn observers exactly when video or still images are being recorded, Rose believes.

The data processing, at least for this first generation of glasses, is widely expected to take place on the user’s phone. Otherwise, says Rose, “the battery requirements will be too high.” And off-board processing means the designers don’t have to worry about the problem of heat dissipation just yet.

What will Apple call the gadget? Prosser is saying “Glass”; others say anything but, given that Google Glass became the subject of many jokes.

Whether or not Apple will ship AR glasses in 2021—and whether or not the product will be successful—comes down to one question, says analyst Evans. “Whose job at Apple is it to look at this and say ‘This is sh-t’ or ‘This is not sh-t’? In the past it was Steve Jobs. Then it was Jonathan Ive. Who now will look at version 86 or version 118 and say, ‘Yes, this is great now. This is it!’?”

This article appears in the January 2021 print issue as “Look Out for Apple’s AR Glasses.”

Noise-Canceling Headphones Without the Headphones

Post Syndicated from Payal Dhar original https://spectrum.ieee.org/tech-talk/consumer-electronics/audiovideo/active-noise-cancellation-using-ldvs

Active noise cancellation/control (ANC) headsets depend in part on passive sound attenuation from the padding and cups found in headphones and earphones. That combination makes ANC headphones effective noise-control devices, although wearing them for extended periods of time can be uncomfortable and even cause injury. A group of researchers from the University of Technology Sydney’s Centre for Audio, Acoustics and Vibration has been working on a new kind of virtual noise cancellation system that moves the ANC components away from personal audio equipment and into the headrest of a chair.

Active noise cancellation has an almost century-long history, going back to the 1930s, when the first U.S. patent was awarded for a device that could cancel out unwanted noise by inverting the polarity of the offending sounds and playing them back. In 1989 the first working prototype of ANC headphones was released for the aviation industry. In 1991, ANC was used for noise control in an enclosed space—the cabin of the hardtop model of the Nissan Bluebird in Japan.
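The core trick is simple enough to express in a few lines of code. Here is a minimal sketch, in Python with NumPy, of ideal cancellation by polarity inversion; it is my illustration rather than any product’s implementation, and it ignores the noise estimation and delay compensation that real systems must perform.

import numpy as np

# A minimal sketch of the core ANC idea: polarity inversion.
# Real systems must estimate the noise reaching the ear and compensate
# for acoustic and processing delays; this toy example ignores both.

fs = 48_000                                 # sample rate, in hertz
t = np.arange(fs) / fs                      # one second of samples
noise = 0.5 * np.sin(2 * np.pi * 200 * t)   # an unwanted 200-Hz drone

anti_noise = -noise                         # invert the polarity
residual = noise + anti_noise               # what the ear would hear

print(f"peak residual: {np.max(np.abs(residual)):.2e}")  # effectively zero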

“However, this ANC headrest system has seen very little progress over the years compared to ANC headphones,” says Tong Xiao, one of the authors of the current study.

There are multiple reasons for this. Existing ANC headrests use microphones set in strategic positions around a user’s head to sample the sounds reaching them. These setups are best at dealing with low-frequency noises, up to about 1 kHz. But the passive noise control provided by the cushioning and cupping of headphones, which reduces high-frequency noises more effectively, is absent. Those high frequencies include human speech, which is around 4 to 6 kHz.

Thus the Sydney team needed to demonstrate an ANC system that worked at both high and low frequencies. So they used a remote acoustic sensing system built around a laser Doppler vibrometer (LDV), which measures vibrations over a wide frequency range without touching the vibrating surface. In their tests, they placed a tiny, jewelry-sized, retro-reflective membrane in the ear as a pick-up for the LDV.

The test was a success, reports Xiao, with noise cancellation effective up to 6 kHz—and attenuation of 10 to 20 dB.
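For scale, attenuation in decibels relates to sound-pressure amplitude by the standard relation $\text{attenuation (dB)} = 20\log_{10}(p_{\text{before}}/p_{\text{after}})$, so 20 dB of attenuation cuts the pressure amplitude at the ear by a factor of 10 (and the acoustic power by a factor of 100), while 10 dB corresponds to roughly a factor of 3.2 in amplitude.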

The researchers tested their system on three representative types of environmental noises—aircraft interior noise, airplane flyby noise, and speech. Xiao says their system performs closer to a pair of ANC headphones than to any existing ANC headrest, with a wide frequency response but without the need for any bulky headsets or other devices. “We call it ‘virtual ANC headphones,’” he says.

The performance of these virtual headphones was demonstrated on a head and torso simulator (HATS) device to ensure consistent measurements. However, Xiao says, the team has performed initial tests with human subjects since the paper was published.

“The configuration…[is] simple and yet effective,” he says. “[One] limitation…so far is the cost, since the system uses LDVs, which are delicate scientific instruments and can be pricey. However, we believe science advancements in the near future can make the system more cost effective.”

Other aspects of the system still need to be ironed out: a more realistic head-tracking system; improved materials for the membrane placed in the user’s ears; lasers that are safe and invisible to humans; and, of course, even higher levels of noise reduction.

The IoT’s E-Waste Problem Isn’t Inevitable

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/consumer-electronics/gadgets/the-iots-ewaste-problem-isnt-inevitable

In my office closet, I have a box full of perfectly good smart-home gadgets that are broken only because the companies that built them stopped updating their software. I can’t bear to toss them in a landfill, but I don’t really know how to recycle them. I’m not alone: Electronic waste, or e-waste, has become much more common.

The adoption of Project Connected Home Over IP (CHIP) standards by Amazon, Apple, Google, and the Zigbee Alliance will make smart homes more accessible to more people. But the smart devices these people bring into their homes will also eventually end up on the junk heap.

Perhaps surprisingly, we still don’t have a clear answer as to what we should do when a product’s software doesn’t outlive its hardware, or when its electronics don’t outlast the housing. Companies are now building devices that used to last decades—thermostats, fridges, even lights—with five- to seven-year life-spans.

When e-waste became a hot topic in the computing world, computer makers such as Dell and HP worked with recycling centers to better recycle their electronics. You might argue that those programs didn’t do enough, because e-waste is still a growing problem. In 2019 alone, the world generated 53.6 million metric tons of e-waste, according to a report from the Global E-waste Monitor. And the amount is rising: According to the same report, each year we produce 2.5 million metric tons more e-waste than the year before.

This is an obviously unsustainable amount of waste. While recycling programs might not be enough to solve the problem, I’d still like to see the makers of connected devices partner up with recycling centers to take back devices when they are at the end of their lives. The solution could be as simple as, say, Amazon adding a screen to the app for a smart device that offers the address of a local recycling partner whenever someone chooses to decommission that device.

The idea is not unprecedented for smart devices. The manufacturer of the Tile tracking device has an agreement with a startup called Emplacement that offers recycling information when the battery on one of Tile’s trackers dies and the device becomes useless. Another example is GE Appliances, which hauls away old appliances when people buy new ones, even as the software added to those appliances potentially shortens their years of usefulness.

Companies can also make the recycling process easier by designing products differently. For example, they should rely less on glues that make it hard to salvage recyclable metals from within electronic components and use smaller circuit boards with minimal components. Companies should also design their connected products so that they physically work in some fashion even if the software and app are defunct. In other words, no one should design a connected product that works only with an app, because doing so is all but forcing its obsolescence in just a few years. If the device still works, however, people might be able to pass it along for reuse even if some of the fancier features aren’t operational.

Connected devices won’t be in every home in the future, but they will become more common, and more people will come to rely on the features they offer. Which means we’re set for an explosion of new electronic waste in the next five to ten years as these devices reach the end of their life-spans. How we handle that waste—and how much of it we have to deal with—depends on the decisions companies make now.

This article appears in the January 2021 print issue as “E-Waste Isn’t Inevitable.”

Thin Holographic Video Display For Mobile Phones

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/consumer-electronics/portable-devices/holovideo-phones

A thin holographic display could one day enable 4K 3-D videos on mobile devices as well as household and office electronics, Samsung researchers and their colleagues say.

Conventional holograms are photographs that, when illuminated, essentially act like 2D windows looking onto 3D scenes. The pixels of each hologram scatter light waves falling onto them, making the light waves interact with each other in ways that generate an image with the illusion of depth.

Holography creates static holograms by using laser beams to encode an image onto a recording medium such as a film or plate. By sending the coherent light from lasers through a device known as a spatial light modulator—which can actively manipulate features of light waves such as their amplitude or phase—scientists can instead produce holographic videos.
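To make the spatial-light-modulator step concrete, here is a minimal sketch in Python with NumPy; it is my own illustration, not the researchers’ method. It computes the wrapped phase pattern an SLM would display to reconstruct a single point of light floating a few centimeters from the panel, the elementary building block of computer-generated holography.

import numpy as np

# Minimal sketch: the phase pattern for a single point source, the
# building block of computer-generated holography. Illustrative only;
# a real holographic video pipeline computes patterns like this for
# millions of scene points, every frame.

wavelength = 532e-9            # green laser light, in meters
k = 2 * np.pi / wavelength     # wavenumber
pitch = 8e-6                   # assumed SLM pixel pitch, in meters
n = 1024                       # assumed SLM resolution (n x n pixels)

coords = (np.arange(n) - n / 2) * pitch
x, y = np.meshgrid(coords, coords)
z = 0.05                       # the point sits 5 cm from the panel

# Phase of a spherical wave arriving from the point, wrapped to [0, 2*pi).
r = np.sqrt(x**2 + y**2 + z**2)
phase = np.mod(k * r, 2 * np.pi)

# 'phase' is what the spatial light modulator would display; illuminating
# it with coherent laser light reconstructs the point at depth z.
print(phase.shape, float(phase.min()), float(phase.max()))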

Hum to Google to Identify Songs

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/consumer-electronics/portable-devices/google-hum

Ever have a song you can’t remember the name of, or any of its words? Google now has a feature that lets you simply hum the melody, and it will try to name that tune.

Identifying songs through singing, humming or whistling instead of lyrics is not a new idea—the music app SoundHound has offered hum-to-search for at least a decade. Google’s new feature should help the search engine with the many requests it receives to identify music. Aparna Chennapragada, a Google vice president who introduced the new feature during a streamed event Oct. 15, said people ask Google “what song is playing” nearly 100 million times each month.

Listen Up With Speakers in Lightbulbs, Shower Heads

Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/view-from-the-valley/consumer-electronics/gadgets/speakers-lightbulbs-shower-heads

The last time I spent much time thinking about LED bulbs was some seven years ago, when a kitchen remodel turned out more operating room than cozy family gathering place. What went wrong? The architect determined that the contractor had purchased LEDs of the wrong color temperature, a problem easily fixed.

Since then, I occasionally noticed some advances in LED light bulbs at CES and gadget shows—like dimmable LEDs (now common) and smart bulbs that connect to home Wi-Fi networks for remote control. But nothing that made LEDs shine.

So the last thing I expected when checking out the more than 40 new products at Pepcom’s holiday launch event was to get excited about a couple of LED bulbs. One of these gadgets acts as a Bluetooth speaker that automatically networks with nearby speaker-bulbs to create a surround-sound effect; the other has an adjustable color temperature and a unique user interface. My third gadget pick, a water-powered shower speaker, doesn’t light up, but is about as unobtrusive as a household light bulb.

Here are the details. (Note that this was a virtual event, so the demos and discussions, while held live, were remote; I haven’t actually held any of these gadgets in my hands, much less tested them in the real world.)

1. GE’s LED+ Speaker bulbs

The Bluetooth speaker in one of these LED+ bulbs can work alone, or as part of a surround-sound network of as many as 10 bulbs. Company representatives indicated that the gadgets come in a variety of standard bulb sizes to fit lamps, floodlights, or recessed lighting, starting at about US $30. Each bulb comes with a remote control, though in a multi-speaker network only one bulb needs to be paired with the remote; it then acts as a parent and controls the other bulbs in its vicinity.

2. Feit Electric’s Selectable Color bulbs

These LED bulbs vary color temperature from about 2700K to 6500K, depending on the particular version. As I learned with my kitchen remodel mistake, color temperature matters a lot; it can make the difference between a space feeling like an office or operating room instead of a cozy den. I was particularly impressed by the simple interface that doesn’t require an app or a remote—flicking the light switch on and off cycles through the color options; circuitry in the bulb recognizes the short sequence of power interruptions. And Feit’s representatives made the pitch that in today’s stay-at-home Covid times, the ability to change the feel of a room matters even more than usual, not a bad selling point. Prices, again, vary by type of bulb, but generally start at about US $10, a premium of a couple of dollars over a standard LED bulb.
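Feit doesn’t disclose how the bulb’s circuitry does this, but products like it commonly use a small nonvolatile counter. The sketch below, in Python-style pseudocode with hypothetical names, is purely my guess at one plausible firmware approach: every power-up advances the counter, and only a bulb that stays lit for a few seconds writes its current setting back.

import time

COLOR_TEMPS_K = [2700, 3000, 4000, 5000, 6500]  # hypothetical presets
SETTLE_SECONDS = 5                              # "stayed on" threshold

def set_color_temperature(kelvin):
    print(f"driving LEDs at {kelvin} K")        # stand-in for real LED control

def on_power_up(nvram):                         # nvram: any dict-like persistent store
    flicks = nvram.get("flicks", 0)
    index = flicks % len(COLOR_TEMPS_K)
    set_color_temperature(COLOR_TEMPS_K[index])
    nvram["flicks"] = flicks + 1                # assume another flick may follow
    time.sleep(SETTLE_SECONDS)                  # if power survives this long...
    nvram["flicks"] = index                     # ...lock in the current choice

nvram = {}                                      # simulate the bulb's memory
on_power_up(nvram)                              # a quick off/on before the timer
                                                # elapses advances to the next preset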

3. Ampere’s Shower Power

Ampere’s Shower Power is another clever placement of a Bluetooth speaker in an ordinary household object. The cool factor isn’t that the speaker is waterproof; it’s that it screws into the shower head and runs on hydropower from the shower flow. I was already slightly familiar with the potential of shower power—I have an outside shower that’s lit by LEDs built into the shower head and powered by the water flow. Unlike that gadget, however, Ampere’s device includes a battery that can store power for listening while the shower is off. Company representatives indicated that the gadget generates roughly 120 milliampere-hours of charge per hour under standard water flow, slightly less or more depending on water pressure, and will retail for around $70. (It is currently taking preorders via Kickstarter.)


Programmable Filament Gives Even Simple 3D Printers Multi-Material Capabilities

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/tech-talk/consumer-electronics/gadgets/programmable-filament-gives-even-simple-3d-printers-multimaterial-capabilities

On the additive manufacturing spectrum, the majority of 3D printers are relatively simple, providing hobbyists with a way of conjuring arbitrary 3D objects out of long spools of polymer filament. If you want to make objects out of more than just that kind of filament, things start to get much more complicated, because you need a way of combining multiple different materials on the print bed. There are a bunch of ways of doing this, but none of them are cheap, so most people without access to a corporate or research budget are stuck 3D printing with one kind of filament at a time.

At the ACM UIST Conference last week, researchers presented a paper that offers a way of giving even the simplest 3D printer the ability to print in as many materials as you need (or have the patience for) through a sort of printception—by first printing a filament out of different materials and then using that filament to print the multi-material object that you want.

Researchers Pack 10,000 Metasurface Pixels Per Inch in New OLED Display

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/consumer-electronics/audiovideo/metasurface-oled-display

A new OLED display from Samsung and Stanford can achieve more than 10,000 pixels per inch, which might lead to advanced virtual reality and augmented reality displays, researchers say.

An organic light-emitting diode (OLED) display possesses a film of organic compounds that emits light in response to an electric current. A commercial large-scale OLED television might have a pixel density of about 100 to 200 pixels per inch (PPI), whereas a mobile phone’s OLED display might achieve 400 to 500 PPI.

Two different kinds of OLED displays have reached commercial success, in mobile devices and large-scale TVs. Mobile devices mostly use red, green and blue OLEDs, which companies manufacture by depositing dots of organic film through metal sheets with many tiny holes punched in them. However, the thickness of these metal sheets limits how small the fabricated dots can be, and sagging of the sheets limits how large these displays can get.

In contrast, large-scale TVs use white OLEDs with color filters placed over them. However, these filters absorb more than 70% of light from the OLEDs. As such, these displays are power-hungry and can suffer “burn-in” of images that linger too long. The filters also limit how much the pixels can scale down in size.

The new display uses OLED films to emit white light between two reflective layers, one of which is made of a silver film, whereas the other is a “metasurface,” or forest of microscopic pillars each spaced less than a wavelength of light apart. Square clusters of these 80-nanometer-high, 100-nanometer-wide silver pillars served as pixels each roughly 2.4 microns wide, or slightly less than 1/10,000th of an inch.
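That pixel size is where the headline figure comes from: $25{,}400~\mu\text{m per inch} \div 2.4~\mu\text{m per pixel} \approx 10{,}600$ pixels per inch, comfortably above the 10,000-PPI mark.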

Each pixel in the new display’s metasurface is divided into four subpixels of equal size. In principle, the OLED films can specify which subpixels they illuminate. The nano-pillars in each subpixel manipulate white light falling onto them, such that each subpixel can reflect a specific color of light, depending on the amount of spacing between its nano-pillars. In each pixel, the subpixel with the most densely packed nano-pillars yields red light; the one with moderately densely packed nano-pillars yields green light; and the two with the least densely packed nano-pillars yield blue light.

Emitted light reflects back and forth between the display’s reflective layers until it finally escapes through the silver film out the display’s surface. The way in which light can build up within the display gives it twice the luminescence efficiency of standard color-filtered white OLED displays, as well as higher color purity, the researchers say.

“If you think of a musical instrument, you often see an acoustic cavity that sounds come out of that helps make a nice and beautiful pure tone,” says study senior author Mark Brongersma, an optical engineer at Stanford University. “The same happens here with light — the different colors of light can resonate in these pixels.”

In the near term, one potential application for the new display is virtual reality (VR). Since VR headsets place their displays close to a user’s eyes, high resolution is key to creating the illusion of reality, Brongersma says.

As impressive as 10,000 pixels per inch might sound, “according to our simulation results, the theoretical scaling limit of pixel density is estimated to be 20,000 pixels per inch,” says study lead author Won-Jae Joo, a nanophotonic engineer at the Samsung Advanced Institute of Technology in Suwon, Korea. “The challenge is the trade-off in brightness when the pixel dimensions go below one micrometer.”

Other research groups have developed displays they say range from 10,000 to 30,000 pixels per inch, typically using micro-LED technology, such as Jade Bird Display in China and VueReal in Canada. In terms of how the new OLED display compares with those others, “our color purity is very high,” Brongersma says.

In the future, metasurfaces might also find use trapping light in applications such as solar cells and light sensors, Brongersma says.

The scientists detailed their findings online Oct. 22 in the journal Science.

From Foldable Phones to Stretchy Screens

Post Syndicated from Huanyu Zhou original https://spectrum.ieee.org/consumer-electronics/portable-devices/from-foldable-phones-to-stretchy-screens

Motorola demonstrated the very first handheld mobile phone almost a half century ago. It was the size of a brick and weighed half as much. That prototype spawned the first commercial mobile phone a decade later. It, too, was ungainly, but it allowed a person to walk around while sending and receiving phone calls, which at that point was a great novelty. Since then, mobile phones have acquired many other functions. They now have the ability to handle text messages, browse the Web, play music, take and display photos and videos, locate the owner on a map, and serve countless other uses—applications well beyond what anybody could have imagined when mobile phones were first introduced.

But smartphones, nimble as they are, have struggled to overcome one seemingly fundamental drawback: Their displays are small. Sure, some phones have been made larger than normal to provide more real estate for the display. But make the phone too big and it outgrows the owner’s pocket, which is a nonstarter for many people.

The obvious solution is to have the display fold up like a wallet. For years, developing suitable technology for that has been one of our goals as researchers at Seoul National University. It’s also been a goal for smartphone manufacturers, who just in the past year or two have been able to bring this technology to market.

Soon, phones with foldable screens will no doubt proliferate. You or someone in your family will probably have one, at which point you’ll surely wonder: How in the world is it possible for the display to bend like that? We figured we’d explain what’s behind that technology to you here so that you’re ready when a phone with a large, bright, flexible display comes to a pocket near you—not to mention even more radical electronic devices that will be possible when their screens can stretch as well as bend.

Researchers have been seriously investigating how to make flexible displays for about two decades. But for years, they remained just that—research projects. In 2012, though, Bill Liu and some other Stanford engineering graduates set out to commercialize flexible displays by founding the Royole Corp. (which now has headquarters in both Fremont, Calif., and Shenzhen, China).

In late 2018, Royole introduced the FlexPai, whose flexible display allows the device to unfold into something that resembles a tablet. The company demonstrated that this foldable display could withstand 200,000 bending cycles—and quite tight bends at that, with a radius of curvature of just 3 millimeters. But the FlexPai phone was more of a prototype than a mature product. A review published in The Verge, for example, called it “charmingly awful.”

Soon afterward, Samsung and Huawei, the world’s two largest smartphone makers, began offering their own foldable models. Samsung Mobile officially announced its Galaxy Fold in February 2019. It features dual foldable displays that can be bent with a radius of curvature as small as 1 mm, allowing the phone to fold up with the display on the inside. Huawei announced its first foldable smartphone, the Mate X, later that month. The Mate X is about 11 mm thick when folded, and its display (like that of the FlexPai) is on the outside, meaning that the bending radius of the display is roughly 5 mm. And in February of this year, each company introduced a second foldable model: Samsung’s Galaxy Z Flip and Huawei’s Mate Xs/5G.

The most challenging part of engineering these phones was, of course, developing the display itself. The key was to reduce the thickness of the flexible display panel so as to minimize the bending stresses it has to endure when folded. The smartphone industry has just figured out how to do that, and panel suppliers such as Samsung Display and Beijing-based BOE Technology Group Co. are now mass-producing foldable displays.

Like those found in conventional smartphones, these are all active-matrix organic light-emitting-diode (AMOLED) displays. But instead of fabricating these AMOLEDs on a rigid glass substrate, as is normally done, these companies use a thin, flexible polymer. On top of that flexible substrate is the backplane—the layer containing the many thin-film transistors needed to control individual pixels. Those transistors incorporate a buffer layer that can prevent cracks from forming when the display is flexed.

Although flexible displays constructed along these lines are fast becoming more common for phones and other consumer products, the standards that apply to these displays, as well as language for describing their ability to bend, are still, you might say, taking shape. These displays can be at least broadly characterized according to the radius of curvature they can withstand when flexed: “Conformable” refers to displays that don’t bend all that tightly, “rollable” refers to ones with intermediate levels of flexibility, and “foldable” describes those that can accommodate a very small radius of curvature.

Because any material, be it a smartphone display or a steel plate, is in tension on the outside surface of a bend and in compression on the inside, the electronic components that make up a display must resist those stresses and the corresponding deformations they induce. And the easiest way to do that is by minimizing those shape-changing forces by bringing the outside surface of a flexed display closer to the inside surface, which is to say to make the device very thin.
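The standard thin-film bending relation makes this quantitative: for a stack of total thickness $t$ bent to a radius of curvature $R$, the strain at the outer surface is approximately $\varepsilon \approx t/(2R)$. With illustrative numbers, a 100-micrometer-thick stack folded at a 1-millimeter radius endures about 5 percent surface strain, while thinning the stack to 30 micrometers cuts that to 1.5 percent, which is why every micrometer removed buys real mechanical headroom.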

To make the display as thin as possible, designers omit the protective film and polarizer that normally go on top, along with the adhesive applied between these layers. While removing those elements is not ideal, both the protective film and antireflection polarizer are optional components for AMOLED displays, which generate light internally rather than modifying the amount of light transmitted from an LED backlight, as in liquid-crystal displays.

Another difference between flexible and conventional displays has to do with the transparent conductive electrodes that sandwich the light-emitting organic materials that make the pixels shine. Normally, a layer of indium tin oxide (ITO) fills this role. But ITO is quite brittle under tension, making it a bad choice for flexible displays. To make matters worse, ITO tends to adhere poorly to flexible polymer substrates, causing it to buckle and delaminate when compressed.

Researchers battling this problem a decade ago found a few strategies for improving the adhesion between ITO and a flexible substrate. One is to treat the substrate with oxygen plasma before depositing the ITO electrode on top. Another is to insert a thin layer of metal (such as silver) between the electrode and the substrate. It also helps to place the top of the substrate in the exact middle of the layer cake that makes up the display. This arrangement puts the fragile interface with the ITO layer on the display’s mechanical neutral plane, which experiences neither compression nor tension when flexed. Currently, the leading electronics companies that make foldable displays are using this strategy.

Even simpler, you can get rid of the ITO electrodes altogether. While that hasn’t been done yet in commercial devices, this strategy is attractive for reasons having nothing to do with the desire for flexibility. You see, indium is both toxic and expensive, so you really don’t want to use it if you don’t have to. Fortunately, over the years researchers, including the two of us, have come up with several other materials that could function as transparent electrodes for flexible displays.

A flexible film that contains silver nanowires is probably the most promising candidate. These vanishingly tiny wires form a mesh that conducts electricity while remaining largely transparent. Such a layer can be prepared at low cost by applying a solution containing silver nanowires to the substrate in a manner similar to that of printing ink on newsprint.

Most of the research on silver nanowires has been focused on finding ways to reduce the resistance of the junctions between individual wires. You can do that by adding certain other materials to the nanowire mesh, for example. Or you can physically treat the nanowire layer by heating it in an oven or by sending enough electricity through it to fuse the nanowire junctions through Joule heating. You can also fuse the junctions by hot-pressing the layer, subjecting it to a plasma, or irradiating it with a very bright flash. Which of these treatments is best will depend in large part on the nature of the substrate onto which the nanowires are applied. A polymer substrate, such as polyethylene terephthalate (PET, the same material that many clear plastic food containers are made of), is prone to problematic amounts of deformation when heated. Polyimide is less sensitive to heat, but it has a yellowish color that can compromise the transparency of an electrode created in this way.

But metal nanowires aren’t the only possible substitute for ITO when creating transparent conductive electrodes. Another one is graphene, a form of carbon in which the atoms are arranged in a two-dimensional honeycomb pattern. Graphene doesn’t quite match ITO’s superb conductivity and optical transparency, but it is better able to withstand bending than any other electrode material now being considered for flexible displays. And graphene’s somewhat lackluster electrical conductivity can be improved by combining it with a conducting polymer or by doping it with small amounts of nitric acid or gold chloride.

Yet another possibility is to use a conductive polymer. The prime example is poly(3,4-ethylenedioxythiophene) polystyrene sulfonate—a mouthful that normally goes by the shorter name PEDOT:PSS. Such polymers can be dissolved in water, which allows thin, transparent electrodes to be easily fabricated by printing or spin coating (an industrial process akin to making spin art). The right chemical additives can significantly improve the ability of a film of this conductive polymer to bend or even stretch. Careful selection of additives can also boost the amount of light that displays emit for a given amount of current, making them brighter than displays fabricated using ITO electrodes.

Up to now, the organic LED displays used in mobile phones, computer monitors, and televisions have mainly been fabricated by putting the substrate under vacuum, evaporating whatever organic material you want to add to it, and then using metal masks to control where those substances are deposited. Think of it as a high-tech stenciling operation. Those metal masks with their very fine patterns are hard to fabricate, though, and much of the applied material is wasted, contributing to the high cost of large display panels.

An interesting alternative, however, has emerged for fabricating such displays: inkjet printing. For that, the organic material you want to apply is dissolved in a solvent and then jetted onto the substrate where it is needed to form the many pixels, followed by a subsequent heating step to drive off any solvent that remains. DuPont, Merck, Nissan Chemical Corp., and Sumitomo are pursuing this tactic, even though the efficiency and reliability of the resulting devices still remain far lower than needed. But if one day these companies succeed, the cost of display fabrication should diminish considerably.

For makers of small displays for smartphones, an even higher priority than keeping costs down is reducing power consumption. Organic LEDs (OLEDs) are becoming less power hungry, but the more mature the OLED industry becomes, the more difficult it will be to further trim power consumption from its current value of around 6 milliwatts per square centimeter (about 40 mW per square inch). And the diminishing returns here are especially problematic for foldable phones, which boast displays that are much larger than normal. So it’s probably a safe bet that your foldable phone, compact as it is, will have to contain an especially hefty battery, at least in the near term.
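A back-of-the-envelope estimate shows why; the panel size here is an illustrative assumption, not a manufacturer’s figure. An unfolded 8-inch-class panel has on the order of 200 square centimeters of emitting area, so $200~\text{cm}^2 \times 6~\text{mW/cm}^2 \approx 1.2~\text{W}$ for the display alone, more than double the draw of a typical 6-inch screen at the same power density.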

What’s next for flexible displays after they allow our smartphones to fold? Given how much people seem glued to their phones now, we anticipate that in the not-so-distant future, people will start wearing displays that attach directly to the skin. They’ll likely use these devices initially to visualize various kinds of biometric data, but other applications will no doubt emerge. And perhaps such wearable displays will one day be used just to make a high-tech fashion statement.

The materials used to produce such a display should, of course, be soft enough not to be bothersome when attached to the skin. What’s more, they would have to be stretchable. Fabricating intrinsically stretchable conductors and semiconductors is an enormous challenge, though. So for several years researchers have been exploring the next-best thing: geometrically stretchable displays. These contain rigid but tiny electronic components attached to a stretchable substrate and connected by conductive pathways that can withstand the deformation that accompanies stretching.

More recently, though, there’s been progress in developing intrinsically stretchable displays—ones in which the conductors and semiconductors as well as the substrate can all be stretched. Such displays require some novel materials, to be sure, but perhaps the greatest hurdle has been figuring out how to devise stretchable materials to encapsulate these devices and protect them from the damaging effects of moisture and oxygen. Our research team has recently made good progress in that regard, successfully developing air-stable, intrinsically stretchable light-emitting devices that do not require stretchable protective coatings. These devices can be stretched to almost twice their normal length without failing.

Today, only very crude prototypes of stretchable displays have been fabricated, ones that provide just a coarse grid of luminous elements. But industry’s interest in stretchable displays is huge. This past June, South Korea’s Ministry of Trade, Industry and Energy assigned LG Display to lead a consortium of industrial and academic researchers to develop stretchable displays.

With just a little imagination, you can envision what’s coming down the road: athletes festooned with biometric displays attached to their arms or legs, smartphones we wear on the palms of our hands, displays that drape conformably over various curved surfaces. The people who are working hard now to develop such future displays will surely benefit from the many years of research that have already been done to create today’s foldable displays for smartphones. Without doubt, the era for not just bendable but also stretchable electronics will soon be here.

This article appears in the November 2020 print issue as “Displays That Bend and Stretch.”

About the Authors

Huanyu Zhou is studying for a doctorate at Seoul National University under the direction of Tae-Woo Lee, a professor of materials science and engineering there.

Spotify, Machine Learning, and the Business of Recommendation Engines

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/consumer-electronics/audiovideo/spotify-machine-learning-and-the-business-of-recommendation-engines

Steven Cherry Hi, this is Steven Cherry for Radio Spectrum.

You’re surely familiar—though you may not know it by name—with the Paradox of Choice; we’re surrounded by it: 175 salad dressing choices, 80,000 possible Starbucks beverages, 50 different mutual funds for your retirement account.

“All of this choice,” psychologists say, “starts to be not only unproductive, but counterproductive—a source of pain, regret, worry about missed opportunities, and unrealistically high expectations.”

And yet, we have more choices than ever— 32,000 hours to watch on Netflix, 10 million e-books on our Kindles, 5000 different car makes and models, not counting color and dozens of options.

It’s too much. We need help. And that help is available in the form of recommendation engines. In fact, they may be helping us a bit too much, according to my guest today.

Michael Schrage is a research fellow at the MIT Sloan School’s Initiative on the Digital Economy. He advises corporations— including Procter & Gamble, Google, Intel, and Siemens—on innovation and investment, and he’s the author of several books including 2014’s The Innovator’s Hypothesis, and the 2020 book Recommendation Engines, newly published by MIT Press. He joins us today via Skype.

Steven Cherry Michael, welcome to the podcast.

Michael Schrage Thank you so much for having me.

Steven Cherry Michael, many of us think of recommendation engines as those helpful messages at Netflix or Amazon, such as “people like you also watched” or “also bought,” but there are also Yelp and TripAdvisor, predictive text choices and spelling corrections, and, of course, Twitter posts, Google results, and the ordering of items in your Facebook feed. How ubiquitous are recommendation engines?

Michael Schrage They’re ubiquitous. They’re pervasive. They’re becoming more so, in no small part for the reason you set up this talk with: there are more choices, and more choices aren’t inherently better choices. So what are the ways that better data and better algorithms can personalize or customize or in some other way make more relevant a choice or an opportunity for you? That is the reason why I wrote the Recommendation Engines book. On one hand there’s the explosion of choice and the constraints of time; on the other, the chance, the opportunity, to get something that really resonates with you, that really pleasantly and instructively and empoweringly helps you. That’s a big deal. And I think it’s a big deal that’s going to become a bigger deal as machine learning algorithms kick in and our recommender systems, our recommendation engines, become even smarter.

Steven Cherry I want to get to some examples, but before we do, it seems like prediction and recommendation are all tied up with one another. Do we need to distinguish them?

Michael Schrage That is an excellent, excellent question. And let me tell you why I think it’s such an excellent question. When I really began looking into this area, I thought of recommendation as just that, you know, analytics, and analytics proffers relevant choices. In fact, depending upon the kind of datasets you have access to, one can and should think of recommendation engines as generators of predictions: predictions of things you will like, predictions of things you will engage with, predictions of things you will buy. Now, what you like and what you engage with and what you buy may be different things to optimize around, but they are all different predictions. And so, yes—recommendation engines are indeed predictors.
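To make that concrete, a classic recommender first predicts a score for every unseen item and then recommends the top of the ranked list. The minimal latent-factor sketch below, in Python with NumPy, is my illustration of that two-step, not anything from Schrage’s book.

import numpy as np

# Toy latent-factor predictor: score(user, item) = dot(user_vec, item_vec).
# Recommendation is just prediction plus ranking.

rng = np.random.default_rng(0)
n_users, n_items, k = 5, 8, 3
user_factors = rng.normal(size=(n_users, k))   # in a real system, learned
item_factors = rng.normal(size=(n_items, k))   # from past behavior via training

def recommend(user_id, already_seen, top_n=3):
    scores = item_factors @ user_factors[user_id]   # predicted preference per item
    scores[list(already_seen)] = -np.inf            # don't re-recommend
    return np.argsort(scores)[::-1][:top_n]         # highest predictions win

print(recommend(user_id=2, already_seen={0, 4}))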

Steven Cherry Back in the day Spectrum had some of the early articles on the Netflix Prize. Netflix now seems to collect data on just about every moment we spend on the service … what we queue up, how much bingeing we do, where exactly we stop a show for the night or forever …. It switched from a five-star rating system to a thumbs up or down, and it seems it doesn’t even need that information when there’s so much actual behavior to go by.

Michael Schrage You are exactly right on the Netflix instrumentation. It is remarkable what they have learned—and what they decline to disclose about what they’ve learned—about the difference between what are called implicit versus explicit ratings. Explicit ratings are exactly what you’ve described: five stars. But in fact, thumbs up, thumbs down turns out to be statistically quite relevant and quite useful. And let’s not forget that Netflix didn’t begin as a streaming service; it began with the good old-fashioned United States Postal Service as its delivery system. The most important thing Netflix discovered was that people’s behavior was far more revealing and far more useful in terms of, yes, predicting preference.

Steven Cherry So Netflix encourages us to binge-watch. But it seems YouTube is really the master at queuing up a next video and playing it … you start a two-minute video and find that an hour has gone by … In the book, you say Uber uses much the same technique with its drivers. How does that work?

Michael Schrage Yes! YouTube has really, literally and not just figuratively, re-engineered itself around recommendation. And TikTok was a born recommender. The circumstances for Uber were somewhat different, because what Uber discovered through its analytics was that if there was too much downtime between rides, some of its drivers would abandon the platform and become Lyft drivers. So the question became: How do we keep our power drivers constructively, productively, cost-effectively, time-effectively engaged? And they began to do stacking. The team, using a platform I think called Leonardo, began analyzing what kind of requests were coming in, and began sorting and stacking requests so that drivers could literally, as they were dropping somebody off, have the option, a recommendation, of one or two or three rides, depending upon what the flow was, what the queue was, what ride match they could do next.

And that was obviously a very, very big deal, because it was a win-win for the platform. It gave more choices to people who wanted to use the ride-hailing service, but it also kept the flow of drivers very, very smooth and consistent. So it was a demand-smoothing and a supply-smoothing approach, and recommendation is really key for that. In fact, forgive me for going on a bit longer on this, but this was one of the reasons why Netflix went into recommender systems and recommendation engines: because, of course, everybody wanted the blockbuster films back in the DVD days. What could we give people if we were out of the top five blockbusters? This was the emergence of recommendation engines as a way to help manage the long-tail phenomenon. How can we customize the experience? How can we do a better job of managing inventory? So there are certain transcendent mathematical, algorithmic techniques that are as relevant to the Uber you hail as to the movie you watch.
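As a rough illustration of the ride-stacking idea, a dispatcher can rank pending requests by how little idle time each would leave the driver and offer the top few. The heuristic and names in this Python sketch are mine, not Uber’s.

from dataclasses import dataclass

# Illustrative ride-stacking heuristic: offer the driver the pending
# requests whose pickups are closest to the current dropoff point,
# minimizing downtime between rides. Not Uber's actual algorithm.

@dataclass
class Request:
    rider: str
    pickup: tuple  # (x, y) in arbitrary map units

def stack_offers(dropoff, pending, n_offers=3):
    def downtime_proxy(req):
        dx = req.pickup[0] - dropoff[0]
        dy = req.pickup[1] - dropoff[1]
        return (dx * dx + dy * dy) ** 0.5   # distance as a stand-in for idle time
    return sorted(pending, key=downtime_proxy)[:n_offers]

pending = [Request("A", (1, 2)), Request("B", (5, 5)), Request("C", (0, 1))]
for offer in stack_offers(dropoff=(0, 0), pending=pending):
    print(offer.rider, offer.pickup)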

Steven Cherry Recommendation engine designers draw from psychology, but you say in the book they also draw from behavioral economics and even a third area: persuasion technology. How do these three things differ, and how do they fit into recommendation engines?

Michael Schrage They are very different flavors—and I very much appreciate that question and that perception—because there’s the classic notion that recommendation is about persuasion, and there is persuasion technology. It’s been called captology, and the folks at Stanford were pioneers in that regard. In fact, one of the people in those captology, persuasion-technology classes ended up being a co-founder of Instagram.

And the whole notion there is: How do we persuade? How do we design a technology, an engagement or interaction, to create persistent engagement, persistent awareness? At Stanford, for understandable reasons, it was rooted in technology. Psychology, absolutely: the history of choice presentation, what you mentioned about the tyranny, the paradox, of choice, Barry Schwartz’s work at Swarthmore. How do people cognitively process choice? When does choice become overwhelming? There’s the famous article by George Miller, “The Magical Number Seven, Plus or Minus Two,” which talks about the cognitive constraints that people have when they are trying to make a decision. This is where psychology and economics began to intersect, with Herb Simon, one of the pioneers of artificial intelligence at Carnegie Tech and then Carnegie Mellon, and bounded rationality. There are limits; we can’t remember everything. So what are those limits, and how do they constrain the quantity and quality of the choices we make?

This evolved into behavioral economics. This was Daniel Kahneman and Amos Tversky; Kahneman also won the Nobel Prize in economics. Cognitive heuristics, shortcuts. And basically, because of the Internet, you had all of these incredible software and technical tools, and the Internet became the greatest laboratory for behavioral, economic, psychological, and captological experiments in the history of the world. And the medium, the format, the template that made the most sense for doing this kind of experimentation, mashing up persuasion technology with behavioral economics, was recommender systems, recommendation engines. They really became the domain where this work was tested, explored, exploited. And in 2007 RecSys, the international academic conference on recommender systems, was launched, and people from all over the world, and most importantly from all of these disciplines (computer science, psychology, economics, and so on), came and began sharing their work. So it’s a remarkable, remarkable epistemological story, not just a technological or innovation story.

Steven Cherry You point out that startups nowadays are not only born digital but born recommending. You have three great case studies in the book: Spotify; ByteDance, which is the company behind TikTok; and Stitch Fix, a billion-dollar startup that applies data science to fashion. I want to talk about Spotify, because the density of data is just a bit mind-bending here: 200 million users, 30 million songs, spectrogram data on each song’s tempo, key, and loudness, activity logs and user playlists, micro-genre analysis. You were particularly impressed by Spotify’s Discover Weekly service, which uses all of that data. Could you tell us about it?

Michael Schrage Yes. And it’s the fifth anniversary, and I just got a release saying that there have been over 2.3 billion hours streamed under Discover Weekly. And the whole idea was that it’s an obvious idea, but it’s an obvious idea that’s difficult to do. The idea was: What can you listen to that you’ve never heard before? Discover. This is key. One of the key elements of effective recommender-system/recommendation-engine design is discovery and serendipity. And what they did was basically launch a program where you get a playlist of stuff that you’ve never heard before but that you are likely to like. And how can they be confident that you are likely to like it? Because it deals directly with everything that you mentioned in setting up the question: the micro-genres, the tempo, the cadence, the different genres, what you have on your playlist, what your friends have on their playlists, etc. And of course, as with Netflix, they track your behavior. How long do you listen to the track? Do you skip it? And it’s proven to be remarkably successful.

And it illustrates to my mind one of the most interesting ontological, epistemological, esthetic, and philosophical issues that recommendation engine design raises: What is the nature of similarity? How similar is similar? What is the more important similarity in music? The lyrics? The tempo, the mood, the spectrograph, the acoustics, the time of day? What are the dimensions of similarity that matter most? And the algorithms, either individually or ensembled, tease out and identify and rank those similarities, and based on those similarities proffer this playlist of music you are most likely to like. It’s remarkable. It’s a prediction of your future taste based on your past behavior. But! But! Not in a way that is simply, no pun intended, an echo of what you’ve liked in the past.

But it represents a trajectory of what you are likely to like in the future. And I find that fascinating because it’s using the past to predict serendipitous, surprising, and unexpected future preferences, seemingly unexpected future preferences. I think that’s a huge deal.
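The similarity question has a concrete computational core: represent each track as a vector of features and measure closeness with something like a weighted cosine. The Python sketch below is purely illustrative, with invented feature names and weights rather than Spotify’s actual pipeline; choosing the weights is exactly the “which similarity matters most?” judgment.

import numpy as np

# Illustrative only: tracks as feature vectors, similarity as a weighted
# cosine. The features and weights are invented; picking the weights is
# precisely the "which dimensions of similarity matter?" question.

#                  tempo, energy, acousticness, valence
track_a = np.array([0.62, 0.80, 0.10, 0.55])
track_b = np.array([0.60, 0.75, 0.15, 0.40])
weights = np.array([1.0, 2.0, 1.0, 0.5])   # e.g., weight energy most heavily

def weighted_cosine(u, v, w):
    u, v = u * np.sqrt(w), v * np.sqrt(w)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(f"similarity: {weighted_cosine(track_a, track_b, weights):.3f}")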

Steven Cherry Yeah, the music case is so interesting to me, I think, because, you know, we want to hear new things that we’re going to like, but we also want to hear the old stuff that we know that we like. It’s that mix that’s really interesting. And it seems to me that you go to a concert and the artist, without all of the machinery of a recommendation engine, is doing that, him- or herself. They’re presenting the stuff off of the new album, but they’re making sure to give you a healthy selection of the old favorites.

I’m going to make a little bit of a leap here, but something like that, I think, goes on with—you mentioned ranking, and we have this big election coming up in the U.S., and a handful of jurisdictions have moved to ranked-choice voting. In its simplest form, this is where people don’t just select their preferred candidate; they rank the candidates. After an initial count, the candidate with the fewest first-choice votes is dropped, and those ballots are redistributed according to their voters’ second choices, and so on, until there’s a clear winner.

The idea is to get to a candidate who is acceptable to the largest number of voters instead of one more strongly preferred by a smaller number. And here’s the similarity: I think a concert is, in a crude form, doing what the recommendation engine does, and runoff elections do this in a much cruder way still. So my question for you is, is there some manageable way for recommendation systems to help us in voting for candidates, and help us get to the candidate who is most acceptable to the largest number of voters?
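The counting procedure just described is compact enough to write out. Here is a minimal instant-runoff tally in Python, an illustrative sketch rather than any jurisdiction’s exact rules:

from collections import Counter

# Minimal instant-runoff count: repeatedly eliminate the candidate with
# the fewest first-choice votes and redistribute those ballots to each
# voter's next surviving choice, until someone has a majority.

def instant_runoff(ballots):
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        tally = Counter(
            next(c for c in ballot if c in candidates)
            for ballot in ballots
            if any(c in candidates for c in ballot)
        )
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()) or len(candidates) == 1:
            return leader                              # majority, or last one standing
        candidates.discard(min(tally, key=tally.get))  # drop the weakest

ballots = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "B"), ("C", "B")]
print(instant_runoff(ballots))  # "C": B is eliminated first, its ballot flows to C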

Michael Schrage What a fascinating question. And just to be clear, my background is in computer science and economics. I’m not a political scientist; some of my best friends are political scientists. And let me point out that there is a very, very rich literature on this. I would go so far as to say that people who are really interested in pursuing this question should look at the public-choice literature. They should go back to the French. You know, Condorcet; the French came up with a variety of voting systems in this regard. But let me tell you one of my immediate, visceral reactions to it: Are we voting for people, or are we voting for policies? What would be the better option or opportunity: for people to vote on a referendum on immigration or public health, or for people to elect representatives who enact a variety of policies? We do not have direct democracy, though there are certain states where you can, of course, vote directly on a particular question. The way I would repurpose your question would be: Do we want recommendation engines that help us vote for people? Help us vote for policies? Or help us vote for some hybrid of the two?

Why am I complicating your seemingly simple question? Precisely because it is the kind of question that forces us to ask, “Well, what is the real purpose of the recommendation?” To get us to make a better vote for a person or to get a better outcome from the political system and the governance system?

Let me give you a real-world example that I’ve worked with companies on. We can come up with a recommendation system, a recommendation engine, that is optimized around the transaction, getting you to buy now. Now! We’re optimizing recommendations so that you will buy now! Or we say, hmm, that’s a customer we can have a relationship with; maybe what we should do is have recommendations that optimize customer lifetime value. And this is one of the most important questions going on for every single Internet platform company that I know. Google had exactly this issue with YouTube; it still has this issue with its advertising partners and platform. This is the exact issue that Amazon has; clearly, it regards its Prime customers from a customer-lifetime-value perspective. So your political question raises the single most important issue. What are the recommendations optimized for: the vote in this election or the general public welfare over the next three to four to six years?

Steven Cherry That turned out to be more interesting than I thought it was going to be. I’d be remiss if I didn’t take a minute to ask you about your previous book, The Innovator’s Hypothesis. Its core idea is that cheap experiments are a better path to innovation than brainstorming sessions, innovation vacations, and a bunch of things that companies are often advised to do to promote innovation and creativity. Maybe you could just take a minute and tell us about the 5×5 framework.

Michael Schrage Oh, my goodness. So I’d be delighted. And just to be clear, the book on recommendation engines could not and would not have been written without the work that I did on The Innovator’s Hypothesis.

You have five people from different parts of the organization come up with a portfolio of five different experiments based on well-framed, rigorous, testable hypotheses. Imagine doing this with 15 or 20 teams throughout an organization. What you’re creating is an innovation pipeline, an experimentation pipeline. You see what proportion of your hypotheses address the needs of users, customers, partners, suppliers. Which ones are efficiency-oriented? Which ones are new-value-oriented? Wow! What a fascinating way to gain insight into the creative and innovative and indeed the business culture of the enterprise. So I wanted to move the Schwerpunkt, the center of gravity, away from “Where are good ideas coming from?” to “Can we set up an infrastructure to frame, test, experiment, and scale business hypotheses that matter?” What do you think a recommendation engine is? A recommendation engine is a way to test hypotheses about what people want to watch, what they want to listen to, who they want to reach out to, who they want to share with. Recommendation engines are experimentation engines.

Steven Cherry I’m reminded of IDEO, the design company. It has a sort of motto, no meeting without a prototype.

Michael Schrage Yes. A prototype is an experiment. A prototype shouldn’t be…. Here’s where you know people are cheating: when they say “proof of concept.” Screw the proof of concept. You want skepticism when you want to validate a hypothesis. Screw validation. What do you want to learn? What do you want to learn? Prototypes are about learning. Experimentation is about learning. Recommendation engines are about learning—learning people’s preferences, what they’re willing to explore, what they’re not willing to explore. Learning is the key. Learning is the central organizing principle for why these technologies and these opportunities are so bloody exciting.

Steven Cherry You mentioned Stanford earlier and it seems there’s a direct connection between the two books here, and that is the famous 2007 class at Stanford where Facebook’s future programmers were taught to “move fast and break things.” There was a key class taught by the experimental psychologist B.J. Fogg.

Michael Schrage Right. B.J. is a remarkable guy. He is one of the pioneers of captology and persuasion technology. And one of the really impressive things about B.J. is he really took a prototyping/experimentation sensibility to all of this.

It used to be—and this is as recent as a decade ago—that if you took a class on entrepreneurship at Stanford or M.I.T., what did you have to come up with? What was your deliverable? A business plan. Screw the business plan! With things like GitHub and open-source software, you now have to come up with a prototype. What’s the cliché phrase? The MVP: the minimum viable prototype. Okay? But that’s really the key point here. We’re trying to turn prototypes into platforms for learning what matters most: learning about what matters most in terms of our customer or user orientation and value, and learning what matters most about what we need to build and how it needs to be built. How modular should it be? How scalable should it be? How bespoke should it be?

What’s the big difference between this in 2020 and 2010? What we’re building now will have the capability to learn—machine learning capabilities. One of the bad things that happened to me as I wrote my book was that, far faster than I expected, machine learning algorithms colonized the recommendation engine/recommender system world. And so I had to get up to speed, and get up to speed fast, on machine learning and machine learning platforms, because recommendation engines now are the paragon and paradigm of machine learning worldwide.

Steven Cherry Well, it seems that once our basic needs are satisfied, the most precious commodities we have are our time and attention. One of the central dilemmas of our age is that we may be giving over too much of our everyday life to recommendation engines, but we certainly can’t live our overly complex everyday lives without them. Michael, thank you for studying them, for writing this book about them, and for joining us today.

Michael Schrage My thanks for the opportunity. It was a pleasure.

Steven Cherry We’ve been speaking with Michael Schrage of the MIT Sloan School and author of the new book, Recommendation Engines, about how they are influencing more and more of our experiences.

This interview was recorded 26 August 2020. Our audio engineering was by Gotham Podcast Studio in New York. Our music is by Chad Crouch.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers.

For Radio Spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.

We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.