All posts by Tekla S. Perry

Silicon Valley Stays on Top as Tech Salaries Climb Across U.S.

Post Syndicated from Tekla S. Perry original

When Hired released its annual report on the state of software engineers in 2020, it warned that the steady upward trend in tech salaries couldn’t be counted on to continue, given the uncertainties early in the pandemic.

And indeed, Hired’s 2021 State of Software Engineers Report concluded that demand, in the form of interview requests, dropped across the board as a result of the pandemic. But the online employment marketplace, recently acquired by Vettery, a competing recruitment platform, found that salaries for virtually all tech jobs increased in all major tech hubs last year. Hired didn’t report average software engineering salaries by metropolitan area, as the firm has done in the past.

But Dice, another job search platform, also released its annual tech salary report this month, and Dice did look at changes in tech salaries by region. According to Dice’s numbers, tech salaries grew the most in Charlotte, N.C. However, its data indicated that, in terms of straight dollar figures, Silicon Valley remains solidly above the pack. Overall, Dice indicated the average salary of a tech professional in the U.S. increased 3.6 percent to US $97,859. Dice based its numbers on a survey of some 9000 tech employees.

Tech salary report by major U.S. cities

Zooming in on those high San Francisco Bay Area tech salaries, up an average 5 percent for the year, Hired sifted its data by specialty. According to its numbers, engineers working in augmented and virtual reality topped the Silicon Valley pay scale, with average salaries jumping 13 percent in 2020. That increase pushed the pay scale for AR and VR engineers well above those working in natural language processing (NLP), engineering management, and search. Those latter experts had topped the charts in Hired’s 2020 report. (This data, collected from interview requests posted on Hired, did not include bonuses or benefits.)

The chart below shows changes in Silicon Valley salaries for the most in-demand software engineering roles, based on interview requests made. The study looked at data from both Hired and Vettery, and covered 10,000 companies and 245,000 job seekers.

How Your Smart Phone Can See You Sweat

Into this stay-at-home era of DIY personal training comes the first sweat-monitoring patch intended for broad consumer use. Rolling out this week from PepsiCo is the Gx Sweat Patch, developed by startup Epicore Biosystems in partnership with PepsiCo subsidiary Gatorade. It measures the rate of perspiration and the sodium chloride concentration in that sweat.

The aim? To track sweat loss during physical activity and heat stress and use that information on a personalized basis to recommend exactly how much and how often the athlete should drink to properly replace fluids and electrolytes to avoid dehydration and impaired performance. In the future, its developers predict, information from the patch will be used to help athletes determine their optimal diet and sleep patterns.

The patch will go on sale today, in sporting goods stores and online, at a suggested price of US $24.99 for a pack of two. It represents Gatorade’s first move into the world of digital products and apps for athletes.

Epicore has been testing its flexible, stretchable, single-use patch on athletes for some time. The device routes sweat through microfluidic channels cut into stacks of thin-film polymers. In one of the microchannels, used to track sweat rate and volume, the excreted sweat is dyed orange to make it visible as it moves through the pathways. In the other, chemical reagents react with the chloride in the sweat and turn it purple, with the intensity of the purple color corresponding to the concentration of the chloride ions detected.

After wearing the device on the inner left forearm for the duration of a workout, the user scans the patch with a smartphone. Then the Gx app uses that image, along with previously input data like weight, sex, workout type, and the environment, to create what the company calls a “sweat profile” and make recommendations about the individual’s fluid intake.
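Neither Epicore nor Gatorade has published the algorithm behind the sweat profile, but the inputs and outputs described above suggest straightforward arithmetic. Here is a toy sketch in Python; the function name, the 125 percent replacement factor, and the units are my assumptions, not Gatorade's:

```python
def fluid_recommendation(sweat_rate_l_per_hr, sodium_mg_per_l, duration_hr):
    """Toy hydration estimate from a sweat rate and sweat sodium
    concentration. Illustrative only; not the Gx app's actual model."""
    fluid_loss_l = sweat_rate_l_per_hr * duration_hr
    sodium_loss_mg = sodium_mg_per_l * fluid_loss_l
    # Hypothetical rule of thumb: replace 125 percent of fluid lost.
    return {
        "fluid_l": round(fluid_loss_l * 1.25, 2),
        "sodium_mg": round(sodium_loss_mg),
    }
```

Under these invented assumptions, a two-hour workout at 1 liter per hour of sweat carrying 800 mg/L of sodium would suggest drinking 2.5 liters and replacing 1600 mg of sodium.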

In recent clinical trials involving 60 players in the National Basketball Association’s G-League, PepsiCo and Epicore tested the patch, worn on the left forearm of each athlete, against an absorbent pad worn on the athlete’s right forearm. The researchers compared the snapshot reading from the patch against laboratory tests conducted on the sweat collected by the pad, demonstrating comparable results.

Previously, the Gatorade Sports Science Institute and Epicore published the results of a similar large-scale trial of the patch on more than 300 bicyclists, track and field athletes, and others exercising in real-world conditions, outside of laboratory environments.

Microfluidic technologies have been used in many applications including DNA chips, point of care diagnostics, and inkjet printing. These devices tend to be rigid, and not suitable for use in a wearable.

“The Gx Sweat patch is the first soft, conformal, and skin-interfaced microfluidic patch that has entered the consumer health and fitness arena,” says Epicore co-Founder and CEO Roozbeh Ghaffari.

After getting a sample patch from PepsiCo this past weekend, I can attest that it is indeed comfortable to wear. It feels like the kind of sticker store clerks sometimes hand out to children, and, a few minutes after attaching it to my skin, I no longer felt it. The accompanying app was simple enough to use, and the scan feature found and photographed the patch almost as soon as I got my arm into the camera’s view. I took a long, brisk walk, hoping I would work up enough of a sweat to get a reading; however, the cool February weather worked against me, and I literally came up dry. I’ll try it again on a warmer day, or when I can get back into a gym post-pandemic.

Augmented Reality Contact Lens Startup Develops Apps With Early Adopters-to-Be

Last year, startup Mojo Vision unveiled an early prototype of a contact lens that contains everything it needs to augment reality—an image sensor, a display, motion sensors, and wireless radios, all safe and comfortable enough to tuck into your eye.

These days, the company’s engineers are “charging hard on development” says Steve Sinclair, Mojo Vision’s senior vice president of product and marketing. While most of Mojo Vision’s hundred employees are working at home, the pandemic only minimally affected the company’s ability to use its laboratories and clean room facilities as needed. The company’s engineers, for instance, are helping a vendor of motion sensors thin its dies for better wearability, partnering with a battery manufacturer to build custom batteries, and refining their own designs of displays, image sensors, and power management electronics. And Mojo last year signed an agreement with Japanese contact lens manufacturer Menicon to fine-tune the materials and coatings of the lens itself.

About a dozen of the company’s employees have worn early prototypes of its lenses. The next generation of prototypes, now under development, is expected to be ready for such testing later this year.

While commercial release is still a few years out, requiring FDA approval as a medical device, development of the first generation of applications is well underway. While Sinclair says the company long anticipated that the earliest adopters of its AR contacts would be the visually impaired, exactly what applications would be useful wasn’t fully clear.

Ashley Tuan, Mojo Vision’s vice president of medical devices, has a personal interest in the technology—her father has limited vision, due to a rare retinal degeneration disease.

With a Ph.D. in vision science, Tuan has also studied the biology. People with low vision, she says, “can’t see fine detail because photoreceptors have died. That is typically solved with magnification, though people generally don’t like to carry a magnifier around. They also have an issue with contrast sensitivity, that is, the ability to distinguish light grey from a white background, for example. This can stop them from going into unfamiliar surroundings because they don’t feel safe. Most of us subconsciously use shadows to identify something coming up, something that we might trip over. In studies, even a slight reduction in contrast sensitivity stops people from going outside.”

Mojo Vision is developing apps to address these issues. “Enhancing contrast is easy to do with our technology,” Tuan says. “Because we are projecting an image onto the retina, we can easily increase the contrast of that image.

“We can also do edge detection, which I see as being one step above contrast enhancement, by highlighting the edges of objects with light.”
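The contrast enhancement Tuan describes can be illustrated with its simplest possible version, a linear contrast stretch that rescales pixel intensities to span the full grayscale range. This is a minimal sketch of the general technique, not Mojo Vision's retinal-projection pipeline:

```python
def stretch_contrast(pixels, lo=0, hi=255):
    """Linearly rescale a list of grayscale values to span [lo, hi].
    A minimal contrast-enhancement sketch; the real system operates
    on images projected onto the retina."""
    pmin, pmax = min(pixels), max(pixels)
    if pmax == pmin:
        # Uniform input: no contrast to stretch.
        return [lo] * len(pixels)
    scale = (hi - lo) / (pmax - pmin)
    return [round(lo + (p - pmin) * scale) for p in pixels]
```

Values of 100, 110, and 120, nearly indistinguishable greys, come out as 0, 128, and 255 after the stretch.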

Mojo Vision has these tools implemented in prototypes. In the future, Tuan expects the technology will evolve to adjust what it is highlighting according to the context—a street sign, perhaps, or the facial expressions of someone talking to the wearer.

Magnification, with the current prototype, is triggered by the user zooming using the image sensor. In the future, the company expects to make magnification context dependent as well, with the system determining when a wearer is looking at the menu in a restaurant and therefore needs magnification, or looking for the bathroom, and instead needs the letters of the sign and the frame of the door brightened.

Mojo Vision’s development team is now trying to take these basic ideas for apps and turn them into something that they hope will be immediately useful for their early adopters. To do that, they turned to a Silicon Valley resource, the Palo Alto Vista Center’s Corporate Partners Program.

Says Alice Turner, Vista Center’s director of community and corporate relations: “We have been working with tech companies in the product design realm for about three years, about eight to ten companies so far, including Facebook, Microsoft, Samsung, and other companies that haven’t announced products yet. I know partnering with us results in a better product, one better suited to the population it addresses, and that population will be better served.”

“Mojo Vision,” she says, “is a huge success story for our partnership program.”

The tech tools that eventually come to market through this program help some of Vista Center’s clients. But the program also provides more immediate benefits; the fees it charges the companies provide a steady source of revenue for the nonprofit.

When brought in on a product development project by a tech company, Turner taps into Vista Center’s client database, some 3400 individuals representing a wide range of vision impairments and demographics. She selects people who are appropriate matches for the technology under development, confirms that they have some basic tech skills and are comfortable talking freely about their condition and experiences, and sets up one-on-one meetings (virtual in these pandemic times) and focus groups during which the developers can get feedback for anything from an idea to a rough demo to a working prototype.

David Hobbs, director of product management at Mojo Vision, explained that, for Mojo, the process started with interviews to find out more about the problems that the technology could potentially solve. After identifying the problems, the team brought a subset of Vista’s clients in for a deeper dive.

“For example,” he said, “When we are trying to understand how to cross a crosswalk, we may talk for hours about different crosswalk situations.”

Then the Mojo Vision design team builds prototype software for use with customized virtual reality headsets. The Vista clients test the prototype software and give feedback. The company will do clinical trials with lens prototypes after it gets FDA Breakthrough Device approval.

“We are learning,” Hobbs said, “that vision is uniquely intimate. Everyone sees differently, so finding a way to provide the information that someone wants in the way they want it is really challenging. And different scenarios require different levels of detail and context.”

While this development process continues, the Mojo Vision developers have already learned from the tests conducted so far.

“When we looked at edge detection,” Hobbs said, “we were just taking an image and, wherever there was a lot of difference between two pixels, drawing a line. We created detailed models of the world in this way. Our bias was that all of this detail was valuable.

“But the feedback we got was all these lines were a lot of noise,” he continues. “It turns out that there is a level of information that we can provide that can help people on their journey without being overwhelming.”
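The naive edge detector Hobbs describes, plus the threshold that tames it, fits in a few lines. This sketch is my own illustration of the idea, not Mojo Vision's code; raising the threshold suppresses the low-contrast "noise" lines that testers objected to:

```python
def edges(img, threshold=30):
    """Mark pixels where the difference from the left or upper
    neighbor exceeds a threshold. img is a 2-D list of grayscale
    values; returns a same-size 2-D list of 0s and 1s."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx = abs(img[y][x] - img[y][x - 1]) if x > 0 else 0
            dy = abs(img[y][x] - img[y - 1][x]) if y > 0 else 0
            if max(dx, dy) > threshold:
                out[y][x] = 1
    return out
```

On a tiny image with one strong vertical boundary, a threshold of 30 marks only that boundary, while a threshold of 150 marks nothing at all.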

Hobbs expects many of the AR applications developed for the visually impaired to evolve into technology useful for everyone. “Fulfilling the needs of the most demanding users provides a lot of capabilities for general users as well. Take the ability to see in the dark. Going into a dark stairwell and having enhanced vision flip on would be valuable to many.”

Mining Traffic Data for Insights About The Pandemic

Every year for the past decade TomTom, the location technology company that supplies mapping and traffic data to navigation devices, carmakers, and apps around the world, releases an analysis of the world’s traffic.

This analysis includes an index of congestion levels, created from data collected from 600 million drivers in 416 cities around the world, aggregated anonymously, and crunched with the company’s proprietary algorithms. The process identifies routes and calculates both optimal and average drive times. The outcome is expressed as a percentage: how much extra time the average trip took in a particular city during a particular time period, compared with how long it would take to drive that route with no traffic delays whatsoever.
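As described, the index arithmetic reduces to a ratio of average to free-flow travel time. The hard part, which this sketch ignores, is the anonymized aggregation across hundreds of millions of drivers:

```python
def congestion_level(avg_trip_minutes, free_flow_minutes):
    """Extra travel time as a percentage of the free-flow time.
    A level of 71 means the average trip took 71 percent longer
    than the same route would with no traffic at all."""
    return round((avg_trip_minutes / free_flow_minutes - 1) * 100)
```

A route that would take 40 minutes on empty roads but averages 60 minutes in traffic scores a congestion level of 50 percent.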

TomTom’s index features an overall “competition” for the dubious title of most congested city in the world. In 2019, Bengaluru, India (Bangalore) and Manila, Philippines took the top spots, with traffic congestion in those cities increasing drive time by 71 percent.

TomTom also breaks down its results by hours, days, and weeks, highlighting local rush hours and weekly and daily trends. This data gets used by local governments to find ways to improve traffic flow, by employers to adjust work schedules, and by individuals to calculate commute times—all in hopes of making traffic flow a little better.

For 2020, however, TomTom’s results revealed more about the pandemic than about the successes or failures of traffic mitigating efforts.

Indeed, its overall rankings turned out to be basically meaningless: Moscow topped the 2020 charts at 54 percent, but that’s not because Moscow’s drive times increased from 2019; they actually dropped 5 percent. Rather, Bengaluru and Manila dropped in the rankings because those cities were under stricter pandemic restrictions.

So in 2020, instead of showing where new development caused snarls or where infrastructure improvements eased congestion, TomTom’s traffic data painted a picture of the coronavirus spreading around the world. The data also showed the extent of local lockdown orders, how well those were followed in different cities, and the reaction of workers when they were lifted.

Gijs Peters, a data scientist at TomTom, describes the pandemic as viewed through the lens of traffic:

“When Wuhan went into lockdown, traffic there was gone,” he says. “Everything was still normal here in Europe. Then we watched the virus spread by watching traffic data. Traffic collapsed in Milan, then Rome, then the rest of Italy, followed by other European countries. …

 “In the West,” Peters continues, “the first lifts of lockdown restrictions came in the summer, then they went back into place in September, in some cases more strict than in April. However, when we look at the traffic data, while we saw rush hour completely disappear in European cities in March, April, and May, now, even with similar restrictions in place, we see rush hour patterns again. The sense of urgency seems to be lower.”

Moving the slider on this interactive graphic shows how traffic changed from 2019 to 2020 as a result of responses to the pandemic.

Peters points out that changes in traffic patterns in the United States varied widely from city to city:

“Minneapolis had stricter lockdowns compared with other U.S. cities. Traffic congestion there went to half in April and still is very low,” he says. “In Florida, however, where the lockdown was lifted on the first of June, traffic seems to be back to normal.

“Meanwhile, in San Jose and the San Francisco Bay Area, we saw traffic drop ahead of the lockdown orders; as soon as employers said [to] start working from home. Working at home was easy for tech employees to do. So though you see traffic in many large U.S. cities catching up in recent months, traffic around San Jose is still very low.”

This kind of pandemic-related data drew the attention of financial analysts, banks, and media, trying to figure out how the pandemic was affecting daily lives and the overall economy.

Meanwhile, for traffic planners, the pandemic brought in data that one could only have dreamed of gathering in normal times.            

For example, Peters says, “When I look at [The] Netherlands, the total number of driven kilometers in April was about 50 percent of what we expected, and congestion was almost gone. In November, with the second lockdown, congestion was still almost gone, but we saw the number of driven kilometers back up to 80 to 90 percent. What that tells us is, if we are able to reduce our traffic by 10 or 15 percent, we would be able to limit and potentially completely prevent congestion.”
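Peters's observation, that a 10-to-15-percent traffic reduction could nearly eliminate congestion, matches the strong nonlinearity in standard traffic models. The Bureau of Public Roads delay curve, a textbook formula rather than anything TomTom has said it uses, makes the point:

```python
def bpr_delay(volume, capacity, free_flow_time=1.0, alpha=0.15, beta=4):
    """Bureau of Public Roads link-delay curve: travel time grows with
    the fourth power of the volume/capacity ratio, so modest volume
    cuts yield outsized congestion relief."""
    return free_flow_time * (1 + alpha * (volume / capacity) ** beta)

# At capacity, trips take 15 percent longer than free flow;
# cut volume by 15 percent and the delay shrinks to about 8 percent.
delay_full = bpr_delay(1.0, 1.0)
delay_reduced = bpr_delay(0.85, 1.0)
```

Because delay rises with the fourth power of the volume-to-capacity ratio, the last few percent of traffic are responsible for a disproportionate share of the congestion.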

Peters hopes the work he and other data scientists are doing with TomTom’s pandemic era traffic data will lead to long term changes.

“If we are smarter,” he says, “and only go to work when we need to, that could lead to 10 percent of people staying home on average each day. And then we might be able to spend so much less time in traffic than we do right now. That would help us work towards a congestion-free, emissions-free future.”

Can Silicon Nanostructures Knock Plastic Lenses Out of Cell Phone Cameras?

It’s been a good decade or so for the makers of plastic lenses. In recent years, smartphone manufacturers have been adding camera modules, going from one to two to five or more. And each of those camera modules contains several plastic lenses. Over the years, these lenses have changed little, though image processing software has improved a lot, merging images from multiple camera modules into one high quality picture and enabling selective focus and other features.

The glory days of the plastic camera lens, however, may be drawing to a close. At least that’s the hope of Metalenz, a Boston-area startup that officially took its wraps off today.

The company aims to replace plastic lenses with waveguides built out of silicon nanostructures using traditional semiconductor processing techniques. Metalenz’s technology grew out of work done at Harvard’s John A. Paulson School of Engineering and Applied Sciences. Harvard is not the only university laboratory that has investigated metastructures for use as optical waveguides—Columbia, the University of Michigan, and King Abdullah University of Science and Technology in Saudi Arabia are among the institutions with teams researching the technology. However, Harvard’s team, led by applied physics professor Federico Capasso, was the first to focus the full spectrum of visible light using a metalens.

Capasso cofounded Metalenz in 2017 with Robert Devlin, who worked on the Harvard project as part of his Ph.D. research. The company has an exclusive license to Harvard’s patents related to metalenses.

Devlin says that a metalens has several advantages.

“Producing a lens via semiconductor processes reduces the complexity of what is now a multistep process to build a camera module. And it could lead to much smaller modules, with the lens attaching directly to the surface of the sensor, instead of using more complex packaging methods,” he says.

Also, Devlin pointed out, “with a variety of these structures on the same chip, one metalens can act like multiple plastic lenses, allowing the image processing software to combine images to improve image quality in the same way it combines images from separate camera modules today.”

Metalenz has raised $10 million to scale up production of its devices, with investors including 3M Ventures, Applied Ventures, Intel Capital, M Ventures, and TDK Ventures. The company expects to ship its first chips in early 2022. These will be for use in 3D imaging.

“3D cameras are even more complex than traditional cameras,” says Devlin. “They have multiple lenses, sometimes made by different suppliers, and a laser that illuminates the scene. We bring less complexity, and, because we can get more light to the sensor, the laser doesn’t have to be as bright or shine as long, so we can increase battery life.”

For now, Metalenz is aiming to replace the existing camera modules in cell phones. But Devlin anticipates that in the future, the technology will allow new imaging tools to move into mobile devices.

“We can combine different types of optics in a single layer, so things that now are too big and bulky to leave a lab or medical facility because they contain many large lenses—like a spectrometer—can shrink down to a size and price point that will allow them to fit in anybody’s pocket.”

CES 2021: FEMA’s Emergency Alert System Coming to a Game or Gadget Near You?

It was easy for exhibitors to get lost in the virtual shuffle at CES 2021, where the digital exhibit hall simply displayed the logos of the 1900-plus exhibitors. To get any detail about a particular presenter, you had to search for their booth and click into it, wading through videos, slide decks, PDFs, and images to try to figure out exactly what was on display.

Among the obscured were the exhibitors from the U.S. Federal Emergency Management Agency (FEMA). If anyone even noticed the agency’s logo—then wondered what FEMA was doing at CES—they likely surmised that it was simply there in case of a literal emergency and passed it by.

At a real-world show, though, their exhibit would likely have caught the eye of the curious. It would have contained kiosks, large electronic billboards, and even braille readers showing off the diversity of devices that are part of FEMA’s Integrated Public Alert and Warning System (IPAWS).

And, had a CES attendee stopped to look, the IPAWS folks would have explained their program to help consumer products designers build emergency alert capability into just about any system that includes a display.

IPAWS is the organization that sends wireless emergency alerts, like Amber Alerts and weather warnings, to mobile phones. It also manages the emergency alert system that triggers the audio warnings that interrupt radio and television broadcasts. In the past year, I’ve mostly seen phone alerts in the form of public health announcements of new shelter-in-place orders and, in one case, a tornado warning that gave me time to pull off the road before the weather got particularly crazy.

IPAWS acts as what it calls a “redistributor,” that is, it takes alerts from counties and other agencies and passes them on to devices designed to display them. Besides mobile phones, said IPAWS program analyst Justin Singer, such devices today include digital billboards, Braille readers, and public tourism kiosks.

But, says Singer, getting alerts in front of the people who could benefit from them gets more challenging when people spend more and more time in front of a diversity of displays. And he is hoping that the consumer electronics industry will help them meet this challenge.

“We are relying on the industry to take this on as a project and implement our technology in their products. We don’t have the ability to build products ourselves and we don’t want to regulate anybody. We just want to get alerts to as many people as possible through as many media as possible,” Singer said.

Take gamers. “Gamers are inherently cut off. I don’t want to interrupt their games, but if I can get a little alert on their screen displaying a tornado warning, say, maybe I can get them to move to the basement,” he continued. “Virtual reality would be really important; there you are really cut off. I’m trying to get smart mirror companies to see the light, too.”
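IPAWS alerts are distributed in the Common Alerting Protocol (CAP), an OASIS XML standard, so surfacing one in a game or smart-mirror overlay is largely a matter of parsing a feed. A minimal sketch with a hand-written sample alert follows; the field values are invented, and real CAP alerts carry many more elements:

```python
import xml.etree.ElementTree as ET

# Hand-written CAP 1.2 fragment of the kind IPAWS redistributes
# (a sample for illustration, not a real alert).
CAP_SAMPLE = """<alert xmlns="urn:oasis:names:tc:emergency:cap:1.2">
  <info>
    <event>Tornado Warning</event>
    <urgency>Immediate</urgency>
    <headline>Tornado Warning issued for this area</headline>
  </info>
</alert>"""

NS = {"cap": "urn:oasis:names:tc:emergency:cap:1.2"}

def overlay_text(cap_xml):
    """Pull the fields an on-screen overlay might show."""
    info = ET.fromstring(cap_xml).find("cap:info", NS)
    event = info.findtext("cap:event", namespaces=NS)
    headline = info.findtext("cap:headline", namespaces=NS)
    return f"{event}: {headline}"
```

A console or smart mirror could render the returned string as a small, non-blocking banner rather than interrupting the session outright, the balance Singer describes.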

Singer has tried to reach out to Microsoft’s Xbox team but has yet to connect with any interested engineers. He says he did manage to have a CES conversation with representatives of Sony. But trying to catch the eye of product designers at a virtual show has proven difficult. Singer would like to tell developers that IPAWS offers design help, and can also give them access to a laboratory in Maryland that would allow them to test their products on a closed system.

And, he’d tell them, “If you build it, we will show it off in our CES booth next year.”

Potential developers can find out more about the program by contacting [email protected].

CES 2021: My Top 3 Gadgets of the Show—and 3 of the Weirdest

CES 2021, the all-digital show I attended through a computer screen this week, offered some 1900 virtual booths and several peripheral product showcases, and kept my email inbox jammed with a constant stream of product announcements. It had new TV displays, robots promising to be your best friend, and gadgets aimed at making life in the pandemic world a little easier.

And among all that were a few concepts for new products that, in my eyes, each showcased their own touch of genius. Of course, genius varies in the eyes of the consumer. A great product, after all, is not only unique and clever, but it also fills a real need. And needs are personal. With that caveat—and with the reminder that I have yet to try out or even touch any of these products personally—here are the CES products that most lit up my world this week, in no particular order, along with three that I found unique in a different way.

First, in the “why didn’t someone think of this before” category:

JLab’s JBuds Frames Wireless Audio

Audio “buds” for your glasses instead of your ears? Why haven’t I seen this before? I have yet to find an earbud, wireless or otherwise, that I find truly comfortable and that stays on when I’m doing my daily walk. And over-the-ear headphones are too much of everything. A few years ago I was excited by the launch of Aftershokz headphones that go behind instead of in or over the ear, but found that the vibrations going through my head tended to make me queasy.

So JBuds Frames—tiny Bluetooth speakers with microphones that clip onto the frame of your glasses instead of tucking into your ears—got my attention. These days, I wear glasses everywhere, though often switch out to sunglasses for that walk. JLab’s press release says the speakers come with an assortment of silicone sleeves that will let them adjust to a variety of glasses frames. A spokesperson I queried said that at 11.7 grams each, they are light enough not to change the feel or fit of my glasses noticeably. The company promises eight hours of playtime and 100 hours of standby time on a two-hour charge. JBuds Frames are also water resistant. JLab says the Frames will start shipping to customers in the spring, priced at around $50. I’m looking forward to trying them, and I’m hopeful that I won’t be disappointed when I do.

Samsung’s Galaxy Upcycling at Home

Samsung announced plans to release a line of software designed to encourage consumers to repurpose smart phones as IoT devices instead of tossing them into a drawer or the trash. The software, to be released under a program it calls Galaxy Upcycling at Home, will allow old phones to be used as baby monitors, light controllers, and other smart home gadgets. DIYers have been repurposing phones this way for a long time, but making it simple for everyday consumers to do so is game-changing.

Wazyn’s smart sliding door adapter

I’ve had traditional flap-style pet doors in the past, and I know that raccoons aren’t frustrated by electronically controlled locks. The creatures just pry them open. Plus, you also have to cut a hole in your door to install the things. So I was intrigued by Wazyn’s demo of its $400 gadget that turns a sliding door into an automatic or remotely controlled door. The device, the company says, is not permanently installed and doesn’t involve cutting a hole in anything. It can be controlled by a motion sensor that detects the arrival of a pet, which then sends an alert to your phone or smart speaker, at which point you can tell it to open the door. It can also be set to automatically open—and automatically be turned off to keep those raccoons out at night. All it requires is a smartphone or Alexa. So I’ve got sliding doors, I’ve got Alexa… all I need is a new cat.

And in the “hmmmm, who exactly would want this?” category:

Incipio’s Organicore phone cases

I know it’s tough for phone case manufacturers to distinguish themselves. You can make these gadget covers stronger and more colorful and branded by famous designers, but it’s still hard to make one line of phone cases stand out from all the other ones. So you can imagine the designers at Incipio in a Zoom brainstorming session, during a time when many of us are at home literally watching our grass grow, coming up with the company’s latest twist on a phone case. “Let’s make it compostable!” suggested someone, leading to Incipio’s $40 Organicore phone case. The company advises that composting in a residential bin will take two to three years; I can’t imagine pushing aside an old phone case every time I turn my compost for that long.

Neuvana’s Xen vagus nerve stimulating earbuds

These are stressful times to be sure, times when all of us are looking for ways to reduce our anxiety. But I’m not convinced that zapping my ears with electrical signals is going to make me happier than pandemic baking.

Neuvana is hoping that at least some of us are looking to try new stress-reduction technology. The company says its $330 Xen earbuds send an electrical signal through the ear to the vagus nerve, “bringing on feelings of calm, boosted mood, and better sleep.” I’m not questioning the power of vagus nerve stimulation—there’s a lot of research underway involving treatments for epilepsy and depression—just whether this is something I would actually want to do at home.

Ninu’s AI-guided perfume customizer

“Embark on a perfume fusion journey guided by AI perfume master Pierre,” stated Ninu’s press release. It took me back…back to Disneyland, where, as a teenager, I paid a few dollars for the thrill of having a parfumier with a bad French accent create a custom scent just for me. So I get that the idea of a custom scent can capture the imagination. But do I really need a perfume system that uses an app and AI and can “change the scent with every spray”? Pricing for Ninu’s cartridge-based system is not yet available.

CES 2021: Consumer Electronics Makers Pivot to Everything Covid

It’s been ten months since the coronavirus pandemic changed everything—plenty of time to design, prototype, and manufacture products designed for consumers looking to navigate the new reality more safely, comfortably, and efficiently. And more than enough time to rebrand some existing products as exactly what a consumer needs to weather these challenging times.

So I wandered the virtual show floor of CES 2021 and the peripheral press-targeted events to find these Covid gadgets. Here are my top picks, in no particular order.

Tech-packed face masks

I’m sure there were many more variants of the high-tech face mask than I managed to find in the virtual halls. Those I spotted included:

Binatone’s $50 MaskFone, an N95 mask with built-in wireless earbuds, uses a microphone under the mask to eliminate mask-muffle from phone conversations.

Razer’s Project Hazel mask comes with a charging box that uses UV light to disinfect while the mask charges. The N95 mask includes clear panels and a light, to allow whoever you’re talking to see your mouth move day or night (helpful for understanding speech for all, not just for those with hearing loss). There’s also an internal microphone and an external amplifier for voice projection across social distances, plus built-in air conditioning. This is still a concept product with no pricing available.

AirPop’s $150 Active+ mask monitors air quality and breathing, tracking breaths during different activities and flagging the user when the filter needs replacing. A Bluetooth radio connects the mask to smartphones for data analysis.

Personal air purifiers

I’m not convinced that the average consumer will be as likely to toss a personal air purifier in their tote or backpack as they are to carry a canister of disinfecting wipes, even though these two products are about the same size. But plenty of gadget makers think there is a market for the personal air purifier. They don’t agree, however, on their choice of air purification technology. LuftQi, for example, uses UVA LEDs in its $150 Luft Duo; NS Nanotech picked far-UVC light for its $200 air purifier. And Dadam Micro’s $130 Puripot M1 uses titanium dioxide and visible wavelength light.

Lexon’s Oblio desktop phone sanitizer

Lexon combined a wireless charger and a UV-C sanitizer into an $80 desktop appliance that looks like a pencil holder; there’s no reason why this gadget couldn’t disinfect pencils as well.

Panasonic’s car entertainment systems

The moment that Covid tech jumped the shark might have been when Panasonic Automotive President Scott Kirchner, in introducing the company’s automotive entertainment systems, pitched the technologies as relevant because “our vehicles have become second homes” from which we celebrate birthdays and attend performances and political rallies. Panasonic’s latest in-car technology, he said, can drive 11 displays, and distribute audio seat by seat or throughout the cabin.

NanoScent’s Covid diagnostics technology

Talk about a pivot! Startup NanoScent, which pairs an odor sensor with machine learning and has been developing the technology to detect gas leaks, cow pregnancies, and nutritional status, now aims to use it to detect the coronavirus. The company says that the proliferation of virus cells among the microorganisms that inhabit the noses of Covid patients produces what it believes to be a distinct smell. It has run two clinical trials, one in Israel and one in the United Arab Emirates, with 3420 total patients.

Yale’s smart delivery box

Yale, the lock company, addressed the problem of no-contact doorstep delivery security with its Smart Delivery Box. Users place the chest wherever deliveries generally take place, weighting or tethering it to prevent theft. It sits there unlocked until it is opened, then, after a delivery person places items inside and closes it, it locks until the owner unlocks it with a smartphone. The $230 to $330 lockbox (depending on style and features) can also be managed via Wi-Fi.


CES 2021: A Countertop Chocolate Factory Could Be This Year’s Best Kitchen Gadget

Post Syndicated from Tekla S. Perry original

Sheltering-in-place orders sent many of us into the kitchen, baking and pickling and tackling ambitious cooking projects that maybe hadn’t captured as much widespread public interest pre-pandemic. So I wasn’t surprised that design engineers at consumer products companies spent 2020 thinking about high-tech kitchen gadgets.

The wave of kitchen tech introduced at CES 2021 includes a countertop chocolate factory that, if priced right, will likely be a top holiday gift in 2021. It also includes yet another attempt to apply Keurig’s pod concept as well as a spoon I’m not exactly sure I want in my mouth.

Unfortunately, with an all-digital CES this year, I was unable to get my hands on any of these gadgets—or to taste their creations. Instead, I viewed live-streamed demos or recorded pitches. And since some of the best food-tech ideas don’t necessarily produce the best tasting foods, the jury is very much out. But here are my picks for at least the most mouth-watering kitchen gadgets from CES 2021.

CocoTerra’s countertop automated chocolate factory

The process of making chocolate from scratch has always seemed magical, even without Willy Wonka involved, and I’ve never missed a chance to visit a chocolate factory. I’ve seen enough to know that getting from cocoa bean to chocolate bar has many steps involving friction and heating and cooling. And so while chocolate making might seem like the perfect pandemic project, it’s a little too complicated to try at home. Which is why CocoTerra’s chocolate-making appliance jumped out at me. Founder Nate Saal, who previously worked in software engineering at various tech companies, explained in a live-streamed demo that the company’s recipes suggest different combinations of cocoa nibs, sugar, cocoa butter, and milk powder. It takes about two hours for the gadget to grind, heat, cool, spin, stir, and mold the chocolate. And, as a big selling point for me, the countertop appliance is compact, approximately 10 inches in diameter and 13 inches tall. Saal pointed out that he designed the gadget to use user-measured ingredients, not pods, to open up the possibilities of using cocoa beans from different sources. Pricing is not yet available.

ColdSnap’s rapid ice-cream maker

ColdSnap’s 90-second countertop ice-cream freezer didn’t excite me as much as CocoTerra’s chocolate factory, as it’s a pod-based system—and there have been many bad pod ideas since Keurig introduced the world to coffee pods. Not only am I thinking that the single-serving pods, at $2.50 or more each, are pricier than an equivalent amount of premium ice cream, but also I’m skeptical that the product will taste as good. Rather, ColdSnap seems like a gadget that would quickly go from countertop to garage. The device, which the company says can make smoothies and frozen cocktails as well as ice cream, did win a CES Innovation Award, however. ColdSnap is expected to retail at $500 to $1000.

PantryChic’s automated ingredient dispensing system

PantryChic’s creators jumped onto two trends from the early days of stay-at-home orders—pantry reorganization (think matching canisters) and baking. They came up with a system that accurately measures flour and other dry ingredients by weight, automatically converting cups to the gram equivalent when necessary. Users store ingredients in PantryChic’s clear, smart canisters, identifying the type of ingredient when they fill each canister. Then the gadget will recognize the ingredient when the canister locks onto the dispenser. For someone who bakes constantly and prefers the precision of weighed ingredients, perhaps this gadget makes sense. But the company’s visuals suggest rice, cereal, and beans be dispensed by the device as well as flour and sugar—and that’s really not going to happen in a normal kitchen. The starter system—the countertop dispenser and two small canisters—is $350; additional canisters are $40 to $45.
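The cups-to-grams conversion such a system performs is simple in principle: each dry ingredient has a roughly known density, so volume converts to weight with a lookup and a multiplication. Here’s a minimal sketch of the idea; the density table is approximate and purely illustrative, not PantryChic’s actual data.

```python
# Approximate densities in grams per U.S. cup (illustrative values only).
GRAMS_PER_CUP = {
    "all-purpose flour": 120,
    "granulated sugar": 200,
    "white rice": 185,
}

def cups_to_grams(ingredient: str, cups: float) -> int:
    """Convert a volume measure to the weight the dispenser would target."""
    return round(GRAMS_PER_CUP[ingredient] * cups)

print(cups_to_grams("all-purpose flour", 2.5))  # 300
print(cups_to_grams("granulated sugar", 1))     # 200
```

The hard part in practice isn’t the arithmetic, of course—it’s identifying the ingredient in the canister and weighing accurately as it dispenses.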

TasteBooster’s SpoonTek flavor-enhancing spoon

TasteBooster’s founders Ken and Cameron Davidov have been developing products that use a mild electric current produced by the human body for several years. Their latest, SpoonTek, aims to use that current to “excite the taste buds.” The user places a finger on an electrode on the spoon handle, scoops up food, and completes the circuit by touching the tongue to the bowl of the spoon. The founders say the system will allow the health-conscious to use less salt and will also eliminate bitter aftertastes, enabling users to enjoy foods they may previously have found unpleasant. The spoons are priced at $29 each, less in quantity, on Indiegogo.

HyperLychee’s Skadu electric pot scrubber

Finally, we get to cleanup, and a power pot scrubber. Think cordless drill with scrubbing pads and other attachments. There just may be a market for it among people who do the kitchen cleanup in their households and are fans of power tools. The Skadu is $70 on Indiegogo.

CES 2021: What Is Mini-LED TV?

Post Syndicated from Tekla S. Perry original

CES 2021, this year’s fully virtual consumer electronics show, kicked off on Monday with Media Day and a flurry of announcements from the largest consumer electronics manufacturers. For these companies, the center of the consumer electronics world is the television—the bigger the screen the better. People generally don’t replace their TVs as often as they do their mobile devices, so TV manufacturers are constantly looking for a new display technology or feature that will make that TV on the store shelf seem a lot better than the TV in the family room. Some of these efforts have been more successful than others—3D displays, for example, never caught on.

This year, the TV manufacturers’ tech news coalesced around mini-LED technology. LG pitched its “quantum nanocell mini-LED,” a technology it somehow turned into the acronym QNED.

TCL touted its “OD Zero” mini-LED.

And Hisense, Samsung, and others are also unveiling mini-LED televisions at the show.

To understand what mini-LED is—and isn’t—and why it improves the TV picture, it helps to know a bit about what came before it.

First, to be clear, mini-LED isn’t a new display technology so much as a new backlight. The picture itself is generated by a liquid crystal display (LCD); how that evolved is an entirely different story.

Originally, LCD displays were lit by fluorescent tubes running behind the screens. Then, as LEDs became available at mass market prices, they replaced the fluorescent tubes, and the LCD TV came to be called the LED TV (a misrepresentation that still drives me a little crazy). LEDs have several advantages over fluorescent tubes, including energy efficiency, size, and the ability to be turned on and off quickly.

The first LED TVs used just dozens of the components, either arrayed on the edges or behind the LCD panel, but the arrays quickly grew in complexity and companies introduced what is called “local dimming.” With this technology (in which groups of LEDs are turned down or even off in the darkest areas of the TV picture), contrast, a big contributor to picture quality, increases significantly.
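The logic of local dimming can be sketched in a few lines: divide the backlight into a grid of zones and drive each zone according to the brightest pixel it sits behind, so zones behind dark image regions shut off entirely. This toy Python sketch is purely illustrative—the 8×12 grid and the peak-brightness rule are my assumptions, not any manufacturer’s actual algorithm:

```python
def local_dimming_levels(luma, zones=(8, 12)):
    """Toy local dimming: set each backlight zone's drive level (0.0-1.0)
    to the peak luminance of the pixels that zone sits behind."""
    h, w = len(luma), len(luma[0])
    zh, zw = zones
    levels = [[0.0] * zw for _ in range(zh)]
    for i in range(zh):
        for j in range(zw):
            block = [luma[r][c]
                     for r in range(i * h // zh, (i + 1) * h // zh)
                     for c in range(j * w // zw, (j + 1) * w // zw)]
            levels[i][j] = max(block)  # drive the zone to its brightest pixel
    return levels

# A mostly dark frame with one bright patch:
frame = [[0.0] * 720 for _ in range(480)]
for r in range(100, 150):
    for c in range(200, 260):
        frame[r][c] = 0.9

levels = local_dimming_levels(frame)
off = sum(row.count(0.0) for row in levels)
print(off, "of", 8 * 12, "zones switched off")  # prints: 92 of 96 zones switched off
```

With nearly every zone dark, the black regions of the picture emit almost no light, which is why contrast improves so dramatically—and why more, smaller zones (the mini-LED pitch) track the image more precisely.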

Recalls Aaron Drew, director of product development for TCL North America: “We were an early proponent of local dimming in the U.S. market. We had the first TV with what we called contrast control zones in 2016. That array had nearly 100 zones with a total number of LEDs in the hundreds.

“There is no industry definition of mini-LED. For us, I would say, we introduced our first backlight using mini-LEDs; it had just over 25,000 LEDs and nearly 1000 contrast control zones.”

Drew says that he’s happy to see other brands join TCL with mini-LED product announcements, but points out that TCL’s new display technology is interesting not just because it uses mini-LEDs, but because the company has figured out a way to eliminate the need to maintain space between the LEDs and the LCD panel; in traditional designs, he says, a little space is required to allow lenses to distribute the light evenly.

“We have a way to precisely control the distribution of the light without a globe shape lens and optical depth,” Drew says.

And that reduced optical depth (OD) feature gives the TCL technology the “OD Zero” tag.

This latest generation of TCL LED TVs contain tens of thousands of LEDs and thousands of contrast control zones, the company indicated in its announcement.

Over at LG, the Q of its QNED acronym refers to the quantum dot color film that most LED TVs use today to convert some of the blue LED light into the green and red wavelengths used in an RGB picture. The N, for NanoCell, also refers to that quantum dot layer. The company apparently dropped the L from LED to avoid confusion with its OLED TVs.

LG says these QNED TVs will have almost 30,000 LEDs and 2500 local dimming zones.

None of the mini-LED TV announcements have included pricing to date.

Is mini-LED technology different enough to send the average consumer running to the store to replace a TV they currently own? Without actually seeing these new displays in person—the huge downside of a virtual trade show—it’s impossible to tell. My guess, however, is no. But it is different enough to make a mini-LED TV display look better on a store shelf than a non-mini-LED TV parked next to it, so it’s not surprising that everybody is jumping into this pool.

Yet to come to the consumer market, and likely to make a much bigger difference in picture quality, is the so-called micro-LED. These use LED components that are small enough to act as pixels themselves, not as backlights for an LCD. The upshot: They lose no brightness to filters and can be turned off individually for true blacks—and they actually deserve to be called LED TVs. While some companies have announced micro-LED displays, these are expensive and gigantic—in the over-100-inch screen size category—and aimed at commercial markets only. Samsung did announce a 110-inch micro-LED model at CES 2021 that will be available in March, but it’s hard to see where such an expansive TV display would fit in most homes. Micro-LEDs will have to get even smaller (again, there is no official measurement of “micro”) before the prices and the screen sizes make sense for consumers.

And now about those rollable displays. TCL in its online press conference also demonstrated flexible OLED displays; one was in the form of a phone that rolls out to extend the display (LG also teased a rollable phone). TCL’s other rollable display came in the form of a scroll about the size of a folded compact umbrella. It unrolls to a 17-inch display. Had there been an in-person CES audience, these would definitely have sparked gasps and rustles in the crowd, but they are likely a long way from appearing on store shelves.


Will Alphabet’s Unionization Effort Spread to Other Big Tech Companies?

Post Syndicated from Tekla S. Perry original

On Monday, a group of employees from Google and other companies under the Alphabet umbrella announced the creation of the Alphabet Workers Union (AWU). The organization, formed with support of the Communications Workers of America (CWA), indicated that it had 226 members at Monday’s launch; by Friday its membership had grown to 530. The union is open to all employees and contractors of Alphabet, including engineers and other tech workers. Two software engineers, Parul Koul and Chewy Shaw, have been elected to head the organization as executive chair and vice chair, respectively. Members will contribute one percent of their total compensation to fund its efforts.

“This is historic—the first union at a major tech company by and for all tech workers,” said Dylan Baker, a Google software engineer, in the press release.

It is indeed historic, agrees Peter Meiksins, a sociology professor emeritus at Cleveland State University who has studied engineering unions. Engineers generally haven’t been friendly to the idea of unionization, he says. And the AWU is also groundbreaking because it has formed to address social issues, not the economic concerns that more typically spark union movements. Simply put, many of the first members of the AWU would like to see the company return to its original standard: “Don’t be evil.”

As one of its first official acts, the organization on Thursday released an open letter to YouTube executives blasting the company for its lackluster response to President Donald Trump’s role Wednesday in what it described as a “fascist coup attempt.” (YouTube is under the Alphabet umbrella.) YouTube, the letter states, “refuses to hold Donald Trump accountable to the platform’s own rules by choosing only to remove one video instead of removing him from the platform entirely. Additionally, the platform only cited ‘election fraud’ as the reason for removing yesterday’s video, even as he clearly celebrates the individuals responsible for the violent coup attempt. … YouTube must no longer be a tool of fascist recruitment and oppression.”

In recent years, unionization efforts sparked at a few small tech companies. In early 2018, startup Lanetix fired 14 software engineers after they petitioned to be represented by the CWA; the workers filed a complaint with the National Labor Relations Board (NLRB) and, in 2019, shortly before hearings were to begin, Lanetix settled with the former workers. (Lanetix recently rebranded as Winmore.)

And tech workers at Kickstarter unveiled an organizing effort in 2019, meeting resistance from senior staff but ultimately prevailing. Moreover, in 2019 four former employees fired by NPM, the company behind the npm JavaScript package manager, filed complaints with the NLRB indicating that the dismissals were retaliation for union organizing activities. The company and former employees reached a settlement fairly quickly.

While these formal unionizing efforts were going on at smaller companies, large numbers of tech professionals at Google held protests and petition drives without an official organization behind them.

The largest such protest, a worldwide walkout in 2018, opposed the company’s handling of sexual harassment charges; more than 20,000 employees participated. A sit-in followed to protest retaliation taken against organizers of the original walkout. Google employees also held a petition drive opposing involvement with the U.S. Department of Defense’s Project Maven, an effort to apply artificial intelligence in ways that could potentially support drone warfare. And another petition drive opposed efforts to build Dragonfly, a search app intended for use in China that would allow government censorship.

Google eventually dropped both Project Maven and Dragonfly. But friction between the company and its tech workforce has continued. The most recent outrage, according to AWU’s announcement, was the firing of AI researcher Timnit Gebru, who had coauthored a paper on issues of bias and other concerns about AI. It was these and other situations that sparked the formation of the union, though the organizers indicated that economic issues are not off the table.

“The only tactic that has ensured workers are respected and heard is collective action,” the statement said. “The Alphabet Workers Union will be the structure that ensures Google workers can actively push for real changes at the company, from the kinds of contracts Google accepts to employee classification to wage and compensation issues.”

I asked Cleveland State’s Meiksins to put the Alphabet unionization effort in historical perspective.

IEEE Spectrum: Why have engineers typically not formed unions?

Peter Meiksins: Engineers in the U.S., especially since the latter part of the 19th century, have seen themselves as professionals like doctors and lawyers and accountants. They see unions as a blue collar thing. Because they think of themselves as professionals, they organize themselves through societies like IEEE and ASME [the American Society of Mechanical Engineers]. Those organizations historically haven’t been friendly to the idea of unionizing. That’s not surprising; there is a significant corporate presence in their membership. And labor laws favor that perspective. If you have any supervisory responsibility at all, you are not seen as an appropriate union member.

Spectrum: How will those laws affect the Alphabet union, which right now is pitching itself as, basically, Come one, come all?

Meiksins: I do wonder if somebody will question whether, if engineers are parts of hiring committees that hire other engineers, they are eligible to be members of unions. You may have to draw a line somewhere between project managers and line engineers.

Spectrum: The announcement indicated that the Alphabet union builds on activity involved in organizing the Google protests of recent years.

Meiksins: That’s the thing that is most striking. The traditional economic motivation for forming unions is largely absent here; rather, it seems to be a response to social issues, particularly military involvement and gender issues. They’re not complaining about their pay.

I’m not aware of too many examples of this. During the Vietnam war, there were grumblings by engineers who worked in the defense industry. These never got to the level of organized protests, but there were questions raised about collaborating with the military. I did some research on that by looking at the letters to the editor of IEEE Spectrum published in the early 1970s. There was a lot of discussion about the war then. This particular movement echoes that a little.

Spectrum: At this point, the AWU is a ‘minority union,’ which does not give it formal bargaining power; that would take recognition of the union by management, either voluntarily or forced by a company-wide vote. Do the members have any power or protections?

Meiksins: The union might provide moral support. That is, there is a collective voice that could speak on behalf of someone, but [it] has no power. The members can only get that if they organize a formal union and negotiate a contract that has a grievance procedure in it. With just an informal organization, the company does not have to pay any attention to it if they don’t want to.

Spectrum: Would you expect to see the Alphabet union formation spark similar efforts at other large tech companies?

Meiksins: Some of the issues motivating the movement at Google exist at plenty of companies. The question is whether, if there isn’t a real economic basis for the formation of a union, people will risk their livelihoods. In the United States, it is pretty easy to fire people. If Google fires some of the ringleaders, that could have a chilling effect on what happens elsewhere. People do take a risk when they do something like this.

Of course, for engineers today, particularly those in the computer sector, it is a seller’s market. So they may be more willing to take a risk, saying, “I like working here, but I don’t like what you are doing, and I can go make just as much across the street.” These people are well paid and in demand, so they aren’t taking a huge economic risk.

The alternative, of course, would be to just leave, not protest. However, what you may be seeing is that the job-hopping culture that engineers have lived in for decades now has led to the conclusion that there is nothing better out there. Google was supposed to be the great company. Now it is apparently not. So people are saying, “I want to work in tech, so I need to make tech some place I want to work in.”

What Do Software Engineers Get Paid?

Post Syndicated from Tekla S. Perry original

The ranks of highest-paying software engineering companies underwent a bit of a shuffle in 2020, particularly at the entry level. That’s according to Levels’ 2020 report on software engineering salaries.

Levels, founded in 2017, builds tools to help employers and job seekers calculate and compare salary offers using standardized titles and job descriptions. The company gathers salary data through self-reports, verified when possible by pay stubs and other documentation.

According to the Levels 2020 report, at the entry level, engineers at Lyft are doing the best, with a median package of base salary, bonus, and stock grants of US $230,000 annually. In second place at $222,000 came Roblox, a newcomer to Levels’ charts. Roblox, an online game and event creation system, took off during the pandemic as a way to help children communicate with each other, and even hold birthday parties online.

For engineers with two-to-five years of experience, Airbnb’s package of $295,000 put it in first place, though that number dropped substantially from $334,000 in 2019. And for engineers with more than five years of experience, LinkedIn took the top spot, with a package worth $461,000.

Regional differences in engineering pay became a hot topic throughout 2020, with most software engineering jobs turning remote and companies beginning to contemplate, if not institute, geographically-based salary adjustments for engineers who moved their home offices beyond physical commuting distance. 

Levels did not include regional data in its 2018 analysis, and only reported limited data in 2019, so in the chart below, 2020 data stands alone. It reveals few surprises—the San Francisco Bay Area is at the top (minus cost-of-living adjustments), the position it has held in every regional study I’ve seen. These numbers may evolve over the coming year, as large California tech companies follow through on announced moves out of the region.

This Is the Year for Apple’s AR Glasses—Maybe

Post Syndicated from Tekla S. Perry original

Apple didn’t invent the portable music player, although I challenge you to name one of the approximately 50 digital-music gadgets that preceded the iPod. Apple didn’t invent the smartphone either—it just produced the first one that made people line up overnight to buy it.

And Apple isn’t first out of the gate with augmented-reality (AR) glasses, which use built-in sensors, processors, and displays to overlay information on the world as you look at it. Google introduced its Glass in 2013, but it generated more controversy and criticism than revenues. More recently, Magic Leap promised floating elephants and delivered file sharing. And Epson has been quietly selling its Moverio AR glasses for niche applications like closed captioning for theatergoers and video monitoring for drone pilots, while steering clear of the consumer market. The point is, although they were pioneering, none of these efforts managed to put augmented reality into comfortable, useful, affordable glasses that appealed to an ordinary person.

And now comes Apple. For years, Apple has been filing patents for AR and virtual-reality (VR) technology, acquiring related startups, and hiring AR experts from the Jet Propulsion Laboratory, Magic Leap, Oculus, and others. The company has been tilling this soil for quite a while, and speculation has for years been intense about when all this cultivation would bear fruit. Though Apple has carefully shrouded its AR efforts since their origins around 2015, a few signs, such as a declaration from a legendary Apple leaker, suggest that an unveiling could come as soon as March of this year.

It’s a giant project for Apple. Some analysts suggest it could give the company a jump on a market that could swell from US $7.6 billion to $29.5 billion over the next five years. Published reports indicate that Apple has around 1,000 people working on the effort. And now, after working on various designs for years, those engineers have likely made dozens and dozens of prototypes, according to Benedict Evans, an analyst who also produces an influential newsletter on technology. Before long, we’ll find out whether Apple can do for AR glasses what it did for portable music players, smartphones, and smartwatches.

“It’s the threshold moment that all of the AR community have been waiting for,” says David Rose, a researcher in the MIT Media Lab and former CEO of Ambient Devices. “AR glasses hold so much promise for learning, and navigating, and simply getting someone to see through your eyes. The uses are mind-blowing…. You could see a city through the eyes of an architect or an urban planner; find out about the history of a place; how something was made; or how the landscape you are seeing could be made more sustainable in the future.”

Rumors of a 2021 launch flared up last May, when Jon Prosser, who hosts the YouTube Channel Front Page Tech and has made a career out of reporting leaks from Apple and others, said that an announcement of what he expected to be called Apple Glass would likely come at a March 2021 event. Prosser predicted displays for both eyes, a gesture-control system, and a $500 price point. Other pundits have chimed in with different release dates and specifications. But 2021 remains the popular favorite, at least for an unveiling.

What technology will be packed inside Apple’s first generation of AR glasses? It depends on the experience Apple has chosen to provide, and for this, there are two main possibilities. One is simply displaying information about what’s in front of the wearer via text or icons that appear in a corner of the visual field and effectively appear attached to the glasses. In other words, the text doesn’t change as you swivel your head. The alternative is placing data or graphics so that they appear to be attached to or overlaid upon objects or people in the environment. With this setup, if you swivel your head, the data moves out of your vision as the objects do and new data appears that’s relevant to the new objects swerving into your field of view. This latter scheme is harder to pull off but more in line with what people expect when they think about AR.
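The difference between the two schemes comes down to whether a label’s position on the display is computed from the head pose. This deliberately simplified one-dimensional sketch illustrates the distinction; the 40-degree field of view, the fixed head-locked position, and the function itself are all illustrative assumptions, not anything Apple has disclosed:

```python
FOV_DEG = 40.0  # assumed horizontal field of view of the glasses

def screen_x(label_bearing_deg, head_yaw_deg, world_locked=True):
    """Horizontal screen position (-1..1) of a label.
    World-locked: position depends on where the object is relative to
    where the head points. Head-locked: fixed spot on the display."""
    if not world_locked:
        return 0.8                  # pinned near the display's right edge
    offset = label_bearing_deg - head_yaw_deg
    if abs(offset) > FOV_DEG / 2:
        return None                 # object has left the field of view
    return offset / (FOV_DEG / 2)

# A label on an object 10 degrees to the right of straight ahead:
print(screen_x(10, 0))    # 0.5  -- appears right of center
print(screen_x(10, 10))   # 0.0  -- centered once you turn toward it
print(screen_x(10, 45))   # None -- swivel past it and the label drops out
print(screen_x(10, 45, world_locked=False))  # 0.8 -- head-locked never moves
```

A real world-locked system does this in three dimensions with full six-degree-of-freedom head tracking, which is exactly why it demands the sensor suite and processing that a simple head-up display does not.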

Evans is betting on that second approach. “If they were just going to do a head-up display, they could have done it already for $100,” he points out.

Evans isn’t making a guess as to whether Apple will launch AR glasses in 2021 or later, but when they do, he says, it won’t be as a prototype, or an experiment aimed at a niche market, like Magic Leap or HoloLens. “Apple sells things that they think have a reason for a normal person to buy. It will be a consumer product and have a mass-market price. There will be stuff to develop further, but it won’t be $2,000 and weigh 3 kilos.”

Evans expects the first version will include eye tracking, so the glasses can tell what part of the broader field of view is attracting the user’s attention, along with inertial sensors to monitor head motion. Head gestures may well be part of the interface, and it will likely have a lidar sensor on board, enabling the glasses to create a depth map of the wearer’s surroundings. In fact, Apple’s top-of-the-line tablet and phone, the iPad Pro and iPhone 12 Pro, incorporate lidar for tracking motion and calculating distances to objects in a scene. “It’s pretty obvious,” Evans says, “that lidar in the iPad is a building block” for the glasses.

One big question about the glasses’ display, Evans says, is whether it will take a new approach to presenting an image that can be visible in daylight. The most common approach to date has been using a microLED to project the image onto the glass; in daylight conditions this approach requires that the added-in graphics be limited to the brightest of colors. Recent rumors suggest that Apple will use Sony’s OLED microdisplay as a source for the projected image. But although the luminance of OLED displays is impressive, MIT’s Rose says, rendering a full spectrum of color in daylight will still be challenging.

The glasses will contain a visible-light camera—or two—to collect images of people and places for analysis. The main function of that camera won’t be to record video, because the backlash against Google Glass made that function pretty much a nonstarter. Rather, the purpose of the camera will be to simply enable the software to know what the wearer is seeing in order to provide the contextual information.

“Apple will try hard to not to use words like ‘video camera,’” says Rose. “Rather, they will call it, say, a ‘full-spectrum sensor,’” he adds. “Lifelogging as a use case has become pretty abhorrent to our society.” If an option to store video clips does exist, Apple will likely design the glasses to prominently warn observers exactly when video or still images are being recorded, Rose believes.

The data processing, at least for this first generation of glasses, is widely expected to take place on the user’s phone. Otherwise, says Rose, “the battery requirements will be too high.” And off-board processing means the designers don’t have to worry about the problem of heat dissipation just yet.

What will Apple call the gadget? Prosser is saying “Glass”; others say anything but, given that Google Glass became the subject of many jokes.

Whether or not Apple will ship AR glasses in 2021—and whether or not the product will be successful—comes down to one question, says analyst Evans. “Whose job at Apple is it to look at this and say ‘This is sh-t’ or ‘This is not sh-t’? In the past it was Steve Jobs. Then it was Jonathan Ive. Who now will look at version 86 or version 118 and say, ‘Yes, this is great now. This is it!’?”

This article appears in the January 2021 print issue as “Look Out for Apple’s AR Glasses.”

Techies Want a Vaccine Mandate Before Returning To the Office

Post Syndicated from Tekla S. Perry original


With the United States poised to issue emergency use authorization for at least one COVID-19 vaccine, tech professionals are thinking about what that will mean for the workplace and returning to an office. Last week Blind, a company that operates private social networks for tech employees, asked its users three simple questions about the tech workplace and the COVID vaccine:

  • Do employers have the right to ask employees to get vaccinated before returning to the office?
  • Would you get vaccinated if your employer asked you to?
  • Would you go back to the office if vaccines are not mandatory?

An overwhelming majority (69 percent) of the survey’s 3273 respondents indicated that employers do have the right to mandate vaccination. Even more would comply with such a mandate.

Indeed, a vaccine mandate may be necessary to bring the majority of tech workers back into company offices. Only 36 percent of respondents indicated that they would be willing to return to in-person work without such a mandate.

Breaking the respondents down by company showed some differences. Tech professionals at Indeed and Netflix seem more willing to return to in-person work without a vaccine mandate than those at the average company, while tech workers at Airbnb, Cisco, Intuit, and Oracle are far less willing.

What is it about those workplaces that makes the difference? Could it be location? Company culture? The number of employees at the location? Or simply the design of the buildings? (I know I’d be more concerned about going to an office with sealed windows that’s accessed via elevator than a more open-air setting; certainly Oracle’s Silicon Valley conglomeration of office towers would raise my pandemic-primed hackles.) Perhaps we’ll get some insight as vaccines roll out and tech companies prepare to move away from full-time work-at-home policies.

Semiconductor Industry Forecast: Sunny and Bright with Few Clouds in Sight

Post Syndicated from Tekla S. Perry original

“I’ve never seen a better time for this industry,” said Mark Edelstone. “Chips are cool again.”

Edelstone, who is chairman of global semiconductor investment banking for Morgan Stanley, and has some 30 years of experience in the chip business, was speaking on a panel at the annual semiconductor forum held (virtually this year) by startup incubator Silicon Catalyst. He was not alone in his assessment.

“The market is hot,” said fellow panelist Ann Kim, managing director and head of the frontier technology group for Silicon Valley Bank. “There is a strong funding environment and the cost of capital is low. Venture capital funds have over $150 billion of dry powder. Companies in [the] semiconductor space are raising massive growth rounds…[and] semiconductor entrepreneurs should be attacking the market right now.”

The reason for such a sunshiny outlook? Surprisingly, it’s due in large part to winds that changed as a result of the coronavirus storm.

Said Edelstone: “The shift to the cloud and work from home are significant trends right now, catalyzed by COVID.”

Both are fueling demand for semiconductors, he indicated—in particular, the acceleration of the move by companies from their own computing infrastructure to cloud services will continue post-pandemic. “We are only about 10 percent of the way there in terms of what can move to the cloud.”

The fact that the pandemic has proven to be more of a boost than a drag on semiconductor industry fortunes was not exactly expected.

Said Jodi Shelton, cofounder and CEO of the Global Semiconductor Alliance: “I don’t think any of us anticipated how well things would hold up. People [initially] talked about a lot of cost-cutting, but by May and June, the attitude changed.”

“The pandemic has been toxic for the economy,” she said, “but Nasdaq and the Dow hit records, there are record deals, record amounts of money being raised….You wouldn’t know we are in a pandemic if you look at those numbers.”

The (Brief) Pandemic Pause

Kim also reported that the initial fears by semiconductor company leaders were short-lived. “We spoke to the VC-backed companies that we work with at the beginning of sheltering in place,” she said. “There was definitely a pause. Management teams had to take a fresh look at their runways; people rushed to the capital market to close equity rounds. But then they realized that debt is available and cheap, and they can use debt as a safety blanket.”

“We thought we would see a U-shaped curve,” said Edelstone. He was referring to a rapid drop and then a slow period for semiconductor companies before a recovery—one that would be “easier to come out of than the dot-com crash.”

“But to see how it has shot up has been amazing,” he said.

Contributing to that extremely short pause and quick turnaround has been the ability of the semiconductor industry to pivot to remote work, a more daunting challenge than the adjustment faced by the software-centric tech companies.

“The thing that has impressed me the most is just how productive everybody has been,” Edelstone said. “All of our companies in our industry are operating virtually, designing these complex devices and taping them out relatively on schedule. It has been incredible to watch the resiliency that technology has been able to deliver.”

Industry executives had worried that the boom was driven by inventory stockpiling: that fears of manufacturing disruptions from the virus, or of trade issues with China, had prompted the companies that use semiconductors in their products to order more devices earlier than they needed, panelists reported. But that seems not to be the case. Said Shelton: “It seems a lot of inventory has been burned off.”

The China Question

Those trade issues still form the hint of a cloud on the horizon, panelists indicated. “Things with China may likely get worse before they get better,” said Shelton. And the tensions between the U.S. and China could spread to Taiwan, affecting TSMC. With TSMC being such a dominant manufacturer in terms of both technical capabilities and market share, any disruption there would ripple throughout the entire industry.

So, said Shelton, the question is, “Can we reset the relationship with China, dialing down the rhetoric and moving towards a solution? The Biden administration will have pressure placed on it to remain tough on China. And we have six weeks more of the Trump administration; there are things they could do [with regards to China] that would be hard to undo.”

Beyond the COVID Era

After the pandemic, when the world settles back to normal—or a new normal—what will the chip industry look like? That was the question posed by moderator Don Clark, a New York Times contributing journalist.

Said Shelton: “We are very optimistic about the future. The industry has a lot of room to grow.”

But the industry may grow in a different way than it has in the past, when semiconductor companies competed with each other to create the most powerful or lowest-power processors.

Instead, said Edelstone, “I think we will increasingly see semiconductors as a core competency for companies. They start with software, develop a chip to drive it, and go to market as a systems company.”

Consider Nvidia, he said. Even 10 years ago, “[Nvidia CEO] Jensen [Huang] would have said that it is not a semiconductor company, it is a full stack, end to end, solution provider. That is where the future is going to be,” not, he indicated, in bringing a $10 product to market as a standalone semiconductor company.

Noisy and Stressful? Or Noisy and Fun? Your Phone Can Tell the Difference

Post Syndicated from Tekla S. Perry original

Smartphones for several years now have been able to listen nonstop for wake words, like “Hey Siri” and “OK Google,” without excessive battery usage. These wake-up systems run in special, low-power processors embedded within a phone’s larger chip set. They rely on neural networks trained to recognize a broad spectrum of voices, accents, and speech patterns. But they recognize only their wake words; more generalized speech recognition requires the involvement of a phone’s more powerful processors.

Today, Qualcomm announced that the Snapdragon 888, its latest 5G chipset for mobile devices, will incorporate an extra piece of software in the bit of semiconductor real estate that houses the wake-word recognition engine. Created by Cambridge, U.K., startup Audio Analytic, the ai3-nano will use the Snapdragon’s low-power AI processor to listen for sounds beyond speech. Depending on the applications made available by smartphone manufacturers, the phones will be able to react to such sounds as a doorbell, water boiling, a baby’s cry, and fingers tapping on a keyboard—a library of some 50 sounds that is expected to grow to 150 to 200 in the near future.

The first application available for this sound recognition system will be what Audio Analytic calls Acoustic Scene Recognition AI. Instead of listening for just one sound, the scene recognition technology listens for the characteristics of all the ambient sounds to classify an environment as chaotic, lively, boring, or calm. Audio Analytic CEO and founder Chris Mitchell explains.

“There are two aspects to an environment,” he says, “eventfulness, which refers to how many individual sounds are going on, and how pleasant we find it. Say I went for a run, and there were lots of bird sounds. I would likely find that pleasant, so that would be categorized as ‘lively.’ You could also have an environment with a lot of sounds that are not pleasant. That would be ‘chaotic.’”
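Mitchell’s two axes map neatly onto four quadrants. A minimal sketch of that mapping, with normalized score ranges and a 0.5 threshold that are my assumptions rather than Audio Analytic’s actual values, might look like this:

```python
# Hypothetical sketch: mapping the two perceptual axes Mitchell describes
# (eventfulness, pleasantness) onto the four scene categories.
# The 0.0-1.0 ranges and 0.5 cutoff are illustrative assumptions.

def classify_scene(eventfulness: float, pleasantness: float) -> str:
    """Both scores assumed normalized to the range 0.0 to 1.0."""
    if eventfulness >= 0.5:
        # Lots going on: pleasant means lively, unpleasant means chaotic.
        return "lively" if pleasantness >= 0.5 else "chaotic"
    # Little going on: pleasant means calm, unpleasant means boring.
    return "calm" if pleasantness >= 0.5 else "boring"

print(classify_scene(0.9, 0.8))  # busy but pleasant, e.g. birdsong on a run -> lively
```

The hard part, of course, is scoring eventfulness and pleasantness from raw audio; the quadrant logic above is the trivial final step.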

Mitchell’s team selected those four categories after reviewing studies about perceptions of sound, then trained the neural network on the company’s custom-created dataset of 30 million audio recordings.

What a mobile device will do with its newfound awareness of ambient sounds will be up to the manufacturers that use the Qualcomm platform. But Mitchell has a few ideas.

“A train, for example, is boring,” he says. “So you might want to increase the active noise cancellation on your headphones to remove the typical low hum.  But when you get off the tube, you want more transparency—so you can hear bike messengers, so noise cancellation should be reduced. On a smartphone you could also adjust notifications based on the type of environment, whether it vibrates or rings, or what sort of ring tone is used.”

I first met Mitchell two years ago, when the company was demonstrating prototypes of how its audio analysis technology would work in smart speakers. Since then, Mitchell reports, products using the company’s technology are available in some 150 countries. Most are security and safety systems, recognizing the sound of breaking glass, a smoke alarm, or a baby’s cry.

Audio Analytic’s approach, Mitchell explained to me, involves using deep learning to break sounds into standard components. He uses the word “ideophones” to refer to these components. The term also refers to the representation of a sound in speech, like “quack.” Once sounds are coded as ideophones, each can be recognized just as digital assistants’ systems recognize their wake words. This approach allows the ai3-nano engine to take up just 40 KB and run completely on the phone without connecting to a cloud-based processor.
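As a rough caricature of that component-based approach (not Audio Analytic’s actual pipeline; the component names and template library here are invented), recognition reduces to matching an incoming sequence of building blocks against a small set of stored templates, much as a wake-word engine matches one fixed phrase:

```python
# Illustrative only: each target sound is stored as a sequence of shared
# building-block components ("ideophones"). The IDs below are made up.
SOUND_TEMPLATES = {
    "doorbell":    ("chime_hi", "chime_lo"),
    "smoke_alarm": ("beep", "silence", "beep"),
    "dog_bark":    ("bark",),
}

def recognize(components):
    """Match an incoming component sequence against the template library."""
    for label, template in SOUND_TEMPLATES.items():
        if components == template:
            return label
    return None  # no template matched

print(recognize(("beep", "silence", "beep")))  # smoke_alarm
```

Because the templates share components, adding a new sound to the library means adding one short tuple, not retraining a full recognizer, which is consistent with the tiny 40-KB footprint the company claims.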

Once the technology is established in smartphones, Mitchell expects its applications will grow beyond security and scene recognition. Early instances, he expects, will include media tagging, games, and accessibility.

For media tagging, he says, the system can search phone-captured video by sound. So, for example, a parent can easily find a clip of a child laughing. Or children could use this technology in a game that has them make the sounds of an animal—say a duck or a pig. Then, on completing the task, the display could reward them with a virtual costume.

As for accessibility, Mitchell sees the technology as a boon to the hard of hearing, who already rely on mobile phones as assistive devices. “This can allow them to detect [and specifically identify] a knock on the door, a dog barking or a smoke alarm,” he says.

Once additional sound recognition capabilities have rolled out, Mitchell expects the company to work next on identifying context beyond specific events or scenes. “We have started doing early stage research in that area,” he says. “So our system can say ‘It sounds like you are making breakfast’ or ‘It sounds like you are getting ready to leave the house.’” That would allow apps to take advantage of the information to arm a security system or adjust lights or heat.

Sunshine Comes Out to “Manage the Mundane”

Post Syndicated from Tekla S. Perry original

People haven’t always been so nice to Marissa Mayer, the early Google employee who rose to vice president at that company, then took over as CEO of a struggling Yahoo only to be removed when the company was acquired by Verizon five years later, in 2017. They criticized her for being a fashionista and an ice queen, for micromanaging, for taking too short a maternity leave and for bringing her infant to the office, for failing at a turnaround that might have been an impossible task, and simply for being “no Steve Jobs.”

But Mayer seemed to never lose her sense of fun—her bright clothes make her stand out at any tech event, and her Halloween and Christmas decorations have become a local legend.

Two years ago, Mayer answered the “what comes after Yahoo” question by announcing a startup, Lumi Labs, cofounded with long-time colleague Enrique Munoz Torres. At the time, the two gave little indication about exactly what this tiny Palo Alto company would do beyond developing some projects and prototypes.

This week, Lumi Labs changed its name to Sunshine and released its first official product, Sunshine Contacts, an app that uses AI to organize, autocomplete, and update a user’s contacts. Previously the company did an experimental release of an app designed to manage holiday mailing lists.

OK, maybe a contact manager doesn’t sound like a change-the-world product, or one that needed to be backed by $20 million in seed capital. But don’t write it off just yet. Sunshine’s goal, according to the company’s website, is to “make the mundane magical.”

“Imagine if your contacts magically stayed up-to-date with no effort on your part,” the company states.  “Or if the great photos you have of your friends got sent to them automatically. What if you never forgot another birthday?

“Smartphones have connected the world and put the entire internet into our pockets. We can get whatever we want delivered to our home whenever we want, sometimes by flying drone. With the rise of artificial intelligence, dreams of virtual assistants, self-driving cars and global facial recognition are no longer that far-fetched. However, despite transformational advances in technology, there are still tons of mundane, time-consuming tasks that we all do (or just don’t do) daily.”

Sunshine intends Contacts to be the first of many products aimed at these everyday problems. The app pulls in data from Apple and Google contacts, removes duplicates while using AI to distinguish between contacts with the same name, and automatically digs through LinkedIn profiles and other publicly available information to fill in missing details, including addresses, profile pictures, and additional phone numbers. Going forward, it promises to automatically update contact information when necessary.
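Deduplication that distinguishes between contacts with the same name has to be conservative. A hedged sketch of one way that could work (the field names are invented, and Sunshine’s actual AI is proprietary) is to merge records only when they share a hard identifier such as an email address or phone number, so two different people who happen to share a name stay separate:

```python
# Toy contact dedup: merge only on a shared hard identifier, never on name.
def dedupe(contacts):
    merged = []
    for c in contacts:
        for m in merged:
            # Shared email or phone: same person, so fold the records together.
            if (c["emails"] & m["emails"]) or (c["phones"] & m["phones"]):
                m["emails"] |= c["emails"]
                m["phones"] |= c["phones"]
                break
        else:
            # No overlap with anyone seen so far: keep as a separate contact.
            merged.append({"name": c["name"],
                           "emails": set(c["emails"]),
                           "phones": set(c["phones"])})
    return merged

records = [
    {"name": "Ann Lee", "emails": {"ann@x.com"}, "phones": set()},
    {"name": "Ann Lee", "emails": {"ann@x.com"}, "phones": {"555-0100"}},
    {"name": "Ann Lee", "emails": {"ann@y.org"}, "phones": set()},  # a different Ann
]
print(len(dedupe(records)))  # 2: the first two merge, the third stays separate
```

The interesting machine-learning work is in deciding when softer signals (shared employer, overlapping social graphs) justify a merge; the skeleton above only shows the safe baseline.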

I have to admit I was smiling and nodding my head while reading Sunshine’s announcement. Having covered Mayer on and off for about a decade, I could see her fingerprints all over the company. In the various interactions I’ve had with her, she does seem like someone who wouldn’t want to forget a birthday or just about any occasion: for some time she set the agenda of holidays, milestones, and artists for Google Doodles, and her holiday decorations are the stuff of Silicon Valley legend. She may indeed be a micromanager—she personally passes out Halloween candy to the more than a thousand trick-or-treaters that line up in front of her house (in non-COVID years)—but all the more reason she needs tools to help stay on top of everything.

And she does have a history of sweating the details around a user’s experience. In a lengthy interview I did with her in 2011, she talked about how she argued to make Gmail display “unfurled threaded” messages—not just grouping messages, but displaying the whole conversation at once. She made a successful case for what she called “infinite scroll” through the first thousand images that came up on Google image search. “I think it is important to not ask people to click too much and to basically go with the flow.” And she famously tested some 40 shades of blue to find a uniform color for Google’s offerings.

Put all that together in the hands of a busy entrepreneur, parent of three, and computer scientist, and of course she’s aiming at using AI to fix our address books. Sunshine promises that tools for scheduling and event organization will come next.

Here’s where the “nice” part comes in. First, the name. Who wouldn’t appreciate a little sunshine on these dark days? (Though I do wonder how much it cost to pick up the URL.)

Mayer certainly will try to make Sunshine a fun place to work. When Google’s first offices were just down the street from Sunshine’s digs, she organized regular Friday movie nights, and kept them going for some time after the company expanded. That will have to wait until after the pandemic, but in the meantime, there’s ice cream. Sunshine’s website reports “We like ice cream. We have it every Friday.” And while I rarely talk about fashion when covering tech, you can’t tell me that the peacock-print dress Mayer wears in some of the publicity photos isn’t intended to make a statement about company culture.

As for hiring, the company is currently looking for software engineers with expertise in Android apps, iOS apps, machine learning, systems, and security, to work via Google Hangouts and Zoom for now and in the company’s Palo Alto offices post-pandemic. Sunshine’s careers page states: “Above all, everyone on our team is smart, loves to learn, and is NICE. Because life is too short to spend time working with people who aren’t nice.”

And two photos on the website display a throw pillow with appliquéd letters spelling out “Be nice or leave,” perhaps a warning to Mayer’s critics as well as future employees.

How Facebook’s AI Tools Tackle Misinformation

Post Syndicated from Tekla S. Perry original

Facebook today released its quarterly Community Standards Enforcement Report, in which it reports actions taken to remove content that violates its policies, along with how much of this content was identified and removed before users brought it to Facebook’s attention. That second category relies heavily on automated systems developed through machine learning.

In recent years, these AI tools have been focused on hate speech. According to Facebook CTO Mike Schroepfer, the company’s automated systems identified and removed three times as many posts containing hate speech in the third quarter of 2020 as in the third quarter of 2019. Part of the credit for that improvement, he indicated, goes to a new machine learning approach that uses live, online data instead of just offline data sets to continuously improve. The technology, tagged RIO, for Reinforced Integrity Optimizer, looks at a number tracking the overall prevalence of hate speech on the platform, and tunes its algorithms to try to push that number down.

“The idea of moving from a handcrafted off-line system to an online system is a pretty big deal,” Schroepfer said. “I think that technology is going to be interesting for us over the next few years.”

During 2020, Facebook’s policies toward misinformation grew increasingly tight, though many would say not tight enough. The company in April announced that it would be directly warning users exposed to COVID-19 misinformation. In September it announced expanded efforts to remove content that would suppress voting and a plan to label claims of election victory before the results were final. In October it restricted the spread of a questionable story about Hunter Biden. And throughout the year it applied increasingly explicit tags on content identified as misinformation, including a screen that blocks access to the post until the user clicks on it. Guy Rosen, Facebook vice president of integrity, reported that only five percent of users take that extra step.

That’s the policy. Enforcing that policy takes both manpower and technology, Schroepfer pointed out in a Zoom call with journalists on Wednesday. At this point, he indicated, AI isn’t used to determine if the content of an original post falls into the categories of misinformation that violates its standards—that is a job for human fact-checkers. But after a fact-checker identifies a problem post, the company’s similarity matching system hunts down permutations of that post and removes those automatically.

Facebook wants to automatically catch a post, says Schroepfer, even if “someone blurs a photo or crops it… but we don’t want to take something down incorrectly.”

“Subtle changes to the text—a no, or not, or just kidding—can completely change the meaning,” he said. “We rely on third-party fact checkers to identify it, then we use AI to find the flavors and variants.”
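Facebook hasn’t published this pipeline, but the pairing Schroepfer describes, a fuzzy match on the image alongside a strict check on the text, can be caricatured in a few lines. The perceptual hashes, bit threshold, and negation list below are all invented for illustration:

```python
# Toy similarity matcher: a blurred or cropped copy of a flagged image still
# matches (its perceptual hash differs by only a few bits), but a caption
# edit that inserts a negation word does not, since that can flip the meaning.
NEGATIONS = {"no", "not", "kidding"}  # illustrative; "just kidding" etc.

def hamming(a, b):
    """Bit difference between two equal-length hex perceptual hashes."""
    return bin(int(a, 16) ^ int(b, 16)).count("1")

def is_variant(flagged_hash, flagged_text, cand_hash, cand_text, max_bits=10):
    if hamming(flagged_hash, cand_hash) > max_bits:
        return False  # image too different to be a copy
    added = set(cand_text.lower().split()) - set(flagged_text.lower().split())
    return not (added & NEGATIONS)  # a newly added negation changes meaning

# A lightly blurred copy (hash off by one bit), same caption: a match.
print(is_variant("ffe1", "masks do X", "ffe3", "masks do X"))  # True
```

Real systems like SimSearchNet++ work on learned embeddings rather than simple hashes, but the asymmetry is the point: tolerant on pixels, strict on words.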

The company reported in a blog post that a new tool, SimSearchNet++, is helping this effort. Developed through self-supervised learning, it looks for variations of an image, adding optical character recognition when text is involved.

As an example, Schroepfer pointed to two posts about face masks identified as misinformation (above).

Thanks to these efforts, Rosen indicated, Facebook directly removed 12 million posts with dangerous COVID misinformation between March and October, and put warnings on 167 million more COVID-related posts debunked by fact checkers. It took action on similar numbers of posts related to the U.S. election, he reported.

Schroepfer also reported that Facebook has deployed weapons to fight deepfakes. Thanks to the company’s Deepfake Detection Challenge, launched in 2019, Facebook does have a deepfake detector in operation. “Luckily,” he said, “that hasn’t been a top problem” to date.

“We are not done in terms of where we want to be,” he said. “But we are nowhere near out of ideas for how to improve the capability of our systems.”


Listen Up With Speakers in Lightbulbs, Shower Heads

Post Syndicated from Tekla S. Perry original

The last time I spent much time thinking about LED bulbs was some seven years ago, when a kitchen remodel turned out more operating room than cozy family gathering place. What went wrong? The architect determined that the contractor had purchased LEDs of the wrong color temperature, a problem easily fixed.

Since then, I occasionally noticed some advances in LED light bulbs at CES and gadget shows—like dimmable LEDs (now common) and smart bulbs that connect to home Wi-Fi networks for remote control. But nothing that made LEDs shine.

So the last thing I expected when checking out the more than 40 new products at Pepcom’s holiday launch event was to get excited about a couple of LED bulbs. One of these gadgets acts as a Bluetooth speaker that automatically networks with nearby speaker-bulbs to create a surround-sound effect; the other has an adjustable color temperature and a unique user interface. My third gadget pick, a water-powered shower speaker, doesn’t light up, but is about as unobtrusive as a household light bulb.

Here are the details. (Note that this was a virtual event, so the demos and discussions, while held live, were remote; I haven’t actually held any of these gadgets in my hands, much less tested them in the real world.)

1. GE’s LED+ Speaker bulbs

The Bluetooth speaker in one of these LED+ bulbs can work alone, or as part of a surround-sound network of as many as 10 bulbs. Company representatives indicated that the gadgets come in a variety of standard bulb sizes to fit lamps, floodlights, or recessed lighting, starting at about US$30. Each bulb comes with a remote control, though in a multi-speaker network only one bulb needs to be paired with the remote; it then acts as a parent and controls the other bulbs in its vicinity.

2. Feit Electric’s Selectable Color bulbs

These LED bulbs vary in color temperature from about 2700 K to 6500 K, depending on the particular version. As I learned from my kitchen remodel mistake, color temperature matters a lot; it can make the difference between a space feeling like an office or operating room instead of a cozy den. I was particularly impressed by the simple interface that doesn’t require an app or a remote—flicking the light switch on and off cycles through the color options; circuitry in the bulb recognizes the short sequence of power interruptions. And Feit’s representatives made the pitch that in today’s stay-at-home COVID times, the ability to change the feel of a room matters even more than usual, not a bad selling point. Prices, again, vary by type of bulb, but generally start at about US $10, a premium of a couple of dollars over a standard LED bulb.
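Feit hasn’t documented its firmware, but the interface implies simple logic: at power-up, check how long the power was out, and advance a color-mode index kept in nonvolatile memory only if it was a quick flick. A guess at that logic, with assumed temperature steps and a two-second flick window:

```python
# Hypothetical sketch of the switch-flick interface. The temperature steps
# and the 2.0-second flick window are my assumptions, not Feit's values.
COLOR_TEMPS_K = [2700, 3000, 3500, 4000, 5000, 6500]

def next_mode(stored_index, off_seconds, flick_window=2.0):
    """Return the color-mode index to use when power is restored."""
    if off_seconds <= flick_window:
        # Quick flick: advance to the next color temperature, wrapping around.
        return (stored_index + 1) % len(COLOR_TEMPS_K)
    # Normal off/on: keep the last setting.
    return stored_index

i = 0
i = next_mode(i, 0.5)    # quick flick: advances to index 1
i = next_mode(i, 3600)   # overnight off: setting is retained
print(COLOR_TEMPS_K[i])  # 3000
```

The only hardware this demands is a retained index in flash and a way to estimate how long the power was out, which is why the bulb can skip an app and a remote entirely.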

3. Ampere’s Shower Power

Another clever placement of a Bluetooth speaker in an ordinary household object, the cool factor of Ampere’s shower speaker isn’t that it’s waterproof; it’s that it screws into the shower head to run on hydropower from the shower flow. I was already slightly familiar with the potential of shower power—I have an outside shower that’s lit by LEDs built into the shower head and powered by the water flow. Unlike that gadget, however, Ampere’s device includes a battery that can store power for listening while the shower is off. Company representatives indicated that the gadget banks about 120 milliampere-hours of charge per hour with standard water flow, slightly less or more depending on water pressure, and will retail for around $70. (It is currently taking preorders via Kickstarter.)
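Taking the quoted figure at face value, roughly 120 milliampere-hours of charge banked per hour of flow, the back-of-the-envelope math for a typical shower is straightforward:

```python
# Rough arithmetic only; the charge rate is the company's quoted figure and
# the ten-minute shower is an assumed typical duration.
CHARGE_RATE_MAH_PER_HOUR = 120
shower_minutes = 10

stored_mah = CHARGE_RATE_MAH_PER_HOUR * shower_minutes / 60
print(stored_mah)  # 20.0 mAh banked for listening after the water is off
```

That modest harvest per shower explains the battery: the speaker accumulates charge across uses rather than powering itself live from the flow alone.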


Should the DoD’s Tech Professionals Work From Home—Permanently?

Post Syndicated from Tekla S. Perry original

With tech companies announcing plans to continue allowing—and even encouraging—employees to work remotely beyond the end of the pandemic, the U.S. Defense Innovation Board has urged the Department of Defense to improve its own work from home policies.

In a September report, the board, an advisory committee to the Secretary of Defense, pointed out that the DoD “has traditionally struggled to compete for digital talent,” and “the emerging work from home norm creates an opening for the Department to either adapt and narrow the gap or fall further behind in competing for top-notch technical talent.”

Right now, the Department of Defense is allowing some employees to work remotely, using standard remote collaboration tools with an extra layer of security, but has not decided whether use of these tools will be permitted after workers return to the office. The Defense Innovation Board’s report argues that not only should these tools be preserved, but the use of such tools, along with accompanying infrastructure upgrades, should be expanded. Embracing remote work permanently would, the report claims, allow the DoD to hire a “more agile, diverse, and distributed workforce.”

In addition to urging the DoD to follow in the footsteps of commercial tech employers, the Defense Innovation Board made a few suggestions that I haven’t seen coming from tech businesses, and which those firms might want to embrace in return.

For one, the Board suggested that the DoD create “a nationwide network of dedicated co-working or shared workspaces” for remote work. This, it suggested, might be a way of handling classified work in a more distributed fashion, but it also could be a way for businesses to better fulfill the desires of employees to live wherever they want, but work some number of days each week at home and some in an office.

In another suggestion, the Board urged, as part of an effort to change the culture around remote work, that senior DoD leaders “should commit to periodically working from home to model behavior, norms, and expectations around performance and presence; this will also create a demand for IT capabilities to remain up-to-date and not atrophy.”