IndieBio, the life sciences startup incubator that is part of venture firm SOSV, has opened applications to startups focused on developing “diagnostics, therapeutics, vaccines, disinfection, and other solutions addressing the worldwide problem of emerging infectious diseases.” And IndieBio is not just looking for solutions from biotech companies—startups involved in computing or automation, or indeed anybody with a technology that can address the problem, can apply.
Working from home is the new normal, at least for those of us whose jobs mostly involve tapping on computer keys. But what about researchers who are synthesizing new chemical compounds or testing them on living tissue or on bacteria in petri dishes? What about those scientists rushing to develop drugs to fight the new coronavirus? Can they work from home?
Silicon Valley-based startup Strateos says its robotic laboratories allow scientists doing biological research and testing to do so right now. Within a few months, the company believes it will have remote robotic labs available for use by chemists synthesizing new compounds. And, the company says, those new chemical synthesis lines will connect with some of its existing robotic biology labs so a remote researcher can seamlessly transfer a new compound from development into testing.
The company’s first robotic labs, up and running in Menlo Park, Calif., since 2012, were developed by one of Strateos’ predecessor companies, Transcriptic. Last year Transcriptic merged with 3Scan, a company that produces digital 3D histological models from scans of tissue samples, to form Strateos. This facility has four robots that run experiments in large, pod-like laboratories for a number of remote clients, including DARPA and the California Pacific Medical Center Research Institute.
Strateos CEO Mark Fischer-Colbrie explains Strateos’ process:
“It starts with an intake kit,” he says, in which the researchers match standard lab containers with a web-based labeling system. Then scientists use Strateos’ graphical user interface to select various tests to run. These can include tests of the chemical properties of compounds, biochemical processes including how compounds react to enzymes or where compounds bind to molecules, and how synthetic yeast organisms respond to stimuli. Soon the company will be adding the capability to do toxicology tests on living cells.
“Our approach is fully automated and programmable,” Fischer-Colbrie says. “That means that scientists can pick a standard workflow, or decide how a workflow is run. All the pieces of equipment, which include acoustic liquid handlers, spectrophotometers, real-time quantitative polymerase chain reaction instruments, and flow cytometers are accessible.
“The scientists can define every step of the experiment with various parameters, for example, how long the robot incubates a sample and whether it does it fast or slow.”
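Transcriptic, the Strateos predecessor that built the first labs, also published Autoprotocol, an open JSON format for describing experiments as machine-executable steps. A workflow with the kind of parameterized incubation Fischer-Colbrie mentions might be sketched like this (an illustrative example only; the container types and parameter values here should not be read as Strateos’s production API):

```python
import json

# A hypothetical Autoprotocol-style protocol: incubate a sample plate,
# then read absorbance on a spectrophotometer. Every step is explicit
# data rather than lab-notebook shorthand, which is what lets a robot
# execute and log it.
protocol = {
    "refs": {
        "sample_plate": {"new": "96-flat", "store": {"where": "cold_4"}}
    },
    "instructions": [
        {
            "op": "incubate",
            "object": "sample_plate",
            "where": "warm_37",          # 37 °C incubator
            "duration": "2:hour",
            "shaking": False,
        },
        {
            "op": "absorbance",
            "object": "sample_plate",
            "wells": ["A1", "A2", "A3"],
            "wavelength": "600:nanometer",  # OD600, a standard growth readout
            "dataref": "od600_after_incubation",
        },
    ],
}

print(json.dumps(protocol, indent=2))
```

Because every parameter—temperature, duration, wavelength—is data, the same file can be re-run verbatim, which underlies the reproducibility advantage Fischer-Colbrie cites.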
To develop the system, Strateos’ engineers had to “connect the dots, that is, connect the lab automation to the web,” rather than dramatically push technology’s envelope, Fischer-Colbrie explains, “bringing the concepts of web services and the sharing economy to the life sciences.”
Nobody had done it before, he says, simply because researchers in the life sciences had been using traditional laboratory techniques for so long, it didn’t seem like there could be a real substitute for physically being in the lab.
Late last year, in a partnership with Eli Lilly, Strateos added four more biology lab modules in San Diego and by July plans to integrate these with eight chemistry robots that will, according to a press release, “physically and virtually integrate several areas of the drug discovery process—including design, synthesis, purification, analysis, sample management, and hypothesis testing—into a fully automated platform. The lab includes more than 100 instruments and storage for over 5 million compounds, all within a closed-loop and automated drug discovery platform.”
Some of the capacity will be used exclusively by Lilly scientists, but Fischer-Colbrie says Strateos capped that usage and will be selling lab capacity beyond the cap to others. It currently prices biological assays on a per-plate basis and will price chemical reactions per compound.
The company plans to add labs in additional cities as demand for the services increases, in much the same way that Amazon Web Services adds data centers in multiple locales.
It has also started selling access to its software systems directly to companies looking to run their own, dedicated robotic biology labs.
Strateos, of course, had developed this technology long before the new coronavirus pushed people into remote work. Fischer-Colbrie says it has several advantages over traditional lab experiments in addition to enabling scientists to work from home. Experiments run via robots are easier to standardize, he says, and record more metadata than is customary, or even possible, during a manual experiment. This will likely make repeating research easier, allow geographically separated scientists to work together, and create a shorter path to bringing AI into the design and analysis of experiments. “Because we can easily repeat experiments and generate clean datasets, training data for AI systems is cleaner,” he said.
And, he says, robotic labs open up the world of drug discovery to small companies and individuals who don’t have funding for expensive equipment, expanding startup opportunities in the same way software companies boomed when they could turn to cloud services for computing capacity instead of building their own server farms.
Says Alok Gupta, Strateos senior vice president of engineering, “This allows scientists to focus on the concept, not on buying equipment, setting it up, calibrating it; they can just get online and start their work.”
“It’s frictionless science,” says CEO Fischer-Colbrie, “giving scientists the ability to concentrate on their ideas and hypotheses.”
Women across the United States who work in tech are generally paid less than their male counterparts, even when education, years of experience, and specific occupations match.
That’s not exactly news; study after study has confirmed the differential. The latest analysis, a review of three years’ worth of salary survey data collected by job search site provider Dice, once again found that the overall gap persists.
The Dice analysis did find some interesting differences in regions and engineering disciplines. For instance, women in cloud engineering, systems architecture, and network engineering might be doing better than their male counterparts, though the sample sizes for cloud engineering and systems architecture were too small to be conclusive, and the differences were not statistically significant.
Regionally, women in tech in Minnesota might be making more than their male counterparts, though again, the difference of $3929 didn’t meet the $5000 threshold for statistical significance. Meanwhile, those in Utah and Alabama are facing the biggest gap; California’s tech women fit somewhere in the middle.
Of course, being happy at work isn’t just about salary, although it is the number one factor, according to Dice’s research. Dice looked at other motivators, and found that remote work options are the second most significant factor for women, third for men. Those options are rapidly spreading through the tech industry this month, in response to the new coronavirus.
The pay gap is so normal that women have come to expect to be paid less, the Dice survey suggested. According to the survey, the average salary for a woman who reports being satisfied with her compensation is $93,591, while the average salary for men who report being satisfied is $108,711. Despite lower expectations, more women (38 percent) than men (33 percent) told Dice that they aren’t satisfied with their current salaries.
What causes the gender pay gap? Men and women have different explanations. According to a different survey of 738 tech professionals conducted in February by software review site TrustRadius, 45 percent more women than men in tech think that discrimination and bias are the cause of the pay disparity, while three times as many men as women blame a difference in job performance.
Researchers on WeBank’s AI Moonshot Team have taken a deep learning system developed to detect solar panel installations from satellite imagery and repurposed it to track China’s economic recovery from the novel coronavirus outbreak.
This, as far as the researchers know, is the first time big data and AI have been used to measure the impact of the new coronavirus on China, Haishan Wu, vice general manager of WeBank’s AI department, told IEEE Spectrum. WeBank is a private Chinese online banking company founded by Tencent.
The team used its neural network to analyze visible, near-infrared, and short-wave infrared images from various satellites, including the infrared bands from the Sentinel-2 satellite. This allowed the system to look for hot spots indicative of actual steel manufacturing inside a plant. In the early days of the outbreak, this analysis showed that steel manufacturing had dropped to a low of 29 percent of capacity. But by 9 February, it had recovered to 76 percent.
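The core of that analysis can be pictured as thresholding: short-wave-infrared radiance over a steel plant is far higher when furnaces are running hot. A toy version of the capacity estimate (with invented sites and numbers; WeBank’s actual system uses a trained neural network over multispectral imagery) might look like:

```python
# Toy capacity estimate: fraction of known furnace sites whose
# short-wave-infrared (SWIR) radiance exceeds a "hot" threshold.
# Sites and radiance values are synthetic, for illustration only.
HOT_THRESHOLD = 0.65  # normalized SWIR radiance indicating active heat

swir_radiance = {      # one reading per known furnace site
    "plant_01": 0.91,
    "plant_02": 0.12,
    "plant_03": 0.77,
    "plant_04": 0.08,
    "plant_05": 0.70,
}

active = [site for site, r in swir_radiance.items() if r > HOT_THRESHOLD]
capacity_pct = 100 * len(active) / len(swir_radiance)
print(f"{capacity_pct:.0f}% of monitored sites appear active")
```

A production system would aggregate many pixels per plant across repeated satellite passes; the principle, inferring activity from heat signatures, is the same.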
The researchers then looked at other types of manufacturing and commercial activity using AI. One of the techniques was simply counting cars in large corporate parking lots. From that analysis, it appeared that, by 10 February, Tesla’s Shanghai car production had fully recovered, while tourism operations, like Shanghai Disneyland, were still shut down.
Moving beyond satellite data, the researchers took daily anonymized GPS data from several million mobile phone users in 2019 and 2020, and used AI to determine which of those users were commuters. The software then counted the number of commuters in each city, and compared the number of commuters on a given day in 2019 and its corresponding date in 2020, starting with Chinese New Year. In both cases, Chinese New Year saw a huge dip in commuting, but unlike in 2019, the number of people going to work didn’t bounce back after the holiday. While things picked up slowly, the WeBank researchers calculated that by 10 March 2020, about 75 percent of the workforce had returned to work.
Projecting out from these curves, the researchers concluded that most Chinese workers, with the exception of those in Wuhan, will be back to work by the end of March. Economic growth in the first quarter, their study indicated, will take a 36 percent hit.
Finally, the team used natural language processing technology to mine Twitter-like services and other social media platforms for mentions of companies that provide online working, gaming, education, streaming video, social networking, e-commerce, and express delivery services. According to this analysis, telecommuting for work is booming, up 537 percent from the first day of 2020; online education is up 169 percent; gaming is up 124 percent; video streaming is up 55 percent; social networking is up 47 percent. Meanwhile, e-commerce is flat, and express delivery is down a little less than 1 percent. The analysis of China’s social media activity also yielded the prediction that the Chinese economy will be mostly back to normal by the end of March.
You’ve heard of FOMO, or Fear of Missing Out? That concern is scarce these days. Right now, the big worry of tech employees is the novel coronavirus, and it’s causing a Fear of Going to Work. For want of a better acronym, let’s call it FOG. And FYI, it’s leading to a lot more people who are WFH (Working From Home).
Blind, the firm that provides anonymous social networks for verified professionals, says FOG is huge and growing. Blind surveyed its users this week, asking “Are you hesitant to go to work because of the coronavirus outbreak?”
Demand for data scientists and engineers has, for the past couple of years, been off the charts. The number of openings for machine learning and data engineers posted on recruiting web sites continues to grow by double digits annually, and those working in the field have been commanding ever-higher salaries.
So anyone contemplating a future in data science or machine learning needs to build up software engineering skills, right?
Wrong, says Ryohei Fujimaki, founder and CEO of dotData. Fujimaki has, for nearly a decade, been working to use AI to automate much of the job of the data scientist.
We can, he says, “eliminate the skill barrier. Traditionally, the job of building a machine learning model can only be done by people who know SQL and Python and statistics. Our system automates the entire process, enabling less experienced people to implement machine learning projects.”
DotData—which is currently offering its tools as a cloud-based service—came out of NEC. Fujimaki, then a research fellow at the company, started thinking about automating machine learning in 2011 as a way to make the 100 or so data scientists on his research team more productive. He got sidetracked for a few years, focused on commercializing an algorithm designed to make machine learning transparent, but in 2015 returned to the machine learning project.
“A typical use case for machine learning in the business world is prediction,” he said, “predicting demand of a product to optimize inventory, or predicting the failure of a sensor in a factory to allow preventive maintenance, or scoring a list of possible customers.”
“The first step in developing a machine learning model for prediction is feature engineering—looking at historical patterns and coming up with hypotheses,” he says. Feature engineering generally requires a team of people with a multitude of skill sets—data scientists, SQL experts, analysts, and domain experts. Typically, only after this team comes up with a set of hypotheses does machine learning step in, combining all those hypotheses to figure out how to best weigh them to come up with accurate predictions.
In dotData’s system, AI takes over that first step, coming up with and testing its own hypotheses from a set of historical data.
So, he says, “you don’t need domain experts or data scientists, and as a subproduct AI can explore many more hypotheses than human experts—millions instead of hundreds in a limited time window.”
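The automated step can be pictured as brute-force hypothesis generation: derive many candidate features from raw data, score each against the prediction target, and keep the best. A stripped-down sketch (dotData’s actual system is far more sophisticated; the data, feature templates, and scoring rule here are invented):

```python
from statistics import mean

# Raw per-customer purchase histories and a churn label to predict.
purchases = {
    "c1": [20, 25, 30], "c2": [5], "c3": [50, 60], "c4": [10, 10, 10, 10],
}
churned = {"c1": 0, "c2": 1, "c3": 0, "c4": 1}

# Candidate feature "hypotheses": aggregations a human analyst
# might otherwise propose by hand.
feature_templates = {
    "n_purchases": len,
    "total_spend": sum,
    "avg_spend": mean,
    "max_spend": max,
}

def score(feature_fn):
    """Crude separation score: gap between the feature's class means."""
    vals = {c: feature_fn(h) for c, h in purchases.items()}
    pos = mean(v for c, v in vals.items() if churned[c])
    neg = mean(v for c, v in vals.items() if not churned[c])
    return abs(neg - pos)

ranked = sorted(feature_templates,
                key=lambda name: score(feature_templates[name]),
                reverse=True)
print("best candidate feature:", ranked[0])
```

Scaled up from four templates to millions of machine-generated ones, and from this crude score to proper statistical validation, this loop is the “millions instead of hundreds” of hypotheses Fujimaki describes.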
In 2016, Fujimaki’s group at NEC let Japan’s Sumitomo Mitsui Banking Corp. (SMBC) test a prototype against a team using traditional data science tools. “Their team took three months, our process took a day, and our results were better,” he says. NEC spun off the group in early 2018, remaining as a shareholder. Right now DotData has about 70 employees, about 70 percent of them engineers and data scientists, along with a few dozen customers, Fujimaki says.
“In the near future,” Fujimaki says, “80 percent of machine learning projects can be fully automated. That will free up the most skilled, computer-science-PhD-type of data scientists, to focus on the other 20 percent.”
Demand for data scientists overall won’t drop from what it is today, Fujimaki predicts, though the double-digit growth may slow. The job, however, will become more focused. “Data scientists today are expected to be superman, good at too many things—statistics, and machine learning, and software engineering.”
And a new role is likely to emerge, he predicts. “Call it the business data scientist, or the citizen data scientist. They aren’t machine learning people, they are more business oriented. They know what predictions they need, and how to use those predictions in their business. It will be useful for them to have basic knowledge of statistics, and to understand data structures, but they won’t need deep mathematical understanding or knowledge of programming languages.
“We can’t eliminate the skill barrier, but we can significantly lower it. And there will be many more potential people who will be able to do this.”
What makes a job a really good job? Job search site Indeed defines it as a combination of salary, demand as represented by the share of job postings, and growth in the number of job postings for a particular title.
By that definition, tech jobs have generally done well. For the past few years Indeed has used these factors to rank a broad range of careers in the U.S., including doctors, lawyers, and realtors, as long as the average salary is at least $75,000 and the site sees 20 job postings per million jobs in its database.
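Indeed’s filter-then-rank approach can be sketched as follows. The eligibility thresholds come from the article; the composite scoring rule and all the numbers are invented, since Indeed does not publish its exact formula:

```python
# Each career: average salary, postings per million jobs, and
# year-over-year growth in postings. Values are synthetic.
careers = {
    "software architect":   {"salary": 119_000, "per_million": 900, "growth": 0.17},
    "full stack developer": {"salary": 106_000, "per_million": 828, "growth": 1.61},
    "realtor":              {"salary":  64_000, "per_million": 190, "growth": 0.05},
    "dentist":              {"salary": 196_000, "per_million":  81, "growth": 0.11},
}

# Eligibility thresholds reported in the article: at least $75,000
# average salary and 20 postings per million jobs.
eligible = {
    name: c for name, c in careers.items()
    if c["salary"] >= 75_000 and c["per_million"] >= 20
}

def composite(name):
    """Hypothetical rank-sum across the three factors (lower wins)."""
    total = 0
    for key in ("salary", "per_million", "growth"):
        ordered = sorted(eligible, key=lambda n: eligible[n][key], reverse=True)
        total += ordered.index(name)
    return total

best = min(eligible, key=composite)
print("top job:", best)
```

A rank-sum rewards careers that are strong across the board over careers dominant on a single axis, which is one plausible way a single title can top a list without leading any one factor.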
In Indeed’s rankings, jobs in the tech category claimed five of the top 10 slots in 2018 and three in 2019. This year, however, tech jobs claimed a whopping seven of the top 10 slots, pushing out all other professions except real estate agent, dentist, and sales director.
Indeed’s data echoes job reviews site Glassdoor’s 2020 list of top jobs, which also had seven tech jobs in the top 10, though a slightly different seven. In that ranking, which included job satisfaction among its factors, front-end engineer came out on top.
What are Indeed’s great tech jobs? Software architect came out on top, driven by demand. Full stack developer came in second, driven by the growth in the number of job postings. Dentists and doctors, however, still top the average salary charts. The 2020 top ten are listed in the table below.
[Table: Top 10 Jobs in 2020, listing each job’s average base salary (2019) and number of postings per 1 million jobs posted (2019)]
In 2017, Facebook announced that it had assigned at least 60 engineers to an effort to build a brain-computer interface (BCI). The goal: allow mobile device and computer users to communicate at a speed of at least 100 words per minute—far faster than anyone can type on a phone.
Last July, Facebook-supported researchers at the University of California San Francisco (UCSF) published the results of a study demonstrating that Facebook’s prototype brain-computer interface could be used to decode speech in real time—at least speech in the form of a limited range of answers to questions.
Facebook that month published a blog post explaining a bit about the technology developed so far. The post described a device that shines near-infrared light into the skull and uses changes in the way brain tissue absorbs that light to measure the blood oxygenation of groups of brain cells.
Said the blog post:
Think of a pulse oximeter—the clip-like sensor with a glowing red light you’ve probably had attached to your index finger at the doctor’s office. Just as it’s able to measure the oxygen saturation level of your blood through your finger, we can also use near-infrared light to measure blood oxygenation in the brain from outside of the body in a safe, non-invasive way…. And while measuring oxygenation may never allow us to decode imagined sentences, being able to recognize even a handful of imagined commands, like “home,” “select,” and “delete,” would provide entirely new ways of interacting with today’s VR systems—and tomorrow’s AR glasses.
Since then, for starters, the team has been finishing up a move to its new hardware design. It’s not, by any means, the final version, but the researchers say it is vastly more usable than the initial prototype.
The hardware used for UCSF’s research was big, expensive, and not all that wearable, admitted Mark Chevillet, who directs Facebook’s brain-computer interface research program. But the team has developed a cheaper and more wearable version, using lower-cost components and some custom electronics. This so-called research kit, shown in the July blog post, is currently being tested to confirm that it is just as sensitive as the larger device, he says.
Meanwhile, the researchers are focusing their efforts on speed and noise reduction.
“We are measuring the hemodynamic response,” Chevillet says, “which peaks about five seconds after the brain signal.” The current system detects the response at the peak, which may be too slow for a truly useful brain-computer interface. “We could detect it earlier, before the peak, if we can drive up our signal and drive down the noise,” says Chevillet.
The new headsets will help this effort, Chevillet indicated, because the biggest source of noise is movement. The smaller headset sits tightly on the head, resulting in fewer shifts in position than is the case with the larger research device.
The team is also looking into increasing the size of the optical fibers that collect the signal in order to detect more photons, he says.
And it has built and is testing a system that uses time domains to eliminate noise, Chevillet reports. By sending in pulses of light, instead of continuous light, he says, the team hopes to distinguish between the photons that travel only through the scalp and skull before being reflected—the noise—from those that actually make it into brain tissue. “We hope to have the results to report out later this year,” he says.
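The time-domain idea relies on path length: photons that detour through deeper brain tissue travel farther, so they arrive back at the detector later than photons that only graze the scalp and skull. Gating the detector to keep late arrivals is, in effect, a noise filter. A schematic sketch (the arrival times and gate delay here are invented for illustration, not Facebook’s actual parameters):

```python
# Photon arrival times, in picoseconds after the laser pulse (synthetic).
# Early arrivals mostly traveled through scalp and skull only; late
# arrivals are likelier to have sampled brain tissue.
arrivals_ps = [120, 150, 180, 210, 400, 650, 900, 950, 1100, 1400]

GATE_PS = 350  # keep only photons arriving after this delay

brain_photons = [t for t in arrivals_ps if t > GATE_PS]
rejected = len(arrivals_ps) - len(brain_photons)

print(f"kept {len(brain_photons)} late photons, "
      f"rejected {rejected} early (scalp/skull) photons")
```

The engineering burden falls on the hardware: resolving picosecond-scale arrival differences requires pulsed sources and fast photon-counting detectors, which is why this is a system the team built and is testing rather than a software tweak.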
Another way to improve the signal-to-noise ratio of the device, he suggests, is increasing the contrast. You can’t necessarily increase the brightness of the light, he says; it has to stay below a safe level for brain tissue. But the team can increase the number of pixels in the photodetector array. “We are trying a 32-by-32-pixel single photon detector array to see if we can improve the signal-to-noise ratio, and will report that out later this year,” Chevillet says.
But, he admits, “even with what we are doing to get a better signal, it will be noisy.”
That’s why, Chevillet explained, the company is focusing on detecting the mental efforts that produce speech—it doesn’t actually read random thoughts. “We can use noisy signals with speech algorithms,” he says, “because we have speech algorithms that have been trained on huge amounts of audio, and we can transfer that training over.”
This approach to the brain-computer interface is intriguing but won’t be easy to pull off, says Roozbeh Ghaffari, a biomedical researcher at Northwestern University and CEO of Epicore Biosystems. “There may indeed be ways to relate neurons firing to changes in local blood oxygenation levels,” Ghaffari told Spectrum. “But the changes in blood oxygenation levels that map to neuronal activity are highly localized; the ability to map these localized changes to speech activity—from the skin surface, on a cycle-by-cycle basis—could be challenging.”
Larry Tesler, who died Sunday, February 16, at age 74, is the most famous computer scientist most people have never heard of—and one of the nicest guys I’ve worked with in my years as a tech journalist. I first met him when writing about the amazing things happening at Xerox PARC, when he told me that his groundbreaking work in user interface design started with his determination to prove that the computer mouse was a bad idea.
“I really didn’t believe in it,” he said. “I thought cursor keys were much better. We literally took people off the streets who had never seen a computer. In three or four minutes they were happily editing away using cursor keys. At that point I was going to show them the mouse and prove that they could select text faster than with the cursor keys. Then I was going to show that they didn’t like it.
“It backfired. I would have them spend an hour working with the cursor keys. Then I would teach them about the mouse. They would say, ‘That’s interesting but I don’t think I need it.’ Then they would play with it a bit, and after two minutes they never touched the cursor keys again.”
A researcher to the core, Tesler accepted the results of the experiment—but then set out to make the mouse, then a three-button device accompanied by a five-button keypad, better. He simplified the user interface—bringing us the click-and-drag movement to select text and graphics, along with cut, copy, and paste—and paved the way for the one-button mouse so many of us use today.
Tesler truly revolutionized the way we use computers. So today, when you cut, copy, or paste, take a moment to thank him.
I profiled Tesler, an IEEE member, in detail in 2005, while he was vice president of user experience and design at Yahoo, after he spent nearly two decades at Apple and a few years at Amazon developing that company’s shopping interface.
Here’s how I opened that story:
“Like Woody Allen’s 1983 movie character, Zelig, who appears at every significant historical event of his era, Tesler has had a hand in major events making computer history during the past 30 years. When the first document-formatting software was developed at Stanford University in 1971, Tesler was coding it. When a secretary first cut and pasted some text on a computer screen at Xerox Corp.’s Palo Alto Research Center (PARC) in 1973, Tesler was looking over her shoulder. When the first portable computer was turned on in an airport waiting area (and on an airplane), Tesler had his fingers on the keyboard. When Steve Jobs went to PARC in 1979 to see the legendary demo that is purported to have set the stage for a revolution in computing, Tesler had his hand on the mouse.
And when Apple Computer Inc.’s infamous Newton handheld computer failed spectacularly in the early 1990s, taking millions of dollars of investment and a few careers down with it, Tesler was there, too. Hey, nobody gets it right 100 percent of the time.”
Employers want engineers who know Go. That’s the takeaway of the Hottest Coding Languages section of job site Hired’s annual State of Software Engineers report. Engineers experienced with Go received an average of 9.2 interview requests, making it the most in-demand language. Worldwide, Go’s popularity among employers was followed by Scala and Ruby. That’s not great news for engineers, who ranked Ruby number one in least loved languages, followed by PHP and Objective-C.
There are regional differences in employer interest. In the San Francisco Bay Area and Toronto, Scala rules; in London, it’s TypeScript. A roundup of regional favorites, along with the worldwide rankings, is in the chart below.
(To compile its data, Hired reviewed 400,000 interview requests from 10,000 companies made to 98,000 job seekers throughout 2019.)
According to a survey of 1600 software engineers conducted by job search site Hired as part of its annual State of Software Engineers report, a number of stereotypes about software engineers are just plain wrong.
First, they aren’t all rolling into the office around noon and coding late into the night. In fact, 66 percent of software engineers, according to Hired, are larks, not owls, preferring to get up early and finish work early rather than sleeping in and working late. If forced to choose, 53 percent would work from home every day, and 47 percent would come into an office every day, the Hired survey indicated. (But, at least in Silicon Valley, most don’t have to choose on a permanent basis, and mix and match depending on the project, the day, or the season.)
The increasingly healthy foods and beverages made available by high tech companies appear to be luring engineers away from the coffee machine; according to the Hired survey 40 percent of software engineers drink just one cup of coffee a day, and only 2 percent ever drink Soylent—that would-be trend never did really catch on.
Finally, Hired asked engineers what kind of music they listen to through their ubiquitous noise-cancelling headphones. Electronic/dance beats came out on top, followed by rock and then classical.
It’s a good time to be an engineer specializing in augmented reality or virtual reality. That’s the conclusion of the latest report by job site Hired, which just released its annual State of Software Engineers report. To compile its data, Hired reviewed 400,000 interview requests from 10,000 companies made to 98,000 job seekers throughout 2019.
Demand for AR and VR engineers, in the form of job postings on Hired’s site, was 1400 percent higher in 2019 than in 2018. Salaries for engineers in these specialties climbed into the $135,000 to $150,000 range, at least in the largest U.S. tech hubs. Demand for gaming engineers and computer vision engineers is also on the upswing; both climbed 146 percent in 2019.
Meanwhile, demand for Blockchain expertise, a shooting star in 2018 with 517 percent greater demand than in the previous year, slowed dramatically, increasing only 9 percent.
What are these developers getting paid? Hired took a look at salaries in the San Francisco Bay Area, New York, Toronto, and London. Salaries climbed across the board, with London showing the most growth at 13 percent year over year, Toronto and New York following at 7 percent, and the already high San Francisco Bay Area salaries growing a not-too-shabby 6 percent. In spite of the growth in demand, AR/VR engineering salaries for most regions have yet to make it into the top ten among engineering specialties. But stay tuned for a change in the rankings next year.
Where are all the U.S. tech jobs? California, of course, and the region shows no sign of losing its dominance, according to a study by job search firm Dice. Dice analyzed 6 million 2019 job postings in the United States in a database provided by Burning Glass Technologies, which aggregated data from employer sites, job boards, and staffing agencies.
While pundits regularly predict that California’s congestion and high cost of housing will drive new regions to take over as the next Silicon Valley, the Dice analysis indicated that California won’t be losing its crown anytime soon.
What’s the hottest job in tech? It depends on how you look at it. If you count job openings, the most in-demand tech professional is the software developer, according to tech recruiting firm Dice. If you’re aiming for the fastest-growing tech role, point your arrow at data engineer, the firm’s research shows. And if you’re zooming in on specific tech skills, SQL is most in demand while Kubernetes is fastest growing.
IBM CEO Ginni Rometty will leave the post in April, the company announced last week. Rometty will be replaced by Arvind Krishna, a senior vice president who runs the company’s cloud computing business. Krishna’s technical chops seem sure to excite the company’s engineers. Krishna, with bachelor’s and Ph.D. degrees in electrical engineering, joined IBM in 1990 and spent years in the company’s technical ranks before moving into management, co-authoring 15 patents along the way. Most recently, he led IBM’s efforts in artificial intelligence and quantum computing as well as cloud. Rometty, who holds a bachelor’s in computer science and electrical engineering, joined IBM as a systems analyst in 1981, before moving into sales and marketing posts about a decade later.
Blind, the company that provides anonymous social networks for employees within specific workplaces, surveyed its current pool of 4100 verified IBM employees to find out. Of the 105 who responded, a clear majority—66.7 percent—think that Krishna will have a positive impact as the new CEO of IBM. Only 5.7 percent of respondents predicted a negative impact, while 27.6 percent remained neutral.
One respondent to the Blind survey said, “I believe Arvind Krishna will be a net positive, and will focus on making IBM about tech again rather than marketing hype.”
By contrast, only 28.6 percent of the respondents indicated that Rometty had a positive impact during her tenure as CEO, with 71.4 percent indicating that was not the case. Of Rometty, another survey respondent said, “She thrived in the ‘tardy, bureaucratic mess’ so couldn’t see why it was killing the company’s future.”
And, though Rometty was the first woman to head the company, a move celebrated as a crack in the glass ceiling, 96.2 percent of respondents to Blind’s survey do not believe her departure will negatively impact diversity and inclusion efforts at the company.
Where is the best place for tech professionals in the United States? Personal finance website provider WalletHub tried to answer that question by looking at the 100 largest metro areas. By its analysis, Seattle, Boston, and Austin came out on top, while Florida metros dominated the bottom 10.
That’s very different from Indeed’s recent study of smaller tech hotspots, which put Huntsville, Ala., at the top, and from SpareFoot’s rankings that gave top honors to San Antonio, Texas. That’s because the WalletHub analysis merged an extremely broad range of factors. The data crunched included the usual variables—like share of job postings in tech, STEM employment growth, and annual median tech wages—but added not so common factors, including number of tech meetups per capita, family friendliness, singles-friendliness, invention patents per capita, quality of engineering universities, and R&D spending.
It grouped these factors into three categories: opportunities, STEM-friendliness, and quality of life, and ranked each metro in each category.
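WalletHub doesn't publish its exact formula, but the approach it describes, scoring each metro in each category and then combining the categories, can be sketched as a weighted composite. The metro names, category scores, and weights below are hypothetical placeholders, not WalletHub's figures.

```python
# Illustrative sketch of a weighted-category ranking in the spirit of
# WalletHub's methodology. All scores and weights are hypothetical.
metros = {
    "Metro A": {"opportunities": 72.1, "stem": 68.4, "quality_of_life": 61.0},
    "Metro B": {"opportunities": 70.3, "stem": 71.2, "quality_of_life": 58.7},
    "Metro C": {"opportunities": 69.8, "stem": 63.5, "quality_of_life": 63.2},
}

# Hypothetical weights for the three categories (sum to 1.0).
weights = {"opportunities": 0.40, "stem": 0.35, "quality_of_life": 0.25}

def composite(scores: dict) -> float:
    """Weighted average of a metro's three category scores."""
    return sum(weights[cat] * scores[cat] for cat in weights)

# Sort metros best-to-worst by composite score.
ranking = sorted(metros, key=lambda m: composite(metros[m]), reverse=True)
print(ranking)
```

Changing the weights reshuffles the order, which is one reason rankings like WalletHub's, Indeed's, and SpareFoot's can disagree so sharply on the same cities.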
Yesterday I drove from Silicon Valley to San Francisco. It started raining on the way and I hadn’t thought to take an umbrella. No matter—I had the locations of two parking garages, just a block or so from my destination, preloaded into my navigation app. But both were full, and I found myself driving in stop-and-go traffic around crowded, wet, hilly, construction-heavy San Francisco, hunting for street parking or an open garage for nearly an hour. It was driving hell.
So when I finally arrived at a launch event hosted by Cruise, I couldn’t have been more receptive to the company’s pitch for Cruise Origin, a new vehicle that, Cruise executives say, is intended to ensure I’ll never need to drive or park in a city again.
What makes a job nearly perfect? It’s a combination of salary, demand (the number of open positions waiting to be filled), and job satisfaction, according to job search firm Glassdoor, which this week released a list of the best jobs in America for 2020.
Using median base salaries reported on Glassdoor in 2019, the number of U.S. job openings as of 18 December 2019, and the overall job satisfaction rating (on a scale of 1 to 5) reported by employees in those jobs, the company put front-end engineer in the number one spot, followed by Java developer and data scientist. That’s a switch from previous trends; data scientist had held the number one spot on Glassdoor’s top jobs list for the four previous years.
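One simplified way to blend three factors like these, not Glassdoor's actual formula, is to min-max rescale salary and openings onto the same 1-to-5 scale as satisfaction and average them equally. The dollar figures and opening counts below are made-up placeholders, not Glassdoor's data.

```python
# Hypothetical sketch of a three-factor job score: salary and openings
# are min-max rescaled onto satisfaction's 1-to-5 scale, then the three
# components are averaged equally. All figures are placeholders.
jobs = {
    "front-end engineer": {"salary": 105_000, "openings": 13_000, "satisfaction": 3.9},
    "java developer":     {"salary":  85_000, "openings": 16_000, "satisfaction": 3.9},
    "data scientist":     {"salary": 110_000, "openings":  6_500, "satisfaction": 4.0},
}

def rescaled(field: str) -> dict:
    """Min-max rescale one numeric field onto the 1-to-5 range."""
    values = [j[field] for j in jobs.values()]
    lo, hi = min(values), max(values)
    return {name: 1 + 4 * (j[field] - lo) / (hi - lo) for name, j in jobs.items()}

salary_s, openings_s = rescaled("salary"), rescaled("openings")
score = {
    name: (salary_s[name] + openings_s[name] + j["satisfaction"]) / 3
    for name, j in jobs.items()
}
ranking = sorted(score, key=score.get, reverse=True)
print(ranking[0])  # highest-scoring job under these placeholder numbers
```

The sketch shows why a job with a merely good salary can still top the list: strong showings on all three components beat an extreme showing on one.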
In fact, you don’t hit a non-tech job until the 8th ranking, where speech language pathologist claims the spot, boosted by astronomical demand [see table].
[Table: 2020’s Top Jobs, with median base salaries. Entries include front-end engineer*, speech language pathologist, and business development manager. *Tech job. Source: Glassdoor]
Tech jobs are among the highest paying, however, claiming seven of the top ten median salaries [see table].
[Table: 2020’s Top Jobs by Salary, with median base salaries. Entries include DevOps engineer* and front-end engineer*. *Tech job. Source: Glassdoor]
Tech jobs, however, aren’t the most satisfying, according to Glassdoor’s rankings. Top honors in that category go to corporate recruiter posts, followed by strategy manager. The only tech jobs to make the top ten rankings in job satisfaction were Salesforce Developer and Data Scientist; two other “most satisfying” job categories included a mix of technical and non-technical professionals [see table].
[Table: 2020’s Top Jobs by Satisfaction, with satisfaction scores out of 5. Entries include customer success manager and business development manager. *Tech job. °Job category includes some tech professions. Source: Glassdoor]
Augmented reality in a contact lens? Science fiction writers envisioned the technology decades ago, and startups have been working on developing an actual product for at least 10 years.
Today, Mojo Vision announced that it has done just that: it has packed microdisplays with 14,000 pixels per inch, wireless radios, image sensors, and motion sensors into contact lenses that fit comfortably in the eyes. The first generation of Mojo Lenses is powered wirelessly, though future generations will have batteries on board. A small external pack, besides providing power, handles sensor data and sends information to the display. The company calls the technology Invisible Computing, and company representatives say it will get people’s eyes off their phones and back onto the world around them.