What makes a job nearly perfect? It’s a combination of salary, demand (the number of empty posts waiting to be filled), and job satisfaction, according to job search firm Glassdoor, which this week released a list of the best jobs in America for 2020.
Using median base salaries reported on Glassdoor in 2019, the number of U.S. job openings as of 18 December 2019, and the overall job satisfaction rating (on a scale of 1 to 5) reported by employees in those jobs, the company put front-end engineer in the number one spot, followed by Java developer and data scientist. That’s a switch from previous trends; data scientist held the number one spot on Glassdoor’s top jobs list for the four previous years.
In fact, you don’t hit a non-tech job until the 8th ranking, where speech language pathologist claims the spot, boosted by astronomical demand [see table].
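Glassdoor does not publish its exact scoring formula, so the following is only a sketch of how a composite ranking of this kind could work. The equal weighting, the normalization, and every number below are assumptions for illustration; the job names match the article, but the figures are hypothetical.

```python
# Hypothetical composite "best jobs" score in the spirit of Glassdoor's
# ranking (salary, demand, satisfaction). Glassdoor does not publish its
# formula; the weighting, normalization, and all numbers here are
# assumptions for illustration only.
jobs = {
    # job: (median base salary $, open positions, satisfaction out of 5)
    "Front-end engineer": (105_000, 13_000, 3.9),
    "Java developer":     (84_000,  16_000, 3.9),
    "Data scientist":     (108_000,  6_500, 4.0),
}

def composite_score(salary, openings, satisfaction, pool):
    """Scale each factor to 0-1 against the candidate pool, then average."""
    def norm(value, idx):
        return value / max(entry[idx] for entry in pool.values())
    # Satisfaction is already on a fixed 1-to-5 scale, so divide by 5.
    return (norm(salary, 0) + norm(openings, 1) + satisfaction / 5) / 3

ranked = sorted(jobs, key=lambda j: composite_score(*jobs[j], jobs), reverse=True)
print(ranked)
# → ['Front-end engineer', 'Java developer', 'Data scientist']
```

With these made-up inputs, strong demand lifts front-end engineer past the better-paid data scientist role, which mirrors the kind of trade-off the article describes.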
[Table: 2020’s Top Jobs, with median base salary. Partial entries: Front End Engineer*, Speech Language Pathologist, Business Development Manager. *Tech job. Source: Glassdoor]
Tech jobs are among the highest paying, however, with seven of the top ten median salaries [see table].
[Table: 2020’s Top Jobs by Salary, with median base salary. Partial entries: DevOps Engineer*, Front End Engineer*. *Tech job. Source: Glassdoor]
Tech jobs, however, aren’t the most satisfying, according to Glassdoor’s rankings. Top honors in that category go to corporate recruiter posts, followed by strategy manager. The only tech jobs to make the top ten rankings in job satisfaction were Salesforce Developer and Data Scientist; two other “most satisfying” job categories included a mix of technical and non-technical professionals [see table].
[Table: 2020’s Top Jobs by Satisfaction, with satisfaction score out of 5. Partial entries: Customer Success Manager, Business Development Manager. *Tech job. °Job category includes some tech professions. Source: Glassdoor]
In the January issue, Spectrum’s editors make every effort to bring the coming year’s important technologies to your attention. Some we get right, others less so. Twelve years ago, IEEE Fellow, Marconi Prize winner, and beloved Spectrum columnist Robert W. Lucky wrote about the difficulty of predicting the technological future. We’ve reprinted his wise words here.
Why are we engineers so bad at making predictions?
In countless panel discussions on the future of technology, I’m not sure I ever got anything right. As I look back on technological progress, I experience first retrospective surprise, then surprise that I’m surprised, because it all crept up on me when I wasn’t looking. How can something like Google feel so inevitable and yet be impossible to predict?
I’m filled with wonder at all that we engineers have accomplished, and I take great communal pride in how we’ve changed the world in so many ways. Decades ago I never dreamed we would have satellite navigation, computers in our pockets, the Internet, cellphones, or robots that would explore Mars. How did all this happen, and what are we doing for our next trick?
The software pioneer Alan Kay has said that the best way to predict the future is to invent it, and that’s what we’ve been busy doing. The public understands that we’re creating the future, but they think that we know what we’re doing and that there’s a master plan in there somewhere. However, the world evolves haphazardly, bumbling along in unforeseen directions. Some seemingly great inventions just don’t take hold, while overlooked innovations proliferate, and still others are used in unpredicted ways.
When I joined Bell Labs, so many years ago, there were two great development projects under way that together were to shape the future—the Picturephone and the millimeter waveguide. The waveguide was an empty pipe, about 5 centimeters in diameter, that would carry across the country the 6-megahertz analog signals from those ubiquitous Picturephones.
Needless to say, this was an alternative future that never happened. Our technological landscape is littered with such failed bets. For decades engineers would say that the future of communications was video telephony. Now that we can have it for free, not many people even want it.
The millimeter waveguide never happened either. Out of the blue, optical fiber came along, and that was that. Oh, and analog didn’t last. Gordon Moore made his observation about integrated-circuit progress in the midst of this period, but of course we had a hard time believing it.
Analog switching overstayed its tenure because engineers didn’t quite believe the irresistible economics of Moore’s Law. Most engineers used the Internet in the early years and knew it was growing at an exponential rate. But, no, it would never grow up to be a big, reliable, commercial network.
The irony at Bell Labs is that we had some of the finest engineers in the world then, working on things like the integrated circuit and the Internet—in other words, engineers who were responsible for many of the innovations that upset the very future they and their associates had been working on. This is the way the future often evolves: Looking back, you say, “We should have known” or “We knew, but we didn’t believe.” And at the same time we were ignoring the exponential trends that were all around us, we hyped glamorous technologies like artificial intelligence and neural networks.
Yogi Berra, who should probably be in the National Academy of Sciences as well as the National Baseball Hall of Fame, once said, “It’s tough making predictions, especially about the future.” We aren’t even good at making predictions about the present, let alone the future.
Journalists are sometimes better than engineers about seeing the latent future embedded in the present. I often read articles telling me that there is a trend where a lot of people are doing this or that. I raise my eyebrows in mild surprise. I didn’t realize a lot of people were doing this or that. Perhaps something is afoot, and an amorphous social network is unconsciously shaping the future of technology.
Well, we’ve made a lot of misguided predictions in the past. But we’ve learned from those mistakes. Now we know. The future lies in quantum computers. And electronics will be a thing of the past, since we’ll be using optical processing. All this is just right around the corner.
In a year-end review of Silicon Valley’s tech job activity for 2019, job-search firm Indeed found that machine learning engineers are commanding the highest salaries (averaging $172,792, up from $159,230 in 2018 and $149,519 in 2017), software engineers in general are in highest demand, and Amazon has been on the biggest hiring spree.
That’s a bit of a change from last year, when product development engineers claimed the highest salaries in Indeed’s database, at $173,570. It’s also different from 2017, when the big earners were directors of product management, with average salaries of $186,766.
The decline in top salary may reflect a slight softening in demand for tech professionals overall—Indeed’s researchers noted a 3.8 percent decrease in technology jobs listed on the site between October 2018 and October 2019.
Amazon, Walmart, and Apple posted the most Silicon Valley job openings on Indeed from January through October of this year. These three companies also claimed the top three positions in 2018, when Walmart stepped up its Silicon Valley hiring (though they shuffled positions slightly). Walmart ranked 13th in hiring in the region in 2017. Cisco, which was number three in 2017, slipped to fourth this year.
Indeed’s 2019 top 20 lists appear below.
Highest-paying jobs in Silicon Valley
(ranked by average annual salary)
2019
1. Machine learning engineer ($172,792)
2. Principal software engineer ($169,268)
3. Platform engineer ($154,801)
4. Senior software engineer ($142,794)
5. Software architect ($142,372)
6. Senior system engineer ($141,013)
7. Senior product manager ($134,547)
8. Cloud engineer ($132,852)
9. iOS developer ($131,979)
10. Development operations engineer ($128,495)
11. Back end developer ($127,088)
12. Firmware engineer ($124,190)
13. Android developer ($124,024)
14. Software test engineer ($123,531)
15. Data engineer ($120,281)
16. Full-stack developer ($119,954)
17. Data scientist ($118,887)
18. Front end developer ($118,768)
19. Mobile developer ($114,560)
20. Software engineer ($112,969)
2018
1. Director of product management ($186,766)
2. Senior reliability engineer ($181,100)
3. Application security engineer ($173,903)
4. Principal software engineer ($165,487)
5. Senior solution architect ($164,584)
6. Software engineering manager ($162,115)
7. Software architect ($159,642)
8. Machine learning engineer ($159,230)
9. User experience architect ($155,394)
10. Platform engineer ($155,075)
11. Data warehouse architect ($154,950)
12. Director of information technology ($152,331)
13. Senior back end developer ($151,313)
14. Senior software architect ($150,970)
15. Salesforce developer ($150,923)
16. Ruby developer ($149,944)
17. Server engineer ($149,435)
18. Python developer ($149,331)
19. Senior software engineer ($148,098)
2017
1. Product development engineer ($173,570)
2. Director of product management ($173,556)
3. Data warehouse architect ($169,836)
4. DevOps manager ($166,448)
5. Senior architect ($161,124)
6. Principal software engineer ($160,326)
7. Senior solutions architect ($158,329)
8. Principal Java developer ($156,402)
9. Senior software architect ($154,944)
10. Platform engineer ($154,739)
11. Senior SQL developer ($154,161)
12. Senior C developer ($152,903)
13. Machine learning engineer ($149,519)
14. Software engineering manager ($148,937)
15. Software architect ($148,171)
16. Cloud engineer ($146,900)
17. Senior product manager ($146,277)
18. DevOps engineer ($146,222)
19. Senior back end developer ($144,306)
[Table: Most In-Demand Tech Jobs in Silicon Valley, ranked by share of job openings]
Over the years, various regions, in the U.S. and around the world, have pitched themselves as “the next Silicon Valley.” And some have indeed increased their respective pools of tech jobs. But none—with the exception of Boston, Seattle, San Diego, and North Carolina’s Research Triangle—have become serious technology hubs.
In fact, notes a recent study by the Brookings Institution, those first three (Boston, Seattle, and San Diego) plus Silicon Valley accounted for more than 90 percent of tech job growth from 2005 to 2017. Other “superstar metro areas” are growing quickly [see list]. By contrast, much of the rest of the country is starving for tech jobs—and losing ground. This is bad for a number of reasons, the report pointed out. These include increased political polarization in the country, rising housing costs and traffic problems in the cities that are tech-haves, and skilled worker shortages in the have-nots.
Top 20 U.S. Tech Metro Areas
New York-Newark-Jersey City, NY-NJ-PA
San Jose-Sunnyvale-Santa Clara, CA
Los Angeles-Long Beach-Anaheim, CA
San Francisco-Oakland-Hayward, CA
Dallas-Fort Worth-Arlington, TX
San Diego-Carlsbad, CA
Minneapolis-St. Paul-Bloomington, MN-WI
Houston-The Woodlands-Sugar Land, TX
Atlanta-Sandy Springs-Roswell, GA
Austin-Round Rock, TX
St. Louis, MO-IL
Miami-Fort Lauderdale-West Palm Beach, FL
(Source: Brookings Institution)
The solution Brookings researchers propose? Government intervention. They argue that the federal government should create eight to 10 regional “growth centers” in the U.S. heartland. According to them, each area should receive $700 million in direct R&D funding each year for the next 10 years. In addition, each should get workforce development funding of $5 million per year, plus exemptions from certain regulations, and other benefits, for a total 10-year cost of about $100 billion.
Where exactly are these potential “Silicon Valleys”? The report identified 35 possible areas that met certain criteria: geographically distant enough from superstar cities to be truly new regions; the presence of a university; a population higher than 500,000; and the presence of at least some local STEM talent, including Ph.D.-holders.
Using these and other factors, Brookings calculated an eligibility index. Madison, Wisc., came out way on top, followed by the Minneapolis area of Minnesota and Wisconsin, the Albany area of New York, and the Lexington area of Kentucky. [See table]
Some other interesting tidbits in the data:
• The Albany area wins the patent race, with 124 patents per 100,000 people living in the region.
• Madison’s big advantage is in university STEM R&D, at nearly $1,700 per person. Its nearest competitor, Pittsburgh, has roughly a third of that.
• Madison also has the largest percentage of people with bachelor’s degrees and STEM doctorates in its population. Go Badgers!
An Arizona man is suing the state’s technical registration board to protest being fined for working without an engineering license, which he maintains he doesn’t need because it doesn’t pertain to the type of work he performs.
It’s the latest case pitting engineers against state licensing agencies that by some accounts have become more aggressive in attempting to regulate who can call themselves an engineer, even as the use of that term becomes more widespread. Meanwhile, licensing proponents maintain it’s necessary for the public interest and point out that Arizona statutes have clear definitions of what an engineer is.
But what skills do you need to fill this lucrative niche?
Indeed set out to answer that question by looking at 500 tech skill terms related to data science that appeared in tech jobs posted on the site during the past five years. The analysis determined that, while Python dominates, Spark is on the fastest growth path and demand for engineers familiar with the statistical programming language R is also growing fast. Also on the radar: Hadoop, Tableau, SAS, Matlab, Redshift, and TensorFlow. [See graph, below, which omits Python because demand is literally off the charts, and because it is not strictly a data science skill.]
In terms of exactly how these skills are being applied, Indeed looked at four fields that require data scientists. Machine learning came out on top, and is growing the fastest, followed by artificial intelligence, deep learning, and natural language processing. [See graph, below.]
The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.
THE ENGINEER’S PLACE
Steve Jobs took LSD 10 to 15 times and said that taking the drug was one of the “two or three” most important things he ever did.
The late cofounder of Apple was an American original. Whatever singular qualities he possessed as a digital savant can’t be explained by his choice of recreational drugs. However, a new generation of engineers and software coders, centered in Silicon Valley but not limited to the world’s premier innovation hub, are now imitating Jobs in a rather dramatic way. They are routinely dropping “microdoses” of acid—about one-tenth the amount of the standard recreational dose—in order to achieve higher levels of creativity on the job, and greater intensity and focus.
Should you be doing the same?
Excuse me for posing such a personal question, but in the years ahead the question of whether you microdose may arise during a job interview or a coffee break with co-workers.
In Silicon Valley and other enclaves of leading-edge technology, the phrase “woke and wired” is coming to describe a certain openness by technologists to using pills and processors.
While the “pill” paradigm fits neatly into modern concepts of how to achieve wellness through supplements, the processor approach raises concerns, especially when it comes to implanting devices in the brain. Obvious risks notwithstanding, trailblazers believe they can enhance cognition using a brain-computer interface (BCI) to make real-world connections more quickly and durably.
It’s a compelling yet controversial vision, one that differs radically from mind-expansion through smart phones and Internet searches. Part of the appeal of implants comes from the passionate interest of serial entrepreneur Elon Musk. He founded Neuralink, in San Francisco, to pursue his dream of using BCIs to control digital devices and connect your thoughts to the Internet.
For some cognitive enhancement enthusiasts, the combination of drugs and chips is a bio-digital marriage made in heaven. They surmise that in the future, engineers may have to pursue parallel paths—microdosing and digital implants—to achieve heightened consciousness and levels of creativity and productivity that translate into more rewards and promotions as well as better designs, devices and services.
My personal position on the pill versus processor, or both, is old-fashioned. For the individual engineer and coder, consider an alternative: try systematically to squeeze more value from mental discipline.
In my view, the best methods to heighten creativity and increase your “out of the box” thinking are traditional, analog, and noninvasive. These methods can be found in John Dewey’s classic primer How We Think, first published in 1910, or even earlier in René Descartes’s Discourse on the Method of Rightly Conducting One’s Reason and Seeking Truth in the Sciences. In 1637, Descartes famously wrote, Cogito Ergo Sum (“I think, therefore I am”), laying the foundation for centuries of cognitive enhancement through varieties of mental discipline.
The advice from Dewey, an American philosopher, also boils down to imposing various rules and routines on your own consciousness. The practice harkens back to Socrates and the memory exercises of medieval monks and includes ancient Asian techniques of meditation and control of mind-over-body. Learning the tools of deductive logic, statistical analysis and scenario planning could also qualify as humble traditional forms that are proven cognitive enhancers.
The perspective I’m advancing calls for first exhausting “analog” means to achieve mind-expansion before pursuing either pills or processors or both in combination.
While I can be fairly accused of being stuck in the past, my objections to microdosing or neural implants are not moralistic, but empirically-based and in tune with how humans study and evaluate risks from emerging technologies.
There are simply too many uncertainties with bio-pharmacological means of expanding consciousness. The costs are too great or entirely unknown. Digital means, especially those that require invasive surgery, such as electronic implants and anything supplying electric charges, strike me as equally risky. And I take seriously a point advanced by Martha Farah, a cognitive neuroscience researcher at the University of Pennsylvania: that highly individualized reactions to a range of cognitive interventions could make rational assessment of relative risks and rewards difficult, even impossible.
In short, engineers who pursue heightened consciousness by any means available may find themselves trading short-term gain for long-term pain.
Science fiction, of course, is the master teacher of the perils of following new technologies wherever they lead. The drug soma, of Aldous Huxley’s Brave New World, made people happy whether or not they wanted to be. Because humans are entitled to their emotions and feelings, employers instead emphasize performance on the tasks that make up a job. If you do your job well, while miserable or supremely happy, who cares?
Performance metrics, however, seem like fair game to employers. If they find an enhancer that endows their workers with an advantage, can’t they mandate its use, provided the enhancer is lawful?
I think we are on the verge of entering this brave new world of work, where enhancers are essentially mandatory, and not only in polities where individual rights are weak or nonexistent. The potential benefits are too great to ignore. Engineers of the future, I humbly submit, will face wicked choices over whether or not to bio-digitally enhance at work.
To highlight the challenge, here’s a simple thought-experiment: You and I work as product architects for Corporation-of-Tomorrow. Our managers announce that everyone on our team will begin taking a daily pill to increase our concentration. The pill is legal, has no apparent side effects, and costs employees nothing. Corporation-of-Tomorrow even declares that taking the pill is voluntary. You can opt out. But the company also makes clear that the stakes are high: its products, on which lives depend, must be highly reliable, as perfect as humans can make them, and the daily pill is now viewed by management as an obligation, part of the company’s commitment to excellence and the public good.
Persuaded, you decide to take the pill daily (and be observed doing so by your smart phone). I say no. After six months, your work steadily improves. Mine does not.
I am fired.
The potential for employer-mandated enhancers should force us to reflect deeply on the importance of work, the relative value of enhancers, and the illusion of choice. How might engineers respond in ways other than individually?
Collective responses would seem appealing. Engineers might band together and ask their employers to craft better policies. Or they might appeal to government to limit the power of employers to cajole, pressure or compel an employee to use bio-chemical or digital means to perform better on the job. Government could then create rules of the road for cognitive-enhancers on the job.
I figure that most engineers will reject collectivism and be comfortable with a libertarian framing. Confident individuals, educated and experienced in making design trade-offs, they will choose to engineer their own accommodation with enhancement. They will do what they wish and accept the consequences. And that means allowing individuals to opt out without fear or favor.
Some engineers, because they are clever, will divine effective “analog” means of cognitive enhancement. Praise their enterprise but admit there’s a disturbing possibility that invites comparisons to the present controversies over vaccination: that the government, or your employer, may be right and that legislators do know what’s best for your cognitive health. Won’t resisters merely drag down the group, and endanger the rest of us?
The second half of 2019 saw big engineering workforce moves both positive and negative.
HP (big layoffs), WeWork (more layoffs), Oracle (layoffs and hiring), and TSMC (hiring explosion) made big moves. The bulk of the hiring news came from outside Silicon Valley—with a flurry of activity outside the U.S. And the trends show that it’s a good time to be in AI and machine learning or 5G development, perhaps not such a good time to be developing consumer cybersecurity tools.
The big swings:
HP Inc. in October announced that it would cut up to 16 percent of its workforce, between 7000 and 9000 jobs. How many of those cuts affect technical professionals and how they would be distributed geographically wasn’t announced.
Oracle announced in October plans to hire 2000 engineers to work on cloud computing technology around the world, including in Silicon Valley, Seattle, and India and at new data centers to be established. Oracle’s announcement came after a major round of layoffs in March. And in August Oracle laid off at least 300 engineers from its flash storage operations in Silicon Valley and Colorado.
In Silicon Valley:
Apple in October began ramping up hiring of engineers to work on its smart-home platform and new smart-home devices in its Cupertino and San Diego, Calif., offices, according to Bloomberg. Apple hasn’t announced specific numbers.
JP Morgan, meanwhile, has been recruiting engineers with AI and machine learning expertise for its San Mateo, Calif., office, according to efinancialcareers.
Around the U.S.:
SpotHero, a parking technology developer based in Chicago, in August announced plans to hire 50 software engineers this year, adding to SpotHero’s current total staff of 210.
Amazon in September announced plans to add 400 tech professionals to its Portland, Oregon, tech center, including those with expertise in development, information technology, and software architecture. The hires will double the company’s engineering workforce there.
In August, Uber announced a tech hiring freeze for all software and services jobs based in the U.S. and Canada. Then in September, Uber announced that it had cut 435 people from its product and engineering teams, the majority from U.S. operations, but lifted the hiring freeze. Just weeks later, Uber announced long-term plans to hire 2000 professionals to staff a headquarters and engineering center for Uber Freight in Chicago.
Stratifyd, a four-year-old artificial intelligence and machine learning startup based in Charlotte, N.C., announced in November that it would add at least 200 employees.
Microsoft is also ramping up in North Carolina, announcing in November that it would be adding 430 jobs at its Charlotte campus, mostly in engineering and management. This expansion followed on Microsoft’s October announcement of 575 new positions opening at its tech center in Irving, Texas.
Health tech startup Well announced in November plans to hire 400 in North Carolina.
Computer security toolmaker McAfee in October gave notice of 107 layoffs in Hillsboro, Oregon, by year-end, including 44 software engineers.
Symantec, another cybersecurity tools company, in October indicated that it would be cutting 213 software engineering and middle management jobs from its California operations and an additional 24 engineers and other professionals from its Oregon staff. (Broadcom acquired part of Symantec in August.)
Samsung in October gave notice that it would cut a significant but unspecified number of engineers working on CPU development from its Austin, Texas, R&D center, according to Extremetech. That month, Samsung also announced plans to hire an additional 1200 engineers in India for its R&D centers there.
Goldman Sachs in August announced plans to hire 100 software engineers to be based in its trading divisions in New York and London.
Elon Musk in September announced that Tesla is building a “major engineering team” in China to support Gigafactory 3 and to generally work on software for Tesla’s cars.
Ikea executives in October told the Financial Times that the company aims to add more smart products to its line of home furnishings. The retailer is in the process of adding engineers to its Swedish hub, and is considering setting up development operations in the U.S. and Asia.
Nokia, based in Finland, announced in November that it had recently hired 350 engineers to work on 5G technology.
BFS Capital announced in October that it would be hiring 50 to staff its new data science and engineering hub in Toronto.
Essential, the mobile device developer founded by Andy Rubin, tweeted in October news of a hiring push for engineers and designers in Bangalore, India. Essential didn’t release specifics about the eventual size of this team but at this writing listed 10 openings.
Despite its byzantine structure, European Union research funding has been remarkably effective at producing results, and has been notably beneficial for the United Kingdom’s research: between 2007 and 2013, the UK received €3.4 billion more in research funding from the EU than it contributed.
Britain’s likely exit from the EU, whether in a “no-deal” crash or under a settled agreement (currently scheduled for 31 January 2020), will probably damage scientific research both in the UK and the EU for decades to come, according to Terry Wyatt, a professor at the University of Manchester in the UK and part of the working group at The Royal Society that investigated the impact of Brexit on UK research and development (see sidebar for a detailed breakdown of how the Royal Society thinks things will play out).
“Nothing is irreparable but the danger is that it could take decades to recover from Brexit,” said Wyatt. “The damage which has already been done could continue over the next few years, or could even accelerate.”
Even if a no-deal Brexit is avoided, or Brexit avoided altogether, Wyatt believes that the damage to the UK’s reputation has already been significant, and the effects will not be repaired overnight. “I think there’s no question that damage has been done and will continue to be done to R&D and high tech industry,” he said, adding, “I can’t see how that cannot be the case.”
The impact, according to Wyatt, manifests most clearly in two ways. In the first, there is already a reluctance to engage UK partners for EU research projects. In particular, projects have avoided engaging UK leadership ever since the referendum vote to leave the EU took place in 2016. The second impact has been that EU nationals are less likely to want to apply for short-term jobs in the UK. Wyatt concedes that most of the evidence for this second impact is anecdotal, but this is largely because the data on this is so hard to collect.
“An international workforce that can migrate across international borders is the life blood of science and research,” said Wyatt. “If EU nationals post-Brexit don’t want to come and live and work in the UK, I think there’s no question that that could seriously damage UK science and technology.”
Wyatt acknowledges that press reports have indicated that the British government has promised to continue to fund ongoing or already approved research schemes and projects. In this scenario, anyone who currently has EU funding through the European Research Council or other EU research bodies will continue to have their research funding guaranteed by the UK, and that funding will not just drop off a cliff on 31 January.
Both explicitly and implicitly the UK government has been trying to encourage people through these guarantees to continue to apply for EU funding, and they’ve been trying to encourage other European countries and scientists to regard the UK as a source of good collaborators, according to Wyatt. Despite the guarantees, Wyatt’s experience has been that these efforts to allay fears haven’t helped.
“In the past, we had lots of extremely well-qualified applicants from EU countries,” said Wyatt. “We’ve had virtually no such applicants in recent times.”
The situation is further muddied by the fact that the UK government has never said whether it intends to fund such research at the level the EU was sending to the UK, or will instead scale funding to the level the UK was sending to the EU. In the last fully accounted EU research funding period, between 2007 and 2013, the UK contributed €5.4 billion to the EU but received €8.8 billion back in funding for UK research.
But for Wyatt the issue extends beyond the fact that the UK has received more funding than it has put in. He argues that the merit of EU research funding has always been that it’s based on scientific excellence. He fears that the UK will shift this guiding principle toward targeting domestic agendas.
“The money, of course, is very important, but it’s also about the quality of the funding mechanisms and the research that gets done,” said Wyatt. “It’s important that the quality of the research be determined both in terms of excellence and the science. It is vital that research priorities be driven by the scientist rather than some government minister or bureaucrat.”
Beyond the high-level issues of research strategies and directions, the nuts and bolts of bringing researchers into the UK from outside the EU, such as from Asia, are fraught with challenges, according to Wyatt.
“The hoops that you have to jump through to secure a visa for a non-EU national are daunting,” said Wyatt. “If every single research job now has to go through those hoops (unless you can find a UK person), it’s going to be a nightmare and it’s hard to imagine how the visa system is going to cope.”
Wyatt notes that the formal UK involvement in a number of the large European science and technology institutions like CERN will be little impacted by Brexit since those relationships predate the formation of the EU. However, the lack of detail and clarity in how the relationships will work going forward does not offer much confidence.
Wyatt added: “At the moment, people seem to be trading on grand promises and warm words. However, in the end for high-tech industries or for R&D, it’s hard to argue that the net result is going to be positive.”
What tech skills do U.S. employers want? Researchers at job search site Indeed took a deep dive into its database to answer that question. And, at least for now, expertise in SQL came out on top of the list of the most sought-after skills, followed by Java. Python and Amazon Web Services (AWS) are coming on fast and, should trends continue, may take over the lead in the next year or two. (Python came out on top in IEEE Spectrum’s analysis of top programming languages for 2019.)
Indeed’s team considered U.S. English-language jobs posted on the site between September 2014 and September 2019; those postings encompassed 571 tech skills. Docker, the enterprise container platform, sits at number 20 on the list today, but that position is the result of a dramatic climb over the five-year period: demand for proficiency in the platform grew more than 4,000 percent, from a barely registering 0.1 percent share of job-post mentions in 2014 to 5.1 percent today. Azure jumped more than 1,000 percent during that period, from 0.6 percent to 6.9 percent; and the general category of machine learning climbed 439 percent, closely followed by AWS at 418 percent. (The top 20 for 2019, along with their 2014 shares, are listed in the table below.)
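The growth figures above follow directly from the quoted shares of job-post mentions; a quick sketch of the percent-change arithmetic (share values are those cited in this article):

```python
def percent_growth(old_share, new_share):
    """Percent change in a skill's share of job-post mentions."""
    return (new_share - old_share) / old_share * 100

# Docker: 0.1 percent of mentions in 2014 vs. 5.1 percent in 2019,
# in line with the "more than 4,000 percent" figure.
print(round(percent_growth(0.1, 5.1)))  # 5000

# Azure: 0.6 percent vs. 6.9 percent, "more than 1,000 percent."
print(round(percent_growth(0.6, 6.9)))  # 1050
```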
Indeed’s researchers note that the big jump in demand for engineers skilled in Python stems from the boom in data scientist and data engineer jobs, which disproportionately use Python. AWS’s growth, they indicated, has been fueled by the proliferation of full stack developer and DevOps engineering positions.
Employer Interest in Tech Skills
Key: Green = greater than 10 percent increase, Red = greater than 10 percent decrease, Yellow = less than 10 percent increase or decrease
As a small business owner, there are many important decisions you’ll have to make—from billing/accounting to marketing to choosing the right types of insurance to protect your business.
Most small business owners realize they need basic business insurance, including general liability and property damage coverage. Unfortunately, many small business owners don’t often think about obtaining life insurance to protect their business.
That’s because life insurance is typically thought of as just financial protection for your family. But it can protect more!
What if you were to die unexpectedly? What would happen to the business you’ve worked hard to achieve? Would you want your loved ones to keep your business running or “close” its doors? How will your loved ones pay off any business debt you owe?
Life insurance for a small business owner can provide funds to keep your business doors “open” and pay off any business loans or debt you’ve accumulated. In addition, funds from life insurance coverage can help pay the rent and other office expenses. It can also be used to fund a salary for someone to take over the everyday operations of your business.
Benefits of Life Insurance
If you have a family and are the sole owner of your business (or have just one partner), life insurance may be all you need. It can be used to cover both your family and your business.
Since you can name your beneficiaries, you can list a spouse, other loved ones and/or a business partner. By doing so, your spouse and other loved ones could receive proceeds you designated to help replace your income and all you do for your family, while your business partner could also receive a portion of your proceeds to keep the business running and pay off any debt.
Level Term Life Insurance is a popular choice for small business owners for two main reasons:
It makes it easy to protect your family and business with one benefit amount that remains the same for the duration of your coverage.
It features fixed rates that won’t change for the life of your coverage. Rates won’t increase or decrease—making it easy to fit within your family and business budgets.
IEEE Offers an Affordable Option
As an IEEE member, you have access to a variety of insurance benefits designed to protect you, your family and your business, including the IEEE Member Group 10-Year Level Term Life Insurance Plan. It features high amounts of coverage and fixed rates to help protect both your family and business. For more details, visit www.IEEEinsurance.com.
This information is provided by the IEEE Member Group Member Insurance Program Administrator, Mercer Health & Benefits Administration, LLC, in partnership with IEEE to provide IEEE Members with important insurance, health and lifestyle information.
*Including features, costs, eligibility, renewability, limitations, and exclusions.
The IEEE Member Group Term Life Insurance Plan is available in the U.S. (except territories), Puerto Rico and Canada (except Quebec). This plan is underwritten by New York Life Insurance Company, 51 Madison Ave., New York, NY 10010 on Policy Form GMR
The IEEE Member Group Insurance Program is administered by:
Mercer Health & Benefits Administration LLC, 12421 Meredith Drive, Urbandale, IA 50398
In CA d/b/a Mercer Health & Benefits Insurance Services LLC
AR Insurance License #100102691 CA Insurance License #0G39709
87573 (11/19) Copyright 2019 Mercer LLC. All rights reserved.
Employer demand for engineers with Bitcoin, blockchain, or general cryptocurrency expertise continued to grow between September 2018 and September 2019—albeit in fits and starts (see graph, below). These figures come from job search site Indeed. The 26 percent increase that occurred over this period was not as dramatic as the jump of 214 percent between September 2017 and September 2018.
On Monday, I attended the 2019 Fall Conference of Stanford’s Institute for Human Centered Artificial Intelligence (HAI). That same night I watched the Season 6 opener for the HBO TV show Silicon Valley. And the debates featured in both surrounded the responsibility of tech companies for the societal effects of the technologies they produce. The two events have jumbled together in my mind, perhaps because I was in a bit of a brain fog, thanks to the nasty combination of a head cold and the smoke that descended on Silicon Valley from the northern California wildfires. But perhaps that mixture turned out to be a good thing.
So, to add to that conversation, here’s my HBO Silicon Valley/Stanford HAI conference mashup.
Silicon Valley’s fictional CEO Richard Hendricks, in the opening scene of the episode, tells Congress that Facebook, Google, and Amazon only care about exploiting personal data for profit. He states:
“These companies are kings, and they rule over kingdoms far larger than any nation in history.”
Meanwhile Marietje Schaake, former member of the European Parliament and a fellow at HAI, told the conference audience of 900:
“There is a lot of power in the hands of few actors—Facebook decides who is a news source, Microsoft will run the defense department’s cloud…. I believe we need a deeper debate about which tasks need to stay in the hands of the public.”
Eric Schmidt, former CEO and executive chairman of Google, agreed. He says:
“It is important that we debate now the ethics of what we are doing, and the impact of the technology that we are building.”
Stanford Associate Professor Ge Wang, also speaking at the HAI conference, pointed out:
“‘Doing no harm’ is a vital goal, and it is not easy. But it is different from a proactive goal, to ‘do good.’”
Had Silicon Valley’s Hendricks been there, he would have agreed. He said in the episode:
“Just because it’s successful, doesn’t mean it’s good. Hiroshima was a successful implementation.”
The speakers at the HAI conference discussed the implications of moving fast and breaking things, of putting untested and unregulated technology into the world now that we know that things like public trust and even democracy can be broken.
Google’s Schmidt told the HAI audience:
“I don’t think that everything that is possible should be put into the wild in society. We should answer the question, collectively: how much risk are we willing to take?”
And Silicon Valley denizens real and fictional no longer think it’s OK to just say sorry afterwards. Says Schmidt:
“When you ask Facebook about various scandals, how can they still say ‘We are very sorry; we have a lot of learning to do.’ This kind of naiveté stands out of proportion to the power tech companies have. With great power should come great responsibility, or at least modesty.”
“We need more guarantees, institutions, and policies than stated good intentions. It’s about more than promises.”
Fictional CEO Hendricks thinks saying sorry is a cop-out as well. In the episode, a developer admits that his app collected user data in spite of Hendricks assuring Congress that his company doesn’t do that:
“You didn’t know at the time,” the developer says. “Don’t beat yourself up about it. But in the future, stop saying it. Or don’t; I don’t care. Maybe it will be like Google saying ‘Don’t be evil,’ or Facebook saying ‘I’m sorry, we’ll do better.’”
Hendricks doesn’t buy it:
“This stops now. I’m the boss, and this is over.”
(Well, he is fictional.)
How can government, the tech world, and the general public address this in a more comprehensive way? Out in the real world, the “what to do” discussion at Stanford HAI surrounded regulation—how much, what kind, and when.
Says the European Parliament’s Schaake:
“An often-heard argument is that government should refrain from regulating tech because [regulation] will stifle innovation. [That argument] implies that innovation is more important than democracy or the rule of law. Our problems don’t stem from overregulation, but underregulation of technologies.”
But when should that regulation happen? Stanford provost emeritus John Etchemendy, speaking from the audience at the HAI conference, said:
“I’ve been an advocate of not trying to regulate before you understand it. San Francisco’s ban on the use of facial recognition is not a good example of regulation; there are uses of facial recognition that we should allow. We want regulations that are just right, that prevent the bad things and allow the good things. So we are going to get it wrong either way; whether we regulate too soon or hold off, we will get some things wrong.”
Schaake would opt for regulating sooner rather than later. She says that she often hears the argument that it is too early to regulate artificial intelligence—as well as the argument that it is too late to regulate ad-based political advertising, or online privacy. Neither, to her, makes sense. She told the HAI attendees:
“We need more guarantees than stated good intentions.”
U.S. Chief Technology Officer Michael Kratsios would go with later rather than sooner. (And, yes, the country has a CTO. President Barack Obama created the position in 2009; Kratsios is the fourth to hold the office and the first under President Donald Trump. He was confirmed in August.) Also speaking at the HAI conference, Kratsios argued:
“I don’t think we should be running to regulate anything. We are a leader [in technology] not because we had great regulations, but we have taken a free market approach. We have done great in driving innovation in technologies that are born free, like the Internet. Technologies born in captivity, like autonomous vehicles, lag behind.”
In the fictional world of HBO’s Silicon Valley, startup founder Hendricks has a solution—a technical one of course: the decentralized Internet. He tells Congress:
“The way we win is by creating a new, decentralized Internet, one where the behavior of companies like this will be impossible, forever. Where it is the users, not the kings, who have sovereign control over their data. I will help you build an Internet that is of the people, by the people, and for the people.”
(This is not a fictional concept, though it is a long way from wide use. Also called the decentralized Web, the concept takes the content on today’s Web and fragments it, and then replicates and scatters those fragments to hosts around the world, increasing privacy and reducing the ability of governments to restrict access.)
If neither regulation nor technology comes to make the world safe from the unforeseen effects of new technologies, there is one more hope, according to Schaake: the millennials and subsequent generations.
Tech companies can no longer pursue growth at all costs, not if they want to keep attracting the talent they need, says Schaake. She noted that “the young generation looks at the environment, at the homeless on the streets,” and they expect their companies to tackle those and other issues and make the world a better place.
Tel Aviv contains more startups per capita than any city in the world other than Silicon Valley, according to the 2019 Global Startup Ecosystem Report published by Startup Genome and the Global Entrepreneurship Network. Prior to 2019, Tel Aviv contained the most startups per capita, even beating Silicon Valley. With companies including Google, Nielsen, and Nvidia operating incubators, accelerators, and competitions around Israel, some are even calling Tel Aviv the next Silicon Valley.
But the authors of the Global Startup Ecosystem Report disagree—there’s not going to be a “next” Silicon Valley. Quite the opposite actually; there will be many, and Tel Aviv is just one. Still, the report states, Tel Aviv is unique.
The report points to the Tnufa National Pre-Seed Fund, a risk-free grant that the government awards to entrepreneurs based in Israel to explore innovative technology. The fund is one possible reason why so many startups exist there. But it’s not the only one—the study fails to mention several other plausible answers to the question: Why do Tel Aviv and Israel have so many startups?
A limited number of Keysight’s Education and Research Resources USB drives are still available. Get over 200 technical items such as application notes, technical briefs, and links to videos and webinars. Topics include materials research, test and measurement science, software, and much more. Don’t miss out on this must-have educational tool that contains the latest educational resources to help you succeed in your classroom and lab.
Please note: This offer is only available in the United States and Canada.
I was walking my dog one morning when I saw a man setting up a surveyor’s laser transit. I stopped to ask him about it, and the man launched into a long explanation, beginning with “I’m an engineer, so I know about these things.”
I didn’t mention that long ago as a college freshman I was required to take a course in surveying. This, as well as drafting, welding, and other forgotten subjects, were deemed to be things that a well-rounded engineer should know. I wasn’t very good at some of them, and I despaired at becoming what I thought of as a “real” engineer.
In later years, I got to know some people who I believed were “real engineers.” They knew things. Lots of things, and across a broad swath of technology. And more than just knowing things, they had an instinctive ability to work with or fix anything mechanical or electronic. Often they were, or had been, radio amateurs.
I think of Thomas Edison as the epitome of a real engineer, but I’m not sure that such people still exist today. My test for being a real engineer is how well you would do as Mark Twain’s Connecticut Yankee in King Arthur’s Court. How much electrical technology could you create yourself if you were transported back in time to the Middle Ages? Would your electrical magic make Merlin jealous, or would all this end badly?
I held these generalist engineers in the highest esteem. They were usually the people I would call when some problem arose. But now I am wondering—how successful were they in their overall careers? I was prompted to consider this by reading David Epstein’s recent popular book Range: Why Generalists Triumph in a Specialized World (Riverhead Books). My immediate reaction to the title was skepticism. Is it true in electrical engineering today that generalists are more likely to succeed than are specialists?
It seems to me that almost all the IEEE major awards go to specialists. IEEE Fellows and members of the National Academy of Engineering get elected because of specialties. Most of the important innovations in our field have been made by specialists. Many of the engineers who have started important tech companies have done so in the field of their specialty. Of course, some of these famous engineers could be real engineers, but their success and fame was initially due to their mastery of a specialty.
Epstein’s book is more nuanced than its title would imply. It does say, sometimes grudgingly, that specialists are nice to have, but their weakness is in having a narrow view. They are often most useful as adjuncts to the generalists. But perhaps in engineering it’s the other way around—it is generalists who are nice to have, but it is specialists who triumph. Yes, we need and respect real engineers, but the pathway to success seems to lead through specialization. Our world is too complex. The most successful among us begin as specialists. Some of the best then become generalists later, showing innate skills in management, interpersonal skills, communications, and business.
It’s an academic argument, literally. Should the education system focus on producing “real engineers,” or has our field become so splintered and complex that early specialization is a necessary step to an employable skill?
This article appears in the November 2019 print issue as “Are Specialist Engineers More Successful?”
“So we just put these last week in a Syrian refugee camp in Amman, Jordan,” Caleb Harper of MIT’s Media Lab told an audience at the Georgia Technology Summit in late March 2017.
He was referring to machines developed by his Open Agriculture Initiative (OpenAg) at the Media Lab, where he is a principal research scientist. The machines had been delivered to a United Nations World Food Programme project that aimed to give refugees in the Azraq camp—located in the Jordanian desert 90 kilometers from the Syrian border—the means to grow their own food, right inside the camp.
The vehicle for this agricultural miracle is called a personal food computer (PFC): an enclosed chamber the size of a dorm-room refrigerator loaded with LEDs, sensors, pumps, fans, control electronics, and a hydroponic tray for growing plants. PFCs are programmed to control light, humidity, and other parameters within the chamber to create the perfect conditions for growing a variety of plants. It’s a simple yet potentially revolutionary idea: a portable box that can grow practically any kind of plant just by downloading a recipe and planting some seeds.
The refugees fleeing war in Syria, leaving their homes, loved ones and possessions behind, had no idea where or when they would leave this temporary desert encampment or how they would make do while they were there. But what the refugees really needed, Harper contended, was “to be connected to other growers to share knowledge.” He added: “So super proud that that’s happening.”
On its face, the project sounds like one of the most ambitious and altruistic uses of high-tech agriculture you could imagine. In his talk in Georgia and presentations elsewhere as recently as this year, Harper enthusiastically conveyed a vision for the PFC that mimics how regular digital computing is scaled: PFCs would find a home in classrooms and home kitchens; food-computer “servers” would be housed in shipping containers to supply, say, a restaurant; and data center–scale vertical farms would feed entire cities.
As the name of the OpenAg initiative suggests, the food computer’s hardware and software are entirely open source—that is, the equipment specs and code are available free to anyone with the desire to experiment with indoor agriculture. “Nerd farmers,” the hashtaggable moniker given to members of the OpenAg maker community, build their own machines and then test their “recipes”—consisting of an array of controlled environmental parameters such as nutrient mix, temperature, carbon dioxide and pH levels, and light color and intensity. The recipe’s purpose is to arrive at a specific expression of a given plant’s phenome, which is an organism’s physical and biochemical traits expressed in response to the interaction of its genes and environment. Nerd farmers share their experiences via the OpenAg community forum and wiki, and can even upload their recipes to a Github repository, allowing others to replicate that exact plant phenome in their own machines.
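The recipes described above amount to a serializable bundle of environmental setpoints. A minimal sketch of what such a recipe might look like, assuming illustrative field names and values (this is not the actual OpenAg schema):

```python
import json

# A hypothetical plant recipe: the environmental parameters a food
# computer would hold during a grow cycle. All names and numbers here
# are stand-ins for illustration.
basil_recipe = {
    "plant": "basil",
    "air_temperature_c": 24.0,
    "humidity_percent": 60,
    "co2_ppm": 800,
    "nutrient_ph": 6.0,
    "light": {"color": "red-blue", "hours_on": 18},
}

# Serializing to JSON is what would let another grower download the
# recipe and attempt to replicate the same phenome in their machine.
print(json.dumps(basil_recipe, indent=2))
```

The appeal of the model is exactly this round trip: a recipe tuned in one machine can, in principle, be loaded unchanged into any other.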
Launched in 2015, OpenAg differed from other indoor farming efforts in both its ambition and its scope. While the operators of urban indoor farms are careful to locate them in areas that have access to water, electricity, and cheap real estate, often using proprietary software and equipment, open-source food computers could be built by anyone and would be deployable virtually anywhere. Data from food computers all over the world would be fed to machine learning algorithms to optimize recipes and help people grow, say, the most flavorful basil (the subject of a peer-reviewed PLOS ONE paper authored by Harper et al.) or replicate an Aleppo pepper grown in Syria in a food computer in the Jordanian desert.
It’s a nice idea—if your food computer works.
But the situation on the ground never matched the fantastic claims that Harper made about the WFP project in public appearances during the spring of 2017 and in briefings for corporate patrons of the Media Lab in the spring and fall of 2017. Harper and a colleague also cited the personal food computer’s successful deployment in the Azraq camp in emails to potential partners and patrons for the Open Agriculture Initiative and for Fenome Inc., a spin-off company that Harper founded in 2016.
Even as Harper took the stage in Georgia, it was clear to those working with the food computer at the World Food Programme (WFP) and at Fenome that the project wasn’t progressing as the team had hoped. Indeed, in September 2017, the WFP project officially ended without any of the machines having completed a single grow cycle, according to the official in charge of the project. The WFP’s personal food computers weren’t even deployed at the Azraq camp, home to some 35,000 Syrian refugees, but rather at a facility run by Jordan’s National Center for Agricultural Research and Extension, in Mafraq, an hour’s drive from Azraq.
Harper did not respond to detailed questions about the WFP project sent to him by IEEE Spectrum for this article.
Recently, the OpenAg initiative has come under scrutiny following the departure in September of Media Lab director Joichi Ito. He was a champion of the project, which started during his tenure and seemed to exemplify his “deploy or die” approach. (In a 2014 TED talk, Ito announced he was changing the lab’s motto from “demo or die” to “deploy or die,” focusing researchers’ efforts on real-world implementations of the technologies they were developing.) MIT is now investigating OpenAg, following allegations that staff were told to demonstrate food computers with plants that were not actually grown in them. Business Insider and the Chronicle of Higher Education first reported these allegations.
Perhaps the only unqualified success of OpenAg was Harper’s ability to sell his idea. His first big public unveiling of the food computer came in a 2015 TED talk that has been viewed more than 1.8 million times. Audiences and the press alike swooned. Glowing reports about the food computer, including one in Spectrum, quickly followed, and continued right up until the most recent revelations. And Harper helped raise the capital to start up his OpenAg spin-off, Fenome.
Last month, The New York Times reported that four former researchers connected to OpenAg have complained about some of the claims Harper makes in his talks, including that the average age of an apple in a U.S. grocery store is 11 months (14 months in other tellings), statements refuted by a U.S. Department of Agriculture official in an email reviewed by the Times.
It’s one thing to get an incidental fact wrong. It’s quite another to repeatedly state that refugees were benefiting directly from food computers and enjoying a taste of home, when they were doing no such thing.
The WFP project started off with the best intentions. According to Nina Schroeder, Head of Scale Up Enablement at the WFP Innovation Accelerator and the World Food Programme official in charge of the Jordanian hydroponics project in 2017, the long-term goal of the project was indeed to deploy food computers at refugee camps. “First we wanted to come up with a concept that we could bring to a larger scale that actually makes sense to deploy. For the early research phase, it wouldn’t have made sense to deploy it inside the refugee camp.”
As Schroeder described it, the pilot program would let researchers evaluate the technology and determine if it was appropriate to install PFCs at the camp. If everything went well with the pilot, then the Azraq camp would receive the food computers.
The project launched at the end of January 2017, when a team from Fenome went to Jordan to assemble and install the food computers. At the time the company was based in Salt Lake City, with a staff of 17 there plus two employees in Boston and one in Seattle. Four food computers were placed at the WFP’s office in the Jordanian capital of Amman and six at the National Center for Agricultural Research and Extension (NCARE) facility at Al-Khalydeha Salinity Research Station in Mafraq, about a one-hour drive from the Azraq refugee camp.
The plan was that after installation, the project would be monitored remotely from Utah via the Internet and by three NCARE staff on site in Jordan. The NCARE experiments focused on testing the technology using local water and changing the light spectrum of the food computers’ LEDs and the nutrient mix of the hydroponic solution, according to a person close to the project who spoke with Spectrum anonymously for fear of retaliation. Plants tested included cucumbers, basil, and baby lettuces. The source confirmed that Fenome’s team communicated regularly by phone with their Jordanian counterparts.
Martinez declined to comment on the WFP project or Fenome. Berkowitz, Baker, and Mann did not respond to questions posed by Spectrum.
But despite all this money and brainpower, things soon went awry in Jordan. Schroeder, in a phone interview, told Spectrum that the conditions at the NCARE site were harsh, with a very dry desert climate and high indoor temperatures. The power frequently failed, which shut down the building’s air conditioning and the food computers’ LEDs. When the air conditioning conked out, it sometimes reached 45 °C (113 °F) inside the lab.
Worse, the Wi-Fi was unreliable. A Wi-Fi connection was necessary to remotely monitor some of the parameters inside the grow chambers, which were equipped with cameras and sensors that measured temperature, humidity, and pH levels. Whenever a food computer went down, it had to be connected to Wi-Fi so that the remote team could reboot it. The software was still quite buggy, so not all features could be controlled locally at the NCARE facility. The Fenome team returned a month after the initial deployment to modify the boxes and some functions, and to allow the machines to be rebooted locally, according to our source close to the project.
But while Fenome might have solved some problems, others cropped up, according to Schroeder. Algae grew inside the containers, possibly because of low-quality water and light shining through the food computers’ clear acrylic access doors. The doors also deformed due to the heat, creating gaps that let ambient air into the grow chamber and contaminated what is supposed to be a controlled environment.
In all, the Fenome support team visited NCARE four times to set up the food computers, train local teams, and adjust the personal food computers. The last visit was in May 2017.
In late April, just a few weeks after OpenAg Inc. officially changed its name to Fenome, Inc., 15 of its 17 employees in Utah were dismissed. In the fall of 2017, the company left Utah and relocated to offices at its VC partner Flagship Pioneering, in Cambridge, Mass.
“When they closed down the Utah office, that made it very difficult to continue the experiments that were going on,” says Schroeder. The WFP officially ended the Jordanian project in September 2017. Not a single grow cycle was successfully completed, Schroeder says.
“The food computer we tested there wasn’t ready for our purpose, and it was still in the development stage,” Schroeder says. Her team is now deploying lower tech, locally adapted hydroponic systems to food-insecure communities in Algeria, Chad, Jordan, Kenya, Namibia, Peru, and Sudan.
The concept of the food computer “is so attractive that you have the possibility to grow locally,” she says. “But you need to have the right kind of environment. That food computer version was too early.”
While it may have been the most high profile, the World Food Programme wasn’t the only Fenome partner left high and dry.
In the fall of 2016, Charisa Moore, a biology teacher at Bainbridge Island High School in Washington state, watched a recording of Harper’s TED talk. The food computer sounded like just what Moore had been looking for to beef up her curriculum with content centered on ecology. Moore called Harper.
She says Harper told her he could talk about what OpenAg had done in putting food computers into Boston-area schools but warned her that they didn’t work in a lot of the schools where they were deployed.
“Well, I can make it work!” Moore told Harper. Harper invited Moore, another teacher, and a star student to MIT for a week to learn how to build, program, and troubleshoot the food computer and experiment with plant recipes. Fenome would provide Moore’s school with the hardware, help her and her students build the units on-site, and support them free of charge.
When she got home in late April 2017, Moore and her team decided to give a TED-esque talk themselves to about 400 people in the Bainbridge community about the project they were about to embark on with the help of Fenome.
“We did basically Caleb’s presentation using his Fenome team. And then that week we built the computers.”
Students and teachers started running experiments with the food computers. The food computer cameras and sensors sent data to Fenome in Utah, and the Utah team communicated with Moore on what they saw happening in the machines.
“And then it kind of got really weird,” Moore recalls. “We started not hearing very much. We used [the food computers] through the summer. Starting the next [school] year, we started to hear Fenome was going to go out of business. So that team was not able to then really troubleshoot any of our stuff.”
Without tech support from Fenome, software maintenance proved difficult. Moore struggled to push updated software that had been published on Github to the machines. “It’s very complicated,” she told Spectrum. “This is way beyond my expertise. I can only barely code in Python.”
Hardware bugs were even more difficult to fix. “The equipment is really not sustainable,” Moore says. “It corrodes. You have a cooling unit on it, the Freon comes out, it freezes—it just becomes messy. So to clean it, you have to go and order the stuff and replace those items. And good luck finding them.”
Her team did come up with a solution for one glaring, design-for-demo’s-sake specification: the food computer’s clear acrylic door, which let in ambient light and contributed to the algae problems in Jordan.
“The thing about the food computer that sort of didn’t make a lot of sense to me was that it’s open…. It’s not controlled,” says Moore. At her students’ urging, she went to Home Depot and bought some silver wrapping and clad the chassis with it to shut out unwanted light.
Moore says that she and her students continued to experiment with their food computers, uploading plant recipes to the OpenAg open source forum. They also set up an experiment to see which equipment grew microgreens more effectively: a food computer or a basic UV light bank shining down on plants potted in soil. Moore’s team found that the conventional indoor setup grew microgreens at four inches per week—twice the rate of the food computer.
Moore concluded that the food computers “are pretty much not usable, because they just are not user friendly. They’re too hard to troubleshoot. Any Joe could not just walk up and figure out how to do it. You couldn’t market that to put in your pantry at home unless you knew how to do all that stuff.”
Moore found herself with three food computers on her hands. She gave the “most unusable” one to a student, who took it home and converted the box “into a kind of a simplistic one with [manual controls] instead of electronic ones.” He used it to earn a Boy Scout merit badge.
Even as Fenome and its partners were struggling, Harper continued to enchant audiences with his tales of nerd farmers around the world. Harper, who holds a master’s degree in architecture from MIT and is a member of the World Economic Forum and a National Geographic Explorer, managed to parlay the exposure from his TED talk into a lucrative side gig as a speaker. He earns $20,000 to $30,000 per talk, according to his agent’s website. He also used a version of the TED talk at a pitch meeting with investors in the summer of 2016 to help secure his company’s first round of funding, according to an email obtained by Spectrum.
In May 2017, Harper repurposed his Georgia Technology Summit talk for a Red Hat conference. He again spoke about how a refugee camp in Jordan was using the food computers. He also described what the refugees grew and the significance they attached to the machines: “We didn’t tell them what to grow. They decided to grow things that they missed from home. Things that they can’t get any more.”
“The food computer,” he said, “became a cultural object more than just a manufacturing object.”
Meanwhile, Harper’s startup was laying off staff and planning its relocation to Massachusetts. Just days after the Red Hat appearance, Harper posted about Fenome to the OpenAg Community Forum: “hey guys—the startup (fenome) in its infancy has had a couple gaffes (oops) and obv communications is one of them.” He explained that the Fenome team was working to fix bugs and upgrade these “crazy expensive and not fully functional” “1st run prototypes” so that the company could start selling PFC kits. He told the forum that “after some development we all think its [sic] better to be based in Cambridge and is in the process of moving.”
Trouble at his startup did not derail Harper’s traveling show.
Less than a month after Red Hat, he dusted off his talk and delivered it at the EAT Stockholm Food Forum. He repeated his claim that the food computers at the Azraq refugee camp had created much more than mere plants:
“We’ve deployed in the world with the World Food Programme in Amman in a Syrian refugee camp. We did not tell them what to grow. Turns out they wanted to grow things from home. It became a cultural object for them. They missed the flavor of the place that they were from and that creates their culture and creates happiness for them.”
Harper’s story about Azraq evolved further in an interview earlier this year with science journalist Miles O’Brien at a Purdue University event on 26 February. This time, he revealed how St. John’s Wort plants had been grown by a “person at the camp that happened to be a Ph.D. on St. John’s Wort.” Harper claimed that the person started a business selling the medicinal plant to treat a population “rife with depression.”
Besides these public claims posted to YouTube, documents obtained by Spectrum reveal that Harper and at least one associate also misrepresented the World Food Programme project in email correspondence with potential funders and partners.
In a February 2017 email chain that included Nest cofounder and iPod coinventor Tony Fadell, Harper and his assistant tried to arrange a meeting with Fadell, now principal at Future Shape LLC, an investment and advisory firm based in Paris. Harper sent links to a couple of blog posts, one from 2016 about his lab and another about the “2017 expansion of our ecosystem with a nonprofit and a venture.” He ended his 14 February 2017 email with “Btw we just deployed food computers to a Syrian refugee camp in Jordanon [sic] a contract with the UN. Pretty cool. C”.
And about five months after the project at NCARE had ended, OpenAg was in talks with a group at Google about supporting research at OpenAg, according to another email chain obtained by Spectrum. In an email dated 30 January 2018, Google’s Jess Holbrook, senior staff UX researcher and UX manager, asked several questions regarding food computers, including “Has anyone picked up the design and adapted it to specific use cases like edu, refugee groups (I know you mentioned Jordan), etc.”
Hildreth England, the OpenAg Initiative assistant director at the time and currently codirector of the Media Lab’s PlusMinus program, answered the next day, “…yes, the PFC v2.0 was deployed in a Syrian refugee camp with the World Food Program.” England declined Spectrum’s request to comment, citing “an open inquiry being led by MIT’s Office of the VP for Research.”
Around the same time as the exchange between England and Holbrook, Dr. Babak Babakinejad, then the lead researcher for OpenAg, was testing a food server being set up in a shipping container in Middleton, Mass., at MIT Bates Research and Engineering Center. Babakinejad told Spectrum that he documented several problems with the equipment, including temperature differences across areas inside the food server, which is supposed to be a controlled environment, and a lack of control over carbon dioxide levels, humidity, and temperature. He told Spectrum that he had reported these issues to the OpenAg team.
Babakinejad showed Spectrum an email he sent on 16 April 2018 to officials with MIT Environment, Health and Safety to report that OpenAg was discharging nutrient solutions beyond state-permitted limits, a controversy that was examined last month in a joint report by ProPublica and WBUR. Babakinejad also took his concerns about OpenAg and Harper to Media Lab director Ito.
In an email to Ito on 5 May 2018, Babakinejad stated that Harper was making claims in public talks about “implementations of image processing, microbiome dosing, creating different climates and collecting credible data from bots across the world that are not true.”
In addition, Babakinejad wrote, “He [Harper] takes credit for deployment of PFC’s to schools and internationally including a refugee camp in Amman despite the fact that they have never been validated, tested for functionality and up to now we could never make it work i.e. to grow anything consistently, for an experiment beyond prototyping stage.”
Ito responded and asked Babakinejad if he could share these concerns with Harper. That’s the last Babakinejad says he heard from Ito on the matter. Within a month, Babakinejad had taken a leave of absence. He officially left the OpenAg project in September 2018. Two months later, Harper was promoted to principal research scientist at the Media Lab, a position that, as of this writing, he still holds.