Tag Archives: Computing/Software

Toshiba’s Optimization Algorithm Sets Speed Record for Solving Combinatorial Problems

Post Syndicated from John Boyd original https://spectrum.ieee.org/tech-talk/computing/software/toshiba--optimization-algorithm-speed-record-combinatorial-problems

Toshiba has come up with a new way of solving combinatorial optimization problems. A classic example is the traveling salesman problem, in which a salesman must find the shortest route connecting many cities.

Such problems are found aplenty in science, engineering, and business. For instance, how should a utility select the optimal route for electric transmission lines, considering construction costs, safety, time, and the impact on people and the environment? Even the brute force of supercomputers is impractical when new variables increase the complexity of a question exponentially. 

But it turns out that many of these problems can be mapped to ground-state searches made by Ising machines. These specialized computers use mathematical models to describe the up-and-down spins of magnetic materials interacting with each other. Those spins can be used to represent a combinatorial problem. The optimal solution, then, becomes the equivalent of finding the ground state of the model.
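To make that mapping concrete, here is a minimal generic sketch (my own illustration, not Toshiba’s algorithm) that encodes a small problem as an Ising energy and searches for a low-energy spin configuration with plain simulated annealing; the coupling matrix is random, standing in for whatever problem has been mapped onto the spins.

```python
import numpy as np

# Generic illustration (not Toshiba's algorithm): find a low-energy spin
# configuration of a small Ising model, E(s) = -1/2 * s.J.s, by simulated
# annealing. The coupling matrix J is made up for the example.
rng = np.random.default_rng(0)
n = 12
J = rng.normal(size=(n, n))
J = np.triu(J, 1)            # keep only i < j couplings
J = J + J.T                  # make the couplings symmetric

def energy(s):
    return -0.5 * s @ J @ s  # Ising energy (factor 0.5 because J is symmetric)

s = rng.choice([-1, 1], size=n)          # random initial spins
for T in np.geomspace(2.0, 0.01, 5000):  # slowly lower the "temperature"
    i = rng.integers(n)
    flipped = s.copy()
    flipped[i] *= -1
    dE = energy(flipped) - energy(s)
    if dE < 0 or rng.random() < np.exp(-dE / T):
        s = flipped                       # accept downhill or lucky uphill moves

print("approximate ground-state spins:", s, "energy:", energy(s))
```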

Battle of the Video Codecs: Coding-Efficient VVC vs. Royalty-Free AV1

Post Syndicated from Rina Diane Caballar original https://spectrum.ieee.org/tech-talk/computing/software/battle-video-codecs-hevc-coding-efficiency-vvc-royalty-free-av1

Video is taking over the world. It’s projected to account for 82 percent of Internet traffic by 2022. And what started as an analog electronic medium for moving visuals has transformed into a digital format viewed on social media platforms, video sharing websites, and streaming services.

As video evolves, so too does the video encoding process, which applies compression algorithms to raw video so the files take up less space, making them easier to transmit and reducing the bandwidth required. Part of this evolution involves developing new codecs—encoders to compress videos plus decoders to decompress them for playback—to support higher resolutions, modern formats, and new applications such as 360-degree videos and virtual reality.

Today’s dominant standard, HEVC (High Efficiency Video Coding), was finalized in 2013 as a joint effort between the Moving Picture Experts Group (MPEG) and the Video Coding Experts Group (VCEG). HEVC was designed to have better coding efficiency than the existing Advanced Video Coding (AVC) standard, with tests showing an average bit rate reduction of 53 percent relative to AVC at the same subjective video quality. (Fun fact: HEVC was recognized with an Engineering Emmy Award in 2017 for enabling “efficient delivery in Ultra High Definition (UHD) content over multiple distribution channels,” while AVC garnered the same award in 2008.)
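As rough arithmetic on what a figure like that means in practice (the bit rates below are illustrative assumptions, not values from the cited tests):

```python
# Illustrative numbers only: what "53 percent lower bit rate at the same
# quality" implies for the data a two-hour stream consumes.
avc_bitrate_mbps = 10.0                          # hypothetical AVC stream
hevc_bitrate_mbps = avc_bitrate_mbps * (1 - 0.53)

def gb_per_hour(mbps):
    return mbps * 3600 / 8 / 1000                # Mb/s -> gigabytes per hour

hours = 2
print(f"AVC : {gb_per_hour(avc_bitrate_mbps) * hours:.1f} GB for {hours} h")
print(f"HEVC: {gb_per_hour(hevc_bitrate_mbps) * hours:.1f} GB for {hours} h")
```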

HEVC may be the incumbent, but two emerging options—VVC and AV1—could upend it.

Mm-Wave Partial Information DE-Embedding: Errors and Sensitivities

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/mmwave-partial-information-deembedding-errors-and-sensitivities

De-embedding methods that make significant structural assumptions have become popular for mm-wave fixtures because of their relative immunity to repeatability and standards-availability problems at the DUT. This work studies their intrinsic errors, repeatability, and configuration sensitivities, with examples in the WR-10 and WR-2.2 bands.
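For readers new to the topic, the sketch below shows only the baseline, full-information cascade algebra that de-embedding rests on: the measured T-parameters are the product of fixture and DUT T-matrices, so known fixture halves can be inverted out. The partial-information methods the paper studies relax the need to know the fixture halves exactly; the matrices here are made-up single-frequency examples.

```python
import numpy as np

# Baseline two-port de-embedding with full fixture knowledge:
# T_dut = inv(T_A) @ T_meas @ inv(T_B). All S-matrices are invented.

def s_to_t(S):
    """Convert a 2x2 S-matrix to wave-cascading (T) parameters."""
    S11, S12, S21, S22 = S[0, 0], S[0, 1], S[1, 0], S[1, 1]
    return np.array([[-(S11 * S22 - S12 * S21), S11],
                     [-S22, 1.0]]) / S21

def t_to_s(T):
    """Convert T-parameters back to an S-matrix."""
    T11, T12, T21, T22 = T[0, 0], T[0, 1], T[1, 0], T[1, 1]
    return np.array([[T12, T11 * T22 - T12 * T21],
                     [1.0, -T21]]) / T22

S_A   = np.array([[0.05, 0.9], [0.9, 0.05]], dtype=complex)   # input fixture half
S_B   = np.array([[0.05, 0.9], [0.9, 0.05]], dtype=complex)   # output fixture half
S_dut = np.array([[0.2, 0.7], [0.7, 0.2]], dtype=complex)     # device under test

T_meas = s_to_t(S_A) @ s_to_t(S_dut) @ s_to_t(S_B)            # fixture-embedded measurement
T_dut  = np.linalg.inv(s_to_t(S_A)) @ T_meas @ np.linalg.inv(s_to_t(S_B))

print(np.allclose(t_to_s(T_dut), S_dut))                      # True: DUT recovered
```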

A More Efficient Way to Process Tasks in the Cloud

Post Syndicated from Nick Stockton original https://spectrum.ieee.org/tech-talk/computing/software/software-algorithm-process-tasks-cloud-services-news

You only needed one tomato. Unfortunately, the three people in line ahead of you at the supermarket have loaded their carts like they’re on an episode of “Doomsday Preppers.” You scan the other checkout lanes, but even the express line is backed up because some dude insists on paying in spare change.

A similar problem bedevils the cloud services industry. Google Maps, for example, must parse simple tasks—say, directions to a friend’s house—without getting bogged down by more complicated ones—directions to a friend’s house that avoid highways and add a stop at the supermarket to pick up some tomatoes. It and other cloud-based services accomplish this thanks to tools that send incoming tasks to different servers, effectively forming queues according to size and complexity.

Now, the developers of a new task scheduling tool say their program beats the competition at managing and prioritizing such tasks in an efficient way.
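The article does not describe the new tool’s internals, but the general pattern it improves on looks roughly like the sketch below: classify incoming tasks by an estimated cost, keep separate lanes, and serve each lane cheapest-first so small requests are not stuck behind bulky ones. The threshold and task names are invented for the example.

```python
import heapq

# Generic illustration only (not the scheduler described in the article).
EXPRESS_THRESHOLD = 5  # hypothetical cost cutoff, in arbitrary work units

class Scheduler:
    def __init__(self):
        self.queues = {"express": [], "regular": []}
        self._counter = 0  # tie-breaker so heapq never compares task names alone

    def submit(self, name, estimated_cost):
        lane = "express" if estimated_cost <= EXPRESS_THRESHOLD else "regular"
        self._counter += 1
        heapq.heappush(self.queues[lane], (estimated_cost, self._counter, name))

    def next_task(self, lane):
        """Pop the cheapest pending task in a lane, or None if it is empty."""
        return heapq.heappop(self.queues[lane])[2] if self.queues[lane] else None

sched = Scheduler()
sched.submit("simple directions", 1)
sched.submit("route with waypoints, no highways", 12)
sched.submit("nearby supermarket lookup", 2)

print(sched.next_task("express"))  # -> "simple directions"
print(sched.next_task("regular"))  # -> "route with waypoints, no highways"
```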

World’s First Classical Chinese Programming Language

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/computing/software/classical-chinese

The world’s first programming language based on classical Chinese is only about a month old, and volunteers have already written dozens of programs with it, such as one based on an ancient Chinese fortune-telling algorithm.

The new language’s developer, Lingdong Huang, previously designed an infinite computer-generated Chinese landscape painting. He also helped create the first and so far only AI-generated Chinese opera. He graduated with a degree in computer science and art from Carnegie Mellon University in December.

After coming up with the idea for the new language, wenyan-lang, roughly a year ago, Huang finished the core of the language during his last month at school. It includes a renderer that can display a program in a manner that resembles pages from ancient Chinese texts.

Facing Up to Facial Recognition

Post Syndicated from Zoltan Istvan original https://spectrum.ieee.org/computing/software/facing-up-to-facial-recognition

The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

Many people seem to regard facial-recognition software in much the same way they would a nest of spiders: They recognize, in some abstract way, that it probably has some benefits. But it still gives them the creeps.

It’s time for us to get over this squeamishness and embrace face recognition as the life-enhancing—indeed, life-saving—technology that it is. In many cities, closed-circuit cameras increasingly monitor streets, plazas, and parks around the clock. Meanwhile, the price of recognition software is decreasing, while its capabilities are increasing.

I welcome these trends. I want my 9-year-old daughter tracked while she walks alone to school. I want a face scanner at Starbucks to simply withdraw the payment for my coffee from my checking account. I want to board a plane without fumbling for a boarding pass. Most of all, I want murderers or terrorists recognized as they walk on a city street and before they can cause further mayhem.

I understand the very real threats to our civil liberties. Many of us would probably think twice about showing up at a public demonstration if we knew that the authorities were going to compile a list of everyone who was there. And what about the people who will inevitably just happen to be walking by the demonstration, whose facial images will therefore be captured and whose names will thus be added to the list of demonstrators?

Concerns such as these have recently prompted four U.S. cities, including San Francisco and Oakland, Calif., to outlaw the use of facial-recognition technology within their borders. Politicians, pundits, and major media are playing up how this brave new world of AI facial recognition is going to lead to a police state at best, and a dystopia at worst.

As a candidate for the Republican party nomination for president of the United States, I had to ponder these issues deeply. As a politician, it would have been easy for me to play upon people’s fears about facial recognition and declare my opposition to it. But I’m not just a politician. I am also a proud transhumanist. I passionately believe that technology is on the verge of utterly transforming our societies and cultures for the better, and that we should start preparing for that transformation now.

In this context, I note that facial-recognition technology, based on neural networks, is only one of several major tech trends that have for years been wearing away at our privacy. Those of us in developed countries are now living in a world with nearly 24 billion smart devices, which we constantly interact with. Some of them are tracking our consumer preferences, purchases, political contributions, and more.

Our privacy has eroded quite a bit in the past couple of decades. The extent of this erosion was revealed dramatically in a New York Times story published on 18 January. A company called Clearview AI has created a database of some 3 billion facial images culled from public sources such as Facebook. When the company’s software is presented with a photo of a person, the chances are very high it can identify the person if he or she is in the database.

And yet, as a society, we seem to be doing fine. Over the last 30 years, technology has broadly improved our sense of freedom and our ability to prosper, and has made the world safer than it’s ever been.

Privacy, it turns out, is a largely modern construct. Over the millennia when human beings lived in tribes, clans, villages, fiefdoms, and towns, prevailing notions of privacy were very different from today’s. “For all of human history, until the modern era, life was lived more or less publicly, as befits most species on Earth,” wrote Jeremy Rifkin, an economic and social theorist, in his book The Zero Marginal Cost Society.  

The simple fact is that privacy builds walls around people, companies, and government agencies. Transparency, on the other hand, tears walls down. To some people, facial recognition is Big Brother. To others, it’s a guardian angel. Which side you’re on probably depends on how much you have to hide.

In the coming years, neural networks will be trained to recognize when a person is having a seizure, when a child or an elderly person is lost in the mall, or when someone is drowning in a community pool. Emergency services will be automatically and instantly notified. In many cases, lives will be saved and catastrophes averted. But it won’t happen if we decide to shun recognition technologies.

Perhaps facial recognition’s biggest contribution will be in the fight against human trafficking. It is an inexpressible tragedy that hundreds of thousands of children are trafficked for sexual abuse every year. And it is part of a larger problem: A couple of years ago, the International Labour Organization estimated that in 2016 there were over 40 million people in the world who had been trafficked and were enduring either slavery or forced marriage.

To combat child sex trafficking, investigators have used facial recognition to identify children pictured in online sex ads. Once a child is identified, law-enforcement authorities have a much better chance of finding and rescuing him or her. Some companies are already successfully implementing software to tackle the issue.

U.S. citizens, perhaps more so than those of other countries, tend to be skeptical of the government’s ability to solve problems, and reluctant to trust the government to do the right thing, especially when no one is looking. To them I say, how about two-way transparency? Let’s turn the tables and use the tech to keep tabs on government officials.  Many police officers are already monitored by body cams while on duty. Why not expand such surveillance to vastly larger numbers of government officials while they are on duty and serving the public? Author David Brin offered a compelling vision of such a society as far back as 1998, in his book The Transparent Society.

Another worry is that facial recognition will be biased. Analyses have indicated that the accuracy of some programs is much worse for minorities, increasing the odds that some of them would be erroneously singled out. I agree that this is a very serious concern; however, more recent studies have confirmed that the bias problem is not inherent in the algorithms. Neural networks are only as good as the data they are trained on, so using more diverse data sets will undoubtedly solve the bias problem. Eventually, recognition software will have far less prejudice than a human being, because people’s attitudes are inevitably shaped by their upbringing, culture, religion, and biology. I trust code more than I trust hormones.

Anyone interested in the promise and perils of facial recognition technology need look no further than China. It has some of the most advanced systems in the world and has made aggressive use of them in law enforcement, surveillance, security, and commerce. While China has used the technology to catch criminals and make electronic payments more convenient, it has also pursued some rather disturbing applications.

For example, the government’s use of the technology to monitor the Uighur population in Xinjiang province has been criticized as intrusive and repressive. I am also dismayed by China’s social credit system, an emerging regime that seeks to summarize a person’s morality and integrity as a numerical value. The system makes use of facial recognition to assess credits or demerits based on good or bad behavior in the streets and public venues.

Unsettling as they are, I do not believe that abuses such as these could occur in a free society. Western and other, similar democracies have checks and balances, such as truly independent judiciaries and legislatures, that would block any such systematic repression.

Of course, outside of criminality, no government should seek to control behavior or to discourage dissent or harmless weirdness. The most innovative and productive societies on Earth got the way they are in large measure because they explicitly rejected any such control and repression.

One last significant issue with facial recognition is its private use, which worries some people even more than government use does. Companies are already starting to use facial recognition for security in sports stadiums, transportation terminals, office buildings, and hotels. This isn’t much different from closed-circuit TV, which many public and private areas already have and which has proven useful in increasing public safety.

But the equation changes when people believe companies are monitoring them for commercial purposes. Might Walmart’s facial recognition system follow us around its stores and determine that we buy an awful lot of vodka? Many people don’t want to be fodder for a data-mining scheme, even if it increases their safety.

Personally, I don’t mind any of this. But for those who do, companies in the future could just give us a choice to stop all facial recognition when we enter their stores or private areas. Maybe they’ll have an app that lets us easily opt out, like the unsubscribe links at the bottom of most targeted emails.

Actually, as far as the commercial sector goes, I see no reason to fear facial recognition run amok. Inevitably, people who want to protect themselves from being recognized will find ways to do that, and some of them will even get rich selling products offering such protection. That’s capitalism at its finest.

We can argue about the promise and perils of facial recognition technology as long as we like, but it’s pretty clear now that there will be no stopping it. The attractions for government agencies, commercial enterprises, and even individuals are simply too great. Do you unlock your phone with your face? Do you like having the ability to search your online photo libraries for a specific person? There are many millions of people who do. While enjoying such uses, we’ll tolerate the others, in much the same way that we like the convenience of email and tolerate the endless spam that goes with it.

The road to ubiquitous facial recognition won’t be smooth or straight. There will be pitfalls and unforeseen twists. But overall, it will make daily life more functional and will help keep us all safer. Rather than fighting and complaining about it, we ought to embrace its promise and be wary and vigilant enough to ensure that its global rollout is sensible, unbiased, and as beneficial as possible for all.  

About the Author

Zoltan Istvan is a Republican candidate for U.S. president in 2020. His sci-fi novel The Transhumanist Wager (2013, Futurity Imagine Media) is taught in futurist studies around the world, and he is the subject of the new documentary Immortality or Bust.

Researchers Can Make AI Forget You

Post Syndicated from Matthew Hutson original https://spectrum.ieee.org/tech-talk/computing/software/researchers-can-make-ai-forget-you

Whether you know it or not, you’re feeding artificial intelligence algorithms. Companies, governments, and universities around the world train machine learning software on unsuspecting citizens’ medical records, shopping history, and social media use. Sometimes the goal is to draw scientific insights, and other times it’s to keep tabs on suspicious individuals. Even AI models that abstract from data to draw conclusions about people in general can be prodded in such a way that individual records fed into them can be reconstructed. Anonymity dissolves.

To restore some amount of privacy, recent legislation such as Europe’s General Data Protection Regulation and the California Consumer Privacy Act provides a right to be forgotten. But making a trained AI model forget you often requires retraining it from scratch with all the data but yours—a process that can take weeks of computation.

Two new papers offer ways to delete records from AI models more efficiently, possibly saving megawatts of energy and making compliance more attractive. “It seemed like we needed some new algorithms to make it easy for companies to actually cooperate, so they wouldn’t have an excuse to not follow these rules,” said Melody Guan, a computer scientist at Stanford and co-author of the first paper.
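One widely discussed way to make deletion cheap—offered here as a generic sketch, not as either paper’s specific method—is to shard the training data, train a sub-model per shard, and aggregate their predictions; forgetting one record then means retraining only the shard that held it.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sharded training as a deletion-friendly design (generic illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # synthetic labels

N_SHARDS = 6
shards = [list(range(i, len(X), N_SHARDS)) for i in range(N_SHARDS)]

def fit_shard(idx):
    return LogisticRegression().fit(X[idx], y[idx])

models = [fit_shard(idx) for idx in shards]

def predict(x):
    votes = [m.predict(x.reshape(1, -1))[0] for m in models]
    return int(round(np.mean(votes)))            # majority vote over shards

def forget(record_id):
    """Remove one training record and retrain only its shard."""
    shard_no = record_id % N_SHARDS              # matches the shard assignment above
    shards[shard_no] = [i for i in shards[shard_no] if i != record_id]
    models[shard_no] = fit_shard(shards[shard_no])

forget(42)                                       # record 42 no longer influences the model
print(predict(X[0]))
```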

Neural Networks Can Drive Virtual Racecars Without Learning

Post Syndicated from Matthew Hutson original https://spectrum.ieee.org/tech-talk/computing/software/neural-networks-ai-artificial-intelligence-drives-virtual-racecars-without-learning

Animals are born with innate abilities and predispositions. Horses can walk within hours of birth, ducks can swim soon after hatching, and human infants are automatically attracted to faces. Brains have evolved to take on the world with little or no experience, and many researchers would like to recreate such natural abilities in artificial intelligence.

New research finds that artificial neural networks can evolve to perform tasks without learning. The technique could lead to AI that is much more adept at a wide variety of tasks such as labeling photos or driving a car.

Artificial neural networks are arrangements of small computing elements (“neurons”) that pass information between them. The networks typically learn to perform tasks like playing games or recognizing images by adjusting the “weights” or strengths of the connections between neurons. A technique called neural architecture search tries lots of network shapes and sizes to find ones that learn better for a specific purpose.
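A toy version of the evaluation step behind such work might look like the following (my simplification, not the researchers’ code): score candidate wirings by how well they perform with a single weight value shared across every connection, so that good results can only come from the architecture itself.

```python
import numpy as np

# Toy sketch: rank untrained architectures on a tiny XOR-like task.
rng = np.random.default_rng(1)

def random_architecture(n_in=2, n_hidden=4, n_out=1):
    """Random binary wiring masks: which connections exist, not their strengths."""
    return (rng.random((n_in, n_hidden)) < 0.5,
            rng.random((n_hidden, n_out)) < 0.5)

def evaluate(arch, X, y, shared_weights=(-2.0, -1.0, 1.0, 2.0)):
    mask_ih, mask_ho = arch
    scores = []
    for w in shared_weights:                     # same value on every connection
        hidden = np.tanh(X @ (w * mask_ih))
        pred = (np.tanh(hidden @ (w * mask_ho)) > 0).astype(int).ravel()
        scores.append((pred == y).mean())
    return np.mean(scores)                       # average accuracy over weight values

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

# "Evolution" reduced to its simplest form: keep the best of many random wirings
best = max((random_architecture() for _ in range(500)), key=lambda a: evaluate(a, X, y))
print("best untrained accuracy:", evaluate(best, X, y))
```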

5G Small-Cell Base Station Antenna Array Design

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/5g-smallcell-base-station-antenna-array-design

In this eSeminar we will explore state-of-the-art simulation approaches for antenna array design, with a particular focus on 5G small-cell base station antennas.

Realizing the 5G promise of reliably providing high data rate connections to many users simultaneously requires the use of new design approaches for base station antennas. In particular, antenna arrays will increasingly be used to enable agile beam forming and massive MIMO technology, both required to provide good service in dynamic, complex urban environments with a high number of users. The array design capabilities of SIMULIA CST Studio Suite have grown dramatically in recent years and are relied on by many companies around the world.

Join us to learn more about how simulation can help you with your array design, as we answer the following questions.

  • How can antenna elements be designed and evaluated in terms of their suitability as an array element?
  • How can full arrays with real radomes be simulated accurately much more quickly than before using the new simulation-by-zones approach?
  • How can interference between multiple co-located arrays be evaluated using advanced hybrid simulation techniques?
  • Finally, how can the coverage performance of base station arrays in complex urban or indoor environments be predicted?
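As a back-of-the-envelope companion to those questions (not a CST Studio Suite workflow), the array factor of a uniform linear array already captures the basic beam-steering behavior a small-cell panel relies on; the element count, spacing, and steering angle below are arbitrary example values.

```python
import numpy as np

# Array factor of a uniform linear array with a progressive phase taper.
N = 8                       # number of elements
d = 0.5                     # element spacing in wavelengths
steer_deg = 20.0            # desired beam direction

theta = np.radians(np.linspace(-90, 90, 721))
steer = np.radians(steer_deg)

# Progressive phase shift that points the main beam at steer_deg
phase = -2 * np.pi * d * np.sin(steer) * np.arange(N)
af = np.exp(1j * (2 * np.pi * d * np.sin(theta)[:, None] * np.arange(N) + phase)).sum(axis=1)
af_db = 20 * np.log10(np.abs(af) / N)

print("peak at about %.1f degrees" % np.degrees(theta[np.argmax(af_db)]))
```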

AI and Economic Productivity: Expect Evolution, Not Revolution

Post Syndicated from Jeffrey Funk original https://spectrum.ieee.org/computing/software/ai-and-economic-productivity-expect-evolution-not-revolution

In 2016, London-based DeepMind Technologies, a subsidiary of Alphabet (which is also the parent company of Google), startled industry watchers when it reported that the application of artificial intelligence had reduced the cooling bill at a Google data center by a whopping 40 percent. What’s more, we learned that year, DeepMind was starting to work with the National Grid in the United Kingdom to save energy throughout the country using deep learning to optimize the flow of electricity.

Could AI really slash energy usage so profoundly? In the three years that have passed, I’ve searched for articles on the application of AI to other data centers but find no evidence of important gains. What’s more, DeepMind’s talks with the National Grid about energy have broken down. And the financial results for DeepMind certainly don’t suggest that customers are lining up for its services: For 2018, the company reported losses of US $571 million on revenues of $125 million, up from losses of $366 million in 2017. Last April, The Economist characterized DeepMind’s 2016 announcement as a publicity stunt, quoting one inside source as saying, “[DeepMind just wants] to have some PR so they can claim some value added within Alphabet.”

This episode encouraged me to look more deeply into the economic promise of AI and the rosy projections made by champions of this technology within the financial sector. This investigation was just the latest twist on a long-standing interest of mine. In the early 1980s, I wrote a doctoral dissertation on the economics of robotics and AI, and throughout my career as a professor and technology consultant I have followed the economic projections for AI, including detailed assessments by consulting organizations such as Accenture, PricewaterhouseCoopers International (PwC), and McKinsey.

These analysts have lately been asserting that AI-enabled technologies will dramatically increase economic output. Accenture claims that by 2035 AI will double growth rates for 12 developed countries and increase labor productivity by as much as a third. PwC claims that AI will add $15.7 trillion to the global economy by 2030, while McKinsey projects a $13 trillion boost by that time.

Other forecasts have focused on specific sectors such as retail, energy, education, and manufacturing. In particular, the McKinsey Global Institute assessed the impact of AI on these four sectors in a 2017 report titled Artificial Intelligence: The New Digital Frontier? and did so for a much longer list of sectors in a 2018 report. In the latter, the institute concluded that AI techniques “have the potential to create between $3.5 trillion and $5.8 trillion in value annually across nine business functions in 19 industries. This constitutes about 40 percent of the overall $9.5 trillion to $15.4 trillion annual impact that could potentially be enabled by all analytical techniques.”

Wow. These are big numbers. If true, they create a powerful incentive for companies to pursue AI—with or without help from McKinsey consultants. But are these predictions really valid?

Many of McKinsey’s estimates were made by extrapolating from claims made by various startups. For instance, its prediction of a 10 percent improvement in energy efficiency in the U.K. and elsewhere was based on the purported success of DeepMind and also of Nest Labs, which became part of Google’s hardware division in 2018. In 2017, Nest, which makes a smart thermostat and other intelligent products for the home, lost $621 million on revenues of $726 million. That fact doesn’t mesh with the notion that Nest and similar companies are contributing, or are poised to contribute, hugely to the world economy.

So I decided to investigate more systematically how well such AI startups were doing. I found that many were proving not nearly as valuable to society as all the hype would suggest. This assertion will certainly rub a lot of people the wrong way, the analysts at McKinsey among them. So I’d like to describe here how I reached my much more pessimistic conclusions.

My investigation of Nest Labs expanded into a search for evidence that smart meters in general are leading to large gains in energy efficiency. In 2016, the British government began a coordinated campaign to install smart meters throughout the country by 2020. And since 2010, the U.S. Department of Energy has invested some $4.5 billion installing more than 15 million smart meters throughout the United States. Curiously enough, all that effort has had little observed impact on energy usage. The U.K. government recently revised downward the amount it figures a smart meter will save each household annually, from £26 to just £11. And the cost of smart meters and their installation has risen, warns the U.K.’s National Audit Office. All of this is not good news for startups banking on the notion that smart thermostats, smart home appliances, and smart meters will lead to great energy savings.

Are other kinds of AI startups having a greater positive effect on the economy? Tech sector analyst CB Insights reports that overall venture capital funding in the United States was $115 billion in 2018 [PDF], of which $9.3 billion went to AI startups. While that’s just 8 percent of the total, it’s still a lot of money, indicating that there are many U.S. startups working on AI (although some overstate the role of AI in their business plans to acquire funding).

To probe further, I gathered data on the U.S. AI startups that have received the most funding and looked at which industries they were hoping to disrupt. The reason for focusing on the United States is that it has the longest history of startup success, so it seems likely that its AI startups are more apt to flourish than those in other countries. My intention was to evaluate whether these U.S. startups had succeeded in shaking up various industries and boosting productivity or whether they promise to do so shortly.

In all, I examined 40 U.S. startups working on AI. These either had valuations greater than $1 billion or had more than $70 million in equity funding. Other than two that had been acquired by public companies, the startups I looked at are all private firms. I found their names and product offerings in lists of leading startups that Crunchbase, Fortune, and Datamation had compiled and published. I then updated my data set with more recent news about these companies (including reports of some shutdowns).

I categorized these 40 startups by the type of product or service they offered. Seventeen are working on what I would call basic computer hardware and software (Wave Computing and OpenAI, respectively, are examples), including cybersecurity (CrowdStrike, for instance). That is, I included in this category companies building tools that are intended to support the computing environment itself.

Making up another large fraction—8 of the 40—are companies that develop software that automates various tasks. The robotic process automation software being developed by Automation Anywhere, UiPath, and WorkFusion, for example, enables higher productivity among professionals and other white-collar workers. Software from Brain Corp. converts manual equipment into intelligent robots. Algolia, Conversica, and Xant offer software to improve sales and marketing. ZipRecruiter targets human resources.

The remaining startups on my list are spread among various industries. Three (Flatiron Health, Freenome, Tempus Labs) work in health care; three more (Avant, Upstart, ZestFinance) are focused on financial technology; two (Indigo, Zymergen) target agriculture or synthetic biology; and three others (Nauto, Nuro, Zoox) involve transportation. There is just one startup each for geospatial analytics (Orbital Insight), patterns of human interaction (Afiniti), photo/video recognition (Vicarious), and music recognition (SoundHound).

Are there indications that these startups will bring large productivity improvements in the near future? In my view, software that automates tasks normally carried out by white-collar workers is probably the most promising of the products and services that AI is being applied to. Similar to past improvements in tools for white-collar professionals, including Excel for accountants and computer-aided design for engineers and architects, these types of AI-based tools have the greatest potential impact on productivity. For instance, there are high hopes for generative design, in which teams of people input constraints and the system proposes specific designs.

But looking at the eight startups on my list that are working on automation tools for white-collar workers, I realized that they are not targeting things that would lead to much higher productivity. Three of them are focused on sales and marketing, which is often a zero-sum game: The company with the best software takes customers from competitors, with only small increases in productivity under certain conditions. Another one of these eight companies is working on human-resource software, whose productivity benefits may be larger than those for sales and marketing but probably not as large as you’d get from improved robotic process automation.

This leaves four startups that do offer such software, which may lead to higher productivity and lower costs. But even among these startups, none currently offers software that helps engineers and architects become more productive through, for example, generative design. Software of this kind isn’t coming from the largest startups, perhaps because there is a strong incumbent, Autodesk, or because the relevant AI is still not developed enough to provide truly useful tools in this area.

The relatively large number of startups I classified as working on basic hardware and software for computing (17) also suggests that productivity improvements are still many years away. Although basic hardware and software are a necessary part of developing higher-level AI-based tools, particularly ones utilizing machine learning, it will take time for the former to enable the latter. I suppose this situation simply reflects that AI is still in its infancy. You certainly get that impression from companies like OpenAI: Although it has received $1 billion in funding (and a great deal of attention), the vagueness of its mission—“Benefiting all of humanity”—suggests that it will take many years yet for specific useful products and services to evolve from this company’s research.

The large number of these startups that are focused on cybersecurity (seven) highlights the increasing threat of security problems, which raise the cost of doing business over the Internet. AI’s ability to address cybersecurity issues will likely make the Internet safer, more secure, and more useful. But in the end, this thrust reflects yet higher costs in the future for Internet businesses and will not, to my mind, lead to large productivity improvements within the economy as a whole.

If not from the better software tools it brings, where will AI bring substantial economic gains? Health care, you would think, might benefit greatly from AI. Yet the number of startups on my list that are applying AI to health care (three) seems oddly small if that were really the case. Perhaps this has something to do with IBM’s experience with its Watson AI, which proved a disappointment when it was applied to medicine.

Still, many people remain hopeful that AI-fueled health care startups will fill the gap left by Watson’s failures. Arguing against this is Robert Wachter, who points out that it’s much more difficult to apply computers to health care than to other sectors. His 2015 book, The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age, details the many reasons that health care lags other industries in the application of computers and software. It’s not clear that adding AI to the mix of digital technologies available will do anything to change the situation.

There are also some big applications missing from the list of well-funded AI startups. Housing represents the largest category of consumer expenditures in the United States, but none of these startups are addressing this sector of the economy at all. Transportation is the second largest expenditure, and it is the focus of just three of these startups. One is working on a product that identifies distracted drivers. Another intends to provide automated local deliveries. Only one startup on the list is developing driverless passenger vehicles. That there is only one working on self-driving cars is consistent with the pessimism recently expressed by executives of Ford, General Motors, and Mercedes-Benz about the prospects for driverless vehicles taking to the streets in large numbers anytime soon, even though $35 billion has already been spent on R&D for them.

Admittedly, my assessment of what these 40 companies are doing and whether their offerings will shake up the world over the next decade is subjective. Perhaps it makes better sense to consider a more objective measure of whether these companies are providing value to the world economy: their profitability.

Alas, good financial data is not available on privately held startups, only two of the companies on my list are now part of public companies, and startups often take years to turn a profit (Amazon took seven years). So there isn’t a lot to go on here. Still, there are some broad trends in the tech sector that are quite telling.

The fraction of tech companies that are profitable by the time they go public dropped from 76 percent in 1980 to just 17 percent in 2018, even though the average time to IPO has been rising—it went from 2.8 years in 1998 to 7.7 years in 2016, for example. Also, the losses of some well-known startups that took a long time to go public are huge. For instance, none of the big ride-sharing companies are making a profit, including those in the United States (Uber and Lyft), China, India, and Singapore, with total losses of about $5 billion in 2018. Most bicycle and scooter sharing, office sharing, food delivery, P2P (peer-to-peer) lending, health care insurance and analysis, and other consumer service startups are also losing vast amounts of money, not only in the United States but also in China and India.

Most of the 40 AI startups I examined will probably stay private, at least in the near term. But even if some do go public several years down the road, it’s unlikely they’ll be profitable at that point, if the experience of many other tech companies is any guide. It may take these companies years more to achieve the distinction of making more money than they are spending.

For the reasons I’ve given, it’s very hard for me to feel confident that any of the AI startups I examined will provide the U.S. economy with a big boost over the next decade. Similar pessimism is also starting to emerge from such normally cheery publications as Technology Review and Scientific American. Even the AI community is beginning to express concerns in books such as The AI Delusion and Rebooting AI: Building Artificial Intelligence We Can Trust, concerns that are growing amid the rising hype about many new technologies.

The most promising areas for rapid gains in productivity are likely to be found in robotic process automation for white-collar workers, continuing a trend that has existed for decades. But these improvements will be gradual, just as those for computer-aided design and computer-aided engineering software, spreadsheets, and word processing have been.

Viewed over the span of decades, the value of such software is impressive, bringing huge gains in productivity for engineers, accountants, lawyers, architects, journalists, and others—gains that enabled some of these professionals (particularly engineers) to enrich the global economy in countless ways.

Such advances will no doubt continue with the aid of machine learning and other forms of AI. But they are unlikely to be nearly as disruptive—for companies, for workers, or for the economy as a whole—as many observers have been arguing.

About the Author

Jeffrey Funk retired from the National University of Singapore in 2017, where he taught (among other subjects) a course on the economics of new technology as a professor of technology management. He remains based in Singapore, where he consults in various areas of technology and business.

Simulating Radar Signals for Meaningful Radar Warning Receiver Tests

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/simulating-radar-signals-for-meaningful-radar-warning-receiver-tests

Testing state-of-the-art radar warning equipment with adequate RF and microwave signals is a challenging task for radar engineers. Generating test signals in a lab that are similar to a real-life RF environment is one of the most demanding aspects of evaluating advanced radars. A scenario for testing radar-warning receivers (RWR) can easily contain many highly mobile emitters that are capable of mode changes, many interfering signals, and a moving multi-channel receiver. Today, there are intuitive and easy-to-use software tools available that take a lot of work off the hands of radar engineers.

In this seminar, you will learn how to:

  • Create radar scenarios ranging from simple pulses to the most demanding emitter scenarios using PC software
  • Generate complex radar signals with off-the-shelf vector signal generators
  • Increase flexibility during radar simulation by streaming pulse descriptor words (PDW)
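To give a sense of what streaming pulse descriptor words involves, here is a minimal PDW record and a trivial fixed-PRI pulse train expressed as a PDW list; the field names and units are assumptions for the sketch, not a specific instrument’s interface.

```python
from dataclasses import dataclass

# Illustrative only: a minimal pulse descriptor word (PDW) record of the kind
# streamed to a vector signal generator.
@dataclass
class PulseDescriptorWord:
    time_offset_s: float      # pulse start time relative to scenario start
    frequency_hz: float       # carrier frequency
    pulse_width_s: float
    level_dbm: float          # pulse power
    phase_rad: float = 0.0
    modulation: str = "CW"    # e.g. "CW", "chirp", "Barker13"

# A trivial emitter: a fixed-PRI pulse train described as a PDW stream
pri_s = 1e-3
pdws = [PulseDescriptorWord(time_offset_s=i * pri_s,
                            frequency_hz=9.5e9,
                            pulse_width_s=1e-6,
                            level_dbm=-10.0)
        for i in range(5)]

for p in pdws:
    print(p)
```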

Screening Technique Found 142 Malicious Apps in Apple’s App Store

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/computing/software/new-screening-technique-reveals-142-malicious-apple-apps


Apple’s App Store is renowned for its security—but even Apple is inadvertently allowing a small handful of malicious apps to sneak through its screening process and onto some people’s phones, new research shows. The good news is that the researchers involved, who published their findings on 31 October in IEEE Transactions on Dependable and Secure Computing, have also uncovered a way to detect these Trojan horses.

Thanks to strict guidelines and bans on certain practices that facilitate the spread of malicious apps, the vast majority of apps in Apple’s App Store are safe. However, some malicious apps are still making their way through the screening process by exhibiting one user interface while harboring a second, malicious user interface.   

“[These apps] display a benign user interface under Apple’s review but reveal their hidden, harmful user interfaces after being installed on users’ devices,” explains Yeonjoon Lee, a researcher at Hanyang University who was involved in the study.

Hey, Data Scientists: Show Your Machine-Learning Work

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/computing/software/hey-data-scientists-show-your-machinelearning-work

In the last two years, the U.S. Food and Drug Administration has approved several machine-learning models to accomplish tasks such as classifying skin cancer and detecting pulmonary embolisms. But for the companies who built those models, what happens if the data scientist who wrote the algorithms leaves the organization?

In many businesses, an individual or a small group of data scientists is responsible for building essential machine-learning models. Historically, they have developed these models on their own laptops through trial and error, then passed them along to production once they worked. But in that transfer, the data scientist might not think to pass along all the information about the model’s development. And if the data scientist leaves, that information is lost for good.

That potential loss of information is why experts in data science are calling for machine learning to become a formal, documented process overseen by more people inside an organization.

Companies need to think about what could happen if their data scientists take new jobs, or if a government organization or an important customer asks to see an audit of the algorithm to ensure it is fair and accurate. Not knowing what data was used to train the model and how the data was weighted could lead to a loss of business, bad press, and perhaps regulatory scrutiny, if the model turns out to be biased.

David Aronchick, the head of open-source machine-learning strategy at Microsoft Azure, says companies are realizing that they must run their machine-learning efforts the same way they run their software-development practices. That means encouraging documentation and codevelopment as much as possible.

Microsoft has some ideas about what the documentation process should look like. The process starts with the researcher structuring and organizing the raw data and annotating it appropriately. Not having a documented process at this stage could lead to poorly annotated data that has biases associated with it or is unrelated to the problem the business wants to solve.

Next, during training, a researcher feeds the data to a neural network and tweaks how it weighs various factors to get the desired result. Typically, the researcher is still working alone at this point, but other people should get involved to see how the model is being developed—just in case questions come up later during a compliance review or even a lawsuit.

A neural network is a black box when it comes to understanding how it makes its decisions, but the data, the number of layers, and how the network weights different parameters shouldn’t be mysterious. The researchers should be able to tell how the data was structured and weighted at a glance.
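What that documentation might look like in its simplest form is sketched below; the file names, fields, and values are assumptions for illustration rather than any particular vendor’s feature. The point is simply that the dataset version, its known caveats, and the training choices get written down where people other than the original data scientist can review them.

```python
import json, hashlib, datetime

# Stand-in dataset so the example runs end to end (hypothetical file).
with open("holiday_spending_2019.csv", "w") as f:
    f.write("customer_id,purchase_amount\n1,120.0\n2,35.5\n")

def sha256_of(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

run_record = {
    "trained_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    "dataset": {
        "file": "holiday_spending_2019.csv",
        "sha256": sha256_of("holiday_spending_2019.csv"),  # pins the exact data version
        "label_column": "purchase_amount",
        "known_caveats": "Christmas-season traffic only",
    },
    "model": {"architecture": "3-layer feedforward", "hidden_units": [64, 32]},
    "training": {"epochs": 20, "learning_rate": 1e-3, "random_seed": 7},
    "reviewed_by": ["second.scientist@example.com"],       # placeholder reviewer
}

with open("model_card.json", "w") as f:
    json.dump(run_record, f, indent=2)
```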

It’s also at this point where having good documentation can help make a model more flexible for future use. For example, a shopping site’s model trained specifically on Christmas spending patterns can’t simply be applied to Valentine’s Day spending. Without good documentation, a data scientist would have to essentially rebuild the model, rather than going back and tweaking a few parameters to adjust it for a new holiday.

The last step in the process is actually deploying the model. Historically, only at this point would other people get involved and acquaint themselves with the data scientist’s hard work. Without good documentation, they’re sure to get headaches trying to make sense of it. But now that data is so essential to so many businesses—not to mention the need to adapt quickly—it’s time for companies to build machine-learning processes that rival the quality of their software-development processes.

This article appears in the December 2019 print issue as “Show Your Machine-Learning Work.”

FedEx Ground Uses Virtual Reality to Train and Retain Package Handlers

Post Syndicated from Michelle V. Rafter original https://spectrum.ieee.org/tech-talk/computing/software/fedex-ground-uses-vr-to-train-and-retain-package-handlers

Package handlers who work on FedEx Ground loading docks load and unload 8.5 million packages a day. The volume and the physical nature of the work make it a tough job—tougher than many new hires realize until they do it. Some quit almost immediately, according to Denise Abbott, FedEx Ground’s vice president of human resources.

So, when FedEx Corp.’s truck package delivery division evaluated how best to incorporate virtual reality into employee training, teaching newly hired package handlers what to expect on the job and how to stay safe doing it quickly rose to the top of the list.

“It allows us to bring an immersive learning technology into the classroom so people can practice before they step foot on a dock,” said Jefferson Welch, human resource director for FedEx Ground University, the division’s training arm. He and Abbott talked about the company’s foray into VR-based training during a presentation at the recent HR Technology Conference in Las Vegas.

At Domino’s Biggest Franchisee, a Chatbot Named “Dottie” Speeds Up Hiring

Post Syndicated from Michelle V. Rafter original https://spectrum.ieee.org/tech-talk/computing/software/at-dominos-biggest-franchisee-a-chatbot-named-dottie-speeds-up-hiring

At RPM Pizza, a chatbot nicknamed “Dottie” has made hiring almost as fast as delivering pizzas.

RPM adopted the text message-based chatbot along with live chat and text-based job applications to speed up multiple aspects of the hiring process, including identifying promising job candidates and scheduling initial interviews.

It makes sense to use texting for hiring, said Merrin Mueller, RPM’s head of people and marketing, during a presentation on the chatbot at the recent HR Technology Conference in Las Vegas. Job hunters respond to a text faster than an email. At a time when U.S. unemployment is low, competition for hourly workers is fierce, and company recruiters are overwhelmed, you have to act fast.

“People who apply here are applying at Taco Bell and McDonald’s too, and if we don’t get to them right away and hire them faster, they’ve already been offered a job somewhere else,” Mueller said.

A Guide to Computational Storage

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/a-guide-to-computational-storage

To be successful in this increasingly digital world, organizations need infrastructure and technology capable of delivering and storing data and analytics quickly, securely, and efficiently.

Computational storage enables organizations to create customer value by maximizing the benefits of big data. It puts processing power directly on the storage device, giving companies quick and easy access to vital information.

This easy-to-read guide provides an introduction to computational storage and its benefits, walking through real-world examples of how it’s deployed today and what to consider when implementing it in your device. 

Read this guide to learn: 

  • What computational storage is and how it works
  • The benefits it brings to architects, developers and organizations
  • How to move from a traditional storage solution to a more intelligent device
  • Real-world examples from Arm’s partners

How to Perform a Security Investigation in AWS

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/how-to-perform-a-security-investigation-in-aws

Do you have a plan in place describing how to investigate in Amazon Web Services (AWS)? What security controls, techniques, and data sources can you leverage when investigating and containing an incident in the cloud? Join SANS and AWS Marketplace to learn how to leverage different technologies to determine the source and timeline of an event and the systems targeted, so you can define a reliable starting point for your investigation.

Attendees will learn:

  • Prerequisites for performing an effective investigation
  • Services that enable an investigation
  • How to plan an investigation
  • Steps to completing an investigation in AWS
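As one small, concrete starting point of the kind the webinar covers (my example, not the presenters’ material), CloudTrail can be queried for a principal’s recent activity to begin building a timeline; this assumes credentials, region, and CloudTrail itself are already set up, and the user name is a placeholder.

```python
import datetime
import boto3

# Pull the last week of CloudTrail events for one principal to start a timeline.
cloudtrail = boto3.client("cloudtrail")

end = datetime.datetime.utcnow()
start = end - datetime.timedelta(days=7)

paginator = cloudtrail.get_paginator("lookup_events")
pages = paginator.paginate(
    LookupAttributes=[{"AttributeKey": "Username", "AttributeValue": "suspicious-user"}],
    StartTime=start,
    EndTime=end,
)

for page in pages:
    for event in page["Events"]:
        print(event["EventTime"], event["EventName"], event.get("EventSource", ""))
```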

Test Ops Agile Design and Test

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/test-ops-agile-design-and-test

In the 1990s, agile methods profoundly transformed software development. Agile is far more than a process; it’s a new way to work. Today, a similar transformation is happening in test and measurement: TestOps. Learn about TestOps and how to accelerate your product development workflow.


Accelerate your innovation with NI Wireless Research Handbook

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/accelerate-your-innovation-with-ni-wireless-research-handbook

Download the latest edition of NI’s Wireless Research Handbook, which includes research examples from around the world and across a wide range of advanced wireless research topics. This comprehensive look at next-generation wireless systems will offer you a more in-depth view of how prototyping can enhance research results.

Applications include:

  • Flexible Waveform Shaping Based on UTW-OFDM for 5G and Beyond
  • Flexible Real-Time Waveform Generator for Mixed-Service Scenarios
  • In-Band Full-Duplex SDR for MAC Protocol with Collision Detection
  • Bandwidth-Compressed Spectrally Efficient Communication System
  • World-Leading Parallel Channel Sounder Platform
  • Distributed Massive MIMO: Algorithm for TDD Reciprocity Calibration
  • Wideband/Opportunistic Map-Based Full-Duplex Radios
  • An Experimental SDR Platform for In-Band D2D Communications in 5G
  • Wideband Multichannel Signal Recorder for Radar Applications
  • Passive and Active Radar Imaging
  • Multi-antenna Technology for Reliable Wireless Communication
  • Radio Propagation Analysis for the Factories of the Future