Tag Archives: computing

Intel Unveils Cryogenic Chips to Speed Quantum Computing

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/computing/hardware/intel-unveils-cryogenic-chips-to-speed-quantum-computing

At the IEEE International Electron Devices Meeting in San Francisco this week, Intel is unveiling a cryogenic chip designed to accelerate the development of the quantum computers it is building with Delft University’s QuTech research group. The chip, called Horse Ridge for one of the coldest spots in Oregon, uses specially designed transistors to provide microwave control signals to Intel’s quantum computing chips.

The quantum computer chips in development at IBM, Google, Intel, and other firms today operate at fractions of a degree above absolute zero and must be kept inside a dilution refrigerator. However, as companies have managed to increase the number of quantum bits (qubits) in the chips, and therefore the chips’ capacity to compute, they’ve begun to run into a problem. Each qubit needs its own set of wires leading to control and readout systems outside of the cryogenic container. It’s already getting crowded, and as quantum computers continue to scale—Intel’s is up to 49 qubits now—there soon won’t be enough room for the wires.

5G Small-Cell Base Station Antenna Array Design

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/5g-smallcell-base-station-antenna-array-design

In this eSeminar we will explore state-of-the-art simulation approaches for antenna array design, with a particular focus on 5G small-cell base station antennas.

Realizing the 5G promise of reliably providing high data rate connections to many users simultaneously requires the use of new design approaches for base station antennas. In particular, antenna arrays will increasingly be used to enable agile beam forming and massive MIMO technology, both required to provide good service in dynamic, complex urban environments with a high number of users. The array design capabilities of SIMULIA CST Studio Suite have grown dramatically over the last few years and are relied on by many companies around the world.

Join us to learn more about how simulation can help you with your array design, as we answer the following questions.

  • How can antenna elements be designed and evaluated in terms of their suitability as an array element?
  • How can full arrays with real radomes be simulated accurately much more quickly than before using the new simulation-by-zones approach?
  • How can interference between multiple co-located arrays be evaluated using advanced hybrid simulation techniques?
  • Finally, how can the coverage performance of base station arrays in complex urban or indoor environments be predicted?

AI and Economic Productivity: Expect Evolution, Not Revolution

Post Syndicated from Jeffrey Funk original https://spectrum.ieee.org/computing/software/ai-and-economic-productivity-expect-evolution-not-revolution

In 2016, London-based DeepMind Technologies, a subsidiary of Alphabet (which is also the parent company of Google), startled industry watchers when it reported that the application of artificial intelligence had reduced the cooling bill at a Google data center by a whopping 40 percent. What’s more, we learned that year, DeepMind was starting to work with the National Grid in the United Kingdom to save energy throughout the country using deep learning to optimize the flow of electricity.

Could AI really slash energy usage so profoundly? In the three years that have passed, I’ve searched for articles on the application of AI to other data centers but find no evidence of important gains. What’s more, DeepMind’s talks with the National Grid about energy have broken down. And the financial results for DeepMind certainly don’t suggest that customers are lining up for its services: For 2018, the company reported losses of US $571 million on revenues of $125 million, up from losses of $366 million in 2017. Last April, The Economist characterized DeepMind’s 2016 announcement as a publicity stunt, quoting one inside source as saying, “[DeepMind just wants] to have some PR so they can claim some value added within Alphabet.”

This episode encouraged me to look more deeply into the economic promise of AI and the rosy projections made by champions of this technology within the financial sector. This investigation was just the latest twist on a long-standing interest of mine. In the early 1980s, I wrote a doctoral dissertation on the economics of robotics and AI, and throughout my career as a professor and technology consultant I have followed the economic projections for AI, including detailed assessments by consulting organizations such as Accenture, PricewaterhouseCoopers International (PwC), and McKinsey.

These analysts have lately been asserting that AI-enabled technologies will dramatically increase economic output. Accenture claims that by 2035 AI will double growth rates for 12 developed countries and increase labor productivity by as much as a third. PwC claims that AI will add $15.7 trillion to the global economy by 2030, while McKinsey projects a $13 trillion boost by that time.

Other forecasts have focused on specific sectors such as retail, energy, education, and manufacturing. In particular, the McKinsey Global Institute assessed the impact of AI on these four sectors in a 2017 report titled Artificial Intelligence: The New Digital Frontier? and did so for a much longer list of sectors in a 2018 report. In the latter, the institute concluded that AI techniques “have the potential to create between $3.5 trillion and $5.8 trillion in value annually across nine business functions in 19 industries. This constitutes about 40 percent of the overall $9.5 trillion to $15.4 trillion annual impact that could potentially be enabled by all analytical techniques.”

Wow. These are big numbers. If true, they create a powerful incentive for companies to pursue AI—with or without help from McKinsey consultants. But are these predictions really valid?

Many of McKinsey’s estimates were made by extrapolating from claims made by various startups. For instance, its prediction of a 10 percent improvement in energy efficiency in the U.K. and elsewhere was based on the purported success of DeepMind and also of Nest Labs, which became part of Google’s hardware division in 2018. In 2017, Nest, which makes a smart thermostat and other intelligent products for the home, lost $621 million on revenues of $726 million. That fact doesn’t mesh with the notion that Nest and similar companies are contributing, or are poised to contribute, hugely to the world economy.

So I decided to investigate more systematically how well such AI startups were doing. I found that many were proving not nearly as valuable to society as all the hype would suggest. This assertion will certainly rub a lot of people the wrong way, the analysts at McKinsey among them. So I’d like to describe here how I reached my much more pessimistic conclusions.

My investigation of Nest Labs expanded into a search for evidence that smart meters in general are leading to large gains in energy efficiency. In 2016, the British government began a coordinated campaign to install smart meters throughout the country by 2020. And since 2010, the U.S. Department of Energy has invested some $4.5 billion installing more than 15 million smart meters throughout the United States. Curiously enough, all that effort has had little observed impact on energy usage. The U.K. government recently revised downward the amount it figures a smart meter will save each household annually, from £26 to just £11. And the cost of smart meters and their installation has risen, warns the U.K.’s National Audit Office. All of this is not good news for startups banking on the notion that smart thermostats, smart home appliances, and smart meters will lead to great energy savings.

Are other kinds of AI startups having a greater positive effect on the economy? Tech sector analyst CB Insights reports that overall venture capital funding in the United States was $115 billion in 2018 [PDF], of which $9.3 billion went to AI startups. While that’s just 8 percent of the total, it’s still a lot of money, indicating that there are many U.S. startups working on AI (although some overstate the role of AI in their business plans to acquire funding).

To probe further, I gathered data on the U.S. AI startups that have received the most funding and looked at which industries they were hoping to disrupt. The reason for focusing on the United States is that it has the longest history of startup success, so it seems likely that its AI startups are more apt to flourish than those in other countries. My intention was to evaluate whether these U.S. startups had succeeded in shaking up various industries and boosting productivity or whether they promise to do so shortly.

In all, I examined 40 U.S. startups working on AI. These either had valuations greater than $1 billion or had more than $70 million in equity funding. Other than two that had been acquired by public companies, the startups I looked at are all private firms. I found their names and product offerings in lists of leading startups that Crunchbase, Fortune, and Datamation had compiled and published. I then updated my data set with more recent news about these companies (including reports of some shutdowns).

I categorized these 40 startups by the type of product or service they offered. Seventeen are working on what I would call basic computer hardware and software (Wave Computing and OpenAI, respectively, are examples), including cybersecurity (CrowdStrike, for instance). That is, I included in this category companies building tools that are intended to support the computing environment itself.

Making up another large fraction—8 of the 40—are companies that develop software that automates various tasks. The robotic process automation software being developed by Automation Anywhere, UiPath, and WorkFusion, for example, enables higher productivity among professionals and other white-collar workers. Software from Brain Corp. converts manual equipment into intelligent robots. Algolia, Conversica, and Xant offer software to improve sales and marketing. ZipRecruiter targets human resources.

The remaining startups on my list are spread among various industries. Three (Flatiron Health, Freenome, Tempus Labs) work in health care; three more (Avant, Upstart, ZestFinance) are focused on financial technology; two (Indigo, Zymergen) target agriculture or synthetic biology; and three others (Nauto, Nuro, Zoox) involve transportation. There is just one startup each for geospatial analytics (Orbital Insight), patterns of human interaction (Afiniti), photo/video recognition (Vicarious), and music recognition (SoundHound).

Are there indications that these startups will bring large productivity improvements in the near future? In my view, software that automates tasks normally carried out by white-collar workers is probably the most promising of the products and services that AI is being applied to. Similar to past improvements in tools for white-collar professionals, including Excel for accountants and computer-aided design for engineers and architects, these types of AI-based tools have the greatest potential impact on productivity. For instance, there are high hopes for generative design, in which teams of people input constraints and the system proposes specific designs.

But looking at the eight startups on my list that are working on automation tools for white-collar workers, I realized that they are not targeting things that would lead to much higher productivity. Three of them are focused on sales and marketing, which is often a zero-sum game: The company with the best software takes customers from competitors, with only small increases in productivity under certain conditions. Another one of these eight companies is working on human-resource software, whose productivity benefits may be larger than those for sales and marketing but probably not as large as you’d get from improved robotic process automation.

This leaves four startups that do offer such software, which may lead to higher productivity and lower costs. But even among these startups, none currently offers software that helps engineers and architects become more productive through, for example, generative design. Software of this kind isn’t coming from the largest startups, perhaps because there is a strong incumbent, Autodesk, or because the relevant AI is still not developed enough to provide truly useful tools in this area.

The relatively large number of startups I classified as working on basic hardware and software for computing (17) also suggests that productivity improvements are still many years away. Although basic hardware and software are a necessary part of developing higher-level AI-based tools, particularly ones utilizing machine learning, it will take time for the former to enable the latter. I suppose this situation simply reflects that AI is still in its infancy. You certainly get that impression from companies like OpenAI: Although it has received $1 billion in funding (and a great deal of attention), the vagueness of its mission—“Benefiting all of humanity”—suggests that it will take many years yet for specific useful products and services to evolve from this company’s research.

The large number of these startups that are focused on cybersecurity (seven) highlights the increasing threat of security problems, which raise the cost of doing business over the Internet. AI’s ability to address cybersecurity issues will likely make the Internet more safe, secure, and useful. But in the end, this thrust reflects yet higher costs in the future for Internet businesses and will not, to my mind, lead to large productivity improvements within the economy as a whole.

If not from the better software tools it brings, where will AI bring substantial economic gains? Health care, you would think, might benefit greatly from AI. Yet the number of startups on my list that are applying AI to health care (three) seems oddly small if that were really the case. Perhaps this has something to do with IBM’s experience with its Watson AI, which proved a disappointment when it was applied to medicine.

Still, many people remain hopeful that AI-fueled health care startups will fill the gap left by Watson’s failures. Arguing against this is Robert Wachter, who points out that it’s much more difficult to apply computers to health care than to other sectors. His 2015 book, The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age, details the many reasons that health care lags other industries in the application of computers and software. It’s not clear that adding AI to the mix of digital technologies available will do anything to change the situation.

There are also some big applications missing from the list of well-funded AI startups. Housing represents the largest category of consumer expenditures in the United States, but none of these startups are addressing this sector of the economy at all. Transportation is the second largest expenditure, and it is the focus of just three of these startups. One is working on a product that identifies distracted drivers. Another intends to provide automated local deliveries. Only one startup on the list is developing driverless passenger vehicles. That there is only one working on self-driving cars is consistent with the pessimism recently expressed by executives of Ford, General Motors, and Mercedes-Benz about the prospects for driverless vehicles taking to the streets in large numbers anytime soon, even though $35 billion has already been spent on R&D for them.

Admittedly, my assessment of what these 40 companies are doing and whether their offerings will shake up the world over the next decade is subjective. Perhaps it makes better sense to consider a more objective measure of whether these companies are providing value to the world economy: their profitability.

Alas, good financial data is not available on privately held startups, only two of the companies on my list are now part of public companies, and startups often take years to turn a profit (Amazon took seven years). So there isn’t a lot to go on here. Still, there are some broad trends in the tech sector that are quite telling.

The fraction of tech companies that are profitable by the time they go public dropped from 76 percent in 1980 to just 17 percent in 2018, even though the average time to IPO has been rising—it went from 2.8 years in 1998 to 7.7 years in 2016, for example. Also, the losses of some well-known startups that took a long time to go public are huge. For instance, none of the big ride-sharing companies are making a profit, including those in the United States (Uber and Lyft), China, India, and Singapore, with total losses of about $5 billion in 2018. Most bicycle and scooter sharing, office sharing, food delivery, P2P (peer-to-peer) lending, health care insurance and analysis, and other consumer service startups are also losing vast amounts of money, not only in the United States but also in China and India.

Most of the 40 AI startups I examined will probably stay private, at least in the near term. But even if some do go public several years down the road, it’s unlikely they’ll be profitable at that point, if the experience of many other tech companies is any guide. It may take these companies years more to achieve the distinction of making more money than they are spending.

For the reasons I’ve given, it’s very hard for me to feel confident that any of the AI startups I examined will provide the U.S. economy with a big boost over the next decade. Similar pessimism is also starting to emerge from such normally cheery publications as Technology Review and Scientific American. Even the AI community is beginning to express concerns in books such as The AI Delusion and Rebooting AI: Building Artificial Intelligence We Can Trust, concerns that are growing amid the rising hype about many new technologies.

The most promising areas for rapid gains in productivity are likely to be found in robotic process automation for white-collar workers, continuing a trend that has existed for decades. But these improvements will be gradual, just as those for computer-aided design and computer-aided engineering software, spreadsheets, and word processing have been.

Viewed over the span of decades, the value of such software is impressive, bringing huge gains in productivity for engineers, accountants, lawyers, architects, journalists, and others—gains that enabled some of these professionals (particularly engineers) to enrich the global economy in countless ways.

Such advances will no doubt continue with the aid of machine learning and other forms of AI. But they are unlikely to be nearly as disruptive—for companies, for workers, or for the economy as a whole—as many observers have been arguing.

About the Author

Jeffrey Funk retired from the National University of Singapore in 2017, where he taught (among other subjects) a course on the economics of new technology as a professor of technology management. He remains based in Singapore, where he consults in various areas of technology and business.

Simulating Radar Signals for Meaningful Radar Warning Receiver Tests

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/simulating-radar-signals-for-meaningful-radar-warning-receiver-tests

Testing state-of-the-art radar warning equipment with adequate RF and microwave signals is a challenging task for radar engineers. Generating test signals in a lab that are similar to a real-life RF environment is considered a supreme discipline when evaluating advanced radars. A scenario for testing radar-warning receivers (RWR) can easily contain many highly mobile emitters that are capable of mode changes, many interfering signals, and a moving multi-channel receiver. Today, there are intuitive and easy-to-use software tools available that take a lot of work off the hands of radar engineers.

In this seminar, you will learn how to:

  • Create radar scenarios ranging from simple pulses to the most demanding emitter scenarios using PC software
  • Generate complex radar signals with off-the-shelf vector signal generators
  • Increase flexibility during radar simulation by streaming pulse descriptor words (PDWs); a minimal sketch of a PDW record follows this list
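
A pulse descriptor word is a compact record describing a single simulated or received pulse. Exact field sets vary by vendor and tool, so the sketch below is a generic, hypothetical illustration of a PDW stream rather than the format used by any particular signal generator:

```python
# Hypothetical sketch of a pulse descriptor word (PDW) stream.
# Field names and units are illustrative; real PDW formats are vendor-specific.
from dataclasses import dataclass
from typing import Iterator

@dataclass
class PulseDescriptorWord:
    toa_s: float        # time of arrival, seconds
    freq_hz: float      # carrier frequency
    width_s: float      # pulse width
    level_dbm: float    # pulse amplitude

def constant_pri_emitter(start_s: float, pri_s: float, n: int,
                         freq_hz: float, width_s: float,
                         level_dbm: float) -> Iterator[PulseDescriptorWord]:
    """Generate a simple constant-PRI pulse train as a PDW stream."""
    for i in range(n):
        yield PulseDescriptorWord(start_s + i * pri_s, freq_hz, width_s, level_dbm)

# Example: a 1 kHz PRI emitter at 9.4 GHz with 1 microsecond pulses
for pdw in constant_pri_emitter(0.0, 1e-3, 5, 9.4e9, 1e-6, -30.0):
    print(pdw)
```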

3D Electromagnetic Simulation of Antennas Installed on an Aircraft

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/3d-electromagnetic-simulation-of-antennas-installed-on-an-aircraft

Learn about the importance of using a range of methods in the 3D electromagnetic (EM) simulation of installed antenna performance.

3D EM full-wave simulation is common practice for antenna designers and analysts. The variety of antenna topologies and design specifications necessitates a range of simulation algorithms to ensure that an efficient, or even feasible, solution is available.

This range of technology becomes even more important as antennas’ operation in the installed environment is considered. The application of different EM methods, as well as hybridization, is discussed via an example workflow of antennas installed on an aircraft. Techniques for specific installed-antenna scenarios, including radome analysis, co-site interference, and initial field-of-view analysis, are presented using the SIMULIA CST Studio Suite toolset.

Why Use Time-Of-Flight for Distance Measurement?

Post Syndicated from Terabee original https://spectrum.ieee.org/computing/hardware/why-use-timeofflight-for-distance-measurement

The most sophisticated of sensor module technologies are our innovative 3D ToF cameras, which capture depth data across three spatial dimensions. Depth sensing with Time-of-Flight sensors is discreet, completely eye-safe, and designed to work indoors even in low light or complete darkness.

This technology is a powerful enabler for applications such as people counting, digital stock monitoring, and room occupancy monitoring. With fast refresh rates, our 3D ToF sensor modules are also able to distinguish between simple gestures to assist with innovative human-machine interfaces and next-generation contactless controls.

Shifting to ever smarter and more versatile sensor modules

Terabee has just released a new type of indirect Time-of-Flight smart sensor for industrial and logistics applications. The sensor offers 12.5 meter detection capabilities using Time-of-Flight technology. It features a robust IP65 enclosure to ensure dust-proof and water-resistant capabilities in a compact and lightweight form factor (99 grams).

The sensor provides proximity notification with a classic NO/NC switching output (0-24V), while also communicating calibrated distance data via an RS485 interface.

The sensor module features six embedded operating modes that allow for programmable distance thresholds. This makes it easy to set up in the field in a matter of minutes, thanks to teach-in buttons.

Operating modes allow the same sensor module to trigger alarms, detect movements, count objects and check for object alignment.
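
The host-side logic behind modes like these comes down to threshold comparison. As a rough, generic illustration (not Terabee’s firmware, protocol, or API), the sketch below turns a stream of calibrated distance readings into a presence flag and an object count:

```python
# Generic sketch of threshold-based presence detection and object counting,
# mimicking the kind of logic a distance sensor's operating modes provide.
# The threshold and the readings are illustrative, not device-specific.
from typing import Iterable, List, Tuple

def presence_and_count(distances_m: Iterable[float],
                       threshold_m: float) -> Tuple[List[bool], int]:
    """Return per-sample presence flags (True = object closer than threshold)
    and the number of rising edges, i.e. distinct objects entering the zone."""
    presence: List[bool] = []
    count = 0
    prev = False
    for d in distances_m:
        near = d < threshold_m
        if near and not prev:   # rising edge: a new object entered the zone
            count += 1
        presence.append(near)
        prev = near
    return presence, count

# Example: two objects pass under a 2 m threshold
flags, n_objects = presence_and_count([5.0, 1.2, 1.1, 5.0, 0.8, 5.0], 2.0)
print(flags, n_objects)  # [False, True, True, False, True, False] 2
```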

This versatility means that a single sensor can be purchased in bulk and programmed to automate many different control and monitoring processes. This is especially useful in reconfigurable warehouses and factories to save precious setup time.


Next steps

Terabee plans to build on its deep technology expertise in sensing hardware and to develop cutting-edge applications. In the coming 12 months, we will offer further solutions for many markets such as mobile robotics, smart farming, smart city, smart buildings and industrial automation in the form of devices, software and OEM services.

Learn more about Terabee

What is Time-of-Flight (ToF) distance sensing?

Several methods of detection are available for determining the proximity of an object or objects in real-time, each of which is differentiated by a diverse range of underlying hardware. As a result, distance sensors incorporate an extremely broad field of technologies: infrared (IR) triangulation, laser, light-emitting diode Time-of-Flight, ultrasonic, etc.

Various types of signals, or carriers, can be used to apply the Time-of-Flight principle, the most common being sound and light. Sound is mostly used in ultrasonic sensors and sonar.

Active optical Time-of-Flight is a remote-sensing method that estimates the range between a sensor and a target by illuminating the object with a light source and measuring the travel time from the emitter to the object and back to the receiver.

For light carriers, two technologies are available today: direct ToF, based on pulsed light, and indirect ToF, based on continuous-wave modulation.
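
Both variants come down to simple arithmetic: direct ToF multiplies half the round-trip time by the speed of light, while indirect ToF recovers distance from the phase shift of the modulated signal. The sketch below illustrates the two range equations with made-up measurement values; it is not code from any Terabee product.

```python
# Minimal sketch of the two Time-of-Flight range equations.
# The measurement values below are made up for illustration.
import math

C = 299_792_458.0  # speed of light, m/s

def direct_tof_distance(round_trip_s: float) -> float:
    """Direct ToF: distance is half the round-trip time times the speed of light."""
    return C * round_trip_s / 2.0

def indirect_tof_distance(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """Indirect ToF: distance recovered from the phase shift of a continuous-wave
    modulated signal (unambiguous only up to c / (2 * mod_freq_hz))."""
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

print(direct_tof_distance(66.7e-9))             # ~10 m for a ~66.7 ns round trip
print(indirect_tof_distance(math.pi / 2, 10e6)) # ~3.75 m at 10 MHz modulation
```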

Terabee’s unique Time-of-Flight sensing technology

Established in 2012, Terabee has since grown into a diverse organization made up of leading experts in the sensing sector. As a certified CERN Technology partner, we offer an expansive range of sensor modules and solutions for some of the most cutting-edge fields on the market, from robotics to industry 4.0 and IoT applications.

At Terabee, we use light as carriers for our sensors to combine higher update rates, longer range, lower weight, and eye-safety. By carefully tuning emitted infrared light to specific wavelengths, we can ensure less signal disturbance and easier distinction from natural ambient light, resulting in the highest performing distance sensors for their given size and weight.

Terabee ToF sensor modules utilize infrared LEDs which are eye-safe and energy-efficient, while providing broader fields of view than lasers, offering a larger detection area per pixel. Our single 2D infrared sensors have a 2 to 3° field of view (FoV), which provides a more stable data stream for improved consistency of results in many applications.

Over the years we have mastered the production of sensor modules using indirect ToF technology. Thanks to our in-house R&D, Product Development, Production and Logistics departments, we have managed to push the boundaries of this technology for increased precision, longer range and smaller size, offering great value to customers at competitive prices.

We also offer multidirectional sensor module arrays, combining the functionalities of multiple ToF sensors for simultaneous monitoring of multiple directions in real-time, which is especially useful for short-range anti-collision applications. Our unique Hub comes with different operating modes to avoid crosstalk issues and transmit data from up to 8 sensors to a machine.

Screening Technique Found 142 Malicious Apps in Apple’s App Store

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/computing/software/new-screening-technique-reveals-142-malicious-apple-apps


Apple’s App Store is renowned for its security—but even Apple is inadvertently allowing a small handful of malicious apps to sneak through its screening process and onto some people’s phones, new research shows. The good news is that the researchers involved, who published their findings on 31 October in IEEE Transactions on Dependable and Secure Computing, have also uncovered a way to detect these Trojan horses.

Thanks to strict guidelines and bans on certain practices that facilitate the spread of malicious apps, the vast majority of apps in Apple’s App Store are safe. However, some malicious apps are still making their way through the screening process by exhibiting one user interface while harboring a second, malicious user interface.   

“[These apps] display a benign user interface under Apple’s review but reveal their hidden, harmful user interfaces after being installed on users’ devices,” explains Yeonjoon Lee, a researcher at Hanyang University who was involved in the study.

AI and the Future of Work: What to look out for

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/tech-talk/computing/it/ai-and-the-future-of-work-what-to-look-out-for

The robots have come for our jobs. This is the fear that artificial intelligence increasingly stokes with both the tech and policy elite and the public at large. But how worried should we really be?

To consider what impact AI will have on employment, a conference at MIT titled The Future of Work is convening this week, bringing together some leading thinkers and analysts. In advance of the conference, IEEE Spectrum talked with R. David Edelman, director of MIT’s Project on Technology, Economy & National Security, about his take on AI’s coming role.

Edelman says he’s lately seen AI-related worry (mixed with economic anxiety) ramp up in much the same way that cybersecurity worries began ratcheting up ten or fifteen years ago.

“Increasingly, issues like the implications of AI on the future of work are table stakes for understanding our economic future and our ability to deliver prosperity for Americans,” he says. “That’s why you’re seeing broad interest, not just from the Department of Labor, not just from the Council of Economic Advisors, but across the government and, in turn, across society.”

Before coming to MIT, Edelman worked in the White House from 2010-’17 under a number of titles, including Special Assistant to the President on Economic and Technology Policy. Edelman also organizes a related conference in the spring at MIT, the MIT AI Policy Congress.

At this week’s Future of Work conference, though, Edelman says he’ll be keeping his ears open for a number of issues that he thinks are not quite on everyone’s radar yet. But they may be soon.

For starters, Edelman says, mainstream conversations pay too little attention to the boundary between AI-controlled systems and human-controlled ones.

“We need to figure out when people are comfortable handing decisions over to robots, and when they’re not,” he says. “There is a yawning gap between the percentage of Americans who are willing to turn their lives over to autopilot on a plane, and the percentage of Americans willing to turn their lives over to autopilot on a Tesla.”

To be clear, Edelman is not saying this gap reflects an unfounded fear, just that public discussion of self-driving and self-piloting systems is very either/or: Either a self-driving system is seen as 100 percent reliable for all situations, or it’s seen as immature and never to be used.

Second, not enough attention has yet been devoted to the question of metrics we can put in place to understand when an AI system has earned public trust and when it has not.

AI systems are, Edelman points out, only as reliable as the data that created them. So questions about racial and socioeconomic bias in, for instance, AI hiring algorithms are entirely appropriate.

“Claims about AI-driven hiring are careening evermore quickly forward,” Edelman says. “There seems to be a disconnect. I’m eager to know, Are we in a place where we need to pump the brakes on AI-influenced hiring? Or do we have some models in a technical or legal context that can give us the confidence we lack today that these systems won’t create a second source of bias?”

A third area of the conversation that Edelman says deserves more media and policy attention is the question of which industries AI threatens most. While there’s been discussion about jobs that have been put in the AI cross hairs, less discussed, he says, is the bias inherent in the question itself.

A 2017 study by Yale University and the University of Oxford’s Future of Humanity Institute surveyed AI experts for their predictions about, in part, the gravest threats AI poses to jobs and economic prosperity. Edelman points out that the industry professionals surveyed all tipped their hands a bit in the survey: The very last profession AI researchers said would ever be automated was—surprise, surprise—AI researchers.

“Everyone believes that their job will be the last job to be automated, because it’s too complex for machines to possibly master,” Edelman says.

“It’s time we make sure we’re appropriately challenging this consensus that the only sort of preparation we need to do is for the lowest-wage and lowest-skilled jobs,” he says. “Because it may well be that what we think of as good middle-income jobs, maybe even requiring some education, might be displaced or have major skills within them displaced.”

Last is the belief that AI’s effect on industries will be to eliminate jobs, and only to eliminate jobs. In fact, Edelman says, the evidence suggests any such threats could be more nuanced.

AI may indeed eliminate some categories of jobs but may also spawn hybrid jobs that incorporate the new technology into an old format. As was the case with the rollout of electricity at the turn of the 20th century, new fields of study spring up too. Electrical engineers weren’t really needed before electricity became something more than a parlor curiosity, after all. Could AI engineering one day be a field unto its own? (With, surely, its own categories of jobs and academic fields of study and professional membership organizations?)

“We should be doing the hard and technical and often untechnical and unglamorous work of designing systems to earn our trust,” he says. “Humanity has been spectacularly unsuccessful in placing technological genies back in bottles. … We’re at the vanguard of a revolution in teaching technology how to play nice with humans. But that’s gonna be a lot of work. Because it’s got a lot to learn.”

Hey, Data Scientists: Show Your Machine-Learning Work

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/computing/software/hey-data-scientists-show-your-machinelearning-work

In the last two years, the U.S. Food and Drug Administration has approved several machine-learning models to accomplish tasks such as classifying skin cancer and detecting pulmonary embolisms. But for the companies who built those models, what happens if the data scientist who wrote the algorithms leaves the organization?

In many businesses, an individual or a small group of data scientists is responsible for building essential machine-learning models. Historically, they have developed these models on their own laptops through trial and error and passed them along to production when they worked. But in that transfer, the data scientist might not think to pass along all the information about the model’s development. And if the data scientist leaves, that information is lost for good.

That potential loss of information is why experts in data science are calling for machine learning to become a formal, documented process overseen by more people inside an organization.

Companies need to think about what could happen if their data scientists take new jobs, or if a government organization or an important customer asks to see an audit of the algorithm to ensure it is fair and accurate. Not knowing what data was used to train the model and how the data was weighted could lead to a loss of business, bad press, and perhaps regulatory scrutiny, if the model turns out to be biased.

David Aronchick, the head of open-source machine-learning strategy at Microsoft Azure, says companies are realizing that they must run their machine-learning efforts the same way they run their software-development practices. That means encouraging documentation and codevelopment as much as possible.

Microsoft has some ideas about what the documentation process should look like. The process starts with the researcher structuring and organizing the raw data and annotating it appropriately. Not having a documented process at this stage could lead to poorly annotated data that has biases associated with it or is unrelated to the problem the business wants to solve.

Next, during training, a researcher feeds the data to a neural network and tweaks how it weighs various factors to get the desired result. Typically, the researcher is still working alone at this point, but other people should get involved to see how the model is being developed—just in case questions come up later during a compliance review or even a lawsuit.

A neural network is a black box when it comes to understanding how it makes its decisions, but the data, the number of layers, and how the network weights different parameters shouldn’t be mysterious. The researchers should be able to tell how the data was structured and weighted at a glance.
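
One lightweight way to keep that information from becoming mysterious is to write a small metadata record, sometimes called a model card, alongside every trained model. The sketch below is a generic illustration of the idea; the schema, field names, and example values are assumptions, not Microsoft’s tooling or any standard format.

```python
# Minimal sketch: save a "model card" style metadata record next to a trained model
# so the data, architecture, and training choices stay documented and auditable.
# The field names and example values are illustrative assumptions.
import json
from datetime import datetime, timezone

def save_model_metadata(path: str, *, dataset_description: str, dataset_version: str,
                        num_layers: int, hyperparameters: dict, metrics: dict,
                        author: str) -> None:
    record = {
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "author": author,
        "dataset": {"description": dataset_description, "version": dataset_version},
        "architecture": {"num_layers": num_layers},
        "hyperparameters": hyperparameters,
        "evaluation_metrics": metrics,
    }
    with open(path, "w") as f:
        json.dump(record, f, indent=2)

save_model_metadata(
    "model_card.json",
    dataset_description="De-identified chest CT scans annotated for pulmonary embolism",
    dataset_version="2019-10-01",
    num_layers=50,
    hyperparameters={"learning_rate": 1e-4, "batch_size": 32, "epochs": 30},
    metrics={"auroc": 0.91},
    author="data-science-team",
)
```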

It’s also at this point where having good documentation can help make a model more flexible for future use. For example, a shopping site’s model that crunched data specifically for Christmas spending patterns can’t apply that same model to Valentine’s Day spending. Without good documentation, a data scientist would have to essentially rebuild the model, rather than going back and tweaking a few parameters to adjust it for a new holiday.
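
That kind of flexibility can be as simple as making the seasonal window an explicit, documented parameter instead of something baked into the data-preparation code. Here is a minimal sketch under that assumption; the dates, field names, and filtering logic are hypothetical.

```python
# Sketch: keep the seasonal window as a documented parameter so retargeting the
# model from Christmas to Valentine's Day is a config change, not a rebuild.
# Dates, field names, and the filtering logic are illustrative assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class SeasonalTrainingConfig:
    name: str
    window_start: date
    window_end: date

CHRISTMAS_2018 = SeasonalTrainingConfig("christmas", date(2018, 12, 1), date(2018, 12, 26))
VALENTINES_2019 = SeasonalTrainingConfig("valentines", date(2019, 2, 1), date(2019, 2, 15))

def select_training_rows(rows, config: SeasonalTrainingConfig):
    """Filter (purchase_date, amount) rows to the configured seasonal window."""
    return [r for r in rows if config.window_start <= r[0] <= config.window_end]

rows = [(date(2018, 12, 20), 59.99), (date(2019, 2, 13), 24.50)]
print(select_training_rows(rows, CHRISTMAS_2018))   # Christmas-window rows only
print(select_training_rows(rows, VALENTINES_2019))  # Valentine's-window rows only
```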

The last step in the process is actually deploying the model. Historically, only at this point would other people get involved and acquaint themselves with the data scientist’s hard work. Without good documentation, they’re sure to get headaches trying to make sense of it. But now that data is so essential to so many businesses—not to mention the need to adapt quickly—it’s time for companies to build machine-learning processes that rival the quality of their software-development processes.

This article appears in the December 2019 print issue as “Show Your Machine-Learning Work.”

Get Keysight’s Basic Instruments Flyer Featuring PathWave BenchVue Software

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/get-keysights-basic-instruments-flyer-featuring-pathwave-benchvue-software

As the complexity of today’s bench setups increases, so does the difficulty of collecting and correlating data from multiple pieces of test equipment. This latest Basic Instruments flyer showcases Keysight’s PathWave BenchVue software. See how to quickly move through your test development phase and get more from your instruments!


Cerebras Unveils First Installation of Its AI Supercomputer at Argonne National Labs

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/computing/hardware/cerebras-unveils-ai-supercomputer-argonne-national-lab-first-installation

At Supercomputing 2019 in Denver, Colo., Cerebras Systems unveiled the computer powered by the world’s biggest chip. Cerebras says the computer, the CS-1, has the equivalent machine learning capabilities of hundreds of racks’ worth of GPU-based computers consuming hundreds of kilowatts, but it takes up only one-third of a standard rack and consumes about 17 kW. Argonne National Labs, future home of what’s expected to be the United States’ first exascale supercomputer, says it has already deployed a CS-1. Argonne is one of two announced U.S. national laboratory customers for Cerebras, the other being Lawrence Livermore National Laboratory.

Developing Purpose-Built & Turnkey RF Applications

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/developing-purposebuilt-turnkey-rf-applications


This ThinkRF white paper will explore how systems integrators (SIs) can develop a purpose-built, turnkey RF application that lets end-users improve their business and understand the spectrum environment.


Save Time with Ready-To-Use Measurements

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/save-time-with-readytouse-measurements

The right measurement applications can increase the functionality of your signal analyzer and reduce your time to insight with ready-to-use measurements, built-in results displays, and standards conformance tests. They can also help ensure consistent measurement results across different teams and your design cycle. This efficiency means you can spend less time setting up measurements and more time evaluating and improving your designs. Learn about general-purpose or application-specific measurements that can help save you time and maintain measurement consistency in this eBook.


Register for Our Application Note “Tips and Tricks on How to Verify Control Loop Stability”

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/register-for-our-application-note-tips-and-tricks-on-how-to-verify-control-loop-stability

The Application Note explains the main measurement concept, guides the user through the measurements, and covers the main topics in a practical manner. Wherever possible, it points out where the user should pay particular attention.

FedEx Ground Uses Virtual Reality to Train and Retain Package Handlers

Post Syndicated from Michelle V. Rafter original https://spectrum.ieee.org/tech-talk/computing/software/fedex-ground-uses-vr-to-train-and-retain-package-handlers

Package handlers who work on FedEx Ground loading docks load and unload 8.5 million packages a day. The volume and the physical nature of the work make it a tough job—tougher than many new hires realize until they do it. Some quit almost immediately, according to Denise Abbott, FedEx Ground’s vice president of human resources.

So, when FedEx Corp.’s truck package delivery division evaluated how best to incorporate virtual reality into employee training, teaching newly hired package handlers what to expect on the job and how to stay safe doing it quickly rose to the top of the list.

“It allows us to bring an immersive learning technology into the classroom so people can practice before they step foot on a dock,” said Jefferson Welch, human resource director for FedEx Ground University, the division’s training arm. He and Abbott talked about the company’s foray into VR-based training during a presentation at the recent HR Technology Conference in Las Vegas.

The Latest Techniques in Power Supply Test – Get the App Note

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/the-latest-techniques-in-power-supply-test-get-the-app-note

DC Electronic Loads are becoming more popular in test systems as more electronic devices convert or store energy. Learn about Keysight’s next-generation electronic loads, allowing for a complete DC power conversion solution on the popular N6700 modular power system.


At Domino’s Biggest Franchisee, a Chatbot Named “Dottie” Speeds Up Hiring

Post Syndicated from Michelle V. Rafter original https://spectrum.ieee.org/tech-talk/computing/software/at-dominos-biggest-franchisee-a-chatbot-named-dottie-speeds-up-hiring

At RPM Pizza, a chatbot nicknamed “Dottie” has made hiring almost as fast as delivering pizzas.

RPM adopted the text message-based chatbot along with live chat and text-based job applications to speed up multiple aspects of the hiring process, including identifying promising job candidates and scheduling initial interviews.

It makes sense to use texting for hiring, said Merrin Mueller, RPM’s head of people and marketing, during a presentation on the chatbot at the recent HR Technology Conference in Las Vegas. Job hunters respond to a text faster than an email. At a time when U.S. unemployment is low, competition for hourly workers is fierce, and company recruiters are overwhelmed, you have to act fast.

“People who apply here are applying at Taco Bell and McDonald’s too, and if we don’t get to them right away and hire them faster, they’ve already been offered a job somewhere else,” Mueller said.