Tag Archives: computing

Gate Drive Measurement Considerations

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/gate-drive-measurement-considerations

One of the primary purposes of a gate driver is to enable power switches to turn on and off faster, improving rise and fall times. Faster switching enables higher efficiency and higher power density by reducing switching losses in the power stage, at the cost of higher slew rates. However, as slew rates increase, so does measurement and characterization uncertainty.

Effective measurement and characterization must account for:

  • Proper gate driver design, including accurate timing (propagation delay, skew, pulse-width distortion, and jitter), controllable gate rise and fall times, and robustness against noise sources (input glitches and CMTI)
  • Minimized noise coupling
  • Minimized parasitic inductance

The shift from silicon-based power designs to wide-bandgap power designs makes measurement and characterization a greater challenge. The high slew rates of SiC and GaN devices present designers with hazards such as large overshoots and ringing, and potentially large unwanted voltage transients that can cause spurious switching of the MOSFETs.
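To make the hazard concrete, here is a rough back-of-the-envelope sketch, not taken from the whitepaper and using assumed component values, of how a fast current edge across a few nanohenries of parasitic inductance, or a fast drain-voltage edge coupling through the Miller capacitance, can produce transients large enough to disturb a gate:

```python
# Back-of-the-envelope illustration (not from the whitepaper) of why high
# slew rates create hazards; all values below are assumed, representative ones.

L_par = 10e-9        # parasitic loop inductance, henries (assumed 10 nH)
di_dt = 1e9          # current slew rate, amps per second (assumed 1 A/ns)
v_spike = L_par * di_dt
print(f"Voltage across 10 nH at 1 A/ns: {v_spike:.1f} V")    # 10.0 V transient

C_gd = 20e-12        # gate-drain (Miller) capacitance, farads (assumed 20 pF)
dv_dt = 50e9         # drain voltage slew rate, volts per second (assumed 50 V/ns)
R_g = 5.0            # gate resistance, ohms (assumed)
v_gs_induced = C_gd * dv_dt * R_g   # voltage the Miller current develops across R_g
print(f"Induced gate voltage: {v_gs_induced:.1f} V")         # 5.0 V, near many thresholds
```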

Researchers Can Make AI Forget You

Post Syndicated from Matthew Hutson original https://spectrum.ieee.org/tech-talk/computing/software/researchers-can-make-ai-forget-you

Whether you know it or not, you’re feeding artificial intelligence algorithms. Companies, governments, and universities around the world train machine learning software on unsuspecting citizens’ medical records, shopping history, and social media use. Sometimes the goal is to draw scientific insights, and other times it’s to keep tabs on suspicious individuals. Even AI models that abstract from data to draw conclusions about people in general can be prodded in such a way that individual records fed into them can be reconstructed. Anonymity dissolves.

To restore some amount of privacy, recent legislation such as Europe’s General Data Protection Regulation and the California Consumer Privacy Act provides a right to be forgotten. But making a trained AI model forget you often requires retraining it from scratch with all the data but yours, a process that can take weeks of computation.

Two new papers offer ways to delete records from AI models more efficiently, possibly saving megawatt-hours of energy and making compliance more attractive. “It seemed like we needed some new algorithms to make it easy for companies to actually cooperate, so they wouldn’t have an excuse to not follow these rules,” said Melody Guan, a computer scientist at Stanford and co-author of the first paper.
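Neither paper's algorithm is described here, but one general strategy for cheap deletion is to partition the training data so that forgetting a record only requires retraining the shard that contained it. The sketch below illustrates that idea on toy data; scikit-learn, the shard count, and the model choice are my assumptions for illustration, not details from the papers.

```python
# Minimal sketch of one general "efficient deletion" strategy: train one model
# per data shard, aggregate by voting, and retrain only the affected shard when
# a record must be forgotten. Illustrative only; not the papers' exact methods.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # toy labels

n_shards = 5
shard_ids = np.arange(len(X)) % n_shards          # fixed record-to-shard mapping
models = {}

def train_shard(s):
    mask = shard_ids == s
    models[s] = LogisticRegression().fit(X[mask], y[mask])

for s in range(n_shards):
    train_shard(s)

def predict(x):
    votes = [m.predict(x.reshape(1, -1))[0] for m in models.values()]
    return int(round(np.mean(votes)))

def forget(record_index):
    """Delete one training record and retrain only its shard."""
    global X, y, shard_ids
    s = shard_ids[record_index]
    keep = np.ones(len(X), dtype=bool)
    keep[record_index] = False
    X, y, shard_ids = X[keep], y[keep], shard_ids[keep]
    train_shard(s)                                # roughly 1/n_shards of a full retrain

print("prediction:", predict(X[0]))
forget(0)                                         # forgetting touches one shard, not all five
```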

How to Improve Security Visibility and Detection-Response Operations in AWS

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/how-to-improve-security-visibility-and-detectionresponse-operations-in-aws

Security teams often handle a large stream of alerts, creating noise and impairing their ability to determine which incidents to prioritize. By aggregating security information from various sources and automating incident response, organizations can increase visibility into their environment and focus on the most important potential threats. In this webinar, SANS and AWS Marketplace explore how organizations can leverage solutions to create more signal and less noise for actionable responses, enhancing and accelerating security operations.

Register today to be among the first to receive the associated whitepaper written by SANS Analyst and Senior Instructor Dave Shackleford.

Attendees will learn to:

  • Use continuous monitoring to gain insight into events and behaviors that move into and through your cloud environment
  • Integrate security incident and event management (SIEM) solutions to enhance detection and investigation of potential threats
  • Leverage security orchestration, automation, and response (SOAR) technologies to auto-remediate events and reduce noise in your environment

Neural Networks Can Drive Virtual Racecars Without Learning

Post Syndicated from Matthew Hutson original https://spectrum.ieee.org/tech-talk/computing/software/neural-networks-ai-artificial-intelligence-drives-virtual-racecars-without-learning

Animals are born with innate abilities and predispositions. Horses can walk within hours of birth, ducks can swim soon after hatching, and human infants are automatically attracted to faces. Brains have evolved to take on the world with little or no experience, and many researchers would like to recreate such natural abilities in artificial intelligence.

New research finds that artificial neural networks can evolve to perform tasks without learning. The technique could lead to AI that is much more adept at a wide variety of tasks such as labeling photos or driving a car.

Artificial neural networks are arrangements of small computing elements (“neurons”) that pass information between them. The networks typically learn to perform tasks like playing games or recognizing images by adjusting the “weights” or strengths of the connections between neurons. A technique called neural architecture search tries lots of network shapes and sizes to find ones that learn better for a specific purpose.
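As a toy illustration of the search idea, the sketch below samples random network shapes, scores each on a simple task using untrained (random) hidden weights, and keeps the best shape. It is not the method from the research described above; the task, the scoring rule, and the NumPy implementation are assumptions for illustration only.

```python
# Toy sketch of architecture search: sample random network shapes, score each
# with untrained (random) hidden weights on a simple task, keep the best shape.
# Illustrative only; not the algorithm used in the research described above.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(x)                                   # toy target function

def score_architecture(hidden_sizes):
    """Mean squared error using random hidden weights; only the output layer is solved."""
    h = x
    for width in hidden_sizes:                  # random, untrained hidden layers
        W = rng.normal(scale=1.0, size=(h.shape[1], width))
        b = rng.normal(scale=1.0, size=width)
        h = np.tanh(h @ W + b)
    w_out, *_ = np.linalg.lstsq(h, y, rcond=None)   # fit the output layer only
    return float(np.mean((h @ w_out - y) ** 2))

best = None
for _ in range(50):                             # try 50 random shapes
    depth = int(rng.integers(1, 4))
    shape = tuple(int(rng.integers(2, 64)) for _ in range(depth))
    err = score_architecture(shape)
    if best is None or err < best[1]:
        best = (shape, err)

print("best shape:", best[0], "error:", round(best[1], 5))
```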

Will China Attain Exascale Supercomputing in 2020?

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/computing/hardware/will-china-attain-exascale-supercomputing-in-2020


To the supercomputer world, what separates “peta” from “exa” is more than just three orders of magnitude.

As measured in floating-point operations per second (a.k.a. FLOPS), one petaflop (10¹⁵ FLOPS) falls in the middle of what might be called commodity high-performance computing (HPC). In this domain, hardware is hardware, and what matters most is increasing processing speed as cost-effectively as possible.

Now the United States, China, Japan, and the European Union are all striving to reach the exaflop (10¹⁸ FLOPS) scale. The Chinese have claimed they will hit that mark in 2020. But they haven’t said so lately: Attempts to contact officials at the National Supercomputer Center, in Guangzhou; Tsinghua University, in Beijing; and Xi’an Jiaotong University yielded either no response or no comment.

It’s a fine question of when exactly the exascale barrier is deemed to have been broken—when a computer’s theoretical peak performance exceeds 1 exaflop or when its maximum real-world compute speed hits that mark. Indeed, the sheer volume of compute power is less important than it used to be.

“Now it’s more about customization, special-purpose systems,” says Bob Sorensen, vice president of research and technology with the HPC consulting firm Hyperion Research. “We’re starting to see almost a trend towards specialization of HPC hardware, as opposed to a drive towards a one-size-fits-all commodity” approach.

The United States’ exascale computing efforts, involving three separate machines, total US $1.8 billion for the hardware alone, says Jack Dongarra, a professor of electrical engineering and computer science at the University of Tennessee. He says exascale algorithms and applications may cost another $1.8 billion to develop.

And as for the electric bill, it’s still unclear exactly how many megawatts one of these machines might gulp down. One recent ballpark estimate puts the power consumption of a projected Chinese exaflop system at 65 megawatts. If the machine ran continuously for one year, the electricity bill alone would come to about $60 million.
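The arithmetic behind that ballpark is easy to check. Assuming an electricity price of roughly 10 cents per kilowatt-hour, which is my assumption since the article does not state a rate:

```python
# Reproducing the article's ballpark: a 65 MW machine running all year,
# assuming roughly $0.10 per kilowatt-hour (the rate is my assumption).
power_mw = 65
hours_per_year = 365 * 24                     # 8,760 hours
energy_mwh = power_mw * hours_per_year        # 569,400 MWh
price_per_kwh = 0.10                          # USD, assumed
cost = energy_mwh * 1000 * price_per_kwh
print(f"{energy_mwh:,} MWh -> ${cost / 1e6:.0f} million per year")   # roughly $57 million
```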

Dongarra says he’s skeptical that any system, in China or anywhere else, will achieve one sustained exaflop anytime before 2021, or possibly even 2022. In the United States, he says, two exascale machines will be used for public research and development, including seismic analysis, weather and climate modeling, and AI research. The third will be reserved for national-security research, such as simulating nuclear weapons.

“The first one that’ll be deployed will be at Argonne [National Laboratory, near Chicago], an open-science lab. That goes by the name Aurora or, sometimes, A21,” Dongarra says. It will have Intel processors, with Cray developing the interconnecting fabric between the more than 200 cabinets projected to house the supercomputer. A21’s architecture will reportedly include Intel’s Optane memory modules, which represent a hybrid of DRAM and flash memory. Peak capacity for the machine should reach 1 exaflop when it’s deployed in 2021.

The other U.S. open-science machine, at Oak Ridge National Laboratory, in Tennessee, will be called Frontier and is projected to launch later in 2021 with a peak capacity in the neighborhood of 1.5 exaflops. Its AMD processors will be dispersed in more than 100 cabinets, with four graphics processing units for each CPU.

The third, El Capitan, will be operated out of Lawrence Livermore National Laboratory, in California. Its peak capacity is also projected to come in at 1.5 exaflops. Launching sometime in 2022, El Capitan will be restricted to users in the national security field.

China’s three announced exascale projects, Dongarra says, also each have their own configurations and hardware. In part because of President Trump’s China trade war, China will be developing its own processors and high-speed interconnects.

“China is very aggressive in high-performance computing,” Dongarra notes. “Back in 2001, the Top 500 list had no Chinese machines. Today they’re dominant.” As of June 2019, China had 219 of the world’s 500 fastest supercomputers, whereas the United States had 116. (Tally together the number of petaflops in each machine and the numbers come out a little different. In terms of performance, the United States has 38 percent of the world’s HPC resources, whereas China has 30 percent.)

China’s three exascale systems are all built around CPUs manufactured in China. They are to be based at the National University of Defense Technology, using a yet-to-be-announced CPU; the National Research Center of Parallel Computer Engineering and Technology, using a nonaccelerated ARM-based CPU; and the Chinese HPC company Sugon, using an AMD-licensed x86 with accelerators from the Chinese company HyGon.

Japan’s future exascale machine, Fugaku, is being jointly developed by Fujitsu and Riken, using ARM architecture. And not to be left out, the EU also has exascale projects in the works, the most interesting of which centers on a European processor initiative, which Dongarra speculates may use the open-source RISC-V architecture.

All four of the major players—China, the United States, Japan, and the EU—have gone all-in on building out their own CPU and accelerator technologies, Sorensen says. “It’s a rebirth of interesting architectures,” he says. “There’s lots of innovation out there.”

Building a Quantum Computer From Off-the-Shelf Parts

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/tech-talk/computing/hardware/scalable-qubits-quantum-computer-news-silicon-wafer

A new technique for fabricating quantum bits in silicon carbide wafers could provide a scalable platform for future quantum computers. The quantum bits, to the surprise of the researchers, can even be fabricated from a commercial chip built for conventional computing.

The recipe was surprisingly simple: Buy a commercially available wafer of silicon carbide (a temperature-robust semiconductor used in electric vehicles, LED lights, solar cells, and 5G gear) and shoot an electron beam at it. The beam creates a defect in the wafer that behaves, essentially, as a single electron spin that can be manipulated electrically, magnetically, or optically.

Breaking Down Barriers in FPGA Engineering Speeds up Development

Post Syndicated from Digilent original https://spectrum.ieee.org/computing/networks/breaking-down-barriers-in-fpga-engineering-speeds-up-development

It’s hard to reinvent the wheel—they’re round and they spin. But you can make them more efficient, faster, and easier for anyone to use. This is essentially what Digilent Inc. has done with its new Eclypse Z7 Field-Programmable Gate Array (FPGA) board. The Eclypse Z7 is the first host board of Digilent’s new Eclypse platform, which aims to increase productivity and accelerate FPGA system design.

To accomplish this, Digilent has taken the design and development of FPGAs out of the silo of highly specialized digital design engineers and embedded systems engineers and opened it up to a much broader group of people who know common programming languages, like C and C++. Additional languages like Python and LabVIEW are expected to be supported in future updates.

FPGAs have been a key tool for engineers who need to tailor a circuit exactly to the demands of a particular application. To program these FPGAs, specialized development tools are needed. Typically, the tool chain used for Xilinx FPGAs is a programming environment known as Vivado, provided by Xilinx, one of the original developers of FPGAs.

“FPGA development environments like Vivado really require a very niche understanding and knowledge,” said Steve Johnson, president of Digilent. “As a result, they are relegated to a pretty small cadre of engineers.”

Johnson added, “Our intent with the Eclypse Z7 is to empower a much larger number of engineers and even scientists so that they can harness the power of these FPGAs and Systems on a Chip (SoCs), which typically would be out of their reach. We want to broaden the customer base and empower a much larger group of people.”

Digilent didn’t just target relatively simple SoC devices. Instead, the company jumped into the deep end of the FPGA pool and built the Eclypse Z7 around the Zynq 7020 FPGA SoC from Xilinx, a fairly complex combination of a dual-core ARM processor and FPGA fabric. This complex part presents even more of a challenge for most engineers.

To overcome this complexity, Johnson explains, Digilent abstracted away the system-level development of Input/Output (I/O) modules by adding a software layer and FPGA “blocks” that serve as a kind of driver.

“You can almost think of it as when you plug a printer into a computer, you don’t need to know all of the details of how that printer works,” explained Johnson. “We’re essentially providing a low-level driver for each of these I/O modules so that someone can just plug it in.”

With this capability, a user can configure an I/O device that they just plugged in and start acquiring data from it, according to Johnson. Typically, this would require weeks of work poring over data sheets and learning the registers of the devices you’ve plugged in. You would need to learn how to communicate with each device at a very low level so that it was properly configured to move data back and forth. With the new Eclypse Z7, all of that trouble has been taken off the table.
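The article describes the driver layer only in general terms. Purely as a hypothetical sketch of the idea, and not Digilent's actual API, a plug-and-play I/O module abstraction might look something like this:

```python
# Hypothetical sketch of the "low-level driver per I/O module" idea described
# above. None of these names are Digilent's actual API; they only illustrate
# how a driver layer can hide register-level configuration from the user.
class FastADCModule:
    """Hypothetical plug-in analog-input module with its driver baked in."""

    def configure(self, sample_rate_hz=100_000_000):
        # A real driver would program clocking and gain registers in the
        # FPGA fabric here; the sketch just records the setting.
        self.sample_rate_hz = sample_rate_hz
        return self

    def read(self, n_samples):
        # Placeholder for a DMA transfer out of the FPGA into host memory.
        return [0.0] * n_samples

# The user "just plugs it in": no data sheets or register maps required.
adc = FastADCModule().configure(sample_rate_hz=50_000_000)
samples = adc.read(1024)
print(len(samples), "samples at", adc.sample_rate_hz, "Hz")
```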


Beyond the software element of the new platform, there’s a focus on high-speed analog and digital I/O. This is partly due to Digilent’s alignment with its parent company, National Instruments, and its emphasis on automated measurement. High-speed analog and digital I/O is expected to be a key feature for applications where FPGAs and SoCs are really powerful: Edge Computing.

In Edge Computing environments such as predictive maintenance, vibration- and signal-monitoring applications need high-speed analog inputs and outputs and a lot of processing power near the sensor.

The capabilities of these FPGA and SoC devices in Edge Computing could lead to applying machine learning or artificial intelligence on these devices, ushering in a convergence between two important trends, Artificial Intelligence (AI) and the Internet of Things (IoT), that is coming to be known as the Artificial Intelligence of Things (AIoT), according to Johnson.

Currently, the FPGA and SoC platforms used in these devices can take advantage of 4G networks to enable Edge devices like those envisioned in AIoT scenarios. But this capability will be greatly enhanced when 5G networks are mature. At that time, Johnson envisions you’ll just have a 5G module that you can plug into a USB or miniPCIe port on an Edge device.

“These SoCs—these ARM processors with the FPGAs attached to them—are exactly the right kind of architecture to do this low-power, small form factor, Edge Computing,” said Johnson. “The analog input that we’re focusing on is intended to both sense the real world and then process and deliver that information. So they’re meant exactly for that kind of application.”

This move by Digilent to empower a greater spectrum of engineers and scientists is in line with its overall aim of helping customers create, prototype, and develop small embedded systems, whether they are medical devices or edge computing devices.

Improving Codec Execution With ARM Cortex-M Processors

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/improving-codec-execution-with-arm-cortexm-processors

Digital Signal Processing (DSP) has traditionally required an expensive dedicated DSP processor. While DSP routines have been implemented on microcontrollers using fixed-point math libraries for decades, those software libraries consume far more processing cycles than a processor capable of executing DSP instructions natively.

In this paper, we will explore how to speed up DSP codecs using the DSP extensions built into Arm Cortex-M processors.

You will learn:

  • The technology trends moving data processing to the edge of the network to enable more compute performance
  • What the DSP extensions on Arm Cortex-M processors are and the benefits they bring, including cost savings and decreased system-level complexity
  • How to convert analog circuits to software using modeling software such as MathWorks MATLAB or Advanced Solutions Nederlands (ASN) filter designer
  • How to utilize the floating-point unit (FPU) with Cortex-M to improve performance
  • How to use the open-source CMSIS-DSP software library to create IIR and FIR filters in addition to calculating a Fast Fourier Transform (FFT)
  • How to implement the IIR filter that utilizes CMSIS-DSP using the Advanced Solutions Nederlands (ASN) designer
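On a Cortex-M device, the filter and FFT steps listed above would be carried out with CMSIS-DSP routines in C. As a hedged, PC-side illustration of the same signal chain, the sketch below designs a biquad IIR filter, runs it over a noisy tone, and checks the spectrum. NumPy and SciPy stand in here for the design tools and the on-device library; that substitution is my assumption, not something the paper prescribes.

```python
# PC-side illustration of the whitepaper's signal chain: design an IIR filter,
# filter a noisy signal, then inspect the spectrum with an FFT. On a Cortex-M
# the equivalent steps would use CMSIS-DSP routines; NumPy/SciPy here are my
# stand-ins for the design tools and runtime, not the library discussed above.
import numpy as np
from scipy import signal

fs = 8000.0                                     # sample rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 200 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)

# 4th-order Butterworth low-pass IIR, expressed as second-order (biquad) sections,
# the same cascade form used by biquad filter routines on embedded targets.
sos = signal.butter(4, 500, btype="lowpass", fs=fs, output="sos")
y = signal.sosfilt(sos, x)

# FFT of the filtered signal to confirm the 200 Hz tone survives the filter.
spectrum = np.abs(np.fft.rfft(y)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
print("peak at", freqs[np.argmax(spectrum[1:]) + 1], "Hz")   # ~200 Hz
```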

Monitoring Your Network with Time Series

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/monitoring-your-network-with-time-series

Networks play a fundamental role in the adoption and growth of Internet applications. Penetrating enterprises, homes, factories, and even cities, networks sustain modern society. In this webinar, Daniella Pontes of InfluxData will explore the flexibility and potential use cases of open source and time series databases.

In this webinar you will:

-Learn how to use a time series database platform to monitor your network

-Understand the value of using open source tools

-Gain insight into what key aspects of network monitoring you should focus on
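As a minimal sketch of the kind of workflow the webinar covers, the snippet below writes one network-traffic sample into an InfluxDB 2.x bucket using the influxdb-client Python package; the URL, token, organization, bucket, and metric names are placeholders assumed for illustration.

```python
# Minimal sketch of writing a network metric into InfluxDB 2.x with the
# influxdb-client package; the URL, token, org, and bucket are placeholders.
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
write_api = client.write_api(write_options=SYNCHRONOUS)

point = (
    Point("interface_traffic")            # measurement name (the time series)
    .tag("host", "edge-router-1")         # tags identify the device and interface
    .tag("interface", "eth0")
    .field("rx_bytes", 123456)            # fields hold the sampled values
    .field("tx_bytes", 654321)
)
write_api.write(bucket="network-monitoring", record=point)
client.close()
```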

Intel Unveils Cryogenic Chips to Speed Quantum Computing

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/computing/hardware/intel-unveils-cryogenic-chips-to-speed-quantum-computing

At the IEEE International Electron Devices Meeting in San Francisco this week, Intel is unveiling a cryogenic chip designed to accelerate the development of the quantum computers it is building with Delft University’s QuTech research group. The chip, called Horse Ridge for one of the coldest spots in Oregon, uses specially designed transistors to provide microwave control signals to Intel’s quantum computing chips.

The quantum computer chips in development at IBM, Google, Intel, and other firms today operate at fractions of a degree above absolute zero and must be kept inside a dilution refrigerator. However, as companies have managed to increase the number of quantum bits (qubits) in the chips, and therefore the chips’ capacity to compute, they’ve begun to run into a problem. Each qubit needs its own set of wires leading to control and readout systems outside of the cryogenic container. It’s already getting crowded and as quantum computers continue to scale—Intel’s is up to 49 qubits now—there soon won’t be enough room for the wires.   

5G Small-Cell Base Station Antenna Array Design

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/5g-smallcell-base-station-antenna-array-design

In this eSeminar we will explore state-of-the-art simulation approaches for antenna array design, with a particular focus on 5G small-cell base station antennas.

Realizing the 5G promise of reliably providing high-data-rate connections to many users simultaneously requires new design approaches for base station antennas. In particular, antenna arrays will increasingly be used to enable agile beam forming and massive MIMO technology, both required to provide good service in dynamic, complex urban environments with many users. The array design capabilities of SIMULIA CST Studio Suite have grown dramatically in recent years and are relied on by many companies around the world.
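As a brief aside on what beam forming with an array involves, the sketch below computes the textbook array factor of a uniform linear array and steers its main beam by applying a progressive phase shift across the elements. The element count, spacing, and steering angle are assumed for illustration and are not taken from the webinar.

```python
# Textbook array-factor sketch for a uniform linear array, steering the main
# beam with a progressive phase shift across elements. Parameters (8 elements,
# half-wavelength spacing, 30 degree steer) are assumed, not from the webinar.
import numpy as np

n_elements = 8
d_over_lambda = 0.5                       # element spacing in wavelengths
steer_deg = 30.0                          # desired main-beam direction

theta = np.radians(np.linspace(-90, 90, 721))
k_d = 2 * np.pi * d_over_lambda
beta = -k_d * np.sin(np.radians(steer_deg))      # progressive phase per element

n = np.arange(n_elements)
af = np.array([np.abs(np.sum(np.exp(1j * n * (k_d * np.sin(th) + beta)))) for th in theta])
af_db = 20 * np.log10(af / af.max())

peak_deg = np.degrees(theta[np.argmax(af_db)])
print(f"main beam steered to about {peak_deg:.1f} degrees")   # ~30 degrees
```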

Join us to learn more about how simulation can help you with your array design, as we answer the following questions.

  • How can antenna elements be designed and evaluated in terms of their suitability as an array element?
  • How can full arrays with real radomes be simulated accurately much more quickly than before using the new simulation-by-zones approach?
  • How can interference between multiple co-located arrays be evaluated using advanced hybrid simulation techniques?
  • Finally, how can the coverage performance of base station arrays in complex urban or indoor environments be predicted?

AI and Economic Productivity: Expect Evolution, Not Revolution

Post Syndicated from Jeffrey Funk original https://spectrum.ieee.org/computing/software/ai-and-economic-productivity-expect-evolution-not-revolution

In 2016, London-based DeepMind Technologies, a subsidiary of Alphabet (which is also the parent company of Google), startled industry watchers when it reported that the application of artificial intelligence had reduced the cooling bill at a Google data center by a whopping 40 percent. What’s more, we learned that year, DeepMind was starting to work with the National Grid in the United Kingdom to save energy throughout the country using deep learning to optimize the flow of electricity.

Could AI really slash energy usage so profoundly? In the three years that have passed, I’ve searched for articles on the application of AI to other data centers but find no evidence of important gains. What’s more, DeepMind’s talks with the National Grid about energy have broken down. And the financial results for DeepMind certainly don’t suggest that customers are lining up for its services: For 2018, the company reported losses of US $571 million on revenues of $125 million, up from losses of $366 million in 2017. Last April, The Economist characterized DeepMind’s 2016 announcement as a publicity stunt, quoting one inside source as saying, “[DeepMind just wants] to have some PR so they can claim some value added within Alphabet.”

This episode encouraged me to look more deeply into the economic promise of AI and the rosy projections made by champions of this technology within the financial sector. This investigation was just the latest twist on a long-standing interest of mine. In the early 1980s, I wrote a doctoral dissertation on the economics of robotics and AI, and throughout my career as a professor and technology consultant I have followed the economic projections for AI, including detailed assessments by consulting organizations such as Accenture, PricewaterhouseCoopers International (PwC), and McKinsey.

These analysts have lately been asserting that AI-enabled technologies will dramatically increase economic output. Accenture claims that by 2035 AI will double growth rates for 12 developed countries and increase labor productivity by as much as a third. PwC claims that AI will add $15.7 trillion to the global economy by 2030, while McKinsey projects a $13 trillion boost by that time.

Other forecasts have focused on specific sectors such as retail, energy, education, and manufacturing. In particular, the McKinsey Global Institute assessed the impact of AI on these four sectors in a 2017 report titled Artificial Intelligence: The New Digital Frontier? and did so for a much longer list of sectors in a 2018 report. In the latter, the institute concluded that AI techniques “have the potential to create between $3.5 trillion and $5.8 trillion in value annually across nine business functions in 19 industries. This constitutes about 40 percent of the overall $9.5 trillion to $15.4 trillion annual impact that could potentially be enabled by all analytical techniques.”

Wow. These are big numbers. If true, they create a powerful incentive for companies to pursue AI—with or without help from McKinsey consultants. But are these predictions really valid?

Many of McKinsey’s estimates were made by extrapolating from claims made by various startups. For instance, its prediction of a 10 percent improvement in energy efficiency in the U.K. and elsewhere was based on the purported success of DeepMind and also of Nest Labs, which became part of Google’s hardware division in 2018. In 2017, Nest, which makes a smart thermostat and other intelligent products for the home, lost $621 million on revenues of $726 million. That fact doesn’t mesh with the notion that Nest and similar companies are contributing, or are poised to contribute, hugely to the world economy.

So I decided to investigate more systematically how well such AI startups were doing. I found that many were proving not nearly as valuable to society as all the hype would suggest. This assertion will certainly rub a lot of people the wrong way, the analysts at McKinsey among them. So I’d like to describe here how I reached my much more pessimistic conclusions.

My investigation of Nest Labs expanded into a search for evidence that smart meters in general are leading to large gains in energy efficiency. In 2016, the British government began a coordinated campaign to install smart meters throughout the country by 2020. And since 2010, the U.S. Department of Energy has invested some $4.5 billion installing more than 15 million smart meters throughout the United States. Curiously enough, all that effort has had little observed impact on energy usage. The U.K. government recently revised downward the amount it figures a smart meter will save each household annually, from £26 to just £11. And the cost of smart meters and their installation has risen, warns the U.K.’s National Audit Office. All of this is not good news for startups banking on the notion that smart thermostats, smart home appliances, and smart meters will lead to great energy savings.

Are other kinds of AI startups having a greater positive effect on the economy? Tech sector analyst CB Insights reports that overall venture capital funding in the United States was $115 billion in 2018, of which $9.3 billion went to AI startups. While that’s just 8 percent of the total, it’s still a lot of money, indicating that there are many U.S. startups working on AI (although some overstate the role of AI in their business plans to acquire funding).

To probe further, I gathered data on the U.S. AI startups that have received the most funding and looked at which industries they were hoping to disrupt. The reason for focusing on the United States is that it has the longest history of startup success, so it seems likely that its AI startups are more apt to flourish than those in other countries. My intention was to evaluate whether these U.S. startups had succeeded in shaking up various industries and boosting productivity or whether they promise to do so shortly.

In all, I examined 40 U.S. startups working on AI. These either had valuations greater than $1 billion or had more than $70 million in equity funding. Other than two that had been acquired by public companies, the startups I looked at are all private firms. I found their names and product offerings in lists of leading startups that Crunchbase, Fortune, and Datamation had compiled and published. I then updated my data set with more recent news about these companies (including reports of some shutdowns).

I categorized these 40 startups by the type of product or service they offered. Seventeen are working on what I would call basic computer hardware and software (Wave Computing and OpenAI, respectively, are examples), including cybersecurity (CrowdStrike, for instance). That is, I included in this category companies building tools that are intended to support the computing environment itself.

Making up another large fraction—8 of the 40—are companies that develop software that automates various tasks. The robotic process automation software being developed by Automation Anywhere, UiPath, and WorkFusion, for example, enables higher productivity among professionals and other white-collar workers. Software from Brain Corp. converts manual equipment into intelligent robots. Algolia, Conversica, and Xant offer software to improve sales and marketing. ZipRecruiter targets human resources.

The remaining startups on my list are spread among various industries. Three (Flatiron Health, Freenome, Tempus Labs) work in health care; three more (Avant, Upstart, ZestFinance) are focused on financial technology; two (Indigo, Zymergen) target agriculture or synthetic biology; and three others (Nauto, Nuro, Zoox) involve transportation. There is just one startup each for geospatial analytics (Orbital Insight), patterns of human interaction (Afiniti), photo/video recognition (Vicarious), and music recognition (SoundHound).

Are there indications that these startups will bring large productivity improvements in the near future? In my view, software that automates tasks normally carried out by white-collar workers is probably the most promising of the products and services that AI is being applied to. Similar to past improvements in tools for white-collar professionals, including Excel for accountants and computer-aided design for engineers and architects, these types of AI-based tools have the greatest potential impact on productivity. For instance, there are high hopes for generative design, in which teams of people input constraints and the system proposes specific designs.

But looking at the eight startups on my list that are working on automation tools for white-collar workers, I realized that they are not targeting things that would lead to much higher productivity. Three of them are focused on sales and marketing, which is often a zero-sum game: The company with the best software takes customers from competitors, with only small increases in productivity under certain conditions. Another one of these eight companies is working on human-resource software, whose productivity benefits may be larger than those for sales and marketing but probably not as large as you’d get from improved robotic process automation.

This leaves four startups that do offer such software, which may lead to higher productivity and lower costs. But even among these startups, none currently offers software that helps engineers and architects become more productive through, for example, generative design. Software of this kind isn’t coming from the largest startups, perhaps because there is a strong incumbent, Autodesk, or because the relevant AI is still not developed enough to provide truly useful tools in this area.

The relatively large number of startups I classified as working on basic hardware and software for computing (17) also suggests that productivity improvements are still many years away. Although basic hardware and software are a necessary part of developing higher-level AI-based tools, particularly ones utilizing machine learning, it will take time for the former to enable the latter. I suppose this situation simply reflects that AI is still in its infancy. You certainly get that impression from companies like OpenAI: Although it has received $1 billion in funding (and a great deal of attention), the vagueness of its mission—“Benefiting all of humanity”—suggests that it will take many years yet for specific useful products and services to evolve from this company’s research.

The large number of these startups that are focused on cybersecurity (seven) highlights the increasing threat of security problems, which raise the cost of doing business over the Internet. AI’s ability to address cybersecurity issues will likely make the Internet more safe, secure, and useful. But in the end, this thrust reflects yet higher costs in the future for Internet businesses and will not, to my mind, lead to large productivity improvements within the economy as a whole.

If not from the better software tools it brings, where will AI bring substantial economic gains? Health care, you would think, might benefit greatly from AI. Yet the number of startups on my list that are applying AI to health care (three) seems oddly small if that were really the case. Perhaps this has something to do with IBM’s experience with its Watson AI, which proved a disappointment when it was applied to medicine.

Still, many people remain hopeful that AI-fueled health care startups will fill the gap left by Watson’s failures. Arguing against this is Robert Wachter, who points out that it’s much more difficult to apply computers to health care than to other sectors. His 2015 book, The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age, details the many reasons that health care lags other industries in the application of computers and software. It’s not clear that adding AI to the mix of digital technologies available will do anything to change the situation.

There are also some big applications missing from the list of well-funded AI startups. Housing represents the largest category of consumer expenditures in the United States, but none of these startups are addressing this sector of the economy at all. Transportation is the second largest expenditure, and it is the focus of just three of these startups. One is working on a product that identifies distracted drivers. Another intends to provide automated local deliveries. Only one startup on the list is developing driverless passenger vehicles. That there is only one working on self-driving cars is consistent with the pessimism recently expressed by executives of Ford, General Motors, and Mercedes-Benz about the prospects for driverless vehicles taking to the streets in large numbers anytime soon, even though $35 billion has already been spent on R&D for them.

Admittedly, my assessment of what these 40 companies are doing and whether their offerings will shake up the world over the next decade is subjective. Perhaps it makes better sense to consider a more objective measure of whether these companies are providing value to the world economy: their profitability.

Alas, good financial data is not available on privately held startups, only two of the companies on my list are now part of public companies, and startups often take years to turn a profit (Amazon took seven years). So there isn’t a lot to go on here. Still, there are some broad trends in the tech sector that are quite telling.

The fraction of tech companies that are profitable by the time they go public dropped from 76 percent in 1980 to just 17 percent in 2018, even though the average time to IPO has been rising—it went from 2.8 years in 1998 to 7.7 years in 2016, for example. Also, the losses of some well-known startups that took a long time to go public are huge. For instance, none of the big ride-sharing companies are making a profit, including those in the United States (Uber and Lyft), China, India, and Singapore, with total losses of about $5 billion in 2018. Most bicycle and scooter sharing, office sharing, food delivery, P2P (peer-to-peer) lending, health care insurance and analysis, and other consumer service startups are also losing vast amounts of money, not only in the United States but also in China and India.

Most of the 40 AI startups I examined will probably stay private, at least in the near term. But even if some do go public several years down the road, it’s unlikely they’ll be profitable at that point, if the experience of many other tech companies is any guide. It may take these companies years more to achieve the distinction of making more money than they are spending.

For the reasons I’ve given, it’s very hard for me to feel confident that any of the AI startups I examined will provide the U.S. economy with a big boost over the next decade. Similar pessimism is also starting to emerge from such normally cheery publications as Technology Review and Scientific American. Even the AI community is beginning to express concerns in books such as The AI Delusion and Rebooting AI: Building Artificial Intelligence We Can Trust, concerns that are growing amid the rising hype about many new technologies.

The most promising areas for rapid gains in productivity are likely to be found in robotic process automation for white-collar workers, continuing a trend that has existed for decades. But these improvements will be gradual, just as those for computer-aided design and computer-aided engineering software, spreadsheets, and word processing have been.

Viewed over the span of decades, the value of such software is impressive, bringing huge gains in productivity for engineers, accountants, lawyers, architects, journalists, and others—gains that enabled some of these professionals (particularly engineers) to enrich the global economy in countless ways.

Such advances will no doubt continue with the aid of machine learning and other forms of AI. But they are unlikely to be nearly as disruptive—for companies, for workers, or for the economy as a whole—as many observers have been arguing.

About the Author

Jeffrey Funk retired from the National University of Singapore in 2017, where he taught (among other subjects) a course on the economics of new technology as a professor of technology management. He remains based in Singapore, where he consults in various areas of technology and business.

Simulating Radar Signals for Meaningful Radar Warning Receiver Tests

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/simulating-radar-signals-for-meaningful-radar-warning-receiver-tests

Testing state-of-the-art radar warning equipment with adequate RF and microwave signals is a challenging task for radar engineers. Generating test signals in a lab that are similar to a real-life RF environment is considered a supreme discipline when evaluating advanced radars. A scenario for testing radar-warning receivers (RWR) can easily contain many highly mobile emitters that are capable of mode changes, many interfering signals, and a moving multi-channel receiver. Today, there are intuitive and easy-to-use software tools available that take a lot of work off the hands of radar engineers.

In this seminar, you will learn how to:

  • Create radar scenarios ranging from simple pulses to the most demanding emitter scenarios using PC software
  • Generate complex radar signals with off-the-shelf vector signal generators
  • Increase flexibility during radar simulation by streaming pulse descriptor words (PDW)

3D Electromagnetic Simulation of Antennas Installed on an Aircraft

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/3d-electromagnetic-simulation-of-antennas-installed-on-an-aircraft

Learn about the importance of using a range of methods in the 3D electromagnetic (EM) simulation of installed antenna performance.

3D EM full-wave simulation is a common practice for antenna designers and analysts. The variety of antenna topologies and design specifications necessitates a range of simulation algorithms to ensure an efficient, or even feasible, solution is available.

This range of technology becomes even more important as antennas’ operation in the installed environment is considered. The application of different EM methods as well as hybridization is discussed via an example workflow of antennas installed on an aircraft. Techniques for specific installed antenna scenarios including radome analysis, co-site interference and initial field of view analysis are presented using the SIMULIA CST Studio Suite toolset.

Why Use Time-Of-Flight for Distance Measurement?

Post Syndicated from Terabee original https://spectrum.ieee.org/computing/hardware/why-use-timeofflight-for-distance-measurement

The most sophisticated of our sensor module technologies are our innovative 3D ToF cameras, which can capture depth data across three spatial dimensions. Depth sensing with Time-of-Flight sensors is discreet, completely eye-safe, and designed to work indoors even in low light or complete darkness.

This technology is a powerful enabler for applications such as people counting, digital stock monitoring, and room occupancy monitoring. With fast refresh rates, our 3D ToF sensor modules are also able to distinguish between simple gestures to assist with innovative human-machine interfaces and next-generation contactless controls.

Shifting to ever smarter and more versatile sensor modules

Terabee has just released a new type of indirect Time-of-Flight smart sensor for industrial and logistics applications. The sensor offers 12.5 meter detection capabilities using Time-of-Flight technology. It features a robust IP65 enclosure to ensure dust-proof and water-resistant capabilities in a compact and lightweight form factor (99 grams).

The sensor provides proximity notification with a classic NO/NC switching output (0-24V), while also communicating calibrated distance data via RS485 interface.

The sensor module features six embedded operating modes allowing for programmable distance thresholds. This makes it easy to set up in the field in a matter of minutes thanks to teach-in buttons.

Operating modes allow the same sensor module to trigger alarms, detect movements, count objects and check for object alignment.

This versatility means that a single sensor can be purchased in bulk and programmed to automate many different control and monitoring processes. This is especially useful in reconfigurable warehouses and factories to save precious setup time.


Next steps

Terabee plans to build on its deep technology expertise in sensing hardware and to develop cutting-edge applications. In the coming 12 months, we will offer further solutions for many markets such as mobile robotics, smart farming, smart city, smart buildings and industrial automation in the form of devices, software and OEM services.


What is Time-of-Flight (ToF) distance sensing?

Several methods of detection are available for determining the proximity of an object or objects in real time, each differentiated by its underlying hardware. As a result, distance sensors span an extremely broad field of technologies: infrared (IR) triangulation, laser, light-emitting diode Time-of-Flight, ultrasonic, and so on.

Various types of signals, or carriers, can be used to apply the Time-of-Flight principle, the most common being sound and light. Sound is mostly used in ultrasonic sensors and sonar.

Active optical Time-of-Flight is a remote-sensing method that estimates the range between a sensor and a target by illuminating the object with a light source and measuring the light’s travel time from the emitter to the object and back to the receiver.

For light carriers, two technologies are available today: direct ToF, based on pulsed light, and indirect ToF, based on continuous-wave modulation.
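Both flavors reduce to simple relationships: direct ToF converts a measured round-trip time into distance, while indirect ToF infers distance from the phase shift of the modulated wave. The sketch below works through both, with the measured values and modulation frequency assumed for illustration.

```python
# Distance from Time-of-Flight, for both flavors described above. The measured
# round-trip time, phase shift, and modulation frequency are assumed values.
import math

c = 299_792_458.0                      # speed of light, m/s

# Direct ToF: measure the round-trip time of a light pulse.
t_round_trip = 66.7e-9                 # seconds (assumed measurement)
d_direct = c * t_round_trip / 2
print(f"direct ToF: {d_direct:.2f} m")           # ~10 m

# Indirect ToF: measure the phase shift of a continuously modulated wave.
f_mod = 10e6                           # modulation frequency, Hz (assumed)
phase_shift = math.pi / 2              # radians (assumed measurement)
d_indirect = c * phase_shift / (4 * math.pi * f_mod)
print(f"indirect ToF: {d_indirect:.2f} m")       # ~3.75 m
# Note: the unambiguous range for indirect ToF is c / (2 * f_mod), about 15 m here.
```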

Terabee’s unique Time-of-Flight sensing technology

Established in 2012, Terabee has since grown into a diverse organization made up of leading experts in the sensing sector. As a certified CERN Technology partner, we offer an expansive range of sensor modules and solutions for some of the most cutting-edge fields on the market, from robotics to industry 4.0 and IoT applications.

At Terabee, we use light as the carrier for our sensors to combine higher update rates, longer range, lower weight, and eye safety. By carefully tuning emitted infrared light to specific wavelengths, we can ensure less signal disturbance and easier distinction from natural ambient light, resulting in the highest-performing distance sensors for their given size and weight.

Terabee ToF sensor modules utilize infrared LEDs which are eye-safe and energy-efficient, while providing broader fields of view than lasers, offering a larger detection area per pixel. Our single 2D infrared sensors have a 2 to 3° field of view (FoV), which provides a more stable data stream for improved consistency of results in many applications.

Over the years we have mastered the production of sensor modules using indirect ToF technology. Thanks to our in-house R&D, Product Development, Production and Logistics departments, we have managed to push the boundaries of this technology for increased precision, longer range and smaller size, offering great value to customers at competitive prices.

We also offer multidirectional sensor module arrays, combining the functionalities of multiple ToF sensors for simultaneous monitoring of multiple directions in real-time, which is especially useful for short-range anti-collision applications. Our unique Hub comes with different operating modes to avoid crosstalk issues and transmit data from up to 8 sensors to a machine.

Screening Technique Found 142 Malicious Apps in Apple’s App Store

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/computing/software/new-screening-technique-reveals-142-malicious-apple-apps


Apple’s App Store is renowned for its security—but even Apple is inadvertently allowing a small handful of malicious apps to sneak through its screening process and onto some people’s phones, new research shows. The good news is that the researchers involved, who published their findings on 31 October in IEEE Transactions on Dependable and Secure Computing, have also uncovered a way to detect these Trojan horses.

Thanks to strict guidelines and bans on certain practices that facilitate the spread of malicious apps, the vast majority of apps in Apple’s App Store are safe. However, some malicious apps are still making their way through the screening process by exhibiting one user interface while harboring a second, malicious user interface.   

“[These apps] display a benign user interface under Apple’s review but reveal their hidden, harmful user interfaces after being installed on users’ devices,” explains Yeonjoon Lee, a researcher at Hanyang University who was involved in the study.

AI and the Future of Work: What to look out for

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/tech-talk/computing/it/ai-and-the-future-of-work-what-to-look-out-for

The robots have come for our jobs. This is the fear that artificial intelligence increasingly stokes with both the tech and policy elite and the public at large. But how worried should we really be?

To consider what impact AI will have on employment, a conference at MIT titled The Future of Work is convening this week, bringing together some leading thinkers and analysts. In advance of the conference, IEEE Spectrum talked with R. David Edelman, director of MIT’s Project on Technology, Economy & National Security, about his take on AI’s coming role.

Edelman says he’s lately seen AI-related worry (mixed with economic anxiety) ramp up in much the same way that cybersecurity worries began ratcheting up ten or fifteen years ago.

“Increasingly, issues like the implications of AI on the future of work are table stakes for understanding our economic future and our ability to deliver prosperity for Americans,” he says. “That’s why you’re seeing broad interest, not just from the Department of Labor, not just from the Council of Economic Advisors, but across the government and, in turn, across society.”

Before coming to MIT, Edelman worked in the White House from 2010-’17 under a number of titles, including Special Assistant to the President on Economic and Technology Policy. Edelman also organizes a related conference in the spring at MIT, the MIT AI Policy Congress.

At this week’s Future of Work conference, though, Edelman says he’ll be keeping his ears open for a number of issues that he thinks are not quite on everyone’s radar yet. But they may be soon.

For starters, Edelman says, mainstream conversations pay too little attention to the boundary between AI-controlled systems and human-controlled ones.

“We need to figure out when people are comfortable handing decisions over to robots, and when they’re not,” he says. “There is a yawning gap between the percentage of Americans who are willing to turn their lives over to autopilot on a plane, and the percentage of Americans willing to turn their lives over to autopilot on a Tesla.”

Which, to be clear, Edelman is not saying represents any sort of unfounded fear. Just that public discussion over self-driving or self-piloting systems is very either/or. Either a self-driving system is seen as 100 percent reliable for all situations, or it’s seen as immature and never to be used.

Second, not enough attention has yet been devoted to the question of metrics we can put in place to understand when an AI system has earned public trust and when it has not.

AI systems are, Edelman points out, only as reliable as the data that created them. So questions about racial and socioeconomic bias in, for instance, AI hiring algorithms are entirely appropriate. “Claims about AI-driven hiring are careening evermore quickly forward,” Edelman says. “There seems to be a disconnect. I’m eager to know, Are we in a place where we need to pump the brakes on AI-influenced hiring? Or do we have some models in a technical or legal context that can give us the confidence we lack today that these systems won’t create a second source of bias?”

A third area of the conversation that Edelman says deserves more media and policy attention is the question of which industries AI threatens most. While there’s been discussion about jobs that have been put in the AI cross hairs, less discussed, he says, is the bias inherent in the question itself.

A 2017 study by Yale University and the University of Oxford’s Future of Humanity Institute surveyed AI experts for their predictions about, in part, the gravest threats AI poses to jobs and economic prosperity. Edelman points out that the industry professionals surveyed all tipped their hands a bit in the survey: The very last profession AI researchers said would ever be automated was—surprise, surprise—AI researchers.

“Everyone believes that their job will be the last job to be automated, because it’s too complex for machines to possibly master,” Edelman says.

“It’s time we make sure we’re appropriately challenging this consensus that the only sort of preparation we need to do is for the lowest-wage and lowest-skilled jobs,” he says. “Because it may well be that what we think of as good middle-income jobs, maybe even requiring some education, might be displaced or have major skills within them displaced.”

Last is the belief that AI’s effect on industries will be to eliminate jobs and only to eliminate jobs, when, Edelman says, the evidence suggests any such threats could be more nuanced.

AI may indeed eliminate some categories of jobs but may also spawn hybrid jobs that incorporate the new technology into an old format. As was the case with the rollout of electricity at the turn of the 20th century, new fields of study spring up too. Electrical engineers weren’t really needed before electricity became something more than a parlor curiosity, after all. Could AI engineering one day be a field unto its own? (With, surely, its own categories of jobs and academic fields of study and professional membership organizations?)

“We should be doing the hard and technical and often untechnical and unglamorous work of designing systems to earn our trust,” he says. “Humanity has been spectacularly unsuccessful in placing technological genies back in bottles. … We’re at the vanguard of a revolution in teaching technology how to play nice with humans. But that’s gonna be a lot of work. Because it’s got a lot to learn.”